
In Defense of Simple Architectures (2022)

zer00eyz
126 replies
9h35m

Early in my career one of the IT guys told me that one of the people on staff was "a technical magpie". I looked at him with a raised eyebrow and he said "He has to grab every shiny piece of tech that shows up and add it to the pile".

This is where we are.

I can't tell you how many times I have seen projects get done just to pad a PM's or developer's resume. Just because it was the latest and greatest hot shit thing to use. No sense of whether it would be better, faster, cheaper, useful.

When cloud was the hot new thing, the company I worked for launched a replatform on AWS. It gave us the ability to get through the initial scaling and sizing with ease. We left right away, because even then the costs did not make sense. Now we see folks crying about "exit fees" that were always there. That assumes your institution even has the gall to own up to years of pissing away money.

Workmanlike functionality isn't sexy, it won't be the hot bullet point on your resume, it won't get you your next job, but it is damn effective.

TeMPOraL
65 replies
7h29m

Workmanlike functionality isn't sexy, it won't be the hot bullet point on your resume, it won't get you your next job, but it is damn effective.

So, not fun, not rewarding, no intellectual challenge, no career benefit. Why exactly should I want to do it? This isn't the goddamn United Federation of Planets, nor is the company a church - why exactly should I go above and beyond what I agreed to in exchange for my salary? It's not like the bosses go above and beyond either, nor do they believe in company "mission".

To be clear: I understand the importance of actually doing your job right, and benefits of using boring tech, but you are not selling that well here. Employees need food and shelter and creature comforts, and so do their families. They are going to think beyond the current job, because if they won't, nobody else will.

JAlexoid
15 replies
6h19m

In my experience the boring "I delivered a project very fast with X,Y and Z and saved the company $100mil" will win over "I rearchitected a massive system to run on microservices"

At a certain point in your career, you'll realize that the business manager can override any technical hiring manager. Because at the end of the day, delivering results is sexier than bells and whistles on your resume.

loup-vaillant
5 replies
3h58m

In my experience the boring "I delivered a project very fast with X,Y and Z and saved the company $100mil" will win over "I rearchitected a massive system to run on microservices"

Good luck having the opportunity to work on a project where you have even the faintest idea how much money your contribution will make or save. I don't know about you, but never in my 17-year career have I had enough information to even attempt computing these numbers. And even if I could have, it was never part of my job description.

So how did you know your numbers? Or if you didn't, how did you make them up for your interviews?

aprdm
2 replies
2h4m

It's crazy that you don't know. I've been in this industry for 20 years, and apart from when I was extremely junior, I always had a sense of the business impact of my work.

loup-vaillant
1 replies
1h11m

Yeah, a sense. A guess. A gut feeling. Based on what exactly? I sure do get a sense of what will require less effort in the long run, or what will make the user's life easier, or even what is likely to decrease the defect rate… but I dare 95% of programmers, even the subset active here on HN, to reliably assess the monetary impacts of those decisions within one order of magnitude, especially compared to the alternatives.

Not to mention the monetary impacts of decisions totally outside my control. I can tell the architect "you suggest A, but B is simpler to use and makes the API 3 times simpler at no loss of functionality", but what's the point of estimating the impact of such a no-brainer when the architect's answer is "you're correct, but we'll do it my way" (real story)? And how do you expect me to estimate the monetary impact of pointing out that their TPM provisioning is missing a verification step? That stuff happens inside the factory, a problem at this stage is unlikely anyways. And even if I could somehow divine my monetary impact, the best I can say now is "I did good work for this company, they didn't listen to me, and now they're going under". Not even kidding, they are going under. I just ended my gig there because I couldn't take it any more.

What are those wonderful places you worked at where you could estimate your impact with reasonable accuracy?

aprdm
0 replies
1h4m

Napkin math and ROI; no one is asking for the cents.

For example, a build-system improvement that saves 10% of build time across 200 developers who on average get paid $300k a year - that's very easy math, no?
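One way to read that example is as a few lines of napkin math. Note that the share of a developer's paid time spent waiting on builds is an extra assumption here, not something given above; all the figures are illustrative:

```python
# Napkin ROI for a build-system improvement. Every input is an
# assumption for illustration, not a measured figure.
developers = 200
avg_salary = 300_000      # USD per year
time_on_builds = 0.10     # assumed share of paid time spent waiting on builds
speedup = 0.10            # the improvement cuts build time by 10%

# Salary cost attributable to builds, and what the improvement recovers per year.
build_cost = developers * avg_salary * time_on_builds
annual_saving = build_cost * speedup

print(f"Waiting on builds costs ~${build_cost:,.0f}/yr")
print(f"A 10% speedup recovers ~${annual_saving:,.0f}/yr")
```

Under those assumptions the fix is worth on the order of $600k a year, which is the level of precision "no one is asking for the cents" implies.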

Same for time to deploy, improvements on time to fix a bug, etc. etc.

You can extrapolate and compare initiatives and projects on t-shirt sizes and ROIs, knowing where yours sit as well.

What places have I worked at? Mostly businesses that made some money and were stable, apart from a startup that was VC-funded and made no money.

mlhpdx
1 replies
2h24m

Honest question: if you’ve never known the tangible value of your work, how did you decide what to do? It’s an uncomfortable question to ask, but I genuinely don’t understand how that would be possible.

TeMPOraL
0 replies
1h16m

Your manager tells you? Or, higher up the career ladder, whatever is most urgent for the best-paying customer? Like, I know what's on the list to finish for the next multi-million-dollar payout from a major customer, but how my work contributes to it, compared to work done by 20+ other people involved in dev, operations, qa, deployment, customer negotiations, etc.? Who the fuck knows? Best I can estimate is how much it'll cost the company if I fail to deliver my piece on time.

Spivak
4 replies
5h31m

delivered a project very fast with X,Y and Z and saved the company $100mil

The problem is that the $100mil is all pixie fairy dust when you're working on a new project. I wish this weren't true, but it works out better for you to implement it as costly and complex as possible, show off how smart you are, then simplify it during a cost-cutting initiative (wow, they must be so smart to make such an obviously complex system so simple).

The secret is that while you think you're getting away with something playing this game you're actually doing exactly what the business wants.

heyodai
3 replies
3h14m

...while you think you're getting away with something playing this game you're actually doing exactly what the business wants.

How so? I would think the business wants to spend as little money as possible.

wholinator2
0 replies
2h51m

Well maybe not what it wants, but probably (depending on culture) what it _rewards_.

Spivak
0 replies
2h15m

Nah, they want to bring in as much money as possible, a subtle difference. High complexity (tech debt) and high costs (paying for expensive managed services) in service of time-to-ship is actually great. If the market they predicted doesn't pan out, they find out faster and just shut it down, chalking it up to R&D costs for the tax break; and if it's so successful it costs them an arm and a leg, it's "good problems to have."

HeyLaughingBoy
0 replies
2h10m

A bit of an aside, but one of the most important things that I've learned over my career is that the business wants to make as much money as possible. This may seem similar to "wants to spend as little money as possible," but there's a big difference.

Your floor is limited because you can only drop your costs to zero, but there's no ceiling on how much revenue you can make.

tomaskafka
0 replies
5h45m

That depends on interest rates - right now is a rare time when saved millions suddenly appear worth more than a freshly rewritten pile of microservices.

swader999
0 replies
3h29m

Yes, and add in a couple: I saved the project, or successfully completed the previously failing project...

mkl95
0 replies
5h33m

"I rearchitected a massive system to run on microservices"

Saving a company from political mayhem is a pretty good achievement to have on your resume. It's also impressive because most engineering teams give up early on.

hliyan
0 replies
5h4m

There's an old IEEE article about the billions of dollars lost due to software project failures: https://spectrum.ieee.org/why-software-fails

We don't hear of such failures any more because software projects (or products) no longer "fail" in the traditional sense -- they turn into endless money sinks of re-architectures, re-platforming, tech debt repayment or expensive maintenance, that can continue as long as the company has cash. When the company does run out of cash, it is difficult to say to what extent tech expenses or lack of revenue due to slow software delivery played a part.

rizzom5000
9 replies
5h29m

no intellectual challenge

I tend to think that cargo cult programming and resume-driven development are the intellectual path of least resistance. Perhaps it's analogous to, "I'd rather rewrite this than understand how it works", because that requires less intellectual effort. Quality engineering is not achieved by the intellectually lazy, from what I've seen.

isoprophlex
6 replies
5h12m

You're not wrong, but when you're inheriting a convoluted 50 file React shitfest that could have been a single HTML page and 20 lines of javascript... what are you going to do? Invest time in understanding that, or radically simplify in 20% of the time it takes to grok what you get thrown at you?

waynesonfire
2 replies
4h18m

Strawman. Why do you even have a 50-file React shitfest to begin with? Hint: perhaps because someone wanted to pad their resume?

groestl
0 replies
4h17m

I've seen this. Usually a combination of no economic constraints and technical curiosity on the engineers' side.

TeMPOraL
0 replies
4h11m

Hint: because almost every web developer is a junior who doesn't know what they're doing.

Proof: that's literally what a significant positive growth rate of an occupation means - if the doubling period is N years, then at any given moment, half the workforce has N years of experience or less. I don't remember the estimate for webdev, but I think N was somewhere between 3 and 5 years.
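The arithmetic behind that claim can be sketched in a couple of lines. It assumes steady exponential growth and negligible attrition, and the 4-year doubling period is an illustrative stand-in, not a measured webdev figure:

```python
# Under steady exponential growth with doubling period `doubling_period`,
# headcount `years` ago was current / 2**(years / doubling_period).
# Everyone hired since then has at most `years` of experience.
def share_with_at_most(years, doubling_period):
    """Fraction of today's workforce hired within the last `years` years."""
    return 1 - 1 / 2 ** (years / doubling_period)

print(share_with_at_most(4, 4))   # 0.5  -> half have <= 4 years' experience
print(share_with_at_most(8, 4))   # 0.75 -> three quarters have <= 8 years
```

Setting `years` equal to the doubling period always yields exactly one half, which is the claim above.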

groestl
0 replies
4h19m

No, a single HTML page and 20 lines of Javascript is clear cut. But there's a _lot_ of instances where it's not that way, and still rewrites are being proposed.

codelobe
0 replies
3h37m

Ah, I see you are also a coder of culture...

The trick is to get the Project Management to migrate to a hot new framework: Vanilla JS...

http://vanilla-js.com/

asalahli
0 replies
2h6m

Well I still need to understand what it is doing in order to radically simplify it and still have it do the exact same thing.

groestl
1 replies
4h18m

I'd rather rewrite this than understand how it works

Sounds like "how should I know what I think before I hear what I say" ;)

TeMPOraL
0 replies
4h14m

I mean yes, it works that way? Hence inner narrative, for those who have it, and/or talking to yourself via notebook or a text file.

Swizec
9 replies
6h49m

So, not fun, not rewarding, no intellectual challenge, no career benefit. Why exactly should I want to do it?

It does help you get the next job. You’re just pitching it wrong.

Instead of “Built boring tech” try “Delivered $5,000,000 return 2 months early”. Watch your inbox blow up. Business leaders don’t care about what you do, they care about results. What you do to get those results is just an unfortunate cost and obstacle to overcome on the way to the outcome.

loup-vaillant
4 replies
3h51m

Instead of “Built boring tech” try “Delivered $5,000,000 return 2 months early”.

How do I do that without lying through my teeth? In 17 years on the job, I never had the data to even begin estimating that kind of thing. It was never my job to know it (I'm a programmer, not an accountant), and it was often actively hidden from me.

And how did you do it? How did you get your numbers, and what did you tell recruiters when you didn't?

Swizec
2 replies
2h46m

Maybe I’ve been extraordinarily lucky, but I’ve always just asked, and people were so excited that an engineer would actually care about things that are on their minds all day every day.

Might be more common in companies managed by OKR where you always know the business impact of your work. The business impact is your prime objective and you’re free to figure out the implementation.

HeyLaughingBoy
1 replies
1h58m

Right? I was going to ask OP "have you ever asked anyone?"

Because, IME, managers, etc. love it when you show an interest in how the business works and where your part fits in. It also makes their job easier if they can relate the crappy stuff they have to assign you to how much benefit the business gets from it.

loup-vaillant
0 replies
28m

I must be doing something wrong, because most of the time, getting interested in the wider impact of my work is held against me. I just ask why they want the stuff, suggest alternatives, point out issues, and the next day I'm an uncontrollable maverick who means to rewrite everything and waste a ton of time…

This also happens after explicit requests for feedback. Maybe they didn't actually mean it and I'm supposed to "take the hint" or some neurotypical bullshit, but when I hear such requests I tend to take them literally, and provide real feedback on the real issues. Unfortunately for all of us, those tend to be things that ossified years ago and cannot (or will not) be fixed any time soon. Ever, in my experience. Quite the downer.

Last time this happened I ended up being axed in a wave of layoffs. Whether their short-term workflow and subsequent technical debt finally caught up with them, or their parent company just wanted to cut costs, I will never know. I do know my technical expertise was highly praised, and middle management felt I wasn't quite aligned with the goals of the company, whatever those were. (I think one significant cause was that I only emailed them about significant problems, and kept to myself and my team lead when things were smooth. I think in the future I will stop trusting them with anything negative.)

So yeah, I can bet they love it when you act interested about their work, but start questioning their decisions (even just the one directly related to your own work and its likely impact on theirs), and the tone of the conversation changes very, very quickly. At least where I've worked.

hoosieree
0 replies
2h25m

The usual resume advice to quantify everything so each bullet point conveys "number go up" also falls apart when you invent something, create a new revenue stream, a new product, etc. The previous value was nil, therefore I improved it by... infinity percent?

JAlexoid
2 replies
6h17m

Exactly. People forget that the final and most important hiring decision will be made at a less technical, much more bean-counting level.

That's the reason why CS graduates with only bells and whistles on their CV have a hard time getting a relevant position - glitter on your resume doesn't deliver value at all.

ryandrake
1 replies
4h52m

That's the reason why CS graduates with only bells and whistles on their CV have a hard time getting a relevant position - glitter on your resume doesn't deliver value at all.

If this is true, why does everyone still think that filling up their Technology Bingo card will get them their next job, rather than delivering business value?

Swizec
0 replies
4h27m

why does everyone still think that filling up their Technology Bingo card will get them their next job

It does for entry- and mid-level jobs. When you do this you’re advertising that that’s what you’re looking for – a grunt-level job.

Unfortunately most job seeking advice out there is for people new to the industry. Because there’s more of them.

Think about it this way: would you trust a CEO whose resume lists Word, Excel, PowerPoint, and Google Docs under skills? Probably not, but you sure would expect a CEO to know how to use those.

reader_1000
0 replies
3h9m

Most companies out there want you to have certain technologies / keywords in your resume and will automatically reject you if you don't have them.

Yes, building a solid project with boring technology that delivers real business value sounds good in theory, but not so good when applying for a new job. Maybe it can help after you somehow manage to pass the initial screening.

codingdave
7 replies
7h18m

Because your work is then stable. Easy to maintain, not getting paged for customer problems. Which leaves you the time to do more work that will be interesting and beneficial.

01HNNWZ0MV43FF
6 replies
7h1m

No it doesn't. If your work is easy, the scale gets ratcheted up until it's nearly impossible. That's why web devs have so much trouble with the seemingly simple task of parsing and producing text.

dartos
4 replies
6h56m

What scale?

How does easy work affect scale?

jc2jc
3 replies
6h45m

I believe he is saying they add expectations and responsibilities until you are back to equilibrium (i.e. overextended).

al_borland
2 replies
4h9m

If the equilibrium is always reached, then why wouldn’t I make it easy on myself by building things that are easy to maintain? I want fixing issues to be like blowing out a birthday candle, not dealing with a forest fire. I’d rather blow out 20 candles than fight a single forest fire.

TeMPOraL
1 replies
3h27m

Of course you would want to do that. However:

1. It's hard to estimate what will or won't be easy to maintain down the line;

2. New tech becomes hot new tech because it promises to be easy to maintain down the line;

3. Most of the maintenance burden is due to choices of other people anyway, so you have limited control over that.

Trying new tech is often a bet on the promise of better maintainability turning out true. It usually doesn't, but the status quo is so bad already that people are grasping at straws.

al_borland
0 replies
1h15m

I tend to stop trusting people/companies/industries which break promise after promise. I want to go with solutions which have proven themselves to be true and stood the test of time. It needs to be worth people’s time to learn, not just today, but in 5 years.

A lot of the time, tech is so focused on the tech that it forgets the real problem it’s trying to solve.

fHr
0 replies
4h27m

True, I've witnessed exactly this a dozen times.

mplewis
3 replies
5h47m

They don’t call it work because it’s fun.

Your goal isn’t to be intellectually stimulated at your job. If you want that, read a book. Your job is to deliver reliable, lasting value.

Overcomplicating the architecture for the sake of job security is a con you run on your employer.

TeMPOraL
1 replies
3h52m

Your goal isn’t to be intellectually stimulated at your job. If you want that, read a book.

And then people are surprised burnout rates are as high as they are. Lack of mental stimulation leading to burnout is the white-collar equivalent of repetitive stress injury at jobs that put strain on the body.

Your job is to deliver reliable, lasting value.

Nobody is actually paying you for that. In fact, it's probably counterproductive to the business goals.

Overcomplicating the architecture for the sake of job security is a con you run on your employer.

On the other hand, "work ethic" and professionalism in the modern workforce is a con your employer runs on you. The further above and beyond you go, the more work they get out of you for the same pay.

Yes, I'm being a bit obtuse here. But my point is, there needs to be a balance. Or at least a mutual understanding of conflicting incentives. We can't demand facets of professionalism in the way that benefits employers short-term, but deny and scorn those that empower the professional. Independent learning and broadening one's experience is a part of what being a professional means.

aprdm
0 replies
1h50m

The fact that you're on Hacker News all the time probably means that you're very bored in your actual work, along with the "FOMO" on AI. I don't think you're in a good position to judge what you're judging, or to give business insights. I believe all of your takes in this thread are bad.

loup-vaillant
0 replies
3h27m

Your goal isn’t to be intellectually stimulated at your job. If you want that, read a book. Your job is to deliver reliable, lasting value.

That's not my goal. That's not even what my employer wants most of the time. Most of the time, it's just about a bunch of rich dudes (and fewer ladies) wanting me to make them even richer. That's how the system works, no need to write the "C" word, or call me with the other "C" word just because I say it so bluntly.

My goal is to enjoy my life. I have various ways of enjoying life, many selfish, some altruistic, very few aligned with the will of the rich people up top. My job takes about a fourth of my waking hours (if I worked full time it would be a third), valuable time that I'd rather spend for me and my loved ones, instead of giving it to people who already have way too much. The only reason I can sometimes tolerate unrewarding work is because I don't have a better way to pay the bills.

The reason I don't over-complicate architecture isn't because it will make more money for my employer (sometimes it means making them less money, especially in the short term). I keep it simple because I can't stand wasted effort.

paulddraper
1 replies
7h0m

Hopefully, hopefully your incentives are aligned with your team's success.

If they are not, I am truly sorry.

underdeserver
0 replies
6h46m

In almost every business setting, your incentives are _partially_ aligned with your employer's. For instance, you usually both want to build a good product; conversely, you want to get paid as much as possible while your employer wants to pay you as little as possible.

If it's all above board, and the non-aligned parts are agreed-to, all is well.

Philip-J-Fry
1 replies
7h7m

A boring and simple tech stack can mean you focus on delivering features rather than trying to work out which part of your complicated system is broken.

The career benefit to me is that a simple tech stack allows a company to move fast and prosper. A prosperous company is usually financially rewarding even if it's not the most mentally rewarding.

Getting tangled up in shiny new toys can harm your ability to move fast and it can have a negative effect on your career at that particular company. Especially since the shiny new toy today is old and rusty tomorrow, but boring stacks will always be old reliable.

evantbyrne
0 replies
1h34m

It is difficult to overestimate the value of being able to actually take time off because changes happen in a reasonable time and your software just works without any surprises. Give me a boring tech stack, please!

PH95VuimJjqBqy
1 replies
5h9m

if the pride of a good job done isn't enough motivation for you then you'll never understand because you simply don't have the ability to.

TeMPOraL
0 replies
3h41m

Unless you're working pro bono, the "pride of a good job done" isn't enough motivation for you either. Your employer may wish it was, though.

Point is, there is more to the equation. Employees don't exist in isolation, and when jobs actively refuse to take into account that the workers are real, living human beings, with needs and plans and dependents to feed, then resume-driven work will continue.

Jtsummers
1 replies
6h11m

no intellectual challenge

If it's not intellectually challenging, you're not working on interesting systems. If you have to find interesting tools to find intellectual stimulation, consider a different system to work on.

As an example, I got to learn astrodynamics as part of my last job. Maybe not intellectually stimulating to everyone, but it didn't require me to learn the latest tooling, just an interesting area of math and physics. The tooling and the language for the software weren't that interesting, but that's fine.

feoren
0 replies
5h29m

I use boring architectures: JS/TS hitting a single C# server hitting a single database. I have had to (and gotten the opportunity to) learn about:

- Environmental chemistry

- Mass balance simulations

- Wastewater treatment processes

- Geotechnical engineering

- Ecology

- Mass transit systems

And quite a bit more. I could not agree with you more. Even without the broad range of interesting subject matter, there's no end to intellectually stimulating work simply trying to architect a system well.

zer00eyz
0 replies
3h59m

>> but you are not selling that well here

I did not sell it well, that's fair.

> Why exactly should I want to do it? This isn't the goddamn United Federation of Planets, nor is the company a church - why exactly should I go above and beyond

HN doesn't want to hear this answer: you do it for the PEOPLE around you.

If you build sexy tech, and then get a sexy job and I have to clean up your turds... well, you can go fuck yourself. Hope I'm never the one answering the LinkedIn request for a recommendation or sitting on the other side of the table when you come in for an interview.

The tech job market is bad, and getting worse. You're gonna need an advocate on the inside if you want or need work quickly. That means former co-workers and bosses. No one is gonna hire a clock puncher who did a selfish resume-building project and left. Don't be that guy.

rgrieselhuber
0 replies
7h20m

The problem is when you prioritize your future career over playing your position and achieving results for your current company. It ends up hurting both the company and your own future prospects because this mindset will inevitably get outed by an engineering manager who isn’t easily bamboozled by shiny objects.

olau
0 replies
6h1m

IMHO it's about society.

If you're asking on a personal level, I think that if you keep to the high ground, you're more likely to find long-lasting happiness. David Graeber spends a good deal of the pages in Bullshit Jobs on this topic.

muyuu
0 replies
3h15m

human factors like drive are more important than most project managers would like to believe

if you have people who are effective, allow them some space for fun and intellectual challenge even if it takes a bit away from the workload - if you disregard those human factors, something will give in the end, perhaps catastrophically, as efforts are made to add "sexiness" to the core of the mission-critical workload

mattgreenrocks
0 replies
7h9m

Why exactly should I want to do it?

Because you're a professional, and part of that means doing things to help your team succeed.

They are going to think beyond the current job, because if they won't, nobody else will.

This is also right, and a good thing to hold in the other hand.

Reconciling these two forces needs a healthy org that allows employees to grow, along with a recognition that sometimes the forces conflict, and sometimes they don't. All we can do is play the cards we're dealt in the best way.

If you really want to learn new tech, that's what the off hours are for. I say this as someone who has a lot of things that intrude into those hours. I'm (slowly) learning frontend after being a backend/compiler dev for a long time. It's...not easy, but I like it!

manicennui
0 replies
5h6m

Most jobs aren't a source of those things. Why should software development be any different? Introducing unnecessary technical challenges just to make your work interesting often has a negative impact on the end user/customer, which you should give a shit about. Do you think lawyers and architects complain if they aren't allowed to jump on every fad and make their work needlessly complex?

eadmund
0 replies
3h58m

So, not fun, not rewarding, no intellectual challenge, no career benefit. Why exactly should I want to do it? … why exactly should I go above and beyond what I agreed to in exchange for my salary?

I think that delivering a solution which works, even if it is not sexy, is exactly what one agreed to in exchange for one’s salary. It may not be fun, it may have no intellectual challenge, and it may have no career benefit, but it is rewarding: the reward is the salary.

danmaz74
0 replies
2h17m

not rewarding, no intellectual challenge

Don't forget that simple isn't easy. I find it very rewarding and intellectually stimulating to solve a problem with a solution which is as simple as possible - but no simpler.

abetusk
0 replies
4h47m

This is the kind of comment I come to HN for.

I think this is an absolutely right read on the situation. To put it in a slightly different context, the magpie developer is more akin to a "sociopath" from V. Rao's "Gervais Principle" [0], doing the least amount of work for the company while forging a path forward for their career. In this case, it just happens to not be within the same company.

[0] https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-...

robertlagrant
9 replies
9h34m

I understand that's your opinion, but could you show me some badly designed user research results to make this conversation more data driven?

blueflow
8 replies
9h11m

more data driven

Any other engineering discipline. Common practices in IT would be negligence in other disciplines and might get your permit/license revoked.

IT is the only sector where companies like Cisco or SAP can exist despite the horrible reliability of their products.

JAlexoid
3 replies
6h4m

As one of my friends, an SAP consultant, said - "The value of SAP isn't that it's actually good, but that it's predictably scalable."

As in - you can set up a process in Germany, then replicate it globally with predictable accuracy. And predictability matters a lot in stable, low-margin businesses. Walmart can't spend a few billion on a project that may have a -100% to 400% return when they have the option of a reliable 20%-30% return.

fHr
0 replies
4h16m

It is really funny how SAP is the one single big software company from Europe, and it is an absolute dumpster fire.

blueflow
0 replies
5h52m

Provided the transition to SAP doesn't bankrupt you.

FromOmelas
0 replies
4h10m

LIDL famously burned around 500M € on a SAP rollout before pulling the plug.

Hasu
2 replies
8h6m

IT is the only sector where companies like Cisco or SAP can exist despite the horrible reliability of their products

Come on, other industries have garbage companies putting out garbage products, too.

palata
1 replies
7h39m

Come on, other industries have garbage companies putting out garbage products, too.

That's correct, but we have to admit that the software industry excels at this.

marcosdumay
0 replies
6h35m

Software is full of monopolies. But monopolies' products are garbage in every industry.

robertlagrant
0 replies
6h21m

Can you explain it in a different way? I have no idea how it relates to my comment.

CuriouslyC
9 replies
9h9m

That guy was just optimizing for future employability, albeit in a short sighted way. Being able to talk in an interview about how you have professional experience with various tech stacks is valuable. That being said, optimizing for that at the cost of current job performance and coworker alienation is just dumb, since job performance and networking leads are more important for landing good jobs. I'm guessing this guy was a serial job hopper who had no expectation of being able to progress up the ladder at the company you were at.

zer00eyz
6 replies
9h0m

I'm guessing this guy was a serial job hopper who had no expectation of being able to progress up the ladder at the company you were at.

The magpie was practically furniture (over a decade there). We speculated that he had buried a literal body for the CEO, based on what he got away with. "Shiny objects" was an astute call on the part of the IT guy (he was setting up yet another new MacBook for him).

TeMPOraL
5 replies
8h37m

On the other hand, at least someone was exploring new tech. In the exploration/exploitation problem, going 100% exploitation and only ever using the same boring old tech for everything is not the optimal choice either.

berkes
2 replies
8h6m

One reason why people hire me, is for my actual, production, experience in loads of stacks and architectures.

Actual, production experience, is, IMO, a requirement to make decisions. No-one will make a good decision to ditch or embrace, say, microservices, based on a HN conversation and a few blog-posts. Nor will they make such a decision based on papers in science journals.

But rather based on failures with monoliths, failures with microservices, successes in Rails, successes in Drupal, and failures in Rails or Drupal. (Or leptos, React, flask, whatnots). Actual felt pain, and drawn learnings. Actual celebrated successes and drawn learnings.

edit: I'll often bring that up in consultancy. "Yes, Rails is great because X and Y. But Z is a rather real danger for your case; we've been bitten by that when building FooBarLy..."

What I'm trying to say: yes, indeed, this person is also collecting experience and evidence to make future decisions on. There's a real need, and actual benefit, in trying and implementing new tech. If only because otherwise we'd still be maintaining COBOL mainframe spaghetti (oh. wait...)

dartos
1 replies
6h51m

Be honest with me, how many jobs have you had that cared about your variety of experiences?

I’ve been applying to jobs for months and they’re all looking for go and python devs.

I have production experience with both languages, their common web stacks, and many others (ruby, js, php, c#, elixir, erlang, rust).

I’ve felt that even mentioning that I have experience with other stacks is a turn off to recruiters and EMs.

Nobody seems to care about breadth of experience nowadays.

berkes
0 replies
4h10m

All of them in the last decade.

But I guess we misunderstand each-other. None of them cared that I knew "a lot of stuff that isn't appropriate here".

For example, a recent gig, hired me because I'm not just another Rails "expert", but a Rails expert with Typescript experience, who built large CI/CD pipelines with containers and has built complex (is there another way?) AWS infrastructures etc.

Sometimes they need someone with that exact skill-set. In this case, they needed someone to help them move from yet another "upwork-delivered-rails-spaghetti" to something that could actually be maintained.

I convinced them to drop the react/typescript frontend for now (it was terribly bolted on) and to forego building their own PaaS nightmare on AWS but instead just push to Heroku - for now.

My experience helped them make tough decisions.

Sometimes gigs hire me because I have a weird combination of experiences. But more often because my experience allows me to help them make decisions on architecture and such. Granted, I am often hired as "senior architect" or some such. And one of the first things I do is convince them they should never again hire an "external interim architect", lol.

tw04
0 replies
6h40m

This is also part of the reason you find reliable reseller partners. They can burn cycles figuring out what new tech is useful and what is a waste of time so you can spend your cycles actually getting things done with cool new tech that works without wasting your company's time and money on things that have fatal flaws that aren't immediately obvious.

chefandy
0 replies
7h31m

Exploring tech is great! … for smaller projects for proofs of concept, prototypes, side projects, projects specifically for researching new technologies… heck yeah.

Just not for your prod architecture. Many late-night beepers have beeped and phones have lit up because the piece that holds together the two pieces that let that thing talk to the database queue monitor thing stopped working, so the whole thing went down.

taeric
0 replies
5h10m

Maybe. Some people really are like collectors chasing the latest thing. You see this in all fields and things. Ever been to someone's house that always has the latest gear in whatever hobby they follow? There is no reason to think people won't do the same in settings other than hobbies.

crispyambulance
0 replies
8h17m

   > I'm guessing this guy was a serial job hopper who had no expectation of being able to progress up the ladder at the company you were at.
Sometimes folks find themselves stuck in a kind of typecast role: they're "the guy" who does "the thing" that the company needs right now-- until they don't.

In many places no one will invite typecast folks to transition to different, more interesting roles that align with their interests. Instead the person will simply be discarded when they're no longer needed for that thing they do. To get around this requires some initiative and that means not "asking for permission" to try new stuff. Sometimes it's better to just take a chance and do something new. There's a risk of cargo-culting, of course, but hey there are worse things that can happen.

Danluu, as he indicated many times, comes from workplaces where staff are paid in multiples of 100K. These are elite "end-game" jobs, not "dead-end" jobs. Such staff are very much tied-in to the performance of the company objectives (in a real sense ($$$$) not in a mission-statement sense), so yeah, these places ALREADY have resources and tech in place that are marketable in other places. There's no need for folks in those workplaces to desperately get out of some php dungeon run by a B.O.F.H petty tyrant.

zmgsabst
8 replies
9h30m

When has a decision that’s bad for the decision maker ever been popular?

We see it in the C-suite; we see it with engineers.

I think the travesty of so-called “principal engineers” and “engineering leaders” is their adamant refusal to make doing the Right Thing (TM) sexy.

Your employees are monkeys: act like it.

thimp
6 replies
9h10m

Yep. Microservices! AWS! Everything Gartner and Thoughtworks says! It'll look good on my resume...

..several years later..

Escalating cloud costs, high staffing cost, staff turnover, heavily reduced margins, decreased productivity, burnout, clients unsatisfied, C-suite paving over this by hiring more marketers...

CuriouslyC
3 replies
9h7m

I wonder how many early stage businesses went tits up because they drank the microservice kool-aid and burned valuable engineering cycles that should have been spent on features on docker spaghetti.

dullcrisp
2 replies
5h52m

Alternatively how many later stage business failed because all their features were in a Rails monolith that no number of engineers could maintain.

rqtwteye
1 replies
3h43m

The Rails monolith companies probably have a better chance at adapting than the 50 microservices maintained by 10 devs companies.

thimp
0 replies
3h12m

This. Just silo the monolith out into tenants.

Salesforce, not exactly a small monolith company, did this for a very very long time.
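
One way to picture the siloing idea (all names here are invented for illustration): run N identical copies of the monolith and pin each tenant to one copy with a stable hash, so scale comes from sharding tenants rather than splitting the code.

```python
# Hypothetical tenant -> silo router for a monolith deployed as N
# identical copies ("silos"). A stable hash pins each tenant to one silo.
import hashlib

SILOS = [
    "https://silo-0.example.com",
    "https://silo-1.example.com",
    "https://silo-2.example.com",
]

def silo_for_tenant(tenant_id: str) -> str:
    # sha256 is deterministic, so the same tenant always lands
    # on the same silo, keeping its data and cache locality intact.
    digest = hashlib.sha256(tenant_id.encode()).hexdigest()
    return SILOS[int(digest, 16) % len(SILOS)]
```

Each silo runs the same unmodified monolith; adding capacity means adding a silo and rebalancing tenants, not re-architecting.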

marcosdumay
1 replies
6h37m

Well, did it look good on the resume?

thimp
0 replies
3h12m

Someone had to stay behind and muck out the stables...

spinningslate
0 replies
8h1m

I think the value of so-called “principal engineers” and “engineering leaders” is their adamant refusal to unconditionally jump on all the latest bandwagons and instead make judicious selections of technology appropriate to the problem at hand.

FTFY.

bluGill
6 replies
8h20m

Just be careful not to go too far in the opposite direction. There are new things coming all the time. You probably don't want to be writing new COBOL anymore even though it was once a good idea (you might have to maintain it, but you should already know what you replace it with and what your interoperability strategy is)

yowlingcat
2 replies
7h41m

Isn't there a labor shortage for COBOL engineers to maintain the mainframe code that powers $3T of transaction volume in banking and healthcare enabling skilled COBOL contractors to name their price?

marcosdumay
1 replies
6h40m

Only at the salaries those banks want to pay, which aren't high.

bluGill
0 replies
4h4m

Depends on the bank and what the code is. I know of insurance jobs that pay very nice salaries. 9-5 job where if you are around at 5:01 they tell you to go home. Vacations are mandatory as well (all banks have mandatory vacations - they need you gone long enough that if you are embezzling money whatever scheme you had breaks and someone else figures it out investigating what is wrong). It is however boring coding that will suck your soul so many around me accept less pay for more interesting work.

rwmj
1 replies
7h23m

COBOL itself is pretty horrible, but if there's an old tech which I'm happy using and there's still high demand for it, why not?

bluGill
0 replies
4h7m

Using it is fine, but you need to know it is horrible and you should already have a plan in place for what new stuff gets written in. Or at least try to make that plan; there may not be a better alternative yet, but you should be looking.

PH95VuimJjqBqy
0 replies
5h3m

you can let the industry do the testing for you.

It's like changing to the new version of an OS on day 1 versus waiting 6 months.

duped
4 replies
6h47m

One of the best jobs I ever had was under "technical magpie." Did we get shit done? No. Did I get paid a lot of money and always have new stuff to do instead of shoveling CRUD? Absolutely. It was a blast.

JAlexoid
2 replies
6h10m

Yes, it's basically being in college - while being paid. If your resume is full of those kind of roles, I'd disregard your resume and many experienced managers will as well.

Remember that your resume will not hold much value when it gives off "we built this thing with friends in a garage" and little else.

Have you supported anything in production? No? Then explain why you should be a candidate for anything other than an entry-level SWE position.

BeFlatXIII
1 replies
5h24m

If your resume is full of those kind of roles, I'd disregard your resume and many experienced managers will as well.

That's why you lie about it.

chasd00
0 replies
5h12m

"lie" is a strong word but my resume is always optimized for the role i'm applying for. If I have experience in a technology that's not relevant then i leave it off and use the space/attention for something better matching the role.

al_borland
0 replies
4h6m

I had a job like this for a while. My boss always wanted to be involved in the new stuff and I was the one he threw it at to kick the tires.

Some stuff got done, but nothing too mission critical that kept me up at night and it was pretty relaxed.

phendrenad2
3 replies
7h36m

It's the job of engineering management to stop this. We're supposed to say "why do you need this? Justify the need for this." I.e., "Why do you need Kafka here? Will we have enough traffic volume to warrant it? Make a proposal." And they need to follow up and ask "Was that needed? Show how we're using it."

But engineering management is so busy filling out TPS reports they don't have time to actually do any oversight.

rqtwteye
0 replies
3h45m

I have rarely seen engineering management that's of any help making these decisions. Either they resist any change or they jump on technology because they have read a LinkedIn article or Gartner report. I have never seen good, fact based technical decisions come from there.

free_bip
0 replies
6h53m

That would require that engineering management actually be competent technically. A shockingly large number aren't.

convolvatron
0 replies
1h55m

engineering management is equally likely to assert some kind of baseless 'best practices' position without really understanding whether or not it's actually a good idea in this context

cmsefton
2 replies
6h55m

I'm immediately reminded of my favourite Kurt Vonnegut quote: "Another flaw in the human character is that everybody wants to build and nobody wants to do maintenance."

I've always felt that the magpie syndrome you describe is because of the desire to build new things, rather than maintain what's there.

I watched a repair show recently where a craftsman repaired an old 70s bedside clock. The pride, time and patience he took in repairing the clock was wonderful to see, actively avoiding rebuilding parts if he could reuse what was there, even if there was a crack or blemish.

I've always respected engineers that maintained and fixed software well, and also knew when to reach for the right tool in the toolbox. Better yet, those that knew when not to build something new. Perhaps that's something you learn through experience and doing, but I wonder if it's actively taught, encouraged, or rewarded in the workplace. It really should help get you your next job.

greenie_beans
0 replies
2h36m

i'm glad i have "kurt vonnegut" notifications because this was nice to read.

TeMPOraL
0 replies
3h35m

Is it a flaw though? There's a lot of truth in that eCard: "a clean apartment is a sign of wasted life". How much of technological progress occurred to ease the maintenance burden? Is it a flaw that the washing machine saves people a ridiculous amount of time (to the point of, arguably, allowing two-income households to exist in the first place)?

dakiol
1 replies
6h7m

Don’t we get paid the big bucks precisely because we have to fix stuff like this? I mean, if maintaining and fixing software were easy, I guess we wouldn’t be earning 6 figures.

In software engineering we have all these principles and patterns and whatnot precisely because we have to deal with a pile of things that don't work well together.

fwip
0 replies
3h34m

I think our outsized compensation is less because it's hard, and more because of our industry. In tech companies, labor is a relatively small expenditure, so tripling your labor budget to get 5% more "output" can be a very rational thing to do. (Also, the Mythical Man Month means that a smaller, sharper team is often more useful to solve your task than a bigger one.)

tamiral
0 replies
3h56m

I deal with a bunch of this right now: no consideration for future growth of the system, and "just dump everything in JSON and it'll be OK". Tech debt in architectural designs is real, and it takes a lot to trim it back and say, OK, now we are moving to XYZ tool that works and doesn't need to be shiny. Had a chat with a client once: they wanted some super-duper dashboard, I said this looks like it'll be a report, and all they needed was a small DB + CSV extract for charts etc.

smugglerFlynn
0 replies
3h28m

It’s not the tech, it’s the business - people pay for new and shiny things to be added, regardless of the actual value they bring. Engineering managers hire for shiny things on your resume precisely because of that business trend.

Tech trend will continue until this business mindset of burning money on shiny things changes.

phendrenad2
0 replies
2h38m

I wonder if this effect ultimately advantages Google, and other companies that create their own internal tools rather than using the new shiny buzzword.

olav
0 replies
7h3m

As a product manager, I am frequently confronted by UX people who declare something as „standard“, a feature that is supposed to be an absolute „must-have“, or else our organisation would lose even the last of our users. Unfortunately, developers tend to accept these things as interesting challenges and (knowingly or not) underestimate the effort and complexity needed to implement them.

My very lonesome role in these cases is to insist that this shiny thing is no standard at all and that our users would be quite happy without it.

nunez
0 replies
8h54m

to be fair, when you don't have any pathways for working on $COOL_TECH at your job, designing and justifying something overly complex makes sense

mlhpdx
0 replies
2h28m

Workman like functionality isnt sexy, it wont be the hot bullet point on your resume, it wont get you your next job, but it is dam effective.

When you see opportunities to do such work in a way that delivers something new and valuable, I recommend taking hold of them. I learned this somewhere along the line, but not until I’d missed a few from self-doubt and indulging the inner magpie.

Clear, simple and value focused engineering is _exactly_ what I’m looking for in candidates these days. Focus on the long term, not the short term — just like with investing.

mhh__
0 replies
3h38m

I'd argue it's something a little more fundamental than mere CV padding

Taking the CV aspect literally — this is a sleight of hand because it's a metaphor — I know lots of people who do this stuff that don't have a CV at all.

There's levels to it of course, but I don't really view it be any different to people who insist on using old machines as a bit (but in the other way obviously)

fatnoah
0 replies
6h30m

When cloud was the hot new thing the company that I worked with launched a replatform on AWS. It gave us the ability to get through the initial scaling and sizing with ease. We left right away, because even then the costs did not make sense.

Cases like this always fascinate me. I've led a "move from Data Center to AWS" effort twice, and both times it was at > 50% cost savings. However, I think both were probably small infra footprints compared to many cases like many others.

HeyLaughingBoy
0 replies
4h38m

It's a common behavior. When I started my last job as the software lead at a small company, I was warned that one of the senior engineers on my team was easily distracted by shiny things. They were not wrong. The dude was smart and effective, but I had to spend way too much time keeping him on task.

HeyLaughingBoy
0 replies
4h36m

I have seen projects get done just to pad a PM or developers resume

This reminds me of the time that I complained that a sensor on the machine we were developing was way too complicated and that there were far simpler ways to accomplish the same thing.

Someone with more history on the project explained. The engineer who designed that sensor expected, and received, a patent for his work. The company was quite generous with patent royalties and no one's getting a patent for an off the shelf float switch!

from-nibly
44 replies
8h17m

This is what I tell engineers: microservices aren't a performance strategy. They are a POTENTIAL cost-saving strategy, traded off against performance. And an engineering coordination strategy.

Theoretically, if you have a monolith that can be scaled horizontally, there isn't any difference between having 10 replicas of your monolith and having 5 replicas each of two microservices with the same codebase. UNLESS you are trying to underscale part of your functionality. You can't underscale part of your app with a monolith; your pipe has to be big enough for all of it. Generally speaking, though, if you are talking about 10 replicas of something, there's very little money to be saved anywhere.

Even then though the cost savings only start at large scales. You need to have a minimum of 3 replicas for resiliency. If those 3 replicas are too big for your scale then you are just wasting money.
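
A back-of-envelope sketch of that trade-off (every number here is invented for illustration): with a 3-replica resiliency floor, splitting a workload can actually cost more replicas than the monolith it came from.

```python
import math

def replicas_needed(load_units: float, capacity_per_replica: float,
                    floor: int = 3) -> int:
    # Replicas needed for a given load, never dropping below the
    # 3-replica resiliency floor mentioned above.
    return max(floor, math.ceil(load_units / capacity_per_replica))

# Monolith: one deployment carries all 8 units of load -> 8 replicas.
monolith = replicas_needed(8, 1.0)

# Same load split into two services carrying 6 and 2 units:
# the small service gets padded up to the floor -> 6 + 3 = 9 replicas.
split = replicas_needed(6, 1.0) + replicas_needed(2, 1.0)
```

Only once the split-off piece is big enough to clear the resiliency floor on its own does underscaling it start to pay.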

The place where I see any real world benefit for most companies is just engineering coordination. With a single repo for a monolith I can make 1 team own that repo and tell them it's their responsibility to keep it clean. In a shared monolith however 0 people own it because everyone owns it and the repo becomes a disaster faster than you can say "we need enterprise caching".

scient
19 replies
8h3m

Services, or even microservices, are more of a strategy to allow teams to scale than services or products to scale. I think that's one of the biggest misconceptions among engineers. On the other end you have the monorepo crew, who are doing it for the same reasons.

On your note about resiliency and scale - it's always a waste of money until shit hits the fan. Then you really pay for it.

camgunz
9 replies
6h1m

Services, or even microservices, are more of a strategy to allow teams to scale than services or products to scale.

I've never really understood why you couldn't just break up your monolith into modules. So like if there's a "payments" section, why isn't that API stabilized? I think all the potential pitfalls (coupling, no commitment to compatibility) are there for monoliths and microservices, the difference is in the processes.

For example, microservices export some kind of API over REST/GraphQL/gRPC which they can have SDKs for, they can version them, etc. Why can't you just define interfaces to modules within your monolith? You can generate API docs, you can version interfaces, you can make completely new versions, etc.

I just feel like this would be a huge improvement:

- You skip the extra engineering work of building service handler scaffolding (validation, serialization/deserialization, defining errors)

- You avoid the runtime overhead of serialization/deserialization and network latency

- You don't need to build SDKs/generate protobufs/generate clients/etc.

- You never have the problem of "is anyone using this service?" because you can use code coverage tools

- Deployment is much, much simpler

- You never have the problem of "we have to support this old--sometimes broken--functionality because this old service we can't modify depends on it". This is a really undersold point: maybe it's true that microservice architectures let engineers build things without regard for other teams, but they can't remove things without regard for other teams, and this dynamic is like a no limit credit card for tech debt. Do you keep that service around as it slowly accretes more and more code it can't delete? Do you fork a new service w/o the legacy code and watch your fleet of microservices grow ever larger?

- You never have the problem of "how do we update the version of Node on 50 microservices?"
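
The module-boundary idea above can be sketched in a few lines. This is a hypothetical example (the `PaymentsV1`, `InMemoryPayments`, and `checkout` names are invented, not from any real codebase): a "payments" module exposes a versioned interface, and the rest of the monolith calls it as a plain function, with no network hop or serialization.

```python
# Sketch of an in-process "payments" module with a stable, versioned
# interface. Callers depend on the interface, not on the internals.
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class ChargeResult:
    ok: bool
    transaction_id: str

class PaymentsV1(Protocol):
    """Stable v1 interface: document it, version it, deprecate it,
    exactly as you would a service API."""
    def charge(self, account_id: str, cents: int) -> ChargeResult: ...

class InMemoryPayments:
    """One concrete implementation; swappable without touching callers."""
    def __init__(self) -> None:
        self._counter = 0

    def charge(self, account_id: str, cents: int) -> ChargeResult:
        self._counter += 1
        return ChargeResult(ok=cents > 0,
                            transaction_id=f"txn-{self._counter}")

def checkout(payments: PaymentsV1, account_id: str, cents: int) -> bool:
    # An ordinary function call: no SDK, no protobufs, no deploy
    # coordination, and code-coverage tools can see who uses what.
    return payments.charge(account_id, cents).ok
```

`Protocol` gives structural typing, so the type checker enforces the boundary without any runtime machinery.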

mjr00
4 replies
5h45m

You never have the problem of "how do we update the version of Node on 50 microservices?"

And instead you have the problem of "how do we update the version of Node on our 10 million LOC codebase?" Which is, in my experience, an order of magnitude harder.

Ease of upgrading the underlying platform versions of Node, Python, Java, etc is one of the biggest benefits of smaller, independent services.

naasking
2 replies
5h19m

Upgrading the platform also happens at least 10x less frequently, so that math doesn't necessarily work out in your favour though.

mjr00
1 replies
5h15m

It's much easier to make smaller-scope changes at higher frequency than it is to make large changes at lower frequency. This is the entire reason the software industry adopted CI/CD.

naasking
0 replies
4h34m

I'm not sure that's measuring what you think. The CI pipeline is an incentive for a good test suite, and with a good test suite the frequency and scope of changes matters a lot less.

CI/CD is also an incentive to keep domain-level scope changes small (scope creep tends to be a problem in software development) in order to minimize disruptions to the pipeline.

These are all somewhat different problems than upgrading the platform you're running, which the test suite itself should cover.

camgunz
0 replies
32m

And instead you have the problem of "how do we update the version of Node on our 10 million LOC codebase?"

I think if you get to that scale everything is pretty hard. You'll have a hard time convincing me that it's any easier/harder to upgrade Node on 80 125K LOC microservices than a 10M LOC monolith. Both of those things feel like a big bag of barf.

lll-o-lll
1 replies
1h56m

I've never really understood why you couldn't just break up your monolith into modules

You can! We used to do this! Some of us still do this!

It is, however, much more difficult. Not difficult technically, but difficult because it requires discipline. The organisations I’ve worked at that have achieved this always had some form of dictator who could enforce the separation.

Look at the work done by John Lakos (and various books), to see how well this can work. Bloomberg did it, so can you!

Creating a network partition makes your system a distributed system. There are times you need this, but the tradeoff is at least an order of magnitude increase in complexity. These days we have a lot of tooling to help manage this complexity, but it’s still there. The combination of possible failure states is exponential.

Having said all this, the micro service architecture does have the advantage of being an easy way to enforce modularity and does not require the strict discipline required in a monolith. For some companies, this might be the better tradeoff.

Cthulhu_
0 replies
1h51m

some form of dictator who could enforce the separation.

Like a lead developer or architect? Gasp!

I wonder if the microservices fad is so that there can be many captains on a ship. Of course, then you need some form of dictator to oversee the higher level architecture and inter-service whatnots... like an admiral.

Tainnor
1 replies
2h10m

I've never really understood why you couldn't just break up your monolith into modules.

I think part of it is that many just don't know how.

Web developers deal with HTTP and APIs all the time, they understand this. But I suspect that a lot of people don't really understand (or want to understand) build systems, compilers, etc. deeply. "I just want to press the green button so that it runs".

Cthulhu_
0 replies
1h50m

Counterpoint, most monoliths are built like that; I wonder if they think that pressing a green button is too easy, like, it HAS to be more complicated, we HAVE to be missing something.

dns_snek
6 replies
7h38m

How do you square that with the fact that shit usually hits the fan precisely because of this complexity, not in spite of it? That's my observation & experience, anyway.

Added bits of "resiliency" often add brand new, unexplored failure points that are just ticking time bombs waiting to bring the entire system down.

Eridrus
2 replies
2h58m

Microservices almost always increase the amount of partial failures, but if used properly can reduce the amount of critical failures.

You can certainly misapply the architecture, but you can also apply it well. It's unsurprising that most people make bad choices in a difficult domain.

Tainnor
1 replies
2h5m

Fault tolerance doesn't necessarily require microservices (as in separate code bases) though, see Erlang. Or even something like Unison.

But for some reason it seems that few people are working on making our programming languages and frameworks fault tolerant.

Eridrus
0 replies
21m

Because path dependence is real so we're mostly building on top of a tower of shit. And as computers got faster, it became more reasonable to have huge amounts of overhead. Same reason that docker exists at all.

bluGill
1 replies
6h58m

Not adding that resiliency isn't the answer though - it just means known failures will get you. Is that better than the unknown failures because of your mitigation? I cannot answer that.

I can tell you 100% that eventually a disk will fail. I can tell you 100% that eventually the power will go out. I can tell you 100% that even if you have a computer with redundant power supplies each connected to separate grids, eventually both power supplies will fail at the same time - it will just happen a lot less often than with a regular computer not on any redundant/backup power. I can tell you that network cables do break from time to time. I can tell you that buildings are vulnerable to earthquakes, fires, floods, tornadoes and other such disasters. I can tell you that software is not perfect and eventually crashes. I can tell you that upgrades are hard if any protocol changed. I can tell you there is a long list of other known disasters that I didn't list, but a little research will discover them.

I could look up the odds of the above. In turn this allows calculating the cost of each mitigation against the likely cost of not mitigating - but this is only statistical: you may decide something statistically cannot happen and it does anyway.

What I cannot tell you is how much you should mitigate. There is a cost to each mitigation that needs to be compared to the value.
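
The mitigation-versus-value comparison above can be made concrete as a toy expected-cost calculation (every probability and dollar figure here is invented):

```python
# Odds and costs for one hypothetical failure mode.
annual_failure_probability = 0.02   # chance per year that it happens
cost_of_outage = 500_000            # what it costs if it does
cost_of_mitigation = 8_000          # yearly cost of the mitigation

# Expected annual loss without the mitigation.
expected_annual_loss = annual_failure_probability * cost_of_outage

# Mitigate when the expected loss exceeds the mitigation's cost,
# remembering that this is only statistical: a "cannot happen"
# event can still happen.
worth_mitigating = expected_annual_loss > cost_of_mitigation
```

With these made-up numbers the expected loss (10,000/year) exceeds the mitigation cost (8,000/year), so the mitigation pays; flip the numbers and it doesn't.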

camgunz
0 replies
6h15m

Absolutely yeah, these things are hard enough to test in a controlled environment with a single app (e.g. FoundationDB) but practically impossible to test fully in a microservices architecture. It's so nice to have this complexity managed for you in the storage layer.

troupe
0 replies
5h44m

How do you square that with the fact that shit usually hits the fan precisely because of this complexity

The theoretical benefit may not be what most teams are going to experience. Usually the fact that microservices are seen as a solution to a problem that could more easily be solved in other much simpler ways, is a pretty good indication that any theoretical benefits are going to be lost through other poor decision making.

manicennui
0 replies
4h35m

Except that most people build microservices in a way that ignores the reality of cloud providers and the fact that they are building (more) distributed systems, and often end up with lower resiliency.

danielovichdk
0 replies
1h44m

Microservices is more about organisation than it is about technology.

And that is why developers have so much trouble getting it right. They can't without having the organisational fundamentals in place. It is simply not possible.

The architectural constraints of microservices will show the organisational weaknesses in a much higher rate because of the pressure it puts on having the organisation be very strict about ownership, communication and autonomy.

It takes a higher level of maturity as an organisation to enable the benefits of microservices, which is also why most organisations shouldn't even try.

Stop all the technical nonsense because it won't solve the root cause of the matter. It's the organisation. Not the technology

nijave
8 replies
7h56m

Microservices can help with performance by splitting off performance-critical pieces, allowing you to rewrite them in a different stack or language (Rust or Go instead of Ruby or Python)

But yeah, they also tend to explode complexity

dns_snek
3 replies
7h45m

An important point that people seem to forget is that you don't need microservices to invoke performant native code, just a dynamic library and FFI support. The entire Python ecosystem is built around this idea.
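
As a minimal sketch of that idea, here is Python calling a native function through `ctypes`: an in-process FFI call with no service boundary, no network hop, and no (de)serialization. This uses the system C math library, so it assumes a typical Linux/macOS environment where `libm` is available.

```python
import ctypes
import ctypes.util

# Locate and load the shared C math library (libm).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature so ctypes converts arguments correctly:
#   double sqrt(double x);
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

# An ordinary function call into native code.
result = libm.sqrt(2.0)
```

The same mechanism is how NumPy, Pillow, and friends get native speed without any of their hot paths living behind a service boundary.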

nijave
1 replies
7h13m

You don't need it but you can also explode your repo, test, and build complexity when it'd be easier to keep them isolated.

For instance, you might not want to require all developers to have a C toolchain installed with certain libraries for a tiny bit of performance-optimized code that almost never gets updated.

dns_snek
0 replies
5h48m

I don't know, that seems like confusion of runtime and code organization boundaries. Adding a network boundary in production just to serve some code organization purpose during development seems completely unnecessary to me.

For development purposes you could build and distribute binary artifacts just like you would for any other library. Developers who don't touch native code can just fetch the pre-built binaries corresponding to current commit hash (e.g. from CI artifacts).

bunderbunder
0 replies
7h34m

This is why Python stole enterprise big data and machine learning from Java. It actually has a higher performance ceiling for certain specific situations because, almost uniquely among garbage collected high-level languages, it can call native code without marshaling or memory pinning.

rqtwteye
1 replies
7h7m

I am good with splitting off certain parts as services once there is a performance problem. But doing microservices from the start is just a ton of complexity for no benefit and most likely you'll get the service boundaries wrong to some degree so you still have to refactor when there is a need for performance.

bluGill
0 replies
6h55m

Often you can get your boundaries close enough. There will always be cases of "if you knew then what you know now". However you don't need to be perfect, just close enough. Web apps have been around in all sizes for 30 years - there is a lot of cultural knowledge. Do not prematurely pessimize just because we don't know what is perfect.

I'm not saying microservices are the right answer. They are a useful tool that sometimes you need and sometimes you don't. You should have someone on your team with enough experience to get the decisions close enough up front. This isn't anything new.

danmaz74
0 replies
1h26m

But you don't need micro services for that. You can always split things when useful, the issue with microservices is the idea that you should split things also when not necessary.

AlchemistCamp
0 replies
5h5m

GitHub scaled all the way to acquisition with Ruby on Rails, plus some C for a few performance-critical modules, and it was a monolith.

It doesn’t take a microservice to allow rewriting hot paths in a lower level language. Pretty much everything has a C-based FFI. This is what makes Python useable for ML—the libraries are written in C/C++.

jakey_bakey
3 replies
6h15m

Surely one could split a shared monolith into many internal libraries and modules, facilitating ownership?

troupe
0 replies
5h39m

Yes, but you are still dealing with situations where other teams are deploying code that you are responsible for. With microservices you can always say, "our microservice wasn't deployed, so it is someone else's problem."

But I think you are pointing out one of the reasons most places don't get benefits from microservices. If their culture doesn't let them do as you describe with a monolith, the overhead of microservices just bring in additional complexities to an already dysfunctional team/organization.

manicennui
0 replies
4h34m

It's crazy to me how opposed to building libraries everyone seems these days. We use a fuckton of libraries, but the smallest division of software we'll build is a service.

lmm
0 replies
18m

It's difficult to own a library end-to-end. Ownership needs to include deployment and runtime monitoring.

wg0
1 replies
5h53m

You can scale those things with libraries also. The very browser that you're using to read this comment is an example of it. FreeType, Hurfbaz, Pango, Cairo, Uniscribe, GDI, zlib, this and that, and a deep, deep dependency tree built by people who have never talked to each other directly, other than through the documentation of their libraries - and it works perfectly well.

I assure you 98% of the companies have simpler and shallower code base than that of a modern A class browser.

Microservices was a wrong turn in our industry for 98% of the use cases.

Tainnor
0 replies
2h13m

Hurfbaz

This made me chuckle. :) It's HarfBuzz. Yours sounds like a goblin.

jmull
1 replies
45m

In a shared monolith however 0 people own it because everyone owns it and the repo becomes a disaster faster than you can say "we need enterprise caching".

* I give two teams one repo each and tell each one: "This is your repo, keep it clean."

-or-

* I give two teams one folder each in a repo and tell each one: "This is your folder, keep it clean."

If you've got a repo or folder (or anything else) that no one is responsible for, that's a management problem, and micro services won't solve it.

Repos (and folders) don't really have anything to do with organizing a complex system of software -- they are just containers whose logical organization should follow from the logical organization of the system.

Microservices back you into a position where, when the code of one team calls the component of another team, it has to be high-latency, fault-prone, low-granularity, config-heavy. Some stuff falls that way anyway, so no problem in those cases, but why burn that limitation into your software architecture from the start? Just so you don't have to assign teams responsibility for portions of a repo?
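For what it's worth, repo hosts can already enforce that kind of per-folder responsibility mechanically, e.g. a CODEOWNERS file on GitHub or GitLab (the org and team names below are hypothetical):

```
# .github/CODEOWNERS - hypothetical teams; changes under each path
# require review from that folder's owning team
/billing/   @acme/billing-team
/payments/  @acme/payments-team
/shared/    @acme/platform-team
```

So "who owns this folder" can be just as explicit, and just as enforced, as "who owns this repo".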

leeoniya
0 replies
29m

i think doing releases, deployments, and rollbacks is trickier in a monorepo. with multiple services/repos, each team can handle their own release cycles, on-call rotation, and only deal with the bits, tests, and failures they wrote.

you lose some CPU/RAM/IO efficiency, but you gain autonomy and resilience, and faster time to resolution.

https://blog.nelhage.com/post/efficiency-vs-resiliency/

e.g. at Grafana we're working through some of these decoupling-to-microservices challenges, because the SREs that deploy to our infra should not need to deal with investigating and rolling back a whole monolith due to some regression introduced by some specific team to a core datasource plugin, or frontend plugin/panel. the pain at scale is very real, and engineering hours are by far more expensive than the small efficiency loss you get by over-provisioning the metal a bit.

chasd00
1 replies
5h3m

And an engineering coordination strategy.

i've always felt the org chart defines the microservice architecture. It's a way to keep teams out of each other's hair. When you have the same dev working on more than one service then that's an indication you're headed for trouble.

wesselbindt
0 replies
3h55m

That's not just a feeling, it's a direct consequence of Conway's law. You feel correctly.

otabdeveloper4
0 replies
3h11m

Your pipe has to be big enough for all of it.

What do you mean by "pipe" here? It's easier to share CPU and network bandwidth across monolith threads than it is across microservice instances. (In fact, that is the entire premise of virtualization - a VM host is basically a way to turn lots of disparate services into a monolith.)

mhh__
0 replies
3h37m

I view micro services as a risk mainly in a Conway sense rather than technology.

Most companies can run on a VPS.

duped
0 replies
6h40m

Microservices aren't a performance strategy

Who thinks that more i/o and more frequent cold starts are more performant?

Where I see microservices as very useful is for elasticity and modularization. It's probably slower than a monolith at your scale, but you don't want to fall over when loads start increasing and you need to scale horizontally. Microservices with autoscaling can make that much easier.

But of course, updating services can be a nightmare. It's a game of tradeoffs.

danenania
0 replies
3h30m

I mostly agree. I'd add though that isolation can be a legitimate reason for a microservice. For example, if you have some non-critical logic that potentially uses a lot of CPU, splitting that out can make sense to be sure it's not competing for CPU with your main service (and bringing it down in the pathological case).

Similarly, if you have any critical endpoints that are read-only, splitting those out from the main service where writes occur can improve the reliability of those endpoints.

camgunz
0 replies
6h16m

Microservices aren't a performance strategy. They are a POTENTIAL cost saving strategy against performance.

Yeah they were a way to handle databases that couldn't scale horizontally. You could move business logic out of the database/SQL and into Java/Python/TypeScript app servers you could spin up. Now that we have databases like BigQuery and CockroachDB we don't need to do this anymore.

throwawayanothe
14 replies
9h56m

Writes simple and then drops GraphQL and K8s.

disgruntledphd2
6 replies
9h53m

Both of which are justified in the post. Like, complexity where it makes sense is a good thing, especially if that complexity brings benefits.

Complexity for the sake of complexity is foolish.

dawnofdusk
5 replies
9h35m

Enough of this strawman one-liner "complexity for the sake of complexity". Unnecessary complexity is introduced because someone once thought it was necessary. It may not be necessary anymore, or (even worse) it might have not even been necessary at the time and the person who introduced it was just wrong. But all complex architectures start because people think it will bring benefits.

disgruntledphd2
2 replies
8h41m

Agreed, but I rarely see this work out, even in the short term. It certainly seems like complexity for its own sake, and someone once told me that our system was embarrassingly simple so that's why they introduced a load of complexity. Sigh.

mrkeen
1 replies
7h9m

someone once told me that our system was embarrassingly simple so that's why they introduced a load of complexity.

This exceeds my ability to suspend disbelief.

djur
0 replies
2h28m

You can't imagine someone saying "look at this system, it's totally basic, programming 101, clown shoes, baby's first web service, probably made by a PHP coder, not enterprise-class at all, we need to put on our big-boy pants, I read some whitepapers from thought leaders at FAANG and we need to implement..."

jerf
1 replies
7h53m

"The person who introduced it was just wrong" is doing an awful lot of heavy lifting in your post. I've seen "this is just the only thing the programmer knows" many times, for instance, in which case one can argue whether an actual "decision" is being made. Certainly "this is the tech the engineer wants to learn" is a real problem in the world, in which case again it's rather debatable whether "necessity" ever crossed their mind as a concern. "I was taught in school this is the one and only way software is designed" is common with fresh grads and they aren't really making a decision because they don't see a decision to be made.

You seem to be implying that all complexity is the result of an informed decision made by someone but this does not match my experience in the field.

disgruntledphd2
0 replies
7h24m

You seem to be implying that all complexity is the result of an informed decision made by someone

My life would be much easier if this were true. One day, maybe....

nunez
2 replies
8h49m

You can absolutely use Kubernetes to implement a dead-simple three-tier app with minimal ops overhead, especially if you're using a managed cluster.

rahen
0 replies
8h36m

Just like you can use a fully orchestrated API-driven cloud IaaS to run a simple binary.

Show me your problem, I'll show you how to solve it with the least amount of code and dependencies.

klabb3
0 replies
7h54m

And if something goes wrong, is it simple to debug and understand? Can I reproduce issues locally? Happy path simplicity is easy.

CuriouslyC
1 replies
9h4m

As someone who dissed GraphQL for a long time, Hasura/Apollo are in a really good place now, the amount of code you don't have to write and the performance/features you get for free are really compelling. The setup isn't conceptually simple, but it sure does keep your codebase lean.

mrkeen
0 replies
8h15m

Yes, but that isn't the thrust of the article.

"Simple architectures" and "boring technology" are slogans meant to keep technologies like GraphQL out of your stack. Writing less code and getting features for free is how one describes "exciting technology".

throwaway2037
0 replies
7h24m

<smh> Agree. No need for the throwaway on this one!

pharmakom
0 replies
8h5m

GraphQL is less complicated than querying dozens of interdependent REST queries.

Shrezzing
14 replies
9h34m

I'd argue that this article could be read more like "start as a monolith, and then move towards microservices when sensible", except the penny hasn't fully dropped for the author that the sensible time for their organisation is right now.

The company appears dogmatically locked into their idling python code, their SQL making unpredictable commits, and their SQL framework making it difficult to make schema migrations.

This is a financial services company describing data-integrity bugs in their production platforms.

wavemode
9 replies
8h53m

In what way does "moving towards microservices" relate to solving "data-integrity bugs"?

I would argue the opposite - the more distributed you make the system, the more subtle consistency bugs tend to creep in.

DonnyV
8 replies
8h29m

For a financial services company they should be using a compiled language. Something like C# or Java or Rust or Go with Postgres.

quaunaut
6 replies
7h59m

Why? What inherent advantage do those languages have with financial logic?

notpachet
3 replies
7h13m

Better static analysis, for one.

quaunaut
2 replies
5h13m

And that helps in financial analysis how?

notpachet
0 replies
4h46m

I didn't say anything about financial analysis. But static typing is a good defense mechanism against type errors at runtime (I don't think anyone would argue against that). When you're running a financial services product, the cost of a type error in production can be dramatically higher than in non-financial code. Speaking from firsthand experience.

IshKebab
0 replies
4h39m

I don't think it's specific to financial services. He just meant that financial services is an area where you really don't want bugs! And one of the easiest ways to eliminate entire classes of bugs is to use static typing.

DonnyV
1 replies
7h10m

Python will let you use float division on integers. Typed, compiled languages won't do that. This would be solved by using a double data type, but Python doesn't have it.

quaunaut
0 replies
5h13m

You'd almost certainly just use a library dedicated to mitigating this problem though, of which there are many.
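One concrete example of such a library is Python's standard-library decimal module (no third-party dependency needed), which avoids binary-float rounding in money math:

```python
# Sketch: exact decimal arithmetic for financial values via the stdlib.
from decimal import Decimal

# Binary floats carry representation error:
print(0.1 + 0.2)                        # 0.30000000000000004

# Decimal arithmetic is exact for decimal quantities:
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```

Construct Decimals from strings (not floats) so the representation error never enters in the first place.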

artimaeis
0 replies
6h39m

He's been talking about the problems/solutions around Wave for a while. Every time, I can't help but think that if they'd just started with C# or Java, maybe Go or Rust, they'd be in a better position.

Here's what I see them as providing:

- Simple async, no wasted CPU on I/O

- Strong typing defaults

- Reliable static analysis

Half of the "pros" he lists for using GraphQL are just provided out of the box using ASP.NET Core with NSwag to document the endpoints. If they want to keep the client-side composability, create some OData endpoints and you've got it.

> Self-documentation of exact return type

> Code generation of exact return type leads to safer clients

> Our various apps (user app, support app, Wave agent app, etc.) can mostly share one API, reducing complexity

> Composable query language allows clients to fetch exactly the data they need in a single packet roundtrip without needing to build a large number of special-purpose endpoints

Bias flag: I'm primarily a C#/.NET developer. I came up on python and ruby but have had more success in my region getting paid to work on .NET code.

mousetree
1 replies
9h6m

These are all specific problems that can be individually solved. Personally I don’t see how changing their architecture or programming language would solve those

nijave
0 replies
7h17m

In addition, they risk introducing new problems they already solved with their current setup

lstodd
0 replies
7h58m

Where does the "idling python code" come from? If it blocks, it blocks, that's not "idling". And I doubt they are running a process per core.

jakewins
0 replies
4h47m

The author is ex Microsoft, Google and was a senior staff eng at Twitter; i don’t know them but my experience with their blogging over the last decade is that they seem generally very well informed. To me it seems unlikely “the penny hasn’t dropped”, like you say.

On the actual criticism you’re raising: In what concrete way would moving to micro services help with reducing data integrity bugs? In my experience I’d expect the exact opposite - micro services systems generally being harder to test and generally having more possible interleavings and partial failure paths.

rugger
11 replies
9h43m

Monolith is fine if you have a fairly simple process for manipulating your data. Like posting an article, then a comment, with some moderation thrown in.

But when you start adding business rules, and start transforming your data and moving it around, then your monolith will become too complex and often too expensive to run. Lots of moving parts tightly coupled together, long-running transaction wrapping multiple joined tables etc. Rolling out new features will become very challenging as well.

Something like event sourcing is more complex to set upfront than a monolith, but at least it offers the way to add scale and features later without creating a combinatorial explosion...

rugger
2 replies
8h3m

A few comments point out that replacing a monolith with micro services doesn't reduce complexity. I agree 100%.

That's why I mentioned the Event Sourcing pattern, not "microservices". Think of a single event log as the source of truth where all the data goes, and many consumer processes working in parallel alongside it, picking only those events (and the embedded data) that concern them, reacting to them, then passing them on without knowing what happens later. Loosely coupled, small, self-sufficient components that you can keep on adding, one next to another, without increasing the complexity of the overall system.

Maybe Event Sourcing/CQRS can be called "microservices done right", but that's definitely not those microservices (micro-monoliths?) everyone is talking about.
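A toy in-memory sketch of that shape (names are illustrative, not a real event-sourcing framework): one append-only log, and consumers that derive their own state from only the event types they care about.

```python
from collections import defaultdict

log = []  # single source of truth: an append-only event log

def append(event_type, **data):
    log.append({"type": event_type, **data})

def project_balances(events):
    # One consumer's derived state; events it doesn't care about are ignored.
    balances = defaultdict(int)
    for e in events:
        if e["type"] == "deposited":
            balances[e["account"]] += e["amount"]
        elif e["type"] == "withdrawn":
            balances[e["account"]] -= e["amount"]
    return balances

append("account_opened", account="alice")  # ignored by this projection
append("deposited", account="alice", amount=100)
append("withdrawn", account="alice", amount=30)

print(project_balances(log)["alice"])  # 70
```

Adding a new consumer (say, a fraud detector) is just another function over the same log; nothing existing changes.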

klabb3
0 replies
7h29m

ES has the potential but is too immature of a pattern to be simple. It’s a shame, but let’s not pretend.

For instance, an immutable event log is illegal in many cases (PII). So you have to either do compaction on the log or use an outside mutable store.

Another issue is code evolution: if you change your event processing logic at runtime, you get a different state if you replay it. Maybe some users or orders will not be created at all. How do you deal with that? Prevent it with tooling/testing, or generate new events for internal actions?

Also, all the derived state is eventually consistent (so far so good) but for non-toy apps you absolutely need to use derived state to process events, which naively breaks determinism (now your event processing depends on the cursor of the derived state).

Check out Rama[1]. They’re solving this problem, and it’s super interesting but again let’s not fool ourselves – we’re far from mature and boring now.

Something like it could hopefully become boring in the future. Many of these features could probably be simplified or skipped entirely in later iterations of these patterns.

[1]: https://redplanetlabs.com/learn-rama

PeterisP
0 replies
6h36m

"passing it on not knowing what happens later" often is fundamentally not acceptable - you may need proper transactions spanning multiple things, so that you can't finalize your action until/unless you're sure that the "later" part was also completed and finalized.

nickserv
2 replies
9h2m

Ironically, business rules are often much easier done in a monolith, since they tend to require access to basically the entire database, and have impact across your code base.

Not saying it needs to be spaghetti all over the code mind you. Just that it's easier to have a module within the monolith rather than a dedicated service.

metricspaces
0 replies
7h48m

The conversation in this (overall) thread is dancing around issues of experience, competence, and maturity. And the age ceiling forcefully pushed by people like Paul Graham of this very HN. When your entire engineering team are “senior developers” with 3 years of experience (lol) and most don’t even know what a “linker” does, the fundamental feature of the wunder architecture is obscure and not understood.

Building effective monoliths absolutely demands competent DB experts, schema designers & developers. The problem that microservices solved was the sparsity of this sort of talent when demand overshot supply by orders of magnitude.

(In a way the monolith vs microservices debates are echoes of the famous impedance mismatch between object graph runtimes and relational tables and DBMSs.)

kaba0
0 replies
8h57m

Especially that it must be fun hunting down race conditions across microservices. Like, microservices throw away every benefit of a single code base in a single language (and its guarantees). They sometimes make sense, but arguably that sometimes is quite rare.

Cthulhu_
1 replies
8h54m

This is a fallacy. Adding a network boundary does not make your application less complex. If you can't make a "monolith" make sense, thinking you can do it in a microservices architecture is hubris. If you think long-running transactions / multiple tables are difficult, try doing that in a distributed fashion.

jerf
0 replies
7h37m

One of the main "problems" with proposing microservices is that trivially, there is nothing a microservice can do that can not be done by a monolith that is designed with discipline. Over the years my monoliths have grown to look an awful lot like a lot of microservices internally, except that they can still benefit from passing things around internally rather than over networks.

(Meaning generally that performance-wise, they clean the clock of any microservice-based system. Serializing a structure, shipping it over a network with compression and encryption, unserializing it on the other end, performing some operation, serializing the result, shipping it over the network with compression and encryption, deserializing the result, and possibly having to link it back up to internal data structures finds it hard to compete with "the data is already in L1, go nuts".)

I've even successfully extracted microservices from them when that became advantageous, and it was a matter of hours, not months, because I've learned some pretty solid design patterns for that.

If you can't design a well-structured monolith, you even more so can't design a microservice architecture.

It's not wise to try to learn too many lessons about what is good and bad from undisciplined, chaotic code bases. Chaos can be imposed on top of any nominal architecture. Chaotic microservices is not any more fun than a chaotic monolith, it's just unfun in a different way. The relevant comparison is a well-structured monolith versus a well-structured microservice architecture and that's a much more nuanced question.
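A minimal sketch of one such pattern (all names illustrative): internal components call each other through a narrow interface, so an in-process implementation can later be swapped for a remote one without touching callers.

```python
# Sketch: a monolith-internal component behind a narrow interface.
from abc import ABC, abstractmethod

class RateLimiter(ABC):
    @abstractmethod
    def allow(self, user_id: str) -> bool: ...

class InProcessRateLimiter(RateLimiter):
    """In-process implementation: the data is 'already in L1'."""
    def __init__(self, limit: int):
        self.limit = limit
        self.counts: dict[str, int] = {}

    def allow(self, user_id: str) -> bool:
        n = self.counts.get(user_id, 0) + 1
        self.counts[user_id] = n
        return n <= self.limit

# Extracting a service later means writing one more implementation of the
# same interface (one that makes an HTTP/RPC call); callers never change.
limiter: RateLimiter = InProcessRateLimiter(limit=2)
print([limiter.allow("u1") for _ in range(3)])  # [True, True, False]
```

The discipline is in the interface boundary, not in where the implementation runs.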

kaba0
0 replies
8h59m

Why do we need "scale"? The 2nd cheapest Hetzner offering can probably serve a hundred thousand people a basic CRUD app just fine, with the DB running on the same machine. And you can just buy a slightly more expensive machine if you need scale, horizontal scaling is very rarely necessary actually.

Stackoverflow runs on a couple of (beefy) machines only.

gherkinnn
0 replies
9h32m

Most CRUD software is little more than the simple processes you describe. At least intrinsically. And if it isn't, it started off being that.

HumblyTossed
0 replies
7h48m

But when you start adding business rules, and start transforming your data and moving it around, then your monolith will become too complex and often too expensive to run. Lots of moving parts tightly coupled together, long-running transaction wrapping multiple joined tables etc. Rolling out new features will become very challenging as well.

Absolutely NONE of that has to happen if you structure your project/code well.

throwaway2037
10 replies
7h56m

For example, at a recent generalist tech conference, there were six talks on how to build or deal with side effects of complex, microservice-based, architectures and zero on how one might build out a simple monolith.

Cue my favourite talk about microservices: David Schmitz - 10 Tips for failing badly at Microservices [1]

This guy has amazing delivery -- so dry and funny. He spends 45 minutes talking about all of his microservices mistakes!

[1] https://www.youtube.com/watch?v=r8mtXJh3hzM

devjab
9 replies
7h17m

Monoliths aren’t very useful in many organisations where you need to build and connect 300+ systems. They also stop having a simple architecture if you try. Most architecture conferences and talks tend to focus more on the enterprise side of things, and really, why would you need full-time, software-focused architects if you’re building something like Stack Overflow?

I do think things have gotten a little silly in many places with too much “building like we’re Netflix” because often your microservices can easily be what is essentially a bunch of containerised monoliths.

I think the main issue is that your IT architecture has (or should have) very little to do with tech and everything to do with your company culture and business processes. Sometimes you have a very homogeneous focus, maybe even on a single product, in which case microservices only begin to matter when you’re Netflix. Many times your business will consist of tens to thousands of teams with very different focuses and needs, and in these cases you should just never do monoliths unless you want to end up with technical debt that will hinder your business from performing well down the line.

bluGill
4 replies
7h11m

Building like Netflix is better than a random unguided architecture that results from not thinking. It might not be the best for your problem though. If you don't need thousands of servers, then the complexity that Netflix has to put into their architecture to support that may not be worth the cost. However, if you do scale that far you will be glad you chose an architecture proven to scale that large.

However I doubt Netflix has actually documented their architecture in enough detail that you could use it. Even if you hire Netflix architects they may not themselves know some important parts (they will of course know the parts they worked on)

renegade-otter
1 replies
3h45m

It's a very bold assumption that a team that cannot manage a monolith will somehow lay a robust groundwork that will be the foundation of a future Netflix-like architecture.

By the way - Netflix started as a monolith, and so did most other big services that are still around.

The rest faded away, crushed by the weight of complexity, trying to be "like Netflix".

bluGill
0 replies
3h9m

There are much better options than copying someone else. Copying is better than letting anything just happen, but you should do better. You should learn from Netflix, Stack Overflow and the like - no need to remake the same mistakes they did - but your situation is different, so copying isn't right either.

devjab
1 replies
7h5m

I mostly use Netflix as shorthand for having reached a technical point where you need to spread horizontally. As Stack Overflow shows, you can scale rather far without doing so if your product isn’t streaming billions of gigabytes of video to the entire world through numerous platforms. So what I mean by it is that many of us will never reach those technical requirements. Sorry that I wasn’t clear. I don’t disagree with what you say at all, but I do think you can very easily “over-design” your IT landscape. Like, we have a few Python services which aren’t built cleverly and run in Docker containers without clever monitoring. But they’ve only failed once in 7 years, and that was due to a hardware failure on a controller that died 5 years before it should’ve.

bluGill
0 replies
6h53m

That is how I use Netflix or stackoverflow. Choosing either (despite how different they are!) is better than random unstructured building code with no thought to the whole system.

CharlieDigital
3 replies
7h5m

   > Monoliths aren’t very useful in many organisations where you need to build and connect 300+ systems
Seems like the mistake is building 300+ systems instead of a handful of systems.

A Google team published a paper on this last year: https://dl.acm.org/doi/10.1145/3593856.3595909

    > When writing a distributed application, conventional wisdom says to split your application into separate services that can be rolled out independently. This approach is well-intentioned, but a microservices-based architecture like this often backfires, introducing challenges that counteract the benefits the architecture tries to achieve. Fundamentally, this is because microservices conflate logical boundaries (how code is written) with physical boundaries (how code is deployed). In this paper, we propose a different programming methodology that decouples the two in order to solve these challenges. With our approach, developers write their applications as logical monoliths, offload the decisions of how to distribute and run applications to an automated runtime, and deploy applications atomically. Our prototype implementation reduces application latency by up to 15× and reduces cost by up to 9× compared to the status quo.
Worth the read.

devjab
1 replies
5h13m

Seems like the mistake is building 300+ systems instead of a handful of systems.

But that’s not what happens in enterprise organisations. 90% of those are bought “finished” products, which then aren’t actually finished, and you can be most certain that almost none of them are capable of sharing any sort of data without help.

Hell, sometimes you’ll even have 3 of the same system. You may think it’s silly, but it is what it is in non-tech enterprise where the IT department is viewed as a cost-center similar to HR but without the charisma and the fact that most managers think we do magic.

Over a couple of decades I’ve never seen an organisation that wasn’t like this unless it was exclusively focused on doing software development, and even in a couple of those it’s the same old story because they only build what they sell and not their internal systems.

One of the things I’m paid well to do is help transition startups from their messy monoliths into something they can actually maintain. Often with extensive use of the cheaper external developers since the IT department is pure cost (yes it’s silly) and you just can’t do that unless you isolate software to specific teams and then set up a solid architecture for how data flows between systems. Not because you theoretically can’t, but because the teams you work with often barely know their own business processes. I currently know more about specific parts of EU energy tariffs than the dedicated financial team of ten people who work with nothing else, because I re-designed some of the tools they use and because they have absolutely no process documentation and a high (I’m not sure what it’s called in English, but they change employees all the time). Which is in all regards stupid, but it’s also the reality of sooo many places. Like, the company recently fired the only person who knows how HubSpot works for the organisation during downsizing… that’s the world you have to design systems for, and if you want it to have even a fraction of a chance to actually work for them, you need to build things as small and isolated as possible, going all in on team topologies even if the business doesn’t necessarily understand what that is. Because if you don’t, you end up with just one person who knows how the HubSpot integrations and processes work.

It’s typically the same with monoliths: they don’t have to be complicated messes that nobody knows how they work… in theory… but then they are built and maintained by a range of variously skilled people over 5 years, and suddenly you have teams who hook directly into the massive mess of a DB with their Excel sheets. And whatnot.

SamuelAdams
0 replies
4h9m

a high (I’m not sure what it’s called in English, but they change employees all the time).

To help you out, the word is attrition or turnover. Turnover would be more appropriate if the roles are refilled, attrition if the roles are never replaced.

https://www.betterup.com/blog/employee-attrition

hcarvalhoalves
0 replies
45m

Full-circle back to mainframe programming model.

mrkeen
10 replies
8h21m

our architecture is a standard CRUD app architecture, a Python monolith on top of Postgres

The rest of the article goes into why the above is not the case.

The language was chosen because it was the CTO's pet, not for simplicity's sake. It wasn't the right choice: "its concurrency support, performance, and extensive dynamism make us question whether it’s the right choice for a large-scale backend codebase"

Synchronous/blocking wasn't chosen for simplicity's sake - the async libraries were buggy! To work around the performance issue:

1) a "custom protocol that runs on top of UDP" was written. No thanks.

2) putting work onto a queue. Event-sourcing, anyone?

we’re having to split our backend and deploy on-prem to comply with local data residency laws and regulations

It's good that the software was a monolith, otherwise it would have been difficult to split apart /s.

Software is a mix of incidental complexity and essential complexity. If you shun certain things as 'too complicated', but they end up being in the essential complexity bucket, you're just going to be building them yourself, slower and with more bugs.

Imagine how different the article would have been had they picked technology whose concurrency, performance and typing worked in their favour.

yowlingcat
9 replies
8h17m

Which language with those characteristics will have the same package ecosystem as Python so that your engineering team isn't left constantly reinventing the wheel? Do you think it's worth rebuilding stable packages that do a job well just because you don't think the language it was written in was perfect?

xnorswap
4 replies
7h51m

One answer to your question is C# / .Net.

Combines excellent concurrency primitives, very good performance, and a deep reservoir of packages and libraries.

Of course it's almost never worth switching languages for an existing product, you just get left with a mess of half-old and half-new tooling and invariably get left having to support both languages.

throwaway2037
3 replies
7h16m

Real question: From your specific comment, are there any advantages of C# vs Java here?

xnorswap
0 replies
6h48m

I'm not familiar enough with Java to answer that question, you may well feel the same about Java, and it may well be true.

neonsunset
0 replies
6h21m

Way worse build, packaging, and dependency-management experience. Worse ORM experience. Making simple single-file executables requires way more ceremony versus 1-3 toggles, and is sometimes not possible at all. A lot of small and large papercuts in places where Java simply does worse: interop is more expensive, and reaching the performance ceiling is way more difficult due to the lack of low-level primitives. C# has terser syntax, especially the functional parts of it, and more lightweight tooling in the SDK; a lot of the ceremony around just making sure your dev environment works does not exist in .NET (as long as you don’t start putting everything into custom paths). Building a complete product is dotnet publish, running an application is dotnet run, both of which require more setup and scripting with Java.

bebop
0 replies
5h24m

Not the parent, but C# and Java are very similar in their feature set. One thing that I find interesting about the respective communities is, Java tends to develop JSR's where a standard is worked on by multiple vendors and published essentially as an interface. Many vendors then provide an implementation of the interface.

In C# Microsoft often provides the most popular tooling for common problems. For instance, in web development, Microsoft publishes ASP.net and Entity Framework. In Java there is Jax-rs (JSR 339) and JPA (JSR 317).

So depending on your preference for a strong centralized environment provided by a single vendor, or a decentralized standards based solution with many vendors, you might lean towards Java or C#.

jerf
2 replies
7h49m

Python has good library support, but sometimes it seems like Python advocates speak as if it is somehow uniquely equipped with libraries that nobody else has. There are a dozen other languages with well-established libraries for everything mrkeen mentioned, as well as healthy ecosystems that can be expected to support everything else a financial app may reasonably need.

Python wasn't a bad choice but even several years ago it wouldn't even remotely have been the only viable choice, and if you ignore "what the people involved already knew" probably wasn't the best. In real life, you can't ignore that and it looms large in any real decision, so I'm not proposing that should have been the metric. But Python's disadvantages can be pretty significant. "Dynamic typing" and "deals with money" is a combination that would make me pretty nervous. YMMV.
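To make the "dynamic typing and deals with money" worry concrete, here's a minimal sketch (my own illustration, not anyone's production code): plain floats silently lose cents, while Decimal plus type hints gives a static checker something to enforce.

```python
from decimal import Decimal

# Float money math silently loses cents; nothing in a dynamically
# typed codebase stops a float from creeping into a balance.
print(0.1 + 0.2 == 0.3)  # False: binary floats can't represent 0.1 exactly

# Decimal keeps exact cents, and the type hints let a checker such as
# mypy flag call sites that pass a float by mistake.
def add_fee(balance: Decimal, fee: Decimal) -> Decimal:
    return balance + fee

print(add_fee(Decimal("0.10"), Decimal("0.20")) == Decimal("0.30"))  # True
```

Without a checker in CI, of course, the hints are only documentation, which is the crux of the nervousness.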

throwaway2037
1 replies
7h17m

But Python's disadvantages can be pretty significant. "Dynamic typing" and "deals with money" is a combination that would make me pretty nervous. YMMV.

This is an interesting comment. I do not disagree. In your opinion, if not Python, then what languages would be acceptable to you?

jerf
0 replies
6h30m

Starting today: Go, C#, Rust, possibly TypeScript (not my personal choice but probably acceptable), Java (which despite being poorly done in a lot of places does have the tooling necessary for this if used by decent programmers). C++ with a super-strong static analysis tool like Coverity used from day one. (I consider C or C++ plus something like Coverity to be essentially a different language than using just a C or C++ compiler.)

And that's sticking to reasonably mainstream languages. F#, D, Scala, Haskell, Lisp with strong typing around the money parts (or Clojure), probably a couple of others as well. These have problems with having smaller communities and being harder to hire for, but I'd bet that pretty much all of them still have well-supported libraries for nearly everything a financial company would need to do. (And that you could expect general open source libraries for. Super-specific internal financial protocols may not have libraries in general, or may only be in Java, etc.)

Also I make no exclusivity claims that these are the only appropriate languages, so if you've got a favorite statically-typed language feel free to just mentally edit it in without asking me how I could possibly miss it. (I'm not familiar enough with the Objective-C family to have informed opinions about it, for instance.)

Dynamically-typed languages with progressive typing used rigidly around the money might be OK, but progressive typing makes me nervous and really seems like a bodge in a lot of places. Reading people's experiences with them, even when the person writing the experience is nominally positive on the experience, has not generally impressed me.

philote
0 replies
5h34m

Honestly IMO Python does a good job these days without engineers having to reinvent the wheel. I've used asyncio with Starlette and/or FastAPI in larger-volume systems without much issues. I'm curious what specific issues the author had with async Python, and when that was since I don't see any such issues in my experience.

2d8a875f-39a2-4
9 replies
8h38m

I agree with the general sentiment that simple architectures are better and monoliths are mostly fine.

But.

I've dealt with way too many teams whose shit is falling over due to synchronous IO even at laughably low volumes. Don't do that if you can avoid it.

"Subtle data-integrity bugs" are not something we should be discussing in a system of financial record. Avoiding them should have been designed in from the start.

lelanthran
6 replies
8h5m

shit is falling over due to synchronous IO even at laughably low volumes.

Like? I ask because even synchronous IO lets you serve millions of requests per month on a cheap VPS.

That's enough in the b2b space to keep a company of 1000 employees in business.

marginalia_nu
2 replies
6h54m

To be fair, a million requests per month is 20 requests per minute...
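For what it's worth, that's a round-number figure; the back-of-envelope arithmetic (assuming a 30-day month) comes out slightly higher:

```python
# "A million requests per month" as a per-minute rate, 30-day month assumed.
requests_per_month = 1_000_000
minutes_per_month = 30 * 24 * 60        # 43,200 minutes
rate = requests_per_month / minutes_per_month
print(round(rate, 1))                   # 23.1 requests per minute
```

Either way, the point stands: it's a trivial load for almost any hardware.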

lelanthran
1 replies
6h39m

To be fair, a million requests per month is 20 requests per minute...

Which, in B2B, is insanely profitable. At 20 rqsts/min, for a paying customer paying you $200/m/user, those numbers are fantastic!

I can only dream of having those numbers!

marginalia_nu
0 replies
6h37m

Sure, but in terms of load, dealing with the requests, a single Raspberry Pi 2 will barely register that even if you deploy it with a CGI stack.

2d8a875f-39a2-4
1 replies
7h35m

Until one of your IO destinations develops some latency. Or your workflow adds a few more sync IOs into each request. Or you suddenly run outta threads.

Then even if you're only at millions per month you've probably got problems.

lelanthran
0 replies
6h41m

Then even if you're only at millions per month you've probably got problems.

Not in my experience. You may be using metrics for B2C websites which make $1 for each 1 million hits.

B2B works a little differently: you're not putting everyone on the same box, for starters.

I did some contract maintenance for a business recently (had no tech staff of their own, had contracted out their C# based appdev to someone else decades ago and just need some small changes now), and a busy internal app serving about 8000 employees was running just fine off a 4GB RAM VPS.

Their spend is under $100/m to keep this up. No async anywhere. No performance problems either.

So, sure, what you say makes sense if your business plan is "make $1 off each 1 million visitors". If your business plan is "sell painkillers, not vitamins" you need maybe 10k paying users to pay yourself a f/time salary.

philipbjorge
0 replies
4h23m

Recently I worked on a project that was using synchronous IO in an async framework -- That tanked performance immediately and effectively meant that the application could service one request at a time while subsequent requests started queuing.

(Agreed that synchronous IO can serve hundreds of requests per second with the right threading model)
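A minimal sketch of that failure mode (using sleeps as stand-ins for IO, not the actual project's code): one synchronous call inside an async handler stalls the event loop, so "concurrent" requests are silently serialized.

```python
import asyncio
import time

async def blocking_handler():
    time.sleep(0.1)           # synchronous call: blocks the whole event loop

async def async_handler():
    await asyncio.sleep(0.1)  # yields to the event loop while "waiting on IO"

async def run_five(handler):
    # Fire five "requests" concurrently and time the batch.
    start = time.perf_counter()
    await asyncio.gather(*(handler() for _ in range(5)))
    return time.perf_counter() - start

blocking_time = asyncio.run(run_five(blocking_handler))     # ~0.5s: serialized
concurrent_time = asyncio.run(run_five(async_handler))      # ~0.1s: overlapped
print(f"blocking: {blocking_time:.2f}s, async: {concurrent_time:.2f}s")
```

The fix is either to use genuinely async IO libraries or to push blocking work onto a thread pool (e.g. asyncio's run_in_executor).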

renegade-otter
0 replies
3h41m

So they cannot get data integrity constraints done properly with a single database? Wait until they have to do seven. Also, sounds like not even proper indexes were in place, so database amateur hour.

from-nibly
0 replies
8h28m

Yeah I'm with this guy right up until that last statement. You should have 0 of these. Not an increasing rate of them.

gherkinnn
7 replies
9h57m

I'm going to agree with Dan Luu by asking where I can find more of these sane companies. I want to spend an honest 6h a working day improving a product (Terraform and Webpack are not the product) and spend the rest of my time tending to my garden.

zer00eyz
5 replies
9h31m

You aren't looking for a job.

You're looking for a lifestyle business.

They are great. You will get time in the garden, that's awesome.

Vacation someplace with no phones? Out of the question. Weekend in a 3rd world country with character and food and sketchy internet. Not gonna happen.

You want to optimize for free time in the garden, by all means you can do it, but you lose out in other places and pick up other work (taxes, systems, etc).

Edit: Down vote away, I live this life now, my tomatoes are delicious and I make yogurt too!

lovebes
1 replies
6h54m

May I get some more information on how you went into lifestyle business? Looking to get into that as well.

zer00eyz
0 replies
4h18m

I know several folks with niche business that pay various levels of their bills.

Software for youth sports, photography, asset tracking, vendor tracking, niche issues in CC Processing, facets of insurance and billing (coding).

Niche businesses happen all over the place, and finding one (for me) was a lot of trial and error, that niche business pays my "bills" and I do consulting work (sporadic, interesting and high value) to round it out (and keep me on my game).

Don't think of it as a business right away. You're going to "play": you want to build them quickly, host them cheaply, and toy with selling them. You're managing disappointment and failure, learning lessons, and keeping it FUN. The moment you start dreaming that it's going to "make it big" is the moment you have to reel yourself back to reality. If you can say "what did I learn" and have a list of things you got from it then it was a success. At some point you just find one that clicks and it grows.

nickserv
0 replies
9h8m

Not sure I understand your point. I do my work, doing development work and managing a small team.

Still never work weekends (no phone, no email, no slack), spend time in my garden and just came back from a no-phone vacation.

Salary is above the going rate where I live - the work is remote and salary is based on company headquarters. Taxes are in line with the country I live in.

Not really seeing any downsides here, and as expected morale is quite good overall at work... But finding this company was extremely lucky/difficult.

bluGill
0 replies
8h17m

I learned long ago that I cannot write quality code for 8 hours per day. I need to have several hours of meetings every day just to ensure I don't overdo the coding and write bad code.

Sure I can write code for 14 hours a day, but it will be bad code, sometimes even negative productivity as I introduce so many bugs.

PeterisP
0 replies
6h41m

Running a business, even a "lifestyle business" is so substantially different from a job - requiring all kinds of very different tasks, and risk-taking profile - that it doesn't seem reasonable to assume that someone who's looking for such a job is actually looking to run some business.

corpMaverick
0 replies
8h29m

I am with you. Complex architectures where you have to fight it even for simple changes is a recipe for burn out as you paddle and paddle and you are stuck in the same place.

prisenco
6 replies
7h4m

Arguing for simple architectures is hard.

Everyone wants to argue that yes, they ARE going to need it. Maybe not now but someday, therefore they should have it.

Always reminds me of this scene from Arrested Development.

https://www.youtube.com/watch?v=VVtOkX-gUxU&t=132s

bluGill
5 replies
6h39m

I've been in the situation where someone argued we didn't need something so we went simple - turns out they were wrong (isn't hindsight nice), but by then the system was based around this incorrect assumption and was really hard to rework into the thing we needed.

This is why architecture is hard. It is about making the decisions in advance that you will regret getting wrong.

prisenco
4 replies
6h28m

now the system was based around this incorrect assumption

How big of a userbase and what made it particularly difficult to refactor?

bluGill
3 replies
4h9m

I work on embedded systems so userbase isn't really a concern.

However we have > 10 million lines of C++, and the change would have been a major refactoring of most of them.

prisenco
2 replies
4h3m

So I'm assuming this decision was made quite a while ago? Years, maybe even over a decade?

bluGill
1 replies
3h11m

It took a decade to find the data showing that the decision we made 10 years ago was bad - for those entire 10 years we all thought it was good, until we were looking at a performance issue and realized the root cause was a decision made in the early days that had an unexpected side effect.

prisenco
0 replies
2h44m

Congratulations on your success. Legacy code is never easy to work with, but it's a good problem to have because it means you survived where hardly anyone does.

palata
5 replies
7h27m

I believe it is as easy to over-engineer as it is to under-engineer. Architecture is the art of doing "just enough"; it must be as simple as possible, but as complex as necessary.

But that is hard to do, and it takes experience. And one thing that does not describe the IT industry well is "experience": don't most software developers have less than 5 years experience? You can read all the books you want and pass all the certifications you can like a modern Agile manager, but at the end of the day you will have the same problem they have: experience takes time.

Inexperienced engineers throw sexy tech at everything, inexperienced managers throw bullshit Agile metrics at everything. Same fight.

geodel
1 replies
6h54m

Well, I have seen a lot of people with 20 years of experience which is actually 2 years of experience repeated 10 times.

palata
0 replies
6h21m

That's not what I am saying. I am saying that you cannot make 20 years of experience in 2 years.

You are saying that some people with 20 years of experience are worse architects than others.

valenterry
0 replies
4h57m

Well summarized.

Also, the area of experience matters. I myself for instance have become pretty good at designing systems and infrastructure both from scratch up to corporate level. However, that will not help me too much when it comes to mega-corp level (think: FAANG) and other things such as designing public opensource libraries. (libraries and systems/applications require very different tradeoffs).

bjornsing
0 replies
6h28m

Architecture is the art of doing "just enough"; it must be as simple as possible, but as complex as necessary.

Or as Albert Einstein phrased it: “Everything should be made as simple as possible, but not simpler!”

IshKebab
0 replies
4h44m

In my experience over-engineering is far less of a problem than badly engineering. I see way more total messes than "enterprise" engineering.

glenjamin
5 replies
8h23m

In my opinion the best antidote to overly complex architectures is to have engineering teams and the engineers in them be rewarded based on actual business outcomes.

I suspect the era of VC-money and ZIRP led to a large number of engineers who were disconnected from successful long-term business outcomes, so there was no incentive to simplify.

jedrek
2 replies
8h17m

This only makes sense if the engineering teams own the process from end to end, at which point they stop being engineering teams.

throwaway2037
0 replies
7h39m

I agree.

In my experience, this exists in a few, rare (investment bank) trading floor technology teams. From a distance, it looks like traders who are programmers.

glenjamin
0 replies
7h7m

A sales team gets measured on the sales they make - but they don't own the end to end process of building the thing that they're selling.

It's entirely possible to measure teams on business outcomes without having them own things end-to-end, but to be really effective in this they'll need to collaborate with the other teams across the company - which is generally a desirable outcome.

throwaway2037
1 replies
7h31m

What is this obsession on HN with "everything can be explained by the end of ZIRP" (zero interest rate policy)? Really: even with overnight rates at 5%, the returns are still awful compared to even moderately successful VC firms. And I do not write this as a fanboi/gurl for VCs. (Many have shitty, immoral business practices to enrich themselves at the expense of hard-working non-executive staff.) Also, "end of VC-money": no such thing. They are practically bursting at the seams with uninvested money gathered during ZIRP.

glenjamin
0 replies
7h8m

Are you suggesting that the macroeconomic environment in terms of funding and performance expectations of startups hasn't changed significantly in the last few years?

VC-money and ZIRP is a convenient shorthand for what's changed, neither are terms I think particularly encapsulate what did change, but I think it's very hard to argue that it's still just as easy to get a bunch of funding without a business model. (aside from in AI perhaps?)

simonbarker87
4 replies
9h9m

Our stack is a NestJS server, Handlebars templates for SSR, htmx, and a PostgreSQL database, deployed as a container on GCP Cloud Run with a GitHub Action. It's delightful to work on and very responsive in a way that a React app with the same data load wouldn't be. At this point it would take a pretty massive reason to get me to adopt the SPA approach again.

namaria
2 replies
9h4m

I usually also use Cloud Run for running my containers. But I use Cloud Build instead of GH Actions. I'm curious, what's your reasoning? I prefer my repository to just hold code and run build actions close to where it's deployed.

simonbarker87
0 replies
8h27m

You’re right, I misspoke: I use Cloud Build triggered by a GitHub merge to main or staging. So not an Action, but a trigger from GitHub to GCP.

infecto
0 replies
8h54m

For me, I had used GH Actions before so it was pretty quick to get up and running. I am sure it would not have been much more effort to use a cloud build tool in GCP or AWS, but honestly I don't see much of a pro/con either way.

infecto
0 replies
8h56m

FastAPI, Jinja, htmx and Postgres here. Simple to add features, not sure I would use a SPA or other similar JS heavy framework unless I had a feature requiring it.

bunderbunder
4 replies
7h39m

I think a lot of this comes from the additive cognitive bias, which is really deeply entrenched in most of our thinking.

Here's a concrete example, stolen from JD Long's great NormConf talk, "I'd have written a shorter solution but I didn't have the time", which is itself drawing from "People systematically overlook subtractive changes" by Adams, et al., 2021:

Give people a Lego construction consisting of a large, sturdy plinth with a single small spire supporting a platform on top. The challenge is to get it to reliably support a brick. As is, it will collapse because that little spire is so flimsy. Ask participants to modify the structure so that it will support the brick. Incentivize simpler solutions by saying that each additional brick costs $0.50.

Given just this information, people universally try to fix the problem by adding a bunch more bricks to overbuild the structure, and they spend a lot of energy trying to come up with clever ways to reduce the number of bricks they add. But then the research team makes one small tweak to the instructions: they explicitly point out that removing bricks doesn't cost anything. With that nudge, people are much more likely to hit on the best solution.

perrygeo
2 replies
7h16m

In software, it's clear why we don't prefer subtractive changes - complexity. If there's any chance that the code in question has non-local effects that can't be reasoned about at the call site, it's risky to make subtractive changes.

Additive changes have the advantage of having a corresponding additive feature to test - you don't need to touch or even understand any of the existing complexity, just glue it on top.

So the cost structure is likely inverted from the study you describe. Additive changes are (locally) cheap. Subtractive changes are potentially expensive.

hiAndrewQuinn
1 replies
6h48m

This is a big factor in why Haskell is widely considered a really fun language to refactor in. The enforced purely functional paradigm makes adding nonlocal effects much more annoying, so later contributors can much more confidently reason about the state of their code (assuming you can find any later contributors in such a small community!).

I don't know whether the Lisps have a similar "fun to refactor" quality to them, since while they are in the functional camp, they're totally different and more flexible beasts architecturally.

packetlost
0 replies
5h39m

IMO Lisps are not nice to refactor in that sense, but a dream to modify in general. In the case of Haskell and Rust, the compiler creates really tight and more importantly, global, feedback loops that have pretty good guarantees (obviously not perfect, but they're stronger than most everything else out there), while Lisp has basically no guarantees and forces you to execute code to know if it will even work at all. This doesn't become apparent until you try to do large scale refactors on a codebase and suddenly you actually need good test coverage to have any confidence that the refactor went well, vs in Rust or Haskell, if it compiles there's already a pretty reasonable guarantee you didn't miss something.

Fortunately, Lisp codebases IME tend to be smaller and use small, unlikely-to-need-to-be-modified macros and functions, so large refactors may be less common.

jodrellblank
0 replies
3h46m

I'm not convinced that challenge shows what you say it shows. If someone set me a challenge to "modify this structure to support a brick" and the solution was "take the structure away and put the brick on the ground" I would feel like it was a stupid trick. "Remove the spire and put the brick lower" is less clear, but it's along those lines; "make this church and tower reliably support a clock at the top where everyone can see it", solution: "take the tower away and put the clock on the church roof", no, it's a Captain Kirk Kobayashi-Maru cheat where the solution to the "engineering challenge" is to meta-change-the-goal.

Yes you might get to change the goal in business software development, but you also know that the goal is 'working solutions' not 'build up this existing structure to do X'.

ranjanprj
3 replies
7h16m

Microservices is a deployment pattern, not a development pattern. You could build a monolith and expose its various parts as services, with an Ingress pointing different endpoints at the same monolith. In a Java project, for example, each of these service endpoints within the same monolith would only load up the classes/objects relevant to that service. There is no overhead in terms of memory or CPU in deploying a monolith as microservices exposed by endpoints.
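A minimal sketch of that idea in Python (my own illustration, stdlib WSGI only): one monolithic app with path-based dispatch, where an Ingress or reverse proxy could route /users and /orders to separate deployments of the same image.

```python
from wsgiref.util import setup_testing_defaults

# Two logical "services" living in one codebase. A hypothetical Ingress
# rule could send /users traffic to one deployment of this app and
# /orders traffic to another, without splitting the monolith.
def users_service(environ):
    return b'{"service": "users"}'

def orders_service(environ):
    return b'{"service": "orders"}'

ROUTES = {"/users": users_service, "/orders": orders_service}

def monolith_app(environ, start_response):
    # Standard WSGI entry point: dispatch on the request path.
    handler = ROUTES.get(environ["PATH_INFO"])
    if handler is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    start_response("200 OK", [("Content-Type", "application/json")])
    return [handler(environ)]

# Exercise it in-process, the way a WSGI server would.
environ = {"PATH_INFO": "/users"}
setup_testing_defaults(environ)  # fills in the rest of a valid environ
status = []
body = monolith_app(environ, lambda s, h: status.append(s))
print(status[0], body)  # 200 OK [b'{"service": "users"}']
```

The trade-off the thread debates is whether the routing layer or the team boundary is what makes something a "microservice".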

randomdata
2 replies
7h11m

Micro services is a team organization pattern, emulating the software service model, except within a micro economy (i.e. a single business). In practice, this means that teams limit communication to the sharing of documentation and established API contracts, allowing people to scale without getting bogged down in meetings.

brodouevencode
1 replies
7h2m

Could it be that deployment patterns and team organization patterns are the same thing, especially in this age of build-run teams?

randomdata
0 replies
6h57m

Conway might find some correlation, but strictly speaking, no. A service is not bound to any particular deployment pattern. Consider services in the macro economy. Each individual business providing a service is bound to do things differently. As micro services are merely emulation of macro services in the micro, each team is equally free to do things however they wish.

imiric
3 replies
6h43m

Good article! I like that it suggests a pragmatic approach, and introduces complexity only when it's necessary and makes sense to do so. Often engineers advocating for KISS end up (poorly) reimplementing the complex version, when just adopting the "complex" alternative would've avoided them all the development and maintenance headaches.

That said, "simple" and "complex" are subjective terms, and they're not easily defined. Even when everyone aligns on what they are, complexity can still creep in. So it takes constant vigilance to keep it at bay.

In the infinite wisdom of grug[1]:

given choice between complexity or one on one against t-rex, grug take t-rex: at least grug see t-rex

grug no able see complexity demon, but grug sense presence in code base

[1]: https://grugbrain.dev/#grug-on-complexity

a1o
2 replies
6h35m

I don't like this grugbrain approach because complexity is subjective, as you mentioned, and the grugbrain mentions simply saying "no" instead of telling a dev to articulate why something shouldn't be in the codebase properly.

I mention this because I've been in a lot of situations where, instead of adopting a well-maintained library that everyone has been using for more than a decade, I had to write and maintain my own solution just to appease someone else because they thought it would be less work (it wasn't; maybe for them, definitely not for me).

We need fewer posts in software development where people argue in absolutes.

imiric
1 replies
5h31m

I mean, I think that grug also agrees that saying "ok" to complexity makes sense sometimes. :)

I agree that this is a deep topic that requires a nuanced discussion, but my point is that I like the pragmatic approach mentioned in the article. You see threads here arguing that GraphQL and k8s are not simple, which is true, but I think it's wrong to stubbornly avoid these technologies because of their "complexity" rather than reach for them when the right time comes to use them.

a1o
0 replies
5h15m

Yes, I agree with you, nuance is key. I just have bad experiences of people throwing me sentences stolen from these types of articles as an argument of "why I should not do X" and then these people just never do anything throughout the development cycle just keep arguing as dead weight and failing to deliver cycle after cycle. Sorry, lots of trauma here.

brodouevencode
3 replies
7h6m

I get Dan's desire for minimalistic styling but a little margin around the page would be nice for readability purposes.

But on a more contextual note: I would contend that microservice architectures are in effect much simpler than monoliths. The UNIX philosophy of small tools that do one thing well is still very applicable. It's easier to deploy small changes quickly than the massive deploys that could take significant amounts of time. Deploying to a FaaS is quick and easy for almost all things.

If your product requires complex coordination/orchestration just as an operational modality then I'd question the value of that product design regardless of architecture type implementation, even for web-scale products. AND this is really a different discussion.

lolinder
2 replies
6h57m

The UNIX philosophy of small tools that do one thing well is still very applicable. It's easier to deploy small changes quickly than the massive deploys that could take significant amounts of time.

The UNIX philosophy was conceived in the context of a single computer—with dynamic linking the cost of splitting out a new binary is close to zero, so making each module boundary an executable boundary makes some degree of sense.

In the context of a cloud computing network, there's a lot more overhead to splitting out a million tiny modules. Cloud service provider costs, network latency, defending against the possibility of downtime and timeouts—these are all things that UNIX programs never had to deal with but that are everyday occurrences on the cloud.

The trade-offs are very different and so the UNIX philosophy is not equally applicable. You have to weigh the benefits against the risks.

brodouevencode
1 replies
4h4m

In the context of a cloud computing network, there's a lot more overhead to splitting out a million tiny modules.

Don't get me wrong - this can be taken to an extreme and from my experience the places where MS seems to fail (the service is not used, or hated by the engineering teams) is in scenarios where it was absolute overkill.

The trade-offs are very different and so the UNIX philosophy is not equally applicable.

There is a reasonableness standard that exists in both contexts and applies to both in the same way.

lolinder
0 replies
3h54m

There is a reasonableness standard that exists in both contexts and applies to both in the same way.

It applies to both in the same way in that there exists a definition of reasonable and exceeding it would be bad. But those definitions of reasonable are going to be completely different because the environments are completely different.

I guess my point is that invoking the UNIX philosophy as such brings to mind mostly tools whose scope would frankly be far too small to justify deploying a whole cloud service. It would be better to treat deployable units on the cloud as something new, rather than trying to treat them as if they were a logical extension of single-node UNIX executables.

treflop
2 replies
7h15m

The problem is that most people have never actually built something in all the architectures they are considering.

They just read blog posts of what to do.

As an analogy, I’ve manufactured a lot of stuff out of wood, concrete, plastic (3D printed) and increasingly more metal. When I need a jig, I know what each material will be like and what I will be getting out of it.

bluGill
1 replies
6h42m

If I tell you that clay is better for your next project, you don't know enough to judge whether I'm right. Should you invest a lot of money into clay equipment, and time into learning to use it? You need to make a decision based on what others say about the advantages of clay - maybe clay isn't better, maybe it is better but not by enough to be worth learning. If this is just for fun, maybe you say you want to learn clay and spend months learning to do it, but if this is for a job you really want some confidence that clay is worth learning before investing your time in it.

treflop
0 replies
6h21m

Well if my day job is building stuff and it’s the reason I picked this career and it seems companies pay $$$$, give great benefits, and lets me take 2-4+ week vacations wherever I want if I’m good at it, and going to work is great because everyone gives you the fun problems knowing very well you know your shit…

Yes, it's worth learning a little about clay. Maybe I'll even enjoy it.

(And I know you’re trying to make an example, but I have already looked into clay a bunch of times. Clay just doesn't have the material properties that my projects tend to need. It'd be like learning COBOL expecting to build front-end apps.)

snapey
2 replies
8h23m

I really think Clojure should be recognised as boring tech more than it is. It seems to have drifted out of public discourse in the last couple of years, which is a shame because it's a rock solid, simple, predictable language that would make an excellent choice for many use cases.

It's just a shame that the syntax will forever be a barrier to widespread adoption.

samatman
0 replies
8h9m

I don't think the syntax is the main consideration with Clojure.

It's an opinionated language: dynamic, with immutable data structures, hosted on the JVM. I like it quite a bit, personally, and it's been adopted in places where it makes sense.

But many-to-most projects are better off picking another runtime, and the syntax has little to do with why.

mh-
0 replies
8h3m

IMO: Part of boring tech is that it's easy to learn and therefore, usually, easy to hire for.

Clojure ticks neither of those boxes, unfortunately.

mr_tristan
2 replies
5h34m

I find that architecture should benefit the social structure of the engineering team, and there are limits. I work on one of these “simple architectures” at large scale… and it’s absolute hell. But then, the contributor count to this massive monorepo + “simple architecture” hell numbers in the thousands.

Wave financial is only 350 people according to wikipedia - I doubt that’s 350 engineers. I know only of Google and Meta that can even operate with a massive monorepo, but I wouldn’t call their architecture “simple”. And even they do massive internal tooling investments - I mean, Google wrote their own version control system.

So I tend to think “keep it simple until you push past Dunbar’s number, then reorganize around that”. Once stable social relationships break down, managing change at this scale becomes a weird combination of incredible rigidity and absolute chaos.

You might make some stopgap utility and then a month later 15 other teams are using it. Or some other team wants to change something for their product and just submits a bunch of changes to your product with unforeseen breakage. Or some “cost reduction effort” halves memory and available threads, slowing down background processes.

Keeping up with all this means managing hundreds of different threads of communication happening. It’s just too much and nobody can ever ask the question “what’s changed in the last week” because it would be a novel.

This isn’t an argument for monoliths vs microservices, because I think that’s just the wrong perspective. It’s an argument to think about your social structure first, and I rarely see this discussed well. Most companies just spin up teams to make a thing and then don’t think about how these teams collaborate, and technical leadership never really questions how the architecture can supplement or block that collaboration until it’s a massive problem, at which point any change is incredibly expensive.

mattbuilds
1 replies
5h30m

The way I tend to look at it is to solve the problem you have. Don't start with a complicated architecture because "well once we scale, we will need it". That never works and it just adds complexity and increases costs. When you have a large org and the current situation is "too simple", that's when you invest in updating the architecture to meet the current needs.

This also doesn't mean to not be forward thinking. You want the architecture to support growth that will more than likely happen, just keep the expectations in check.

fl0ki
0 replies
4h7m

Don't start with a complicated architecture because "well once we scale, we will need it".

You want the architecture to support growth that will more than likely happen

The problem is even very experienced people can disagree about what forms of complexity are worth it up-front and what forms are not.

One might imagine that Google had a first generation MVP of a platform that hit scaling limits and then a second generation scaled infinitely forever. What actually happens is that any platform that lives long enough needs a new architecture every ~5 years (give or take), so that might mean 3-5 architectures solving mostly the same problem over the years, with all of the multi-year migration windows in between each of them.

If you're very lucky, different teams maintain the different projects in parallel, but often your team has to maintain the different projects yourselves because you're the owners and experts of the problem space. Your leadership might even actively fend off encroachment from other teams "offering" to obsolete you, even if they have a point.

Even when you know exactly where your scaling problems are today, and you already have every relevant world expert on your team, you still can't be absolutely certain what architecture will keep scaling in another 5 years. That's not only due to kinds of growth you may not anticipate from current users, it's due to new requirements entirely which have their own cost model, and new users having their own workload whether on old or new requirements.

I've eagerly learned everything I can from projects like this and I am still mentally prepared to have to replace my beautifully scaling architectures in another few years. In fact I look forward to it because it's some of the most interesting and satisfying work I ever get to do -- it's just a huge pain if it's not a drop-in replacement so you have to maintain two systems for an extended duration.

hardware2win
2 replies
8h11m

I'm always interested in a comparison between top adult sites and Netflix.

throwaway2037
1 replies
7h23m

I don't understand this comment. Can you explain more?

hardware2win
0 replies
5h45m

Netflix always writes those engineering blogs about their fancy microservices and infra

Meanwhile adult sites probably run on some boring PHP and serve orders of magnitude more content (harder to cache) while being free and among the top visited sites in the world.

wodenokoto
1 replies
9h58m

Our architecture is so simple I’m not even going to bother with an architectural diagram. Instead, I’ll discuss a few boring things we do that help us keep things boring.

Maybe that is why everyone goes with the latest and greatest sophisticated techniques that are popular on the conference circuit. At least it has a diagram.

corpMaverick
0 replies
8h27m

That is indeed part of the problem, complex architectures sell better. The problem comes when you have to deliver. But you can always blame the team.

whizzter
1 replies
7h35m

As a young gun coming from working in games before touching "internet"/"enterprise" software back around 2006 I had an eye on performance matters and all the early Google papers caught my eye (esp as I was on a project with an overpositive sales CEO that had in his mind that we'd run to Google valuations within a year).

A sobering account was our second CTO, who told us that their previous 65,000-user application ran on a single database/server, so for the rest of that project we just kept most of our application on a single DB and focused on tuning it where needed.

hannofcart
0 replies
6h36m

I work at a company with several 100k MAU. Single db + 1 standby replica. No problems.

I believe even Stackoverflow used to run on something similar, not sure.

rapfaria
1 replies
1h43m

What's Wave?

neilk
0 replies
1h39m

https://www.wave.com/en/

Fintech targeting Africa

popcorncowboy
1 replies
7h3m

Not to take anything away from the angle (which is bang on), but the "we're X people valued at $YBILLIONS" is a weird way to open and detracts from the message. I suppose it's a proxy for.. seriousness? I dunno. Valuations are just bets on NPV given "what could go right" scenarios, so congrats on having a convincing story to tres commas your val. But opening a pitch on architecture with "we're a small team building a simple numbers app but lol $BILLIONS" and then using some other company (SO) as the case study ref.. ok yup what. This affected casual "#winning but like whatever" style is everywhere now. But I'm a salty old fuck so it's probably just me it offsides.

troupe
0 replies
5h20m

"we're X people valued at $YBILLIONS" is a weird way to open and detracts from the message.

The most common argument against simpler architecture is, "but that won't scale and we are going to grow to be a valuable company." So the idea that they aren't a $10 million company does seem somewhat relevant.

nonethewiser
1 replies
3h34m

Wow, this guy's patreon is something else. https://www.patreon.com/danluu

$16/month "Short-form posts"

$256/month "Sponsor"

$2,048/month "Major Sponsor"

$16,384/month "Patron"

You certainly can't fault the guy for trying, although it does make him seem a bit crazy. And I'm aware of scenarios where raising prices doesn't have the negative effect you might expect. You never know what people might spend but I can't imagine this is a tempting proposition for anyone.

Maybe it's all a ploy to get you to spend $16/month to see how many "Sponsor", "Major", and "Patron" level subscribers there are.

neilk
0 replies
1h40m

In this very comment page, there is a guy who flew Dan to a different city just to hear his thoughts on random issues.

Also, if you look at his other writings, he's often noted trends in how software developers are paid. And the paradox that with even small tweaks, he's sometimes saved his employers far more than he could have earned in ten lifetimes. He's a free agent now... so why not charge what his advice is actually worth?

Dan is a very unusual person, in that he tries to do what is predicted by the data he has, rather than waiting for it to become conventional wisdom.

hiAndrewQuinn
1 replies
6h37m

"Simple" is an interesting term. I've always found it to be very relative to what someone's actual experiences and knowledge bases are.

For example, virtual machines are something I and probably most other people consider "simple". They're pretty much just little computers. For the most part, I can just work with VMs the same way I would work with computers. No sweat.

My DevOps pedigree, however, also means I consider Docker containers and even some light k8s stuff as pretty "simple" - when these two things feel to other people like massive overcomplications.

On the other hand, I'm pretty new to GraphQL, and it feels to me vastly more confusing than good old fashioned REST API endpoints, where I can crack open the code and actually see exactly how that particular endpoint works. I am mostly alone in my org when it comes to this feeling, because I haven't digested the abstractions that make GraphQL work as well.

I don't really have a good answer then for what simple is beyond "I know it (kinda) when I see it".

armanboyaci
0 replies
6h26m

Very good and important observation. In his talk "Simple made easy" [1], Rich Hickey defines simple as the opposite of complex, and easy as the opposite of hard.

The easiness is relative (as you described) and depends on the things you are familiar with. For example, Docker containers and k8s stuff is easy (for you), and GraphQL is hard (for you).

The simplicity should be assessed (somehow) more objectively.

[1] https://www.youtube.com/watch?v=SxdOUGdseq4

hardwaregeek
1 replies
6h36m

Hmm I dunno, this architecture feels simple in some ways but genuinely pretty complex in others. Like they defined their own protocol? That's pretty not-simple.

It's like the StackOverflow example. Sure, the architecture is conceptually simple, but you still bought servers and are maintaining them yourself. That's pretty complicated unto itself. Probably the right decision, but not the "simplest".

SPascareli13
0 replies
5h42m

This is the thing about simple things, usually to maintain a simple "face" they require quite a lot of sophistication on the inside. Like the stackoverflow example, they have a simple architecture, but to make that work they did a lot of very low level optimizations, have a deep understanding of their stack and database, and also hosted their own servers. Basically, it takes a lot of skill to build something "simple".

dimgl
1 replies
7h7m

Interesting article on simplicity, because halfway through is a massive plot twist in which the author attempts to justify using GraphQL and k8s.

brodouevencode
0 replies
7h3m

But he does attempt to make a point about that in the end. I don't agree with the point, but at least he tried.

bigstrat2003
1 replies
6h12m

It was a genuine joy to see a website in this day and age which doesn't try to impose margins on text. I want my entire monitor to be used, and this site delivers.

MH15
0 replies
5h25m

You might be alone in this take. While I agree a lot of websites do it poorly, the only way I personally can read Danluu is when I make the window smaller.

alanjdev
1 replies
8h4m

I think these kinds of articles are a bit glib. Developers implement architecture to solve problems confronting them. Sometimes a new problem then arises which must be dealt with, and so on, until the architecture in hindsight is quite complex. But you can only know this in hindsight. A few companies didn't run into the "piling on" of issues to be fixed, and so look back in hindsight, see their simple architecture, and think, "we know something that everybody else doesn't," when in fact they're simply experiencing some form of survivorship bias.

bluGill
0 replies
6h35m

Architecture isn't done to solve problems confronting you now; it is about solving problems you will face in the future if you don't solve them now.

IceMichael
1 replies
6h23m

Wall of text

drivers99
0 replies
5h13m

Reader view

5cott0
1 replies
6h38m

is there a patreon tier that adds css margins? if you’re gonna beg for handouts you could at least make the site readable

drivers99
0 replies
5h15m

Just hit the reader view button.

yogsototh
0 replies
7h57m

I am very happy with the architecture we use at my current company. It is a monolith, BUT with the capacity to be deployed as microservices if needed.

The code is structured in such a way that you only start the sub-services you need. We have nodes that launch almost all services, and some that launch only a few.

If we need to scale just a particular part of the system, we can easily scale the same node, configured for just the sub-services we need.
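A minimal sketch of this pattern, with all service names and the registry shape invented for illustration: a single codebase registers its sub-services, and each node starts only the ones its config enables.

```python
# Hypothetical sketch: one codebase, many sub-services, and each node
# starts only the subset its configuration lists. Names are invented.
SERVICES = {
    "api": lambda: "api started",
    "billing": lambda: "billing started",
    "reports": lambda: "reports started",
}

def start_node(enabled):
    """Start only the sub-services named in this node's config."""
    started = []
    for name in enabled:
        if name not in SERVICES:
            raise ValueError(f"unknown service: {name}")
        started.append(SERVICES[name]())
    return started

# A "full" node runs everything; a dedicated node scales one hot service.
full_node = start_node(["api", "billing", "reports"])
reports_node = start_node(["reports"])
```

The appeal of the pattern is that "monolith vs. microservices" becomes a deploy-time configuration choice rather than a codebase rewrite.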

yayitswei
0 replies
3h31m

Old and boring doesn't always mean simple. New ideas can simplify the architecture as well (the author cites GraphQL as an example). Agree that unnecessary complexity should be avoided.

This talk explores the tradeoffs: https://www.youtube.com/watch?v=SxdOUGdseq4

vendiddy
0 replies
6h27m

I like to treat simplicity as an ingrained attitude.

When hiring someone, I love to see if they are visibly annoyed when something is over-engineered. Or when a user interface is too complex.

If they have a strong emotional reaction, it's a good cultural fit :)

The inverse is true too. If someone "gets off" on complexity, I view it as a red flag.

throwanem
0 replies
7h55m

I might be fully past the idea that what makes the difference is the architecture rather than the team.

I am certainly past the idea that monoliths impose any kind of barrier to complexity. In many ways I find it to be precisely the converse.

Sure, there's nothing meaningful to choose between one giant plate of spaghetti versus a dozen small plates flying in loose formation. I'd still rather have lasagna than either.

tamarlikesdata
0 replies
7h57m

Microservices aren't just about breaking apps into pieces; they're about making each piece independently scalable, deployable, and manageable, which is huge for continuous deployment and integration. Sure, they add complexity, but for big, dynamic projects, that trade-off is worth it.

sudhirj
0 replies
9h58m

I think there are lots of examples of Fortune 5000 companies just being a single Rails or Django app for the longest time.

Seems in line with the "we're soon going to see a single-person billion-dollar company" narrative.

panqueca
0 replies
6h47m

Let's create an awesome-simplicity repo on GitHub.

A repo for non-hyped, dead-simple solutions to work with.

panqueca
0 replies
6h52m

I couldn't agree more. Even when you decide to use a dead simple cloud provider (fly.io for example), you still have to spend time to understand:

- the CLI

- the cost model

- the cost optimizations

I'm tired of this endless loop. A cheap VPS + Docker is enough in most cases (it's even easier to scale/migrate).

notapoolshark
0 replies
6h2m

Agree with this article 100%. At my old company we chose to write our server in Go instead of Java. We ended up regretting it because it was significantly harder to hire developers who could write in Go vs. Java.

Later, I read a Paul Graham essay where he says that a company should choose its programming language based on how fast it can iterate in that language. And it clicked for me.

nercury
0 replies
7h56m

It pains me to see that people think they need a microservice to even start thinking about system architecture. As if the additional database and additional set of classes could not be run on the same instance. And then everyone shrieks in pain when they see the monthly costs :)

mxstbr
0 replies
7h20m

Default GQL encoding is redundant and we care a lot about limiting size because many of our customers have low bandwidth

I'd love to know how you ended up combating this! I'm assuming something like graphql-crunch[0] or graphql-deduplicator[1], but I'd love to know what worked well in practice.

[0]: https://github.com/banterfm/graphql-crunch

[1]: https://github.com/gajus/graphql-deduplicator
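For what it's worth, the core idea behind libraries like these can be sketched generically (the response shape and the `$ref` convention here are invented for illustration, not the actual wire format of either library): repeated objects in a payload are replaced with references to the first occurrence, keyed by id.

```python
# Illustrative sketch of response deduplication: replace repeated
# objects with {"$ref": id} pointers to one canonical copy.
# The format is invented; real libraries use their own conventions.
def deduplicate(nodes):
    seen = {}
    out = []
    for node in nodes:
        node_id = node["id"]
        if node_id in seen:
            out.append({"$ref": node_id})  # later occurrence: just a ref
        else:
            seen[node_id] = node           # first occurrence: keep whole
            out.append(node)
    return out

payload = [
    {"id": "u1", "name": "Ada"},
    {"id": "u2", "name": "Grace"},
    {"id": "u1", "name": "Ada"},  # duplicate of the first object
]
compact = deduplicate(payload)
```

The client then inflates `$ref` entries back into full objects before handing data to the UI, so the savings are purely on the wire.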

mattbillenstein
0 replies
7h17m

LEPP stack is very effective imho - Linux, Nginx, Python, Postgres.

I've solved a lot of the issues he brings up - gevent, no ORM, well-known transaction boundaries, etc. It's pretty pleasant to work in.
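The "no ORM, well-known transaction boundaries" style can be sketched like this, using stdlib sqlite3 in place of Postgres so the snippet is self-contained; the schema and queries are invented for the example.

```python
import sqlite3
from contextlib import contextmanager

# Sketch of explicit transaction boundaries without an ORM.
# sqlite3 stands in for Postgres; the accounts table is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)")
conn.commit()

@contextmanager
def transaction(db):
    """Make the commit/rollback boundary visible at the call site."""
    try:
        yield db
        db.commit()
    except Exception:
        db.rollback()
        raise

# The whole transfer commits or rolls back as a single unit.
with transaction(conn):
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")

balances = dict(conn.execute("SELECT id, balance FROM accounts"))
```

Because every transaction is opened with an explicit `with transaction(...)`, there's no ORM session magic to reason about when debugging lock contention or partial writes.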

lxe
0 replies
5h21m

Our architecture is so simple I’m not even going to bother with an architectural diagram.

What's described is far from a simple architecture. Synchronous python? Kubernetes? I'm guessing there's a legacy monolith with a few microservices or load-balancers? Then... GraphQL... why? This is just a hodgepodge of stuff in dire need of actual refactor.

lovebes
0 replies
7h8m

Even his website follows simple HTML stack architecture

kulikalov
0 replies
4h52m

I like the article and am a big proponent of simple architectures.

But.

The described use cases have homogeneous workloads. Hence the simple architecture. Extrapolate the rest.

And of course there are for sure cases of people increasing complexity for the sake of being needed. That's a topic for sociologists/psychologists to reason about, not for engineers.

jokoon
0 replies
4h27m

Gall's law:

"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."

It's quite funny how unknown this quote is. Imagine all the developers that would be out of work if we had simpler systems. It's hard to admit, but developers get more jobs when their work is complicated; it's the hostage-taker syndrome.

Software as a thing is pretty recent; it's important to remember it has a large Silicon Valley influence, and software quality is just unregulated because of freedom-of-speech warriors.

The software we have today, is just the combination of good government funding and marketing maniacs.

jimnotgym
0 replies
7h25m

I got told the other day that an ecommerce-to-ERP integration I built 8 years ago, at a company I don't work for anymore, is still running without so much as a reboot. Apparently it is on its third different website with no changes.

Linux, Bash, Python FTW. Everything was designed to be as simple as humanly possible. I mostly built it in my spare time too.

I once showed it to a real developer and he was horrified that some long messages that had to be sent were hardcoded into a file and concatenated to the one bit that changes...it was my prototype that worked so well I never changed it.

It only ran every 15 minutes, since all the systems that depended on the order ran less frequently than that. Everything was handled sequentially, and quite deliberately so.

It actually had a lot of functionality it gained over a couple of months, it had to print some stuff, detect fraud, fix addresses etc.

My favourite bit is that several companies had offered to build it for £mega and decided it couldn't be done! I wish I could describe it in full...

ijidak
0 replies
4h52m

It's the same reason that cooking recipes often become overly complex. (A sprig of this and a pinch of that.)

We are often embarrassed by the simple.

The simple makes us feel small, unsophisticated, and unimportant.

I always find it amusing when I find a way to solve a problem simply, and my coworkers answer somewhat directly that it seems very simple.

To me that's mark of success, but to many it's uncomfortable.

Of course, if we didn't work on it, then it's overcomplicated and needs to be rewritten. Go figure.

godisdad
0 replies
2h54m

This post reads like "we want an operationally simple setup" until the bit about selecting Kube as an orchestration technology. I think it's still a service-oriented architecture, and it still has to reason about asynchronicity given it uses Celery and queues.

ericholscher
0 replies
7h24m

Strongly agree here. We wrote a similar post on how we’re scaling to a pretty big size with stock Python and Django: https://www.ethicalads.io/blog/2021/07/handling-100-requests...

So much of the content around the programming web is about huge scale and domain-specific stuff. We’re in a pretty magical place where the core technologies we use can scale to the size of a successful business without much work. We should promote that more!

ed_blackburn
0 replies
5h58m

Rails and Django are brilliant when they can host all your capabilities. Keeping it simple is the operational mantra, and rightly so. What's more complex is defining simple. My organisation wants to focus on the business domain, automating business processes for our customers. That means thinking as little as possible about the hosting and operational complexities of what executes that logic. The perceived wisdom would be to build a Django monolith, put it on Postgres, and run it in a container. We did that, and it worked until we needed better scheduling of those containers, and in walks K8s.

What's our solution for simplifying our operational platform? It's to pay AWS to do it for us. And now we use Fargate and Lambda for hosting.

Is that simpler? Or exotic and more complicated? There are tradeoffs, which will vary from team to team and organisation to organisation. To one team, slapping functions in Lambda and knowing how to support them are more straightforward than a team of experts who know how to operate monoliths at scale.

For me, the real conversation is about something other than simplicity. Simplicity is subjective. It's about familiarity and competency. Pick what your organisation is familiar with or competent with (or has the ability to find people who are...)

eadmund
0 replies
4h1m

I agree with the majority of this with respect to simplicity in general, and only really take issue with this bit, about GraphQL:

• Our various apps (user app, support app, Wave agent app, etc.) can mostly share one API, reducing complexity

This could of course be done with a RESTful API.

• Composable query language allows clients to fetch exactly the data they need in a single packet roundtrip without needing to build a large number of special-purpose endpoints

This, too, can be done with a RESTful API. Two approaches which immediately spring to mind are passing desired items as query parameters or passing a template item as a query parameter or request body.

• Eliminates bikeshedding over what counts as a RESTful API

This may actually be a benefit of GraphQL! There’re a ton of soi-disant ‘RESTful’ APIs which simply don’t follow REST principles at all (e.g.: ‘REST’ ≠ ‘JSON RPC’; also HATEOAS is hugely important, even when the hypermedia is in a non-HTML format such as JSON). But I also think that real REST really is a powerful framework for distributed computing. Sometimes the discussion really is bikeshedding, but sometimes it is a definition of terms.
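The first approach mentioned above (passing desired items as query parameters) can be sketched in a few lines; the resource, field names, and the `fields` parameter convention here are all invented for illustration.

```python
# Sketch of REST field selection via a query parameter, e.g.
#   GET /users/7?fields=id,name
# The endpoint projects the resource down to the requested fields.
# Resource shape and parameter name are invented for the example.
def select_fields(resource, fields_param):
    if not fields_param:
        return dict(resource)  # no filter: return the full resource
    wanted = [f.strip() for f in fields_param.split(",")]
    return {k: resource[k] for k in wanted if k in resource}

user = {"id": 7, "name": "Ada", "email": "ada@example.com", "role": "admin"}
slim = select_fields(user, "id,name")
```

This gets you GraphQL-style payload trimming for flat resources; where GraphQL earns its keep is composing such selections across nested relations in one round trip.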

dustedcodes
0 replies
1h7m

Look I love the http://motherfuckingwebsite.com and think it's hilarious and I totally agree with the underlying message, to cut out a lot of the shit and just keep the web simple sometimes, but is it really necessary to have absolutely no style whatsoever which makes reading this blog post an absolute chore. It's so off putting that I can't bring myself to read it. Anyone else or just me?

dshah
0 replies
12m

Very much clicked for me.

Loved this line:

"I’ll discuss a few boring things we do that help us keep things boring."

dang
0 replies
2h12m

Discussed at the time:

In defense of simple architectures - https://news.ycombinator.com/item?id=30936189 - April 2022 (189 comments)

cityguy33
0 replies
5h38m

The most sophisticated arch ever needed for any scale in my FAANG and F500 jobs are a ssl-supported load balancer, multiple app servers with thread pools, a sharded database, and message queues. Everything else is just a dressed up version of this
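The "sharded database" piece of that list can be sketched in a few lines (shard names, shard count, and key format are invented for the example): a stable hash routes each key to a fixed shard, so every app server agrees on where a row lives.

```python
import hashlib

# Illustrative shard routing: map a key to one of N database shards
# with a stable hash. Shard names and count are invented.
SHARDS = ["db-0", "db-1", "db-2", "db-3"]

def shard_for(key: str) -> str:
    """Stable across processes and restarts, unlike Python's hash()."""
    digest = hashlib.sha256(key.encode()).digest()
    return SHARDS[int.from_bytes(digest[:8], "big") % len(SHARDS)]

shard = shard_for("user:42")
```

Using a cryptographic digest instead of `hash()` matters here: `hash()` is randomized per process in Python, which would send the same key to different shards from different app servers.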

chocolateacc
0 replies
3h50m

I wonder if new serverless platforms, like Modal [0], are even simpler than monoliths. I've been reading some books on software engineering, and it seems clear that network calls, multiple services, integration points, etc. all cause complexity.

However, I do wonder if things like Modal allow you to focus only on writing business logic. No dealing with deploys, network comms, memory, etc.

(No, I don't work at Modal, just wondering "what comes after monolith" -- Modal, Temporal, etc. all seem quite good)

[0] https://modal.com/

caerwy
0 replies
5h11m

and by simple architecture that includes graphql, k8s, and custom network protocol! LMAO, LMFAO!!

benreesman
0 replies
2h49m

This one is a classic (instant classic), I can't say anything about this better than Dan did.

What I can offer is a personal anecdote about what an amazing guy he is. I had been a fan of his blog for a while, and at one point I decided to just email him and offer to fly to wherever he was because I wanted to hear his thoughts on a number of topics.

This was in maybe 2016, but even then I didn't expect someone who must get a zillion such emails to even reply, let alone invite me up to Seattle at my leisure! I think I had the flight booked on a Wednesday for a departure on Friday, for one night's stay, and Dan was profoundly generous with his time, we stayed up late into the night chatting and I learned a great deal, especially about the boundary between software and hardware (a topic on which he is a first-order expert with an uncommon gift for exposition).

I had the great fortune of spending some real time with Dan not just once but twice! When I went to NYC to get involved with what is now the Reels ML group, he happened to also be in NYC, and I had the singular pleasure to speak with him at length on a number of occasions: each illuminating and more fun than you can have without a jet ski.

Dan is a singularly rigorous thinker with the dry and pithy wit of a world-class comedian and a heart of gold, truly generous with his expertise and insight. I'm blessed to have met and worked with a number of world-class hackers, but few, few if any are such a joy to learn from.

annowiki
0 replies
5h57m

Another area is with software we’ve had to build (instead of buy). When we started out, we strongly preferred buying software over building it because a team of only a few engineers can’t afford the time cost of building everything. That was the right choice at the time even though the “buy” option generally gives you tools that don’t work. In cases where vendors can’t be convinced to fix showstopping bugs that are critical blockers for us, it does make sense to build more of our own tools and maintain in-house expertise in more areas, in contradiction to the standard advice that a company should only choose to “build” in its core competency. Much of that complexity is complexity that we don’t want to take on, but in some product categories, even after fairly extensive research we haven’t found any vendor that seems likely to provide a product that works for us. To be fair to our vendors, the problem they’d need to solve to deliver a working solution to us is much more complex than the problem we need to solve since our vendors are taking on the complexity of solving a problem for every customer, whereas we only need to solve the problem for one customer, ourselves.

This is more and more my philosophy. I've been working on a data science project with headline scraping (I want to do topic modeling on headlines during the course of the election) and kept preferring roll your own solutions to off the shelf ones.

For instance, instead of using Flask (as I did in a previous iteration of this project a few years ago), I went with Jinja2 and rolled my own static site generator. For scraping I used Scrapy on my last project; on this one I wrote my own queue and scraper class. It works fantastically.

Zopieux
0 replies
5h26m

Highlight that 99% of companies should take away from this piece:

since we’re only handling billions of requests a month (for now), the cost of this is low even when using a slow language, like Python [and simple synchronous code], and paying retail public cloud prices.
0xbadcafebee
0 replies
3h22m

Couple thoughts:

Despite the unreasonable effectiveness of simple architectures, most press goes to complex architectures

We're over-simplifying these terms. What's simple in one way is complex in another way. It makes for great blog posts to use generic terms like "simple" and "complex" for broad-ranging and highly technical topics, but it means nothing. Be specific or you're just tilting at windmills.

The cost of our engineering team completely dominates the cost of the systems we operate.

Could this be because you decided to invent a new kind of truck, thus requiring lots of costly engineers, when getting two mechanics to weld a crane onto a flatbed would have done the trick?

NIH is rampant in tech. Everybody thinks they need to invent something new when they can't find off-the-shelf parts that do what they want. When in reality, just making their system a bit more complicated in order to accommodate off-the-shelf would have been faster and cheaper. Sometimes more complex is simpler.

Later on in the article Dan mentions build vs buy, but it's not that simple. You can buy and then modify, and you can build in a way that's cheaper and easier, if uglier and more complex. Design and engineering decisions aren't binary. There are more options than just "only use a complete product" vs "build your own everything", and more business and engineering considerations than just time to market and "scale".

Rather than take on the complexity of making our monolith async we farm out long-running tasks (that we don’t want responses to block on) to a queue.

See, here's the thing with "simple" vs "complex". It's not "simpler" to use a queue than async. Both are complex. Queues may look simple, but they're not. Async may look complex, and it is. You didn't pick simple over complex, you picked something that looked easy over something that looked hard. You made a bet. Which is fine! But let's not pretend this is somehow good design. You just ignored all the complexity you're going to run into later when your queue becomes a problem. Maybe it won't be a problem? But maybe async wouldn't have been a problem at scale either. "Maybe" doesn't equal "good design", so let's not pretend it does (re: the idea that it's simpler and thus better).
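The queue pattern under discussion can be sketched in-process with the stdlib; a real deployment would use RabbitMQ/Celery or similar, so everything here is illustrative, not Wave's actual setup.

```python
import queue
import threading

# Sketch of "farm long-running tasks out to a queue" instead of making
# the request path async. An in-process queue.Queue stands in for a
# real broker; the jobs are invented.
tasks = queue.Queue()
results = []

def worker():
    while True:
        job = tasks.get()
        if job is None:           # sentinel: shut the worker down
            break
        results.append(job())     # run the slow task off the request path
        tasks.task_done()

t = threading.Thread(target=worker)
t.start()

# The "request handler" just enqueues and returns immediately.
tasks.put(lambda: "email sent")
tasks.put(lambda: "report built")

tasks.join()    # demo only: wait for the background work to finish
tasks.put(None)
t.join()
```

The comment's point stands either way: the enqueue side looks trivial, but retries, dead letters, and at-least-once delivery semantics are where the hidden complexity of real queues lives.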

Some choices that we’re unsure about [..] were using RabbitMQ [..], using Celery [..], using SQLAlchemy [..], and using Python [..]. [...] if we were starting a similar codebase from scratch today we’d think hard about whether they were the right choice

This is a sign of a lack of engineering experience. I don't mean that as a diss; I only mean that the correct choice (of these options) is clear to somebody really familiar with all these options. If you don't have that experience, you need to go find someone that does and ask them. This is one of the best reasons why you don't want to engineer something yourself: you probably aren't the best engineer in the world, and so probably don't know the kind of problems you are going to run into. However, if you are engineering something yourself, and don't know the right answer, you need to go ask somebody (or many somebodies). If you were inventing some new kind of device that nobody had ever invented, maybe nobody knows? But you probably aren't doing that, so somebody does know. And if for some reason you can't or won't go ask somebody, then you should use the thing that you[r team] is most familiar with and already know all the pitfalls of, regardless of nearly any other downside from its use. Unexpected problems are a much bigger thing than expected problems.

By keeping our application architecture as simple as possible, we can spend our complexity (and headcount) budget in places where there’s complexity that it benefits our business to take on.

And this is still good advice. But, again there is no such thing as a binary choice of "simple vs complex". It's all shades of gray.