
Just Enough Software Architecture (2010)

pbnjay
39 replies
20h35m

Published in 2010? Curious how much of it has survived since then?

I like “Design It” because of some of the workshop/activities that are nice for technical folks who need to interact with stakeholders/clients (I’m in a consulting role so this is more relevant). Also it doesn’t lean hard on specific technical architectural styles which change so frequently…

jeremyjh
36 replies
18h46m

I can't think of many things that have changed in architecture since 2010. I'm not talking about fads but about actual principles.

CuriouslyC
32 replies
18h35m

Things are changing now, pretty fast. The architecture that is optimal for humans is not the same architecture that is optimal for AI. AI wants shallow monoliths built around function composition using a library of helper modules and simple services with dependency injection.
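(For illustration, a minimal Python sketch of the style being described - flat call graph, small helpers, dependencies passed in explicitly. All names here are hypothetical, just to make the claim concrete:)

  # A "shallow" design: the top-level function just composes small helpers,
  # and dependencies (db, charge) are injected rather than imported globally.

  def fetch_order(order_id, db):
      return db.get("orders", order_id)

  def order_total(order):
      return sum(item["price"] * item["qty"] for item in order["items"])

  def checkout(order_id, db, charge):
      order = fetch_order(order_id, db)
      return charge(order["customer"], order_total(order))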

wizzwizz4
18 replies
18h34m

> I'm not talking about fads but about actual principles.

Most problems are not well-addressed by shallow monoliths made of glue code. It's irrelevant what "AI wants", just as it's irrelevant what blockchain "wants".

CuriouslyC
15 replies
18h24m

If you think development velocity doesn't matter, you should talk to the people who employ you.

loup-vaillant
14 replies
18h5m

If you think AI helps speed up development…

cqqxo4zV46cp
6 replies
15h56m

Please tell me about how you once asked ChatGPT to write something for you, saw a mistake in its output, and immediately made your mind up.

I’ve been writing code professionally for a decade. I’ve led the development of production-grade systems. I understand architecture. I’m no idiot. I use Copilot. It’s regularly helpful and saves time. Do you have a counter-argument that doesn’t involve some sort of thinly veiled “but you’re an idiot and I’m just a better developer than you”?

I don’t by any means think that a current generation LLM can do everything that a software developer can. Far from it. But that’s not what we are talking about.

viraptor
3 replies
14h1m

We'll need some well-researched study on how much LLMs actually help vs not. I know they can be useful in some situations, but it also sometimes takes a few days away from them to realise the negative impacts. Like the "copilot pause" coined by ThePrimeagen - you know the completion is coming, so you pause when writing the trivial thing you knew how to do anyway and wait for the completion (which may or may not be correct, wasting both time and an opportunity to practice on your own). Self-reported improvement will be biased by impression and factors other than the actual outcome.

It's not that I don't believe your experience specifically. I don't believe either side in this case knows the real industry-wide average improvement until someone really measures it.

fragmede
2 replies
7h56m

Unfortunately, we still don't have great metrics for developer productivity, other than the hilari-bad lines-of-code metric. Jira tickets, sprints, points, t-shirt sizes; all of that is to try to bring something measurable to the table, but everyone knows it's really fuzzy.

What I do know though, is that ChatGPT can finish a leetcode problem before I've even fully parsed the question.

There are definitely ratholes to get stuck and lose time in when trying to get the LLM to give the right answer, but LLM-unassisted programming has the same problem. When using an LLM to help, there's a bunch of different contexts I don't have to load in because the LLM is handling them, giving me more head space to think about the bigger problems at hand.

No matter what a study says, as soon as it comes out, it's going to get picked apart because people aren't going to believe the results, no matter what the results say.

This shit's not properly measurable like in a hard science, so you're going to have to settle for subjective opinions. If you want to make it a competition, how would you rank John Carmack, Linus Torvalds, Grace Hopper, and Fabrice Bellard? How do you even try to make that comparison? How do you measure and compare something you don't have a ruler for?

viraptor
0 replies
6h5m

> that ChatGPT can finish a leetcode problem before I've even fully parsed the question.

This is an interesting case for two reasons. One is that leetcode consists of distilled elementary problems known in CS - given all CS papers or even blogs at your disposal, you should be able to solve them all by pattern matching the solution. Real work is anything but that - the elementary problems have solutions in libraries, but everything in between is complicated and messy and requires handling the unexpected/underdefined cases. The second reason is that leetcode problems are fully specified in a concise description with an example and no outside parameters. Just spending the time to define your problem to that level for the LLM likely gets you more than halfway to the solution. And that kind of detailed spec really takes time to create.

jerf
0 replies
4h54m

"What I do know though, is that ChatGPT can finish a leetcode problem before I've even fully parsed the question."

You have to watch out for that, that's an AI evaluation trap. Leetcode problems are in the training set.

I'm reminded of people excitedly discussing how GPT-2 "solved" the 10 pounds of feathers versus 10 pounds of lead problem... of course it did, that's literally in the training set. GPT-2 could be easily fooled by changing any aspect of the problem to something it did not expect. Later ones less so, though when I last tried a few months ago, while they got it right more often than wrong, they could still be pretty easily tripped up.

albedoa
0 replies
15m

> Do you have a counter-argument that doesn’t involve some sort of thinly veiled “but you’re an idiot and I’m just a better developer than you”?

Requiring that the counter-argument reaches a higher bar than both your and the original argument is...definitely a look!

CuriouslyC
3 replies
17h23m

I don't have to think, people have done research.

grugagag
1 replies
17h8m

Only time will tell. Right now this sounds like everything that was once claimed, each technological cycle, only to be forgotten about after some time. Only after some time do we come to our senses; some things simply stick while others ‘evolve’ in other directions (for lack of a better word).

Maybe this time it’s different, maybe it’s not. Time will tell.

saghm
0 replies
15h56m

While I don't disagree with you (and tend to be more of an AI skeptic than enthusiast, especially when it comes to being used for programming), this does weaken the earlier assertion that AI was brought up in response to: "things that have changed in architecture since 2010" is a lot more narrow if you rule out, by definition, anything that's only come about in the past couple of years because it hasn't been around long enough to prove longevity.

PaulDavisThe1st
0 replies
16h49m

It is sad, and confusing, to read comments like this on HN.

I mean, you're not even wrong.

wizzwizz4
2 replies
17h42m

AI does help speed up development! It lets you completely skip the "begin to understand the requirements" and "work out what's sensible to build" steps, and you can get through the "type out some code" parts even faster than copy-pasting from Stack Overflow (at only 10× the resource expenditure, if we ignore training costs!).

It does make the last step ("have a piece of software that's fit-for-purpose") a bit harder; but that's a price you should be willing to pay for velocity.

water-your-self
0 replies
16h48m

Velocity is good for impact, typically.

jeffreygoesto
0 replies
9h37m

Poor code monkeys. I have been in an industry where software bugs can severely harm people for over 20 years, and the fastest code never survived. It always solved only the cheap and easy 70% of the job, the remaining errors almost killed the project, and everything had to be reworked properly. Slow is smooth and smooth is fast. "Fast" code costs you four times: write it, discuss why it is broken, remove it, rewrite it.

cqqxo4zV46cp
1 replies
15h51m

This response is entirely tribalist and ignores the differences between LLMs and ‘blockchain’ as actual technologies. To be blunt, I find it hard to professionally respect anyone that buys into these culture wars to the point where it completely overtakes their ability to objectively evaluate technologies. This isn’t me saying that anyone that has written off LLMs is an idiot. But to equate these two technologies in this context makes absolutely no sense to me just from a logical perspective. I.e. not involving a value judgment toward either blockchain or LLMs. The only reason you’re invoking blockchain here is because the blockchain and LLM fads are often compared / equated in these conversations. Nobody has suggested that blockchain technology be used to assist with the development of software in the way that LLMs are. It simply doesn’t make sense. These are two entirely separate technologies setting out to solve two entirely orthogonal problems. The argument is completely nonsensical.

wizzwizz4
0 replies
3h28m

> Nobody has suggested that blockchain technology be used to assist with the development of software in the way that LLMs are. It simply doesn’t make sense.

Linus Torvalds is a strong advocate. He even wrote a blockchain-based source code management system, which he dubbed “the information manager from hell”[0], spending over three months on it (six months, by his own account) before handing it over to others to maintain.

People complain that this “information manager” system is hard to understand, but it's actively used (alongside email) for coordinating the Linux kernel project. Some say it's crucial to Linux's continued success, or even that it's more important than Linux.

[0]: see commit e83c5163316f89bfbde7d9ab23ca2e25604af290

rednafi
6 replies
18h17m

This whole part sounds like BS mumbo jumbo. AI isn’t developing any system anytime soon and people surely aren’t going to design systems that cater to the current versions of LLMs.

dcow
3 replies
16h4m

Have you heard of Modular, Mojo, and Max?

viraptor
2 replies
13h53m

They're designed for fast math and Python similarity in general. Llama.cpp, on the other hand, is designed for LLMs as we use them right now. But Mojo is general-purpose enough to support many other "fast Python" use cases, and if we completely change the architecture of LLMs, it's still going to be great for them.

It's more of a generic system with attention to the performance of a specific application, rather than a system designed to cater to current LLMs.

dcow
1 replies
11h29m

No. Max is an entire compute platform designed around deploying LLMs at scale. And Mojo takes Python syntax (it’s a superset) but reimplements the entire compiler so you (or the compiler on your behalf) can target all the new AI compute hardware that’s almost literally popped up overnight. Modular is the company that raised 130MM dollars in under 2 years to make these two plays happen. And Nvidia is on fire right now. I can assure you without a sliver of doubt that humans are most certainly redesigning entire computing hardware stacks and the systems atop them to accommodate AI. Look at the WWDC keynote this year if you need more evidence.

viraptor
0 replies
9h55m

Sure, it's made to accommodate AI, or more generally fast vector/matrix math. But the original claim was that "people surely aren’t going to design systems that cater to the current versions of LLMs." Those solutions are way more generic than current or future versions of LLMs. Once LLMs die down a bit, the same setups will be used for large-scale ML/research unrelated to languages.

cqqxo4zV46cp
1 replies
16h1m

What? The entire point of the comment you’re replying to is that the LLM isn’t designing the system. That’s why it’s being discussed in the first place. LLMs certainly currently play a PART in the ongoing development of myriad projects, as made evident by Copilot’s popularity to say the least. That doesn’t mean that an LLM can do everything a software developer can, or whatever other moving goalpost arguments people tend to use. They simply play a part. It doesn’t seem outside of the realm of reason for a particularly ‘innovative’ large-scale software shop to at least consider taking LLMs into account in their architecture.

CuriouslyC
0 replies
6h46m

The skeptics in this thread have watched LLMs flail trying to produce correct code with their long imperative functions, microservices and magic variables, and assumed that their architecture is good and LLMs are bad. They don't realize that there are people 5xing their velocity _with unit tests and documentation_ because they designed their systems to play to the strengths of LLMs.

afro88
2 replies
16h4m

Maybe this post wasn't the right one for your comment, hence the downvotes.

But I find it intriguing. Do you mean architecting software to allow LLMs to modify and extend it? Having more of the overall picture in one place (shallow monoliths) and lots of helper functions and modules to keep code length down? I.e., optimising for the input and output context windows?

CuriouslyC
1 replies
6h5m

LLMs are very good at first-order coding: writing a function, either from scratch or by composing functions given their names/definitions. When you ask them to do second- or higher-order coding (crossing service boundaries, deep code structures, recursive functions), they fall over pretty hard. Additionally, you have to consider the time it takes an engineer to populate the context when using the LLM and the time it takes them to verify the output.

LLMs can unlock incredible development velocity. For things like creating utility or helper functions and their unit tests at the same time, an engineer using an LLM will easily 10x an equally skilled engineer not using one. The key is to architect your system so that as much of it as possible can be treated this way, while not making it indecipherable for humans.
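(To make that concrete, a hypothetical example of the kind of first-order unit this describes - one self-contained helper plus its unit test, sketched in Python; the names are invented:)

  def chunk(items, size):
      """Split a list into consecutive chunks of at most `size` elements."""
      if size < 1:
          raise ValueError("size must be >= 1")
      return [items[i:i + size] for i in range(0, len(items), size)]

  def test_chunk():
      assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
      assert chunk([], 3) == []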

hollerith
0 replies
5h29m

> while not making it indecipherable for humans

This is a temporary constraint. Soon the maintenance programmers will use an AI to tell them what the code says.

The AI might not reliably be able to do that unless it is in the same "family" of AIs that wrote the code. In other words, analogous to the situation today where choice of programming language has strategic consequences, choice of AI "family" with which to start a project will tend to have strategic consequences.

ysofunny
0 replies
17h35m

AI wanted you to write code in GoLang so that it could absorb your skills more faster. kthanksbai

wolfgang42
0 replies
17h26m

I don’t know about this architecture for AI, but your description sounds like the explanations I’ve heard of the Ruby on Rails philosophy, which is clearly considered optimal by at least some humans.

jerf
0 replies
4h58m

Nobody has ten years of experience with a code base "optimized for AI" to be able to state such a thing so confidently.

And nobody ever will, because in 10 years, coding AIs will not look like they do now. Right now they are simply incapable of architecture, which your supposedly optimal approach seems to be optimizing for, but I wouldn't care to guarantee that's the case in 10 years. If nothing else, there will certainly be other relevant changes. And you'll need experience to determine how best to use those, unless they just get so good they take care of that too.

zmmmmm
0 replies
12h56m

probably containerisation is a big one, and also serverless computing

they aren't principles as such, but they certainly play into what is important and how you apply them

pbnjay
0 replies
14h42m

I mean shared hosting certainly existed but "the cloud" as we think of it today was much simpler and not nearly as ubiquitous. It doesn't really change the principles themselves but it certainly affects aspects of the risk calculus that dominates the table of contents.

kqr
0 replies
6h29m

Since 1970, to be fair... The people at the NATO Software Engineering conferences of '68 and '69 knew quite a bit about architecture. Parnas, my house-god in the area, published his best stuff in the 1970s.

zeroCalories
0 replies
20h10m

Process at my work is heavily influenced by this book, and I think it gives a pretty good overview of architecture and development processes. Author spends a lot of time in prose talking about mindset, and it's light on concrete skills, but it does provide references for further reading.

gfairbanks
0 replies
3h32m

Keeling's Design It book is great [1]. It helps teams engage with architecture ideas through concrete activities that end up illuminating what's important. My book tries to address those big ideas head-on, which turns out to be difficult, pedagogically, because it's such an abstract topic.

Which ideas have survived since 2010?

Some operating systems are microkernels, others are monolithic. Some databases are relational, others are document-centric. Some applications are client-server, others are peer-to-peer. These distinctions are probably eternal: if you come back in 100 years you may find systems with those designs, even though today's examples - Windows, Oracle, and Salesforce - are long gone. And we'll still be talking about qualities like modifiability and latency.

The field of software architecture is about identifying these eternal abstractions. See [2] for a compact description.

"ABSTRACT: Software architecture is a set of abstractions that helps you reason about the software you plan to build, or have already built. Our field has had small abstractions for a long time now, but it has taken decades to accumulate larger abstractions, including quality attributes, information hiding, components and connectors, multiple views, and architectural styles. When we design systems, we weave these abstractions together, preserving a chain of intentionality, so that the systems we design do what we want. Twenty years ago, in this magazine, Martin Fowler published the influential essay “Who Needs an Architect?” It’s time for developers to take another look at software architecture and see it as a set of abstractions that helps them reason about software."

[1] Michael Keeling, Design It: From Programmer to Software Architect, https://pragprog.com/titles/mkdsa/design-it/

[2] George Fairbanks, Software Architecture is a Set of Abstractions Jul 2023. https://www.computer.org/csdl/magazine/so/2023/04/10176187/1...

refibrillator
36 replies
15h31m

> project management risks: “Lead developer hit by bus”

> software engineering risks: “The server may not scale to 1000 users”

> You should distinguish them because engineering techniques rarely solve management risks, and vice versa.

It's not so rare in my experience. Code quality and organization, tests and documentation, using standard and well known tools - all of those would help both sides here.

That's why I've had to invoke the "hit by bus" hypothetical so many times in my career with colleagues and bosses, because it's a forcing function for reproducible and understandable software.

Pro tip: use "win the lottery" instead to avoid the negative connotation of injury or death.

ajanuary
29 replies
13h59m

> Pro tip: use "win the lottery" instead to avoid the negative connotation of injury or death.

I very much appreciate the attempt to reframe it positively. But personally, if I win the lottery, I’m still doing a handover. The key thing with “hit by a bus” is, no matter what your personality, you don’t have any time to prepare. That’s why you’ve got to get the information out there today. Unfortunately I’ve yet to find a positive spin that has those same connotations.

devsda
22 replies
13h30m

Depends on the other person, but "gets hit by a bus or abducted by aliens" or some other (movie) trope doesn't sound too bad if it's an informal discussion.

serial_dev
21 replies
12h49m

I dislike the "hit by the bus" not because it hypothetically kills a team mate, but because it implies that if one of us dies, the key worry of management is who's left that knows how to deploy the foo-banana service. This phrase is a reminder that nobody would care if you died apart from the fact that only you know how to do x and y.

"Abducted by an alien" is so absurd, it takes away all that and replaces it with something potentially fun and an experience of a lifetime.

I also very much prefer the lottery example, and the fact that someone on HN would still do a handover doesn't change that fact.

wegfawefgawefg
12 replies
11h31m

Not that I am trying to make you upset, but I personally find this level of sensitivity in people annoying.

Life and death metaphors are my right to enjoy and share irreverently as the mortal being I am.

mnsc
5 replies
9h18m

So you would be OK if I used "abducted and gang raped for so long that you come out seemingly alive and physically intact but your mind has broken and you enter a catatonic state and can't do a handover" as an example? Because if you're not one of those "sensitive people", you should be able to focus on the "can't do a handover" part of the scenario and not get caught up in the gruesome part?

And before you ask: yes, I enjoy being overly graphic like this on the pseudonymous internet to exaggerate my points, but I wouldn't do this irl. That is hypocritical of me.

mirekrusin
3 replies
8h46m

Depends if aliens were attractive?

mnsc
2 replies
7h23m

Yeah, I realize that "abducted" could suggest aliens. I was thinking more along the lines of a Mexican drug cartel style abduction, where your raping and being left alive is a wake-up call to your partner in the police force who has turned the wrong stones. And the dudes doing the raping are not pretty. But the point is you are not able to do a proper handover, remember?

skeeter2020
0 replies
3h45m

The fact that you only respond like this anonymously should answer your original question/not-a-question.

From a more practical perspective: no, because you're replacing a well-understood metaphor with something that is unknown, juvenile and stupid, removing the value we get from shared language constructs.

mirekrusin
0 replies
5h39m

Why do you feel the need to go into details? "Bus factor" is a well-known concept; it doesn't focus on details of how the head disintegrates, etc.

coldtea
0 replies
2h42m

Well, before everybody got sensitive I'd be OK with that too. It's not even very different from the real-world teasing talk and examples that actual dev teams used before the cult of HR grew.

And of course it's a strawman exaggerated version of the common "hit by a bus" idiom.

serial_dev
3 replies
8h37m

You can do whatever you want, I'm just sharing that if my manager says "we won't know how to fix a bug in the xyz service if you were hit and killed by a bus", my response will be "I don't really give a f what happens at this stupid company after I die".

Take it like this, to understand that mine was meant as constructive criticism:

Stop reminding the people who work at your stupid company, doing all your stupid scrum bs ceremonies, that if they died and left their wife and children behind, you would be really worried about Tom needing two days to fix a bug in the iOS watch application, whereas it would have taken them only two hours. Again, you are free to do whatever you want, but if you keep reminding me that none of this bs I'm doing here really matters to me, don't be surprised when I quit as soon as possible.

skeeter2020
1 replies
3h49m

But the manager is obviously (unless they are very, very bad) using a metaphor (that you don't like). By responding literally, what you're really saying is "I don't really give a f what happens at this stupid company after I LEAVE MY CURRENT RESPONSIBILITIES", regardless of why. It sounds like you have extreme trust issues with your manager if they can't make a (pretty benign) verbal misstep without this sort of response, and your follow-up suggests you're in a really bad space. I can't believe it's not visible and leaking into other aspects of your work and interactions.

serial_dev
0 replies
2h42m

I'm okay now, thank you.

I did have a previous workplace, though, where they couldn't stop yapping about the bus factor, and I disliked that phrase because it kept reminding me that one day I'll die and am wasting one more hour of my living days in a pointless retrospective that will have no positive effect on anything.

lelanthran
0 replies
1h33m

> You can do whatever you want, I'm just sharing that if my manager says "we won't know how to fix a bug in the xyz service if you were hit and killed by a bus", my response will be "I don't really give a f what happens at this stupid company after I die".

So? The company isn't going to let all their employees starve to death out of compassion for the one who died.

They know you don't give a fuck about what happens after you're dead, but they're still alive, and they have to keep things running so that they can continue eating.

Telling people that you don't care what effect your death has on them is a pretty good way to indicate how selfish you are.

arcanemachiner
1 replies
9h19m

I died reading this comment.

bombela
0 replies
5h17m

But did you hand over your knowledge first?

rrr_oh_man
1 replies
9h53m

> it implies that if one of us dies, the key worry of management is who's left that knows how to deploy the foo-banana service

Story from this week:

Manager at a large-ish IT support company, visibly distraught, tells their own manager that one of her direct reports got cancer.

Response: "Oh, and I thought something bad happened, like the direct report quit."

manfre
0 replies
6h49m

Sorry you have to work with a toxic manager.

yakshaving_jgt
0 replies
10h52m

> This phrase is a reminder that nobody would care if you died

I don’t agree that the phrase implies that at all.

tome
0 replies
10h34m

> This phrase is a reminder that nobody would care if you died apart from the fact that only you know how to do x and y.

An alternative interpretation is "we have to keep doing x and y to earn our living, and it's doubly difficult because not only did you know how to do them but we're all having difficulty coping with your passing".

coldtea
0 replies
2h54m

> because it implies that if one of us dies, the key worry of management is who's left that knows how to deploy the foo-banana service.

That will be exactly the key worry of management.

We've had team members lost like that - and that was the worry.

They'll still feel sorry, say their condolences, and might even go to the funeral, but in the end, the work continues - almost immediately.

bluGill
0 replies
6h25m

If there is a backup plan, then management can care about your death. If there isn't one and you die, they can't care enough to even attend your funeral, as keeping things running will consume them.

Ma8ee
0 replies
11h39m

> the key worry of management is who's left that knows how to deploy the foo-banana service.

While it might be uncomfortable to be reminded of, it is in most cases true.

The company isn’t your family or your friends, and most managers adhere to the doctrine that the single most important responsibility of the company is to create value for the stockholders.

JaumeGreen
0 replies
10h29m

People would care at a personal level, but the show must go on.

I had to take a sudden leave the week before, and my coworkers, and boss cared at a personal level. And while most of my tasks were taken care of by them some weren't, and I had to rush Monday morning when I got back.

This talk is not about personal feelings, even though it could include "do a memorial for the lost person"; it's about all the needed work that must go on in any scenario.

You should understand that people should keep going on with their lives once you are not there, and that is good. Whether they think kindly of you after your passing depends on your relationship during life, not on any company preparation scenario.

ijidak
0 replies
2h44m

- Kidnapped to paradise

- Abducted by beautiful aliens

cryptonector
0 replies
12h58m

"Key person risk" captures all the possibilities.

crabmusket
0 replies
4h52m

"Taken by the Rapture"?

coldtea
0 replies
2h56m

> But personally, if I win the lottery, I’m still doing a handover.

How much time are you devoting to it? Enough for the project to ship, which could take months or a year, or merely enough to hastily give some other devs a breakdown of the thing, and "so long suckers" / hope for the best?

And, while you might (still do a handover), would others?

arendtio
0 replies
12h10m

> But personally, if I win the lottery, I’m still doing a handover.

I wonder why we talk about such rare events. People get sick, and burnout is very common, too. Some people look for a new job, and when they find it, they call in sick just to not have to handle all the stress of their old job.

There are plenty of more likely examples of not having a handover.

DanHulton
0 replies
3h35m

"Doing a handover" won't save a lot of companies. There's no level of "handover" that will save some of the shops I've been in, that just 100% rely on one team member to handle _everything._ Even if they took a _month_ to try to pass along along their accumulated knowledge, there's ingrained engineering practices and processes that all boil down to "Go ask Chris."

Honestly, I'd say that if you're able to do a handover in two weeks, and that covers everything it needs to cover? Well, then probably the handover isn't _actually_ necessary, and anything you cover in those two weeks could probably be figured out by a reasonably competent teammate. Knowledge is rarely the issue, being a person-shaped load-bearing component for your team _is._

(But also, good on you for taking a bit of time to make sure your old co-workers aren't left in the lurch. It probably also gives a bit of time to consider what you _really_ want to do with the winnings, so as not to blow it all in the first year like a lot of lottery winners do.)

foooorsyth
1 replies
12h2m

“Win the lottery” doesn’t really work in tech. So many mid-career engineers are deca-millionaires with 15-20 years of grants and refreshers. They’re working because they enjoy the work.

Even at “regular” companies that don’t have hockey-stick stock graphs, lots of the senior engineering staff are quite well off by their 40s. One of my division leads sold his last startup for 10s of millions of dollars. He still sends emails late on Friday nights.

IshKebab
0 replies
11h53m

Maybe in Silicon Valley. Not in the vast majority of the tech world.

skeeter2020
0 replies
3h55m

How about "takes a 3-week vacation"? I've seen lots of companies that can't survive that, let alone a permanent departure. Or "increase bus count", which refers to removing a single point of failure and focuses on the motivation vs the cause of why you're in trouble. If you were doing root cause analysis, you shouldn't stop at "because Larry got hit by a bus / won the lottery", because that's not the issue.

ollien
0 replies
15h17m

I've heard that positive spin responded to with "this company is my biggest investment, I'm not going anywhere" :)

gofreddygo
0 replies
6h32m

> Pro tip: use "win the lottery" instead to avoid the negative connotation of injury or death.

Win the lottery is still quite a euphemism for a more common outcome - getting laid off.

I use "the next person" more often to drive the point.

And there's the worse situation of "burnout" where you still have headcount but they've mentally checked out.

Scarblac
0 replies
10h11m

It's happened twice now during my career that important colleagues were literally hit by buses.

And both were back to work in about a week.

I need a different standard example disaster.

Sakos
31 replies
21h55m

So is this a good book? Are there any other software architecture books that are recommended reading?

yashap
11 replies
21h37m

Haven’t read this one, but have read a few - “Domain Driven Design” (by Eric Evans) was the most influential for me.

jojohohanon
9 replies
21h15m

I am very interested in learning more / honing my design skills,

But

Like everyone else my time is extremely limited. Could you say a few words about DDD stood out for you and what other books you might compare it to?

sbayeta
5 replies
20h41m

(Not gp) I read this book more than a decade ago, when I was very inexperienced. The thing I remember the most, and I think the most valuable to me, is the idea of defining a shared domain language with the business domain experts, with a clearly defined meaning for each concept identified. For instance, what a "frozen" account exactly means, what's the difference with a "blocked" account. These are arbitrary, but must be shared among all the participants. This enables very precise and clear communication.
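(A toy sketch of how that shared language can be pinned down in code - the exact meanings here are invented for illustration:)

  from enum import Enum

  class AccountStatus(Enum):
      ACTIVE = "active"
      FROZEN = "frozen"    # per the shared glossary: deposits OK, no withdrawals
      BLOCKED = "blocked"  # per the shared glossary: no transactions at all

  def can_deposit(status: AccountStatus) -> bool:
      return status in (AccountStatus.ACTIVE, AccountStatus.FROZEN)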

zeroCalories
3 replies
20h5m

At my job we have a shared glossary and data model that we use with business people, but do you really need a whole book on that?

jameshart
1 replies
19h15m

DDD is about a lot more than just defining what it calls ‘ubiquitous language’. It helps you figure out how to constrain which concerns of the language used in one domain need to affect how other domains think about those things - through a model it calls ‘bounded contexts’. Like, in your fraud prevention context, ‘frozen accounts’ might have all sorts of nuances - there might be a legal freeze or a collections freeze on the account, with different consequences; outside the domain, though, the common concept of ‘frozen’ is all that’s needed. DDD gives you some tools for thinking about how to break your overall business down into bounded contexts that usefully encapsulate complexity, and define the relationships between those domains so you can manage the way abstractions leak between them.
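(Sketching that boundary in Python, with invented names: inside the fraud-prevention context a freeze carries nuance; at the context boundary it collapses to the common concept:)

  from dataclasses import dataclass

  # Inside the fraud-prevention context: freezes have kinds and consequences.
  @dataclass
  class Freeze:
      kind: str            # e.g. "legal" or "collections" - invented examples
      blocks_deposits: bool

  # At the boundary, other contexts see only the common notion of "frozen".
  def to_public_view(freezes: list[Freeze]) -> dict:
      return {"frozen": len(freezes) > 0}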

No silver bullet, of course, but, like most architectural frameworks, some useful names for concepts that give you the metavocabulary for talking about how to talk about your software systems.

This brief chapter from the O’Reilly Learning DDD book gives a good flavor of some of the value of the concepts it introduces: https://www.oreilly.com/library/view/learning-domain-driven-...

zeroCalories
0 replies
19h0m

Thanks for the explanation, I'll take a deeper look.

aspenmayer
0 replies
19h14m

In highly regulated industries like banking or other highly-secure environments, it’s a gradient from an internal wiki or FAQs etc., to what you have as an example of something more expansive and explicit, to an entire book, to an entire department or business unit for more important concepts that may vary between jurisdictions or be less explicitly defined, but are no less important or impactful to the running of the business or group.

darkerside
0 replies
20h15m

I think that's the core idea. Layered on top is the idea that the domain language should be expressed in an isolated core layer of your code. Here lies all your business logic. On top of that, you build application layers to adapt it as necessary to interact with the outside world, through things like web pages, emails, etc., as well as infrastructure layers to talk to your database, etc.
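(A minimal sketch of that layering, with hypothetical names - the core knows nothing about the web or the database; the outer layers adapt to it:)

  # Core domain layer: pure business logic, no I/O.
  def apply_discount(order_total, customer_tier):
      return order_total * (0.9 if customer_tier == "gold" else 1.0)

  # Infrastructure layer: talks to the database.
  def load_order_total(order_id, db):
      return db.get("order_totals", order_id)

  # Application layer: adapts the core to the outside world (e.g. a web handler).
  def discount_endpoint(order_id, tier, db):
      return {"total": apply_discount(load_order_total(order_id, db), tier)}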

doctor_eval
0 replies
19h28m

I think DDD is a great book but like many of these kinds of things, I also think it's one of those books that has a couple of chapters of good ideas and then a dozen chapters of filler.

That said, ideas like ubiquitous language and bounded contexts are extraordinarily powerful, and definitely sharpened my own observations, so despite the filler, all in all I'd say it's one of the keystone books on software design and definitely worth reading.

Another thing that DDD talks about, and is relevant to this, is that design and implementation are two sides of the same coin. In "Just Enough", Fairbanks says, "Every architect I have met wishes that all developers understood architecture". Well, I am not kidding when I say that I wish every architect I've ever met understood software. A lack of understanding of the technical constraints of computing is just as likely to lead to failure as misunderstanding the business constraints. They are both critical to success, and these people should be working and learning from each other, rather than operating in a hierarchy.

To that point, one of the most influential things I've ever read was Code as Design: https://www.developerdotstar.com/mag/articles/reeves_design_...

Since it's a series of essays, it has all the detail and none of the filler.

crdrost
0 replies
19h35m

Not the person you're asking but thought I would chip in.

Domain Driven Design has a lot of good ideas, but then I consistently see them misrepresented by others who seem to half-understand it?

Probably the clearest example is the idea of a “bounded context.” You will find microservice folks for example who say you should decompose such that each microservice is one bounded context and vice versa, but then when they actually decompose a system there's like one microservice per entity kind (in SQL, this is a relation or table, so in a microservice pet-adoption app there would be a cat service, a dog service, a shelter service, an approval service, a user service...).

The thing is, Eric Evans is pretty clear about what he means by bounded context, and in my words it is taking Tim Peters’s dictum “Namespaces are one honking great idea—let’s do more of those!” and applying it to business vocabulary. So the idea is that Assembly Designer Jerry on the 3rd floor means X when he says “a project,” but our pipefitter Alice on the shop floor means something different when she says “a project,” and whenever she hears Jerry talking about a project she is mentally translating that to the word “contract” which has an analogous meaning in her vocabulary. And DDD is saying we should carefully create a namespace boundary so that we can talk about “pipefitting projects” and “pipefitting contracts” and “assembly-design projects” and then have maybe a one-to-one relationship between pipefitting contracts and assembly-design projects. Eric Evans wants this so that when a pipefitter comes to us and tells us that there's a problem with “the project,” we immediately look at the pipefitter project and not the assembly-design project. He really hates that wasted effort that comes from the program not being implemented in vocabulary that closely matches the language of the business.

So if the microservices folks actually had internalized Eric's point, then they would carve their microservices not around kinds of entities, but rather archetypes of users. So for the pet adoption center you would actually start with a shelter admin service and a prospective adopter service and a background checker service, assuming those are the three categories of people who need to interact with the pet adoption process. Or like for a college bursar's office you would decompose into a teacher service, student service, admin service, accountant service, financial aid worker service, person who reminds you that you haven't paid your bills yet service.

So I thought it was a really good read, that's actually a really interesting perspective, right? But I don't think the ideas inside are communicated with enough clarity that I can bond with others who have read this book. It is kind of strange in that regard.

globular-toast
0 replies
21h7m

One of my favourite books. Some people seem to prefer "Implementing Domain-Driven Design" by Vaughn Vernon. I have both on my shelf but I've never been able to read much of the latter.

Clean Architecture by Robert C. Martin has a lot of the same stuff distilled into more general rules that would apply to all software, not just "business" software. It's a good book but overall I prefer DDD.

hemantv
6 replies
21h52m

Game Programming Patterns was one which had a big impact on me.

Other was Effective Engineer

vendiddy
4 replies
21h11m

Is Game Programming Patterns relevant to those who are not building games?

mon_
2 replies
20h22m

Relevant excerpt from the Introduction chapter:

---

Conversely, I think this book is applicable to non-game software too. I could just as well have called this book More Design Patterns, but I think games make for more engaging examples. Do you really want to read yet another book about employee records and bank accounts?

That being said, while the patterns introduced here are useful in other software, I think they’re particularly well-suited to engineering challenges commonly encountered in games:

- Time and sequencing are often a core part of a game’s architecture. Things must happen in the right order and at the right time.

- Development cycles are highly compressed, and a number of programmers need to be able to rapidly build and iterate on a rich set of different behavior without stepping on each other’s toes or leaving footprints all over the codebase.

- After all of this behavior is defined, it starts interacting. Monsters bite the hero, potions are mixed together, and bombs blast enemies and friends alike. Those interactions must happen without the codebase turning into an intertwined hairball.

- And, finally, performance is critical in games. Game developers are in a constant race to see who can squeeze the most out of their platform. Tricks for shaving off cycles can mean the difference between an A-rated game and millions of sales or dropped frames and angry reviewers.

richrichie
1 replies
19h39m

The performance part alone makes game development worth reading about for non-game developers. Retail off-the-shelf machines these days are so powerful that they encourage sloppy design and development.

water-your-self
0 replies
16h46m

Performance is very much still relevant in the modern day, even given the area under the curve of Moore's law

zamalek
0 replies
18h28m

Games are just normal software dev dialed up to ten, though there are many problems that game developers enjoy not having to care about (and vice versa). Attempting to make a basic 3D engine is probably a good exercise for all developers - even if it goes uncompleted.

smnplk
0 replies
19h2m

Is GPP relevant if you don't want to do OOP ?

mbb70
5 replies
21h3m

Designing Data-Intensive Applications by Martin Kleppmann. It's my "this is the book I wish I read when I started programming"

adamors
1 replies
20h58m

When you started? I mean it’s a good book but it would be wasted on beginners.

mbb70
0 replies
19h5m

Maybe not when I started, but after years of hard lessons working with HDFS, Cassandra and Spark (not to mention S3, Dynamo and SQS), seeing all those hard lessons pinned down like butterflies in a display case made me jealous of anyone who found this book early.

pragmatic
0 replies
17h18m

We’re about due for a new edition with some updates. I read it again just recently and in certain sections I would love some updates. Many things have changed in 7 years.

notduncansmith
0 replies
11h41m

I read this book in the first few years of programming professionally, and in my naïveté I was so eager to apply the patterns therein I missed out on many opportunities to write simple, straightforward code. In some ways it really hindered my career.

I don’t blame this on the book, of course; ultimately the intuition it helped me build has been very helpful in my work. With that said, as a particular type of feisty and eager young programmer at the time, I can now say I would have benefitted at least as much from a book titled “Designing Data-Unintensive Applications” :)

JackMorgan
0 replies
14h19m

An absolutely excellent book! I learned so much going through it slowly with a reading group I started at my job.

mehagar
1 replies
20h48m

Clean Architecture and Fundamentals of Software Architecture: An Engineering Approach.

GiorgioG
0 replies
20h11m

Ugh enough with clean architecture. So much boilerplate. Uncle Bob please retire.

JackMorgan
0 replies
14h4m

I learned a lot about good software design from Structure and Interpretation of Computer Programs. It's free and has 350+ exercises. I just committed to doing one a day for a year.

I learned so much from that book it's like I aged ten years in that one year. Revolutionary for me.

I also loved

- Designing Data-Intensive Applications

- Design Of Everyday Things

I've got a list of other influential books with my thoughts here:

https://deliberate-software.com/page/books/

My controversial takes are that the whole series of Domain Driven Design books are pretty poor. I've seen several teams fall into a mud pit of endless meetings arguing about entities vs repositories. Same thing with Righting Software. The books are all filled with vague statements so everyone just spends all this time debating what they mean. It turns teams from thinking critically into religious bickering. Same for all the design patterns books.

ChrisMarshallNY
0 replies
16h57m

Writing Solid Code, from 30 years ago, by Steve Maguire, was a watershed, for me.

Many of the techniques, therein, are now standard practice.

It also espoused a basic philosophy of Quality, which doesn’t seem to have aged as well.

breadwinner
21 replies
16h59m

Architecture for architecture's sake is the worst thing you can do because it unnecessarily increases complexity.

The ultimate goal of good architecture is cost reduction. If your architecture is causing you to spend more time developing and maintaining code then the architecture has failed.

nine_k
9 replies
16h47m

Some architectures are very cheap to implement initially but are more expensive to maintain and evolve. Others are the other way around: they require higher upfront expenses but allow you to operate and evolve the product more easily.

It's always a balancing act. The corollary is that there is no one right architecture; the choice depends on circumstances and should sometimes be re-evaluated.

Another corollary is that flexibility is extra useful, because it allows you to evolve and tweak the architecture to some extent, maintaining its efficiency in changing circumstances.

flowerlad
4 replies
15h52m

Over engineering is the result when you focus too much on “future” needs.

sverhagen
2 replies
14h36m

And under-engineering is the result when you don't sit down at the beginning of the project and plan for the known scope and requirements of that project. They are both bad ways to go about it. So, like the title says: "just enough", so not more, but also not less.

It also doesn't always have to be a dedicated architect in a proverbial ivory tower. It can be an informal meeting of developers around a whiteboard. Don't get scared by the word "architecture".

mainde
1 replies
6h40m

While I agree that under-engineering can be a problem, it's generally very easy to fix by doing what's now clearly identified as missing/needed. Fixing over-engineering is always a nightmare and rarely a success.

IMHO the experience, the pragmatism and the soft skills needed for a successful/productive informal architectural meeting are too rare for this solution to work consistently.

Personally I've abandoned all hope and interest in software architecture, while on paper it makes a lot of sense, in practice (at least in what I do) it just enables way too many people to throw imaginary problems in the mix, distracting everyone from what matters with something that may eventually be a concern once/if a system hits a top-1% level of criticality/scale.

gfairbanks
0 replies
4h41m

> throw imaginary problems in the mix

Yes, this happens too easily. It's the crux of Ward Cunningham's original observation on tech debt discussed recently [1]. He basically said: all of you thinking you can use waterfall to figure it all out up front are deluded. By getting started right away, you make mistakes and you avoid working on non-problems. I can fix the mistakes with refactoring but you can't ever get your time back.

Most teams live in his world now. Few do too much up-front design, most suffer from piled up tech debt.

I hope you give architecture another chance. Focus on the abstractions themselves [2] and divorce that from the process and team roles [3].

[1] https://news.ycombinator.com/item?id=40616966#40624446

[2] Software Architecture is a Set of Abstractions, George Fairbanks, IEEE Software July 2023. https://www.computer.org/csdl/magazine/so/2023/04/10176187/1...

[3] JESA section 1.5, https://www.georgefairbanks.com/assets/jesa/Just_Enough_Soft... "Job titles, development processes, and engineering artifacts are separable, so it is important to avoid conflating the job title “architect,” the process of architecting a system, and the engineering artifact that is the software architecture."

gfairbanks
0 replies
4h31m

In my own gut, I have a sense of the right amount of time to spend on design. Assume (falsely) for a moment that I'm right: how can I transfer that gut sense to you or anyone? Chapter 3 of the book is my attempt to share that gut feel [1].

Even if you clone us, our gut feel doesn't transfer to our clones. So, we can recognize under- and over-engineering in our guts, but how can we help someone else? Is there a better way than the risk-driven model?

  1. Identify and prioritize risks
  2. Select and apply a set of techniques
  3. Evaluate risk reduction
[1] JESA Chapter 3: Risk-Driven Model. https://www.georgefairbanks.com/assets/jesa/Just_Enough_Soft...

serial_dev
2 replies
12h42m

It's not always a balancing act; that framing implies that every option has a good side and you just need to know what to focus on.

That's not true: there are architectures that are a waste of time today and will be a waste of time in 5 years, too.

SgtBastard
1 replies
10h41m

If that architecture has no value in any circumstances at the beginning of a project and no value once the code base has matured, it’s an anti-pattern.

metaltyphoon
0 replies
1h40m

Most devs / architects who initially designed the system tend not to stick around to find out whether what they did actually worked.

j45
0 replies
57m

Premature optimization and over-engineering are a greater issue than the opposite.

Generally, clever architecture at the right stages can beat clever code and leave flexibility to pivot.

javaunsafe2019
5 replies
16h47m

The ultimate goal of SA is to fulfil the quality goals. Cost reduction can be one.

banish-m4
1 replies
12h57m

The category is nonfunctional requirements.

gfairbanks
0 replies
4h11m

It's often taught as "nonfunctional requirements" or NFRs. The architecture community says "quality attributes". Why?

1) Not all qualities are requirements. Requirements tend to be pass/fail, either you meet them or you don't. Latency is a quality and typically lower is better, not pass/fail (though sometimes it is).

2) "Nonfunctional" in other contexts means broken. If you saw a machine with a sign on it saying "nonfunctional" what would you conclude?

At one point I tried to find the origin of the term "quality attributes". It's way older than the software community. I found it being used in the 1960s by the US National Academy of Sciences. If anyone knows the origin I'm interested in learning more.

CraigJPerry
1 replies
14h17m

The goal of software architecture is

> cost reduction

> fulfil the quality goals

I’d word it as “keeping the cost of changing software low over the long term”.

I don’t think you can reduce that to a quality goal of “modifiability” because it’s not negotiable like other quality goals.

I don’t think you can say it’s just cost reduction, but that is closer.

It’s an existential thing. Architecture is about retaining the ability to change your software (avoiding the big ball of mud). If you lose the ability to change it within time and resource constraints, then the project, product or startup is dead.

gfairbanks
0 replies
4h16m

For typical web / IT systems I largely agree with focusing on modifiability as a heuristic because on those kinds of systems it's typically the biggest risk.

But, have you seen this kind of mistake / failure? A system is built so flexibly that it can handle all kinds of future needs, but it's slow. Maybe it's a shopping cart that takes 5 seconds to update. So start with modifiability as the primary heuristic but keep an eye out for other failure risks.

Meta-commentary:

This is an example of why it's so hard to discuss architecture. My book talks about "failure risks", which is pretty abstract or generic. There's no easy heuristic for avoiding "failure risks" like there is for web / IT systems.

Software architecture is a discipline that's bigger than just web / IT systems. Some systems must respond within X milliseconds, otherwise the result is useless, so the architecture should make that possible -- and preferably make it easy.

flowerlad
0 replies
15h53m

Good architecture reduces cost of achieving quality.

jefffoster
1 replies
4h54m

Not just cost reduction but allowing more investment. A good architecture can enable more people to work on your product.

gfairbanks
0 replies
4h25m

Agreed. See: Scale Your Team Horizontally [1].

"I’m not ready to argue against Brooks’ Law that adding people to a late project makes it later. But today, when developers are working on a clean codebase, I see lots of work happening in parallel with tool support to facilitate coordination. When things are going smoothly, it’s because the architecture is largely set, the design patterns provide guidance for most issues that arise, and the code itself (with README files alongside) allow developers to answer their own questions."

[1] Scale Your Team Horizontally, George Fairbanks, IEEE Software July 2019. https://www.georgefairbanks.com/ieee-software-v36-n4-july-20...

banish-m4
1 replies
12h57m

Big grand architecture invariably leads to an elitism culture with overpaid technical architects who don't do much but insist on terrible patterns for software engineers to implement under unreasonable constraints, including deadlines.

gfairbanks
0 replies
4h1m

One of my main goals with the book was to "democratize" architecture, to make it accessible and relevant to every developer. As the cover blurb says:

"It democratizes architecture. You may have software architects at your organization — indeed, you may be one of them. Every architect I have met wishes that all developers understood architecture. They complain that developers do not understand why constraints exist and how seemingly small changes can affect a system’s properties. This book seeks to make architecture relevant to all software developers, not just architects."

In hindsight, my book hasn't been very good at that but perhaps it was a stepping stone on the path. Michael Keeling's book Design It: From Programmer to Software Architect does a better job of saying how developers can engage with architecture ideas. I'm a personal friend of his and a huge fan of his work. His experience report [2] on how to democratize architecture is what I aspire to do.

[1] Michael Keeling, Design It: From Programmer to Software Architect, https://pragprog.com/titles/mkdsa/design-it/

[2] Keeling and Runde, Agile2018 Conference. Share the Load: Distribute Design Authority with Architecture Decision Records. https://web.archive.org/web/20210513150449/https://www.agile...

gfairbanks
0 replies
4h55m

How much architecture is enough? Chapter 3 Risk-Driven Model [1] guides you to do as little architecture as possible. It says:

"The risk-driven model guides developers to apply a minimal set of architecture techniques to reduce their most pressing risks. It suggests a relentless questioning process: “What are my risks? What are the best techniques to reduce them? Is the risk mitigated and can I start (or resume) coding?” The risk-driven model can be summarized in three steps:

  1. Identify and prioritize risks
  2. Select and apply a set of techniques
  3. Evaluate risk reduction
You do not want to waste time on low-impact techniques, nor do you want to ignore project-threatening risks. You want to build successful systems by taking a path that spends your time most effectively. That means addressing risks by applying architecture and design techniques but only when they are motivated by risks."

An example of "architecture" is using the Client-Server style, where servers never take initiative and simply respond to client requests. That might be a good or bad fit to the problem.

[1] https://www.georgefairbanks.com/assets/jesa/Just_Enough_Soft...

ysofunny
7 replies
17h35m

Software architecture is like regular architecture, except that its civil-engineering counterpart does not exist, because there hasn't been an Isaac Newton of software. I'd say the closest so far is Claude Shannon.

JackMorgan
2 replies
14h0m

No software engineering practice, architecture, language, or tooling is known to be more effective because we don't even have units of measurement. We are still in the "hope it doesn't fall down" stage of software engineering.

This has profound effects on self-reported productivity.

For example, biking feels faster than driving at 30 mph with the windows up, down a small suburban road with lots of stop signs. But typically the driver will still get 20 blocks away much, much faster.

However, if we had no units of measurement, everyone would be arguing that the bike was faster.

This is where we are with software engineering.

viraptor
0 replies
13h46m

Also: We don't repeat the same thing enough times in the open to be able to compare. The things we do repeat stay private.

The only companies that really have enough data are the consulting giants. And they have enough perverse incentives that I would be extremely sceptical of their data.

danielovichdk
0 replies
5h12m

Great analogy with the bike and the car. I will use that in the future.

dgb23
1 replies
8h43m

We have, or rather could have, data and metrics. We just tend to ignore them outside of specific domains.

For example, from glancing at this summary and the table of contents, it seems like there’s little to no mention of performance metrics. What is architecture good for if it doesn’t account for what the computer actually does?

Even in terms of development productivity or UI: why don’t we have mathematical models to describe the mental stack one needs to develop, change, extend, and, more importantly, use software?

Why are compute resources (human or machine) rarely a consideration when they have a real, measurable impact on interacting with software as developers or users?

gfairbanks
0 replies
3h1m

it seems like there’s little to no mention of performance metrics.

The book uses the jargon from the architecture community. Chapter 12, section 11, on Quality Attribute Scenarios is what you're looking for. But [1] seems to be a summary of Michael Keeling's treatment of qualities and scenarios, which I like better.

My thinking on this has been greatly influenced by Titus Winters, who I've been teaching with in Google for the past couple of years. He's tied together the ideas of quality attribute scenarios, compile-time tests, monitored metrics, and alerts [2] in a way that is, in hindsight, completely obvious but that I've not seen elsewhere. Maybe we can get him to write that up as an essay.
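
As a rough illustration of that tie-in (a hypothetical sketch; the scenario, the fake load driver, and the numbers are all made up): a quality attribute scenario such as "under normal load, 95% of search requests complete within 200 ms" can be written down as an executable check instead of a sentence in a document, and the same assertion can later back a production alert.

  # Hypothetical sketch: a quality attribute scenario expressed as a test.
  import random

  def p95(samples: list[float]) -> float:
      ordered = sorted(samples)
      return ordered[int(0.95 * (len(ordered) - 1))]

  def measure_search_latencies(n: int = 1000) -> list[float]:
      # Stand-in for a real load driver hitting /search; faked here so
      # the sketch runs on its own.
      return [abs(random.gauss(120, 30)) for _ in range(n)]

  def test_search_latency_scenario():
      # Scenario: under normal load, p95 search latency stays below 200 ms.
      assert p95(measure_search_latencies()) < 200.0

  test_search_latency_scenario()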

[1] https://dev.to/frosnerd/quality-attributes-in-software-1ha9

[2] Titus Winters, Design is Testability, May 8 2024. https://on.acm.org/t/design-is-testability/3038 (note: video doesn't seem to be posted yet)

nine_k
0 replies
16h36m

While I agree with the general idea of your comparison, I must note that traditional architecture also involves a lot of deliberation and choices not dictated by formulas. Say, the Westminster Palace certainly involves some bits of civil engineering proper, but its defining features (the ornate texture, the iconic clock tower, the internal layout) are dictated mostly by functional and aesthetic choices.

Same applies to much of software.

jillesvangurp
0 replies
8h18m

This is actually the whole mistaken premise that underlies the notion of software architecture and design. Building software is not at all like building a bridge or a skyscraper. It's more similar to designing those.

With big architecture projects you first design them and then build them. This is a lot of work. You have to think about everything, run simulations, engage with stakeholders, figure out requirements and other constraints, think about material cost, weight, etc. Months/years can be lost with big construction projects just coming up with the designs. The result is a super detailed blueprint that covers pretty much all aspects of building the thing.

It's a lot like building software, actually. These design projects have a high degree of uncertainty, risk, etc. But it's better to find out that it's all wrong before you start building and consuming massive amounts of people, concrete, steel, and other expensive resources. But when have you ever heard an architect talk about making a design for their design to mitigate this? It's not a thing. There might have been a sketch or a napkin drawing at some point, at best. SpaceX has introduced some agile elements into their engineering, which is something they learned from software development.

With software, your finished blueprint is executable. The process of coming up with one is manual; the process of building software from the blueprint is typically automated (using compilers and other tools) and so cheap that software developers do it all the time (which wasn't always the case). The process of creating that executable blueprint has a lot of risks, of course. And there might be some napkin/whiteboard designs here and there. But the notion that you first do a full design and then a full implementation (a.k.a. waterfall) was never a thing that worked in software either. With few exceptions, there generally are no blueprints for your blueprint.

Read the original paper on Waterfall by Royce. It doesn't actually mention waterfalls at all, and it vaguely suggests that iterating might be a good idea (or at least doing things more than once). He fully got that the first design was probably going to be wrong. Agile just optimized away the low-value step of creating designs for your blueprints, a step whose low value becomes apparent when you iterate a lot.

gfairbanks
0 replies
2h38m

Some folks read Peter Naur's Programming as Theory Building [1] and become zealots. I'm that kind of person. His ideas are woven through my recent talks [2]. Via years of essays, I've been building toward a vision of how to build software that I call Continuous Design, partly stitched together in this overview [3].

[1] Gwern on Peter Naur's Programming as Theory Building, which includes Naur's essay and related comments. https://gwern.net/doc/philosophy/epistemology/index#naur-198...

[2] https://www.youtube.com/playlist?list=PLRqKmfi2Jh3uoMnZdaWmC...

[3] Continuous Design, https://www.georgefairbanks.com/ieee-continuous-design/

xhevahir
1 replies
17h14m

"Risk-dependent" would be a much better name for this methodology. (Why are programmers so fond of this "[X]-driven" phrase?)

canadaduane
0 replies
14h15m

Personally, I've always thought of "X-driven" as a mechanically derived metaphor: this shaft drives that gear, which drives that wheel, etc. It's shorthand for "what is the most powerful mechanism of this complex thought machinery".

rowls66
1 replies
19h9m

I found 'A Philosophy of Software Design' by John Ousterhout to be useful. It contains a lot of solid, easy-to-understand advice with many examples.

gchaincl
0 replies
16h47m

Great book, I've learnt a lot from it.

matt_lo
1 replies
20h43m

Great book. I think it’s most useful for folks who do solution design day-to-day. Better when you have experiences to relate back to.

gfairbanks
0 replies
2h58m

Better when you have experiences to relate back to.

100% this. I've been teaching software design since the 1990s, and it's so much easier when the audience has enough experience that I can wave my hands and say "you know when things go like this ... ?" and they do, and then we can get on with how to think about it and do better.

Without that, it's tedious. Folks with less experience have the brainpower to understand but not the context. I try to create the context with a synthetic example, but (again, waving hands...) you know how much richness your current system has compared to any example I can put on a page or a slide.

b1ld5ch1rm5por7
1 replies
8h8m

At my prior company, they circulated the book "Software Architecture for Developers" by Simon Brown: https://leanpub.com/b/software-architecture.

It's still on my reading list, though. I've moved on from that company. But it came highly recommended, because they also documented their architecture with the C4 model.

Has anyone here read it?

gfairbanks
0 replies
3h18m

Simon Brown is another person who has done a far better job than me of "democratizing" software architecture for developers. His talks [1] and workshops on architecture are exceptionally effective and his C4 architecture modeling language [2] is getting real traction.

I have YouTube videos too [3], but they aren't as effective.

[1] https://www.youtube.com/results?search_query=simon+brown+arc...

[2] https://c4model.com/

[3] https://www.youtube.com/playlist?list=PLRqKmfi2Jh3uoMnZdaWmC...

revskill
0 replies
11h29m

I'm tired of reading "arbitrary terms" that try to standardize something that seems so subjective.

Give me the mathematical model and that's it. No more vague, human-created terms that try to translate your own ideas; that's just a hack.

bjt
0 replies
16h47m

We did a book club on this at work a few years back. I found it extremely repetitive.

__rito__
0 replies
13h52m

Is this a good resource for someone starting a non-trivial OSS project? Or something a solopreneur will derive value from? Can you suggest some books or other resources that would be of value to solo developers?

RcouF1uZ4gsC
0 replies
5h24m

“Bus factor” is, in my mind, one of the ways engineers self-sabotage.

How much are they paying the CEO?

Don’t they consider their replacement costs to be in the hundreds of millions?

MBA management types strive to convince everyone they are irreplaceable.

Engineers proudly talk about how they are easily replaceable.

Guess who gets shit on?

It is actually good that losing engineering talent is extremely painful for a company. This helps prevent the slow loss of engineering culture like what happened at Boeing.