Published in 2010? Curious how much of it has survived since then?
I like “Design It” because of the workshop activities, which are useful for technical folks who need to interact with stakeholders/clients (I’m in a consulting role, so this is more relevant to me). It also doesn’t lean hard on specific technical architectural styles, which change so frequently…
I can't think of many things that have changed in architecture since 2010. I'm not talking about fads but about actual principles.
Things are changing now, pretty fast. The architecture that is optimal for humans is not the same architecture that is optimal for AI. AI wants shallow monoliths built around function composition using a library of helper modules and simple services with dependency injection.
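To make that concrete, here is a rough sketch of the shape I mean, in Python (all names are hypothetical, purely for illustration):

    # Hypothetical sketch of a "shallow monolith": a flat library of small,
    # pure helpers plus simple services that get their dependencies injected.

    # --- helper library: tiny, self-contained functions ---
    def normalize_email(raw: str) -> str:
        return raw.strip().lower()

    def is_valid_email(email: str) -> bool:
        return "@" in email and "." in email.split("@")[-1]

    # --- a simple service; its collaborators are injected, not hard-wired ---
    class UserService:
        def __init__(self, repo, notifier):
            self.repo = repo          # anything with save(email)
            self.notifier = notifier  # anything with send(email, message)

        def register(self, raw_email: str) -> bool:
            email = normalize_email(raw_email)
            if not is_valid_email(email):
                return False
            self.repo.save(email)
            self.notifier.send(email, "welcome")
            return True

    # --- shallow top level: nothing but composition ---
    class InMemoryRepo:
        def __init__(self):
            self.rows = []
        def save(self, email):
            self.rows.append(email)

    class ConsoleNotifier:
        def send(self, email, message):
            print(f"{message} -> {email}")

    if __name__ == "__main__":
        service = UserService(repo=InMemoryRepo(), notifier=ConsoleNotifier())
        print(service.register("  Alice@Example.COM  "))

Every unit is small, its dependencies are explicit, and the whole thing fits in one screenful, which is exactly the kind of code a context-window-limited model handles best.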
Most problems are not well-addressed by shallow monoliths made of glue code. It's irrelevant what "AI wants", just as it's irrelevant what blockchain "wants".
If you think development velocity doesn't matter, you should talk to the people who employ you.
If you think AI helps speed up development…
Please tell me about how you once asked ChatGPT to write something for you, saw a mistake in its output, and immediately made your mind up.
I’ve been writing code professionally for a decade. I’ve led the development of production-grade systems. I understand architecture. I’m no idiot. I use Copilot. It’s regularly helpful and saves time. Do you have a counter-argument that doesn’t involve some sort of thinly veiled “but you’re an idiot and I’m just a better developer than you”?
I don’t by any means think that a current generation LLM can do everything that a software developer can. Far from it. But that’s not what we are talking about.
We'll need a well-researched study on how much LLMs actually help versus not. I know they can be useful in some situations, but it sometimes takes a few days away from them to realise the negative impacts. Like the "copilot pause" coined by ThePrimeagen: you know the completion is coming, so you pause while writing the trivial thing you knew how to do anyway and wait for the completion (which may or may not be correct), wasting both time and an opportunity to practice on your own. Self-reported improvement will be biased by impressions and factors other than the actual outcome.
It's not that I don't believe your experience specifically. I don't believe either side in this case knows the real industry-wide average improvement until someone really measures it.
Unfortunately, we still don't have great metrics for developer productivity, other than the hilariously bad lines-of-code metric. Jira tickets, sprints, points, t-shirt sizes: all of that is an attempt to bring something measurable to the table, but everyone knows it's really fuzzy.
What I do know though, is that ChatGPT can finish a leetcode problem before I've even fully parsed the question.
There are definitely ratholes to get stuck in and lose time to when trying to get the LLM to give the right answer, but LLM-unassisted programming has the same problem. When using an LLM to help, there are a bunch of contexts I don't have to load in because the LLM is handling them, giving me more headspace to think about the bigger problems at hand.
No matter what a study says, as soon as it comes out, it's going to get picked apart because people aren't going to believe the results, no matter what the results say.
This shit's not properly measurable like in a hard science so you're going to have to settle for subjective opinions. If you want to make it a competition, how would you rank John Carmack, Linus Torvalds, Grace Hopper, and Fabrice Bellard? How do you even try and make that comparison? How do you measure and compare something you don't have a ruler for?
This is an interesting case for two reasons. One is that leetcode consists of distilled elementary problems known in CS: given all the CS papers or even blogs at your disposal, you should be able to solve them all by pattern-matching to known solutions. Real work is anything but that: the elementary problems have solutions in libraries, but everything in between is complicated and messy and requires handling unexpected or underdefined cases. The second reason is that leetcode problems are fully specified in a concise description, with an example and no outside parameters. Just spending the time to define your problem to that level for the LLM likely gets you more than halfway to the solution. And that kind of detailed spec really takes time to create.
"What I do know though, is that ChatGPT can finish a leetcode problem before I've even fully parsed the question."
You have to watch out for that, that's an AI evaluation trap. Leetcode problems are in the training set.
I'm reminded of people excitedly discussing how GPT-2 "solved" the 10 pounds of feathers versus 10 pounds of lead problem... of course it did; that's literally in the training set. GPT-2 could be easily fooled by changing any aspect of the problem to something it did not expect. Later models less so, though last I tried a few months ago, while they got it right more often than wrong, they could still be pretty easily tripped up.
loup-vaillant could probably optimize their talking point with help from this blog post: https://rachelbythebay.com/w/2018/04/28/meta/
Requiring that the counter-argument reach a higher bar than both your argument and the original one is... definitely a look!
I don't have to think, people have done research.
Only time will tell. Right now this sounds like everything that was once claimed, each technological cycle, only to be forgotten after some time. Only after a while do we come to our senses: some things simply stick while others ‘evolve’ in other directions (for lack of a better word).
Maybe this time it’s different, maybe it’s not. Time will tell.
While I don't disagree with you (and I tend to be more of an AI skeptic than enthusiast, especially when it comes to its use for programming), this does weaken the earlier assertion that AI was brought up in response to: "things that have changed in architecture since 2010" becomes a lot narrower if you rule out, by definition, anything that has only come about in the past couple of years because it hasn't been around long enough to prove its longevity.
It is sad, and confusing, to read comments like this on HN.
I mean, you're not even wrong.
AI does help speed up development! It lets you completely skip the "begin to understand the requirements" and "work out what's sensible to build" steps, and you can get through the "type out some code" parts even faster than copy-pasting from Stack Overflow (at only 10× the resource expenditure, if we ignore training costs!).
It does make the last step ("have a piece of software that's fit-for-purpose") a bit harder; but that's a price you should be willing to pay for velocity.
Velocity is good for impact, typically.
Poor code monkeys. I have spent over 20 years in an industry where software bugs can severely harm people, and the fastest code never survived. It always solved only the cheap and easy 70% of the job, the remaining errors almost killed the project, and everything had to be reworked properly. Slow is smooth and smooth is fast. "Fast" code costs you four times over: write it, discuss why it is broken, remove it, rewrite it.
This response is entirely tribalist and ignores the differences between LLMs and ‘blockchain’ as actual technologies. To be blunt, I find it hard to professionally respect anyone who buys into these culture wars to the point where it completely overtakes their ability to objectively evaluate technologies. This isn’t me saying that anyone who has written off LLMs is an idiot. But to equate these two technologies in this context makes absolutely no sense to me from a purely logical perspective, i.e. without involving a value judgment toward either blockchain or LLMs.

The only reason you’re invoking blockchain here is that the blockchain and LLM fads are often compared or equated in these conversations. Nobody has suggested that blockchain technology be used to assist with the development of software in the way that LLMs are. It simply doesn’t make sense. These are two entirely separate technologies setting out to solve two entirely orthogonal problems. The argument is completely nonsensical.
Linus Torvalds is a strong advocate. He even wrote a blockchain-based source code management system, which he dubbed “the information manager from hell”[0], spending over three months on it (six months, by his own account) before handing it over to others to maintain.
People complain that this “information manager” system is hard to understand, but it's actively used (alongside email) for coordinating the Linux kernel project. Some say it's crucial to Linux's continued success, or even that it's more important than Linux.
[0]: see commit e83c5163316f89bfbde7d9ab23ca2e25604af290
This whole part sounds like BS mumbo jumbo. AI isn’t developing any system anytime soon and people surely aren’t going to design systems that cater to the current versions of LLMs.
Have you heard of Modular, Mojo, and Max?
They're designed for fast math and Python similarity in general. Llama.cpp, on the other hand, is designed for LLMs as we use them right now. But Mojo is general-purpose enough to support many other "fast Python" use cases, and even if we completely change the architecture of LLMs, it's still going to be great for them.
It's more of a generic system with attention to the performance of a specific application, rather than a system designed to cater to current LLMs.
No. Max is an entire compute platform designed around deploying LLMs at scale. And Mojo takes Python syntax (it’s a superset) but reimplements the entire compiler so you (or the compiler on your behalf) can target all the new AI compute hardware that has almost literally popped up overnight. Modular is the company that raised $130 million in under 2 years to make these two plays happen. And Nvidia is on fire right now. I can assure you without a sliver of a doubt that humans are most certainly redesigning entire computing hardware stacks, and the systems atop them, to accommodate AI. Look at the WWDC Keynote this year if you need more evidence.
Sure, it's made to accommodate AI, or more generally fast vector/matrix math. But the original claim was that "people surely aren’t going to design systems that cater to the current versions of LLMs." Those solutions are far more generic than current or future versions of LLMs. Once LLMs die down a bit, the same setups will be used for large-scale ML/research unrelated to language.
What? The entire point of the comment you’re replying to is that the LLM isn’t designing the system. That’s why it’s being discussed in the first place. LLMs certainly currently play a PART in the ongoing development of myriad projects, as made evident by Copilot’s popularity to say the least. That doesn’t mean that an LLM can do everything a software developer can, or whatever other moving goalpost arguments people tend to use. They simply play a part. It doesn’t seem outside of the realm of reason for a particularly ‘innovative’ large-scale software shop to at least consider taking LLMs into account in their architecture.
The skeptics in this thread have watched LLMs flail trying to produce correct code amid their long imperative functions, microservices, and magic variables, and assumed that their architecture is good and LLMs are bad. They don't realize that there are people 5xing their velocity _with unit tests and documentation_ because they designed their systems to play to the strengths of LLMs.
Maybe this post wasn't the right one for your comment, hence the downvotes.
But I find it intriguing. Do you mean architecting software so that LLMs are able to modify and extend it? Having more of the overall picture in one place (shallow monoliths), and lots of helper functions and modules to keep code length down? I.e., optimising for the input and output context windows?
LLMs are very good at first-order coding: writing a function, either from scratch or by composing functions given their names/definitions. When you start to ask them to do second- or higher-order coding (crossing service boundaries, deep code structures, recursive functions), they fall over pretty hard. Additionally, you have to consider the time it takes an engineer to populate the context when using the LLM and the time it takes them to verify the output.
LLMs can unlock incredible development velocity. For things like creating utility or helper functions and their unit tests at the same time, an engineer using an LLM will easily 10x an equally skilled engineer not using one. The key is to architect your system so that as much of it as possible can be treated this way, while not making it indecipherable for humans.
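To make that concrete, here is the kind of unit I mean (a made-up example, not from any real system): a self-contained helper plus its tests, small enough that the prompt, the output, and the verification all fit in one sitting.

    # Hypothetical example: a tiny helper and its unit tests, generated and
    # reviewed as a single unit.
    import unittest

    def chunk(items: list, size: int) -> list:
        """Split items into consecutive chunks of at most `size` elements."""
        if size <= 0:
            raise ValueError("size must be positive")
        return [items[i:i + size] for i in range(0, len(items), size)]

    class ChunkTests(unittest.TestCase):
        def test_even_split(self):
            self.assertEqual(chunk([1, 2, 3, 4], 2), [[1, 2], [3, 4]])

        def test_remainder_goes_in_last_chunk(self):
            self.assertEqual(chunk([1, 2, 3], 2), [[1, 2], [3]])

        def test_empty_input(self):
            self.assertEqual(chunk([], 3), [])

        def test_rejects_non_positive_size(self):
            with self.assertRaises(ValueError):
                chunk([1], 0)

    if __name__ == "__main__":
        unittest.main()

None of this requires crossing a service boundary or holding a deep call graph in mind, which is why the assistance is reliable here and much less so elsewhere.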
This is a temporary constraint. Soon the maintenance programmers will use an AI to tell them what the code says.
The AI might not reliably be able to do that unless it is in the same "family" of AIs that wrote the code. In other words, analogous to the situation today where choice of programming language has strategic consequences, choice of AI "family" with which to start a project will tend to have strategic consequences.
AI wanted you to write code in GoLang so that it could absorb your skills more faster. kthanksbai
I don’t know about this architecture for AI, but your description sounds like the explanations I’ve heard of the Ruby on Rails philosophy, which is clearly considered optimal by at least some humans.
Nobody has ten years of experience with a code base "optimized for AI" to be able to state such a thing so confidently.
And nobody ever will, because in 10 years coding AIs will not look like they do now. Right now they are just incapable of architecture, which is the limitation your supposedly optimal approach seems to be optimizing around, but I wouldn't care to guarantee that's still the case in 10 years. If nothing else, there will certainly be other relevant changes. And you'll need experience to determine how best to use those, unless they just get so good they take care of that too.
Probably containerisation is a big one, and also serverless computing.
They aren't principles as such, but they certainly play into what is important and how you apply them.
I mean shared hosting certainly existed but "the cloud" as we think of it today was much simpler and not nearly as ubiquitous. It doesn't really change the principles themselves but it certainly affects aspects of the risk calculus that dominates the table of contents.
Since 1970, to be fair... The people at the NATO Software Engineering conferences of '68 and '69 knew quite a bit about architecture. Parnas, my house-god in the area, published his best stuff in the 1970s.
Process at my work is heavily influenced by this book, and I think it gives a pretty good overview of architecture and development processes. The author spends a lot of time in prose talking about mindset, and it's light on concrete skills, but it does provide references for further reading.
Keeling's Design It book is great [1]. It helps teams engage with architecture ideas with concrete activities that end up illuminating what's important. My book tries to address those big ideas head-on, which turns out to be difficult, pedagogically, because it's such an abstract topic.
Which ideas have survived since 2010?
Some operating systems are microkernels, others are monolithic. Some databases are relational, others are document-centric. Some applications are client-server, others are peer-to-peer. These distinctions are probably eternal, and if you come back in 100 years you may find systems with those designs even though today's examples, Windows, Oracle, and Salesforce, are long gone. And we'll still be talking about qualities like modifiability and latency.
The field of software architecture is about identifying these eternal abstractions. See [2] for a compact description.
"ABSTRACT: Software architecture is a set of abstractions that helps you reason about the software you plan to build, or have already built. Our field has had small abstractions for a long time now, but it has taken decades to accumulate larger abstractions, including quality attributes, information hiding, components and connectors, multiple views, and architectural styles. When we design systems, we weave these abstractions together, preserving a chain of intentionality, so that the systems we design do what we want. Twenty years ago, in this magazine, Martin Fowler published the influential essay “Who Needs an Architect?” It’s time for developers to take another look at software architecture and see it as a set of abstractions that helps them reason about software."
[1] Michael Keeling, Design It: From Programmer to Software Architect, https://pragprog.com/titles/mkdsa/design-it/
[2] George Fairbanks, Software Architecture is a Set of Abstractions Jul 2023. https://www.computer.org/csdl/magazine/so/2023/04/10176187/1...