When I started, C++ was the "de facto" language, even taught in schools. For many years I avoided programming and I didn't know why. Once Go came along it all clicked: I avoided programming because languages like C++ are awful. I could give reasons but people mostly know what they are. Go is great because it's mostly simple, and the dev team is not afraid to "say no", even aggressively, if a feature is not strongly useful. Plus the tooling is better than most languages one might come across.
As a corollary, types are important. If everything is a string or a hash table, then things won’t be so self-evident.
Rob Pike's Rules of Programming (1989) - https://news.ycombinator.com/item?id=24135189 - Aug 2020 (323 comments)
Rob Pike's 5 Rules of Programming - https://news.ycombinator.com/item?id=15776124 - Nov 2017 (18 comments)
Rob Pike's Rules of Programming (1989) - https://news.ycombinator.com/item?id=15265356 - Sept 2017 (112 comments)
Rob Pike's Rules of Programming - https://news.ycombinator.com/item?id=7994102 - July 2014 (96 comments)
Don't communicate by sharing memory, share memory by communicating. https://www.youtube.com/watch?v=PAAkCSZUG1c&t=168s
And lots of others here: https://go-proverbs.github.io/
1. Don't.
2. (for experts only) Don't, yet.
“Make it work, then make it beautiful, then if you really, really have to, make it fast. 90% of the time, if you make it beautiful, it will already be fast. So really, just make it beautiful!”
- Joe Armstrong
Complex algorithms over simple data can have big performance payoffs, remove obstacles, and even simplify things.
For instance, binary search over a sorted array (as opposed to a BinaryTree object), as sketched in code after this list:
• simplifies merging (it's just concat & sort)
• no pointers, which makes serialisation easier, but also...
• bypasses the need for serialisation
• an array can be on disk, or in memory, or both (via mmap)
• the data can be bigger than your ram
• allows for 'cold starts': just point your program at the file/mapping and tell it to run, no 'loading into ram' step.
• it's also cache-oblivious
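A minimal Go sketch of that flat-sorted-array approach (the slice here lives in memory, but the same layout could just as well be backed by an mmap'd file; the names are made up for illustration):

    package main

    import (
        "fmt"
        "sort"
    )

    // lookup does a binary search over a sorted slice: O(log n), no pointer chasing.
    func lookup(xs []int64, key int64) bool {
        i := sort.Search(len(xs), func(i int) bool { return xs[i] >= key })
        return i < len(xs) && xs[i] == key
    }

    // merge really is just "concat & sort".
    func merge(a, b []int64) []int64 {
        out := append(append([]int64{}, a...), b...)
        sort.Slice(out, func(i, j int) bool { return out[i] < out[j] })
        return out
    }

    func main() {
        xs := merge([]int64{1, 5, 9}, []int64{2, 5, 7})
        fmt.Println(lookup(xs, 7), lookup(xs, 4)) // true false
    }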
Another example is Huffman coding. If you studied it at uni, you probably saw it as a tree-based algorithm with the associated O(n log n) time complexity. I was unaware that there is an in-place, array-based method of constructing a Huffman tree in linear time.
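The fully in-place Moffat–Katajainen construction is a bit involved, but here is a minimal Go sketch of the related two-queue idea, which is also linear time when the weights arrive pre-sorted ascending (huffmanCost is a made-up name, and this computes the total code cost rather than the explicit tree):

    // huffmanCost returns the total weighted path length (the compressed size in
    // bits) of a Huffman code over weights sorted in ascending order.
    // Because the input is sorted and merged weights are produced in
    // nondecreasing order, the two smallest items are always at the queue fronts,
    // so no heap is needed and the whole construction is O(n).
    func huffmanCost(sortedWeights []int) int {
        leaves := append([]int(nil), sortedWeights...)
        var merged []int
        pop := func() int {
            if len(merged) == 0 || (len(leaves) > 0 && leaves[0] <= merged[0]) {
                v := leaves[0]
                leaves = leaves[1:]
                return v
            }
            v := merged[0]
            merged = merged[1:]
            return v
        }
        cost := 0
        for len(leaves)+len(merged) > 1 {
            a, b := pop(), pop()
            cost += a + b // each merge adds the weight of a new internal node
            merged = append(merged, a+b)
        }
        return cost
    }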
Of course 99% of the time, I'm doing back-end microservices and just using standard collection data structures. But if I ever had a 'big data' task at work, I would strongly prefer to be able to do it on a single machine with a big disk locally rather than buy into whatever the current flavour of MapReduce is.
Rob Pike's response would probably be to profile the code first, then see if your fancy code or alternate data structure makes it faster.
That boils down to not valuing time of the people who use your software. Despicable attitude imho.
Developer time is wasted once. User time is wasted for every user, each time the software is used, for as long as it remains in use.
All these can be large: billions of users, many uses each day, software in use for decades.
If you're the only user, no point to spend time on optimizing anything, unless you perceive it as too slow.
If software has billions of users, then almost any effort in optimizing the tiniest bottleneck in it is worth it. A good example of "the needs of the many (users) outweigh the needs of the few (developers)".
A lot of software sits somewhere in between though (not that many users & performs at acceptable speed).
This is not true. Ask Facebook, who have rewritten things multiple times explicitly because this is not true, but someone assumed it was.
>maintainability and first to market are usually more important
Maintainability and first to market are not trade offs for performance in most cases, no matter how much you want to propagate this ridiculous propaganda.
And Facebook really does have unique(-ish) scalability problems, and I bet rewrites would have been inevitable even with the best possible engineering, because who can write an application to deal with a billion users (2012, currently 3 billion) right from the start? When Facebook launched in 2004 this was pretty much unheard of, and even today it remains pretty rare (much less in 2012).
In the real world, when we have an issue on production, I fix it first, then find a proper solution. Meaning in that first hour customers are complaining, I'm not gonna try to create a fancy algorithm that can handle all cases. I'll make it work with nested loops if need be. Then I can sit down and figure out a permanent and elegant solution.
- start with the super inefficient solution, solve it pretty quickly
- point out performance problems or edge cases that would need to be solved for a larger scale solution
- go implement those optimizations
I'd probably replace "fancy algorithms" with "self-invented algorithm" instead: you're not solving a new problem, resist the urge to "tackle it yourself" and just look up the almost-certainly-decades-old, and documented-by-thousands-of-people algorithm that's going to get the job done.
(Of course, that'll be less fun for you, but if the only person who appreciates a piece of code you wrote is you, that's one of the signs that you wrote bad code.)
Caring about basic efficiency and being thoughtful about compute is _not_ the same as premature optimisation. This kind of wastefulness in modern software is what causes death-by-1000-cuts inefficiency: it's not that we aren't optimising, it's that we are wasting.
When you start out you need to have a _theory_ about what will be your bottleneck. A lot of times you can't just implement XYZ and then measure what was slow and fix that: X, Y, and Z are connected, and sometimes you need to build X and Z in a specific way just so that Y can be made fast, because you know that Y is going to be the bottleneck. Later, when you do measure and know for a fact what is slow, you still have to make a bet on an approach that will make it faster. The more educated your bet is, the better.
Good programmers measure, but good programmers have to iterate less because they can predict what will be slow, buggy, or use too much memory, and make educated bets in advance that avoid issues. If you start saying that it's a rule that you can't predict performance behaviour, then you are dismissing a lot of the experience and skill that good programmers accumulate.
It says not to put a "speed hack" in until you know where the bottleneck is. That doesn't sound like what you described at all.
If you're making a video game with tons of physics objects and you know for sure from experience that collision detection is gonna be a major problem, and this is what you're pushing the limits on, I don't think it's a "speed hack" to design the game and its systems around this.
Additionally, if you're working on something that you know is gonna be a big performance concern then you definitely want to measure it! Not to find out if it's a concern but to find out how successful you are at dealing with the concern.
If you’re wrong about predicting the speed of the algorithm then you’ve got needlessly complicated code for the rest of the life of the project.
People are wrong about the speed of algorithms all the time. Often O(n) and O(n^2) take the same amount of real time because the computer spent 99% of the time getting n from the database server.
Your algorithm written in C is often slower than the equivalent Python because the bytecode compiler did something clever.
I’ve spent a lot of time speeding up legacy code. It’s usually much easier than you think, and it’s slow for reasons that wouldn’t have been obvious to the original authors.
In fact it’s usually slow because the codebase got so complicated that the original author of the slow code couldn’t reason about it any more. But it’s easy for me to fix because I have a concrete example that is “too slow” so I can debug it by running the code and observing the slow points.
My last time was last year. I noticed that our metrics service, which had been running for a couple years, was now often using 100% cpu. That can cause issues for other services on that node, so I looked into it. Turns out the Go code I wrote to parse Prometheus metrics was a little clunkier than you’d like for the sheer amount of metrics coming from some monitor (I think I’d just upgraded cadvisor or something). I tried getting it to return fewer metrics, but couldn’t figure it out.
So I spent an afternoon and rewrote my parser. My hypothesis was that I could make it faster by allocating fewer strings, which turned out to be correct. End result was nearly 10x faster. 10/10, super fun and satisfying.
I’ve got about a half dozen other things I’d love to optimize with our current system, but for now I can’t justify the cost.
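For what it's worth, a hypothetical Go sketch of the "allocate fewer strings" kind of change (not the actual parser): operate on []byte and hand out sub-slices instead of building new strings per line.

    import "bytes"

    var sep = []byte(" ")

    // parseLine splits a "name value" metric line into sub-slices of the input;
    // no new strings are created for the name or the value.
    func parseLine(line []byte) (name, value []byte, ok bool) {
        return bytes.Cut(line, sep)
    }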
If that were the case, would developer-lead startups not have a 100% success rate? After all, a function that takes hours to complete, when an optimized function could take milliseconds, is still more than fast enough if you have no users. I'm not sure anyone has actually proven that they can make such predictions accurately.
2. A kind reading of GP would also understand that one is predicting what will be slow under some set of conditions.
3. Requiring 100% accuracy to call it a prediction is absurd. A weather forecast is definitely a prediction even though it's not 100% accurate. The NOAA isn't "guessing" weather in the general usage of the term "guess"
4. By your definition extensive benchmarking, measuring a program, and optimizing based on those results is wrong, because we don't even know if the end user will ever run the program!
Also being a good programmer is not the same as being good at running a startup.
But being able to predict the future of the business is necessary to determine where to optimize. Again, if you have no users, you don't need to optimize at all –period. The success rate should be 100%, as those who predict what to optimize will know when not to go into business.
If you get it wrong, you haven't predicted anything. You were guessing.
If you're building a new system, from new requirements - often just getting started is fine? Build, test/measure, throw-away/refactor - repeat?
Take rust as an example - start with a draft language, a compiler in ocaml. Iterate.
(I concede that Hoare might have known the project might move from ocaml to self-hosted at some point - but I'm not sure that makes much of a difference?)
A lot of times, I try to count the zeros. If I'm doing anything over the network, it's going to be milliseconds, but if I do something with memory it's going to be nanoseconds, so optimize to remove network requests. If I work on a problem that is multi-threaded, I need to consider what can be multi-threaded and what can not. Can I minimize the stuff that can't be multi-threaded, and is there other work to be done while waiting for things that can't be multi-threaded? A lot of times I consider latency vs bandwidth. I know that latency will always be a harder problem, and a problem hardware has a harder time solving, so I try to design for low latency first, and then worry about bandwidth.
These are all architectural decisions that are made early and have a big impact. They are VERY expensive to change, so you want to be as right as you can be the first time. If one part turns out to be slow you can profile and work on it, but changing the structure of how the program operates and how data is stored and flows is much harder.
The author is right that data structures are the most important tool for optimization, especially on modern hardware where cache misses are so expensive. The problem is that _everything_ depends on your data structures, so changing them is a lot of work. (I was just at the Blender conference, where there was a talk about changing the mesh structure from arrays of structs to structs of arrays, and it took them two years to make this "simple" change.)
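A toy Go illustration of that array-of-structs vs struct-of-arrays difference (not Blender's actual layout, just the shape of the change):

    // Array-of-structs: each Vertex is contiguous, but a pass that only needs
    // positions drags normals and UVs through the cache as well.
    type Vertex struct {
        Pos    [3]float32
        Normal [3]float32
        UV     [2]float32
    }
    type MeshAoS struct {
        Verts []Vertex
    }

    // Struct-of-arrays: each attribute is its own dense slice, so a
    // positions-only pass streams through memory with no wasted cache lines.
    type MeshSoA struct {
        Pos    [][3]float32
        Normal [][3]float32
        UV     [][2]float32
    }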
It’s such a bummer that the optimal memory layout is also a pain to code with in most languages (thinking of numpy in particular, oof). Are there any languages out there that abstract this away? You’d have to give up pointers, but it’s worth it for some use cases.
I don't think this is a great example, really. You're going to want the brute force raycast working anyway for testing, and you're not going to know the details of what the right space partitioning approach will be until you a) understand the CSG application better, and b) have done some measurements.
So it follows the 5 rules pretty well - the only thing I'd add is that you architect the system knowing that the data structures will change for representing the collections of objects and object/ray intersection paths etc. But your first pass should almost certainly be pretty simple and slow, and you should have your measurement code implemented and testing before you try anything sophisticated on the data side. Same goes for chunking up work across threads.
Because observing rule #1 happens to be the best way to get the experience and empirical background that are necessary to develop a good intuition for where to expect bottlenecks.
Isn’t this a plug for some sort of object oriented programming? Or at least highly structured programming? Interesting in today’s trendy clime of “objects bad/passe, function(al) good.”
Having done quite a bit of OOP where "binding behavior to data" is the thing, in an ideal laboratory for it (Smalltalk and others), I gained some experience with watching people do this. I observed that there were plenty of good ideas across the industry on how to model data (inheritance, prototype, classification, mixins, mutability, etc.), but what I observed is that which data groups together well is very much an art, and that was where people struggled to succeed with the data-centric emphasis. The easy objects/structs are of course easy: Point, Rect, Person. But as the data got more complex, getting people to come up with good boundaries got more and more complex quickly, and you ended up with really poor object constitutions. Reification is hard. I don't think it is impossible; I've seen people who have a knack for reification come in and refactor an object soup into a cluster of data structures that was beautiful. But yeah, good reification I fear is as much an art as science.
No. You see the same thing happen in SQL database design. How you lay out the tables has a profound effect on the final application. Or indeed in database engines. Some use write-behind, some write-ahead, some log-structured. In fact you can often predict how well they will do locking, or perform on writes and mass inserts, by knowing what data structure they chose on day 1. Possibly the best examples are in source code control. Git won, yet git has a very ordinary API (command line interface). Git won because Linus used content-addressable memory as his underlying data structure, and it turned out to be an inspired choice.
The other side of the coin is that once you've decided on a data structure, it's often damned hard to change. Thus once Linus chose CAM, git was going to live or die on that choice. Once there were a few git repos out there, changing it got very hard. Changing the API is not so hard: add a new one, deprecate the old one.
As someone who has done both extensively, I find that the patterns are still applicable to both. The presentation is very different, but still viable. For instance, you might use recursion to traverse data over a for-loop, but that doesn't inherently change the concept of a data "structure".
No matter what, we're still speaking in design patterns and if we share that common language, we can apply it to a domain and solve the larger problems of our day-to-day.
If you want more examples of this, look up Data Driven Development, but also append Haskell or C++ to your search queries, you'll find they repeat the same concepts in two very different "grammars"... ahem... I mean "languages".
For #1 and #2, well you should know beforehand. Don't you know what your software does? The details of how the time is spent surely will be surprising, and there may be a hidden bomb here or there, but if you can't see from the requirements where most of the time will be spent, you have a problem that you should work on fixing.
Of course, that doesn't mean you should go and optimize it. You should have an idea of your implementation performance before writing it, and you should be able to tell if it's acceptable. But heavy optimization does need profiling.
People are repeating that bastardization of the "avoid premature optimization" for decades. Go follow the original, with its nuance; those generalizations are trash.
On rules #3 and #4, the fancy algorithm is usually already written and well debugged. Go use it. The performance when n is small usually doesn't matter. If it's not written, then go read the thing about optimization. You should know how big n will be. If you don't know, ask around.
Rule #5 is the only one that actually stands without much nuance. There are exceptions, but not many.
If you think you know beforehand without measuring it I suggest you do measure it now and then. Because if there's anything I've learned about performance it's how terrible one's intuition can be about performance.
> The performance when n is small usually doesn't matter.
Sure, when the problem is simple enough that there's only N and not also M and K.
YAGNI is not at all a good principle. It only works as a slight bias. If you use it as any stronger kind of rule, it's disastrous.
Profiling finds this sort of low hanging fruit quite easily.
Profiling doesn't find things of the kind of "this takes a lot of computer time, we should program everything around it"; "this takes a lot of computer time, are you sure we can run it profitably?"; or "this takes a lot of memory, are we willing to get new servers?". Besides, the worst moment to optimize your code is after everything is written and people depend on its interface.
Profiling is some duct tape you can use to cover small problems, and mostly only that.
Do you? Do you really know the ins and outs of the optimizations of the CPU you're going to be running on? If you do, you're in a specific domain that may not match to many other developers today.
Modern CPUs are incredibly complex machines running emulation of a PDP-11 single-threaded architecture. If you're starting from the beginning with a belief that you know what the source code you write is going to do on the CPU to a level of precision that means you can skip rule 1 and 2... Most people who believe that are flatly wrong. And if you aren't, you're probably doing embedded computing, which follows its own set of rules (Rob's rules are a lot more tuned to business, cloud, and consumer-desktop-environment computing).
> the worst moment to optimize your code is after everything is written and people depend on its interface.
Honestly, that's exactly opposite of my experience. The interface is designed apart from the machine, and as a software engineer it's my job to fit the machine to that interface. Because the user matters most.
If you didn't, you wouldn't know the broken code needed fixing.
How do you differentiate code that is using the wrong data structure from the one that is solving a hard problem?
You don't until you have to. At the limit, every piece of code is using the wrong data structure; the hardest problems get addressed by building custom hardware to maximize speed by stripping away every other possible concern in the transformation of the symbols and building as many of those as you possibly can force to work in parallel.
Almost nothing is that important, so when your program is too slow, you search for what is taking the most time and do the cost/benefit analysis of re-representing the problem in a way that matches the existing hardware better (which encompasses everything from requiring fewer steps in the computation to caching the right data to caching the right data in the right location, i.e. L0-L1 cache performance, all the way to, sometimes, pushing the computation onto a GPU or out into multiple parallel execution nodes in a cluster if the problem demands it).
None of this is black-and-white and there are degrees. You can certainly, during the design phase, look at a problem and go "That's probably going to want a cluster." But the point of Pike's 1 and 2 is "If you don't have a firm reason to believe a more complicated approach should be taken, err on the side of the simpler approach because it's easier to grow a simpler approach than move a complicated approach."
So, at the example you commented earlier, you inspect every single function, sorted by execution time until the program was fast enough?
> do the cost/benefit analysis of re-representing the problem
How do you do that without knowing the speed you can get with another representation? You rewrite it on every possible way and test?
> But the point of Pike's 1 and 2 is "If you don't have a firm reason to believe a more complicated approach should be taken, err on the side of the simpler approach
Yet we are what, 5 messages down a thread where people vehemently deny you can't have reason to believe anything.
No, nobody's denied that at all. https://news.ycombinator.com/newsguidelines.html "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
The argument made by eschneider and reinforced by myself is "People usually guess wrong when they try to add performance optimization at the design phase," not "epistemology is dead." It's a soft rule from experience, not a black-and-white hard rule; most everyone who's been programming for years has a story about the time they built a complicated gadget that ended up being a waste of time because it either wasn't on the critical path where the bottleneck showed up or users didn't choose to use the software in a way that aggravated that bottleneck anyway.
I'll share my own. Way back in the day I hammered for weeks on getting the physics model right for a nascent game engine. Started from an open-source option and then beat the hell out of it because I knew physics was likely to scale badly, was easy for developers to "hold wrong," and needed to be plug-simple to use and understand.
In the end, 95% of our users only used the game engine as a rendering architecture and never turned the physics engine on. All of that optimization time was zero-business-value.
*edit to offer something less whiney* I think optimizations that rely on specific non-core business rules are a root of evil - but you should still be properly indexing your queries, not doing them in loops, trying to make parallel work run in parallel, not writing a ton of junk over the wire whenever you can avoid it, and being conscious of nesting any loops; and generally the best optimization is even listed: choosing the correct data types for a problem.
I don't think anyone disagrees with that, but the indexing is a good example of "measure, don't assume" because SQL engines can do surprising things, and the best way to know what to index is just to measure (and sometimes no index is actually best!)
And "completely pretend performance is not a thing" is of course the other extreme from "optimise everything from the get go".
Unfortunately there are always people who use these sorts of "rules", "laws", and "best practices" as cudgels to beat other arguments with, rather than as the loose commentary they usually are. Previous comment: https://news.ycombinator.com/item?id=36417264
We detached this subthread from https://news.ycombinator.com/item?id=38098729.
Btw, the site guidelines also include "Don't be snarky." - I realize that takes some of the fun out of posting a particular kind of comment, but people tend to overrate the quality of what they post when they post that way, and tend to underrate the degrading effect it has on discussions, and that's a double whammy of badness. Having a don't-be-snarky rule, and sticking to it, is a way of globally optimizing for fun—specifically the fun of interesting, curious discussion. That's more important than the sugar rush of being "cheeky".
If I see that a value can have two or more types then obviously this is "more advanced" (or perhaps better, "more complex") than if it's just one type.
Sometimes this makes things better. Sometimes it doesn't.
> Sometimes this makes things better. Sometimes it doesn't.
Exactly, and that's why you want to have both techniques available, and the data modelling is the judicious interplay of both.
If you'll excuse me I'm going to go walk AND chew gum. Or should that be OR :)
This is just a dismissal instead of an argument, and one that can be applied to almost anything.
This can save a lot on padding, and greatly increase the cache efficiency
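For example, on a typical 64-bit platform, ordering struct fields from largest to smallest removes the padding; a small Go demonstration (nothing beyond the standard library):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // Poorly ordered fields force padding between them.
    type padded struct {
        a bool  // 1 byte + 7 bytes padding
        b int64 // 8 bytes
        c bool  // 1 byte + 7 bytes trailing padding
    }

    // Largest-to-smallest ordering packs the same fields tightly.
    type packed struct {
        b int64 // 8 bytes
        a bool  // 1 byte
        c bool  // 1 byte + 6 bytes trailing padding
    }

    func main() {
        fmt.Println(unsafe.Sizeof(padded{}), unsafe.Sizeof(packed{})) // 24 16
    }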
func CopyFile(src, dst string) error {
    r, err := os.Open(src)
    if err != nil {
        return err
    }
    defer r.Close()

    w, err := os.Create(dst)
    if err != nil {
        return err
    }
    defer w.Close()

    if _, err := io.Copy(w, r); err != nil {
        return err
    }
    if err := w.Close(); err != nil {
        return err
    }
    return nil
}
if r, err := os.Open(src); err != nil {
    return err
}
defer r.Close()
But then r is not in scope for the r.Close.

You could have smaller discrete functions that abstract the handling of each action a little bit and be more reusable, or is that not possible in Go?
These shorter functions need to signal their failure somehow, so calling them looks exactly like the example.
Here's a thought experiment for you: pretend the return type is something other than 'error': result, statuscode, responseContext, anything that doesn't imply failure. Would you then suggest handling that is "noise"?
ETA: "there are countless other [than if err != nil] things one can do with an error value, and application of some of those other things can make your program better, eliminating much of the boilerplate that arises if every error is checked with a rote if statement."[2]
1 https://www.eecg.utoronto.ca/~yuan/papers/failure_analysis_o...
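(The quote in [2] is from Rob Pike's "Errors are values" post; the errWriter technique described there looks roughly like this.)

    // errWriter records the first failure; later writes become no-ops,
    // so the caller checks the error once instead of after every write.
    type errWriter struct {
        w   io.Writer
        err error
    }

    func (ew *errWriter) write(buf []byte) {
        if ew.err != nil {
            return
        }
        _, ew.err = ew.w.Write(buf)
    }

    func writeAll(w io.Writer, chunks [][]byte) error {
        ew := &errWriter{w: w}
        for _, c := range chunks {
            ew.write(c) // no per-call error check here
        }
        return ew.err
    }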
And to fix this, we introduce 10 places per function to improperly unwind the stack, get a chance at missing an error result, and completely ignore the fact that anything can fail anyway, even a simple addition. Instead of just writing exception-safe code in the first place.
Is kind of an understatement. If the handling code for that is duplicated as 75% of your code base, there's something wrong with the language. There's got to be some other way than all that noise.
Also, this 75% number sounds made up out of thin air. If your program is doing something non-trivial, it should have far more code doing work than checking errors.
> handled locally
Except in practice, you don’t. You just keep returning the error up the call stack until something handles it at the top, probably by trying again or more likely just logging it.
I have not, ever, seen any numbers for exception style or FP style. My perception is that their numbers might be lower, but I have no evidence, and I am not dogmatic about my guess.
int foo(Data *data) {
    int error = do_some_io_request(data);
    if (error) log_error(error, "Request failed");
    return error;
}
For propagating errors up the stack, the ratio is only 50%:

    int error = foo(data);
    if (error) return error;
For the rest of your code, I guess it's domain specific. But most projects should have a significant amount of the codebase doing things with data in memory that can't fail.

    int handle = open(file, O_RDONLY);
    if (handle == -1)
        return false;

    int size;
    if (read(handle, &size, sizeof(size)) != sizeof(size))
    {
        close(handle);
        return false;
    }

    char *buffer = malloc(size);
    if (buffer == NULL)
    {
        close(handle);
        return false;
    }

    if (read(handle, buffer, size) != size)
    {
        close(handle);
        free(buffer);
        return false;
    }

And so on, with the number of things you have to clean up growing as you go further down the function.

They don't, do they? One really nice thing about Go is writing and calling functions that _don't_ return any error. You can be confident that they will not throw any exceptions at all.
You are under an illusion if you think they can't fail.
If that's not the spooky action at a distance between the place where exception is thrown and where it should be handled, I don't know what is.
And also more recent versions of Java, such as the current one: https://docs.oracle.com/javase/8/docs/api/java/lang/Exceptio...
>Checked exceptions need to be declared in a method or constructor's throws clause if they can be thrown by the execution of the method or constructor and propagate outside the method or constructor boundary.
This is like... the one thing that Java did absolutely right, but for some reason it's also the thing people hate most about it? I've never understood why.
From a caller's perspective, "res, err :=" and "throws Exception" is the same thing: you need to handle the error on site, which (in contrast to the Go gospel), is not a good thing. In the absolute majority of cases it can't be done sensibly there, you need to pop it up the stack to a place which has more context. Certainly, it's none of a method implementor's business to decide "if a client can reasonably be expected to recover from the error". You know nothing, Jon Snow.
Java has figured out it's a bad concept decades ago, but Go is frustratingly re-enacting Java's early mistake. At least, redeclaring a checked exception was a simpler way of dealing with the error than Go's err ceremony.
No parametric polymorphism in exception specifications. Like if you have a record parser that invokes a user-defined callback for each record, then you literally can’t type it in such a way as to allow the callback to throw exceptions (that the caller is presumably prepared to handle, having provided the callback in the first place).
To be clear, this is not simple. It seems to me like it’ll quickly go into full-blown effect typing territory, which is still not completely solved today and was in an absolutely embryonic state in the early 2000s.
We are unlikely to improve on this until we finally abandon the idea of working directly on a single, plaintext codebase.
Another point on this, which I really saw in action on a recent project - a "big" n is probably much bigger than you think. Sometimes it can be easy to think "oh, this is going to have to do 100,000 operations, I definitely need to optimize it" - but computers are fast, and 100,000 multiplications (for example) happens so fast you probably don't need to think too hard about it.
That's not to say you shouldn't think at all, just that it's often surprising how insanely fast modern computing hardware is.
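If you want to convince yourself, a throwaway Go measurement (timings will vary by machine, but 100,000 multiply-adds typically finish in well under a millisecond):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        var sum uint64
        for i := uint64(1); i <= 100_000; i++ {
            sum += i * i // keep the result live so the loop isn't optimised away
        }
        fmt.Println(sum, time.Since(start))
    }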
I do agree though that a lot of time is wasted on premature or even completely counterproductive optimisations for situations where the data being processed is too small to cause noticeable slowness in processing.
Of course this is a tradeoff with Rule 4. If the fancy algorithm is much more complicated I will be more likely to pick the simple one even if it is quadratic and the data has the chance of occasionally being large. But by default I try to stick sub-quadratic if possible.
I wrote an article about this relatively recently: https://kevincox.ca/2023/05/09/less-than-quadratic/
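The shape of the fix is almost always the same; a generic Go sketch (made-up function names) of turning a nested-loop membership test into a set lookup:

    // Accidentally quadratic: for each element of a, scan all of b.
    func intersectSlow(a, b []string) []string {
        var out []string
        for _, x := range a {
            for _, y := range b {
                if x == y {
                    out = append(out, x)
                    break
                }
            }
        }
        return out
    }

    // Roughly linear: build a set once, then do O(1) lookups.
    func intersectFast(a, b []string) []string {
        set := make(map[string]struct{}, len(b))
        for _, y := range b {
            set[y] = struct{}{}
        }
        var out []string
        for _, x := range a {
            if _, ok := set[x]; ok {
                out = append(out, x)
            }
        }
        return out
    }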
Some amazing person found that the load time of GTA V Online was awfully slow. It was slow because the developers put in a quadratic function at a time when the iteration object was small. But over time that object grew and so grew the load times quadratically until it was unbearable. And it was unbearable for millions of GTA V Online users.
I fully agree that the accidentally quadratic functions will eventually come back to bite you/the user.
Scalability! But at what COST?
And every time I interview, the hiring managers want me but my hiring decision gets vetoed by a leetcode newbie who hasn't experienced production scars yet.
- envious of your amazing pragmatic and effective skills
- jealous guarding of the architect promotion that they covet for themselves
I have a hunch that this is quite rare in most companies. Most managers are unskilled enough to override their own intuitions in favor of the mediocre leetcode dev that just vetoed a strong engineer.
Or writing essentially pseudocode in Jira description for the dev who can't figure things out. Ask me how I spent my day
Keeping this in mind isn't esoteric either, as it applies to JavaScript, Python, Rust, C#, and probably others.
That's right. Async execution prevents the IO from being the bottleneck by offloading it to a different thread.
There are 3 situations where this statement falls apart:
1. If the execution is single threaded, as you rightly pointed out
2. If the response of the async execution matters to the final response of your service. In this case, the primary thread may finish its work but it's still waiting for IO to complete. Basically making it synchronous but using async primitives.
3. The CPU utilization of iterating over 100k items in a list is negligible compared to the hundreds of daemons and services running on the host. Even a docker container will utilize more CPU than iteration over 100k items.
The point is: over-indexing over iteration and time-complexity in interviews is pointless as real systems are going to face challenges far beyond that.
Also: if you assume your N's are small then you can get away with almost anything. But if you write a piece of code that is going to work well for N below 100 but suck for N over 10,000 (say, anything with O(N^2)), then just cap it. Make a big fat error if the small-N assumption breaks. It's still better than the AWS bill surprise or hung program or whatever it could be otherwise.
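A sketch of that "big fat error" guard in Go (Item, compare and the limit are all invented for illustration):

    const maxPairs = 10_000 // the small-N assumption, written down

    // Item and compare stand in for whatever you're actually pairing up.
    func comparePairs(items []Item) error {
        if len(items) > maxPairs {
            return fmt.Errorf("comparePairs: %d items breaks the small-N assumption (max %d)",
                len(items), maxPairs)
        }
        for i := range items {
            for j := i + 1; j < len(items); j++ {
                compare(items[i], items[j]) // the O(N^2) part we deliberately capped
            }
        }
        return nil
    }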
Rules 1 and 2 apply here, though.
For any program that does I/O this wouldn't be the case. A "bottleneck" is going to be a network request, and for much of the program's execution time you can just make the CPU do anything without the user even noticing.
So this argument is based on CPU bound interactive programs, which isn't all programs. In programs with e.g. IO (which would be most) then rules 1+2 would come along.
But I guess the thing is that in an interactive CPU-bound program like a game, there _is_ one single bottleneck and most of the code is on that hot path. It's pretty easy to establish that you are in the bottleneck: it's when you are writing per-frame code.
His first people is not technically wrong but it is highly misleading. Sure you can't know which bits of a program are going to be slow without measuring, but that doesn't mean you have no clue at all. You can often make a very good guess!
Forget everything you have learned about the famous big-O analysis for 99% of the cases, because it assumes a computing model that is nowhere near what we have today. It was close in the '80s, but now it's totally wrong.
The most glaring example I can offer is that nowadays a data structure based on a linked list will almost always be slower than one based on arrays even though the big-O analysis says otherwise.
CPU cache plays a much bigger role, and it pays more to chase consistent cache access rather than jumping all over through pointers and thrashing the cache.
Likewise, most algorithms would be faster looping through all array items rather than, for example, using a set or hash map when the number of items is small (and by small we are still talking about hundreds of elements; the exact number when one data structure becomes better than the other will depend on many factors).
that's why, don't assume but measure, it's the best advice there is.
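In Go the measuring part is cheap to do; a benchmark sketch along these lines (arbitrary size and types, lives in a _test.go file) settles the slice-vs-map question for a given workload:

    import "testing"

    func BenchmarkSliceScan(b *testing.B) {
        xs := make([]int, 200)
        for i := range xs {
            xs[i] = i
        }
        found := 0
        for i := 0; i < b.N; i++ {
            target := i % 200
            for _, x := range xs {
                if x == target { // linear scan over a small, contiguous slice
                    found++
                    break
                }
            }
        }
        _ = found
    }

    func BenchmarkMapLookup(b *testing.B) {
        m := make(map[int]bool, 200)
        for i := 0; i < 200; i++ {
            m[i] = true
        }
        var hit bool
        for i := 0; i < b.N; i++ {
            hit = m[i%200] // hash lookup in a map of the same size
        }
        _ = hit
    }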
Well, yes I can because I know everything you just said already...
> a data structure based on a linked list will almost always be slower than one based on arrays even though the big-O analysis says otherwise.
Cache locality is one reason that linked lists are usually slower, but I think you've got a bit mixed up because the big O analysis also says they'll be slower (in most common cases).
> that's why, don't assume but measure, it's the best advice there is.
You missed my point (thus proving why this is misleading advice!)
Back then, some devs would do something much more complicated to try to be faster, and often the overall program was worse off.
Nowadays, some devs do something much more complicated, make the program 100x or 1000x slower than the simple obvious way, and think they're doing solid engineering because they're definitely not "prematurely optimizing", they made it so slow and convoluted that you could never accuse them of that!
Another thorn is hiding RPCs, DB queries, and other remote operations behind abstraction boundaries. When people think that .getFooBar() is as cheap as a field access, they have a tendency to not cache+pass forward, but just recompute it, which ends up generating a large amount of redundant queries. That will rack up your 1000x's pretty quick.
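A hypothetical Go illustration of "fetch once and pass it forward" (FooService, GetFooBar and the render helpers are invented names):

    import "context"

    func handle(ctx context.Context, svc FooService, id string) error {
        fooBar, err := svc.GetFooBar(ctx, id) // the one RPC
        if err != nil {
            return err
        }
        renderHeader(fooBar)  // reuse the fetched value...
        renderSidebar(fooBar) // ...rather than letting each helper call GetFooBar again
        return nil
    }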
You can make a good guess! Just, measure it to be sure before you start optimizing.
For example preallocating arrays. You don't need a benchmark to tell you it will be faster because it literally can't not be.
Another example from my recent real life: I changed a data structure from a hashmap to an array with integer keys. I didn't profile it because can't not be faster. And I knew it needed to be fast because it is called for every branch in my program.
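In Go that usually just means sizing the slice up front; a small sketch with invented names (Result, process, inputs):

    // Capacity is known in advance, so append never has to grow-and-copy.
    results := make([]Result, 0, len(inputs))
    for _, in := range inputs {
        results = append(results, process(in))
    }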
This assertion is false for JavaScript arrays in the v8 engine in some contexts. Preallocation of a JS array causes the engine to make a heterogeneous array, which is more expensive because it's an object under-the-hood. Allocating the array using repeated `push` calls of homogeneous data lets the engine know it's a homogeneous-type array, which is an "exotic object" that can use more optimal representations under-the-hood. (Faster than both is preallocating a homogeneous array, although that of course requires us to know what kind to preallocate).
This is another example of where premature optimization can lead a developer astray. Abstractions are far too complicated these days, and it's too easy to guess wrong without hard data.
https://dev.to/henryjw/populating-a-pre-allocated-array-slow...
I would assume that if you measure enough things and code enough you may start to get a feel for what is going to be expensive and what is not. And then - as before - you can continue to measure and iterate.
(Also, I think you said "first people" when you meant "first point")
Yes but he doesn't say that if you don't measure (which most people won't) then you should still engage your brain.
That's the missing point. Measuring is typically more effort than applying a little brain power and experience. So the choices shouldn't be "measure or nothing", but that's how people always interpret it.
Therefore, given a choice between writing a little code to make the program work or writing a lot of code to maybe make the program work faster... Write a little code and go back and write the longer code iff it will actually help performance as per measurements. You're better off expanding a simple skeleton than moving bones around in an already-complex assembly.
The fact that so many replies are not getting this goes to show how misleading this advice is.
The focus of the advice to me is that software engineers frequently overestimate their ability to predict performance bottlenecks and efficiently allocate time to preemptively address them in the middle of writing the core of the system (for multiple reasons, including both the overall complexity of a complete system and the fact that performance is also a function of the way users interact with the system).
I've built multiple systems where a component we never expected to become a bottleneck becomes the bottleneck because users choose to use the machine in an extremely novel way relative to our intent when we built it. Getting the software in front of users to get signal on the difficult-to-predict user factor was consistently a better use of our time than swapping out a functional n squared algorithm with an also functional n log n algorithm on a piece of the system that no users were operating with yet.
That's why it's bad advice! Perhaps I should rephrase: it's not incorrect advice (to the right target audience), but it most definitely is inadvisable advice.
Who is the audience for the famous "premature optimisation" quote? Maybe originally it was "people who know you shouldn't ignore performance", but it definitely isn't now.
Easily misinterpreted advice is bad advice in my book.
This goes double for databases. Folks who use a DB as a dumb bit bucket or a simple 1:1 reflection of their object definitions are often surprised when the DB takes it personal and dooms performance.
If I ever see another ORM-generated DB schema it'll be too soon.
"Any organization that designs a system will produce a design whose structure is a copy of the organization's communication structure." (Melvin E. Conway(
"The structure of any system designed by an organization is isomorphic to the structure of the organization" (Yourdon and Constantine)
Coming back to your point. How do you ensure that data structures are organized well and stay that way as design changes?
You separate data and code at the organizational level. You keep the design of the database schema, the use cases, and the mapping between them separate from the implementation of the rest. This group also writes all integrity checks, etc. Data and code organizations are separate.
If you don't do it this way, it's hard to separate code and data, because the structure of the organization does not separate them.
It's a hard problem if not THE hard problem. Using an ORM or not has no bearing on this. Conway's Law extends to the lowest levels. If an organization changes substantially, no UI, data structure, or database schema will be left unscathed.
Solve the problems you know about. Tomorrow always delivers problems you couldn't (and shouldn't) anticipate. You'll drive yourself and your coworkers crazy by trying to cover all possibilities. It's a disease called Flexibility Syndrome.
My viewpoint is hiring others to write code. My business is tomorrow and keeping contractors in check.
Planning and maintaining data and their schemas in-house and contracting out writing code has been incredibly successful so far.
I'd argue that most ORMs generate the schema you ask it to. Using an ORM isn't going to create a worse database layout than you could do by hand. The issue is that some/many developers don't know SQL, nor have the required knowledge of their databases to use an ORM.
The ORM requires you to know what lies underneath, it is a fairly leaky abstraction. Understanding that, you can get most ORMs to create nice schemas.
I have seen and done complex queries using a DSL for an ORM, in my case Django, but now you're just learning a different query language, so you're right that the ORM doesn't bring much to the table. Realistically, those who are uncomfortable with SQL are going to create poor queries with the ORM as well.
For quick prototyping and systems with limited amounts of data ORMs can speed up development quite a bit. Technically there's a cost, but computers are fast enough that it doesn't matter on the small scale.
That may be built-in temporal table support in MS SQL Server or MariaDB, so you don't need explicit audit tables in your schema. Or perhaps timestamp ranges with exclusion constraints in Postgres for enforcing valid schedules without hacks in the app layer.
Eventually you notice that you're having to define everything twice: once for the DB and again for the ORM definition.
This is why I prefer solutions like PostgREST, Postgraphile, and Hasura (for Postgres while other DBs have other similar solutions). Define it once in the DB and just let it bubble up. You may want to expose your storage API through views, but it still ensures optimal data structure at the foundation.
DB -> data-generated API service -> client API codegen -> client app such as web or mobile.
With a typical ORM, you're building from the middle out. It almost always results in a faulty foundation. Pour and set the foundation first. Always.
https://postgrest.org/ https://postgraphile.org/ https://hasura.io/
That really depends on what you mean and the ORM. Typically the larger and more popular ORMs can and do take advantage of database specific features, if you let it.
Using database-specific features will prevent you from using the ORM as an abstraction for replacing your DBMS, but outside open source projects that support SQLite, MariaDB and PostgreSQL, I've never seen that done in professional settings. One system I've worked with had on-paper support for DB2, Sybase and MS SQL. It ran solely on MS SQL; the others had not been tested in years, and the developers had pulled in so many MS SQL specific features that it was never going to run on DB2 or Sybase ever again.
As someone who consumes more APIs than they write nowadays, I appreciate when data is scoped to task[0]. Employing window functions and stored procedures that can make queries into data fit for purpose is the ideal - databases are faster at manipulating data than any intermediate language, most of the time. Unfortunately, I don't see them employed enough. Backend developers seem content with throwing the schema over the wall way too often.
[0]: As an aside, this is why I personally like GraphQL so much. The middleware lets me cleanup data that backend engineers simply refuse to do, most of the time.
> You may want to expose your storage API through views
Views are a great way to decouple on-disk storage structure from access patterns.
> DB -> data-generated API service -> client API codegen -> client app such as web or mobile.
This seems like a nice approach, albeit mine is even more basic. I pick a DB, create SQL migrations for the schema (sometimes even generate those from an ERD planning tool, with manual edits where needed), apply them to a DB with something like dbmate: https://github.com/amacneil/dbmate
After that, if I want to use an ORM, I generate entity mappings from the tables/views in a schema-first approach, for example, like https://blog.jetbrains.com/dotnet/2022/01/31/entity-framewor... or https://www.jetbrains.com/help/idea/persistence-tool-window....
I don't think that sort of codegen is as popular as it should be, but for the most popular frameworks in each language, the schema-first approach is usually doable with no additional software that would need to be deployed to prod.
> Tony Hoare's famous maxim "Premature optimization is the root of all evil."
It's actually from Donald Knuth and this quote is frequently taken out of context to argue against optimization in general.
Here is the entire quote
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
The point is to spend time optimizing where it will have impact.
Your whinging would be more appropriate if Measurement was not emphasized, right near the top.
Maybe it is, but that's not how it's being used in this context.
In my experience, that's exactly how most people understand it.
Programming used to be very different from what it is now. "Premature optimization" wasn't "hey just use this popular lib cause it scales", it was "let's use some impossible to understand bit fiddling algorithm that only works on this piece of hardware".
Programming has definitely evolved. This maxim seems to be exactly as applicable then as it is now though, and as misunderstood.
Almost all languages have a standard library that has all the normal algorithms you would want, and where something weird is better, they have that done for you.
At https://stackoverflow.com/questions/74417624/how-does-clang-... is someone asking why the compiler replaced:
    int a = 0;
    while (n--)
        a += (n * n);

an O(n) algorithm, with the O(1) equivalent of:

    a = n * (n-1) / 2 + n
`n * (n-1) / 2 + n` would be the sum of numbers, not sum of squares.
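For reference, the closed form for that loop (the sum of k^2 for k = 0..n-1) is:

    a = n * (n - 1) * (2*n - 1) / 6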
It's always good to have the longer quote which gives needed context.
"I’m sorry I have no recollection how this quotation came about. I might have attributed it to Edsger Dijkstra."
> Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
to
> Rule 5 is often shortened to "write stupid code that uses smart objects".
"smart objects" left such a bad taste in my mouth, the original rule is so much better albeit lengthier.
>Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.
I completely agree with this. Which is why all the LeetCode interviews always struck me as odd. They focus on algorithms, not data structures, which is exactly what you don't want to do out of the gate, most of the time.
I suppose, if you don't know algorithms at all you wouldn't realize when it is either
A) an exception to the rule
or
B) A time where you need to lean into a specific algorithm for X, Y or Z reason.
However, you can teach algorithms in relatively short order; I have honestly found people grok which data structures to use less easily, though it's all anecdotal.
works for most cases.
Code is a lousy place to store boat-loads of attributes. You can do data-oriented transformations and filters on them if in a data structure. Hierarchical file systems are limiting and messy, because grouping by one factor de-groups another. I want to be able to "query" dev code units via SQL or similar. CRUD patterns are pretty consistent across industries such that most CRUD idioms shouldn't need custom re-invention, and thus should be attribute-tized.
You will still need occasional code-based tweaking, and this can be accomplished by having the attributes generate "runtime draft" UI markup and/or SQL clauses, which can then be tweaked via code as needed.
I'm building a proof-of-concept that uses what I call "fractal rendering" to be able to intercept the "draft" construction of screens and SQL clauses at a fine level or a coarse level, depending on need. This avoids the all-or-nothing problem of prior attempts per the attribute-vs-code debate. Ya git both! (Dynamic SQL generation should probably be avoided for public-facing sites, per injection risk, but that limits the wonderful power of query-by-example.)
I don't claim it will be as performant as the code-centric approaches, but if it catches on, performance tweakers will find a way to scale it. (For CRUD apps, the performance bottleneck should usually be the database, not app code anyhow, unless you doing something wrong or special.)
The CASE tools of the 1980's and 90's started going in this direction, but were too expensive & clunky, and then the OOP push ended the idea by favoring attributes-in-code, back to square one, sigh. (CASE tools don't have to be proprietary.)
It's ripe area for groundbreaking R&D. CRUD may not be sexy, but it runs the world.
Which is one of the reasons why I will never use that style of problems as an interview question.
And the other reason is: that style of problems also don't teach anything about architecture.
The schema/data are completely agnostic to the computation. You don't even need a computer. 100% of schemas are possible to print out on sheets of physical paper and iterate offline.
When you're dealing with most modern software, there's so many of these ETL-like pipelines, abstractions, black-boxes and just sprawling complexity that the job becomes much more about complexity management than anything.
No component exists in a vacuum, I would argue software is as much about managing the complexity of the data and algorithms towards a purpose as it is about the purpose, data or algorithms.
Entropy comes for us all, after all.
Of course modern computation is often impractical to do by hand. It might even be so complex that humans would make too many errors and take too long to ever complete it correctly.
So I don't think that "data dominates". You may need to adapt your data to the algorithm or vice versa. What dominates is what we want to do. Without algorithm we are doing nothing.
In a sense data is part of the algorithm, it is implicitly coded into the assumptions the algorithm makes about its arguments and results.
Seeing the connection from both sides is insightful.
Further, programs are what interpret data, by making branching decisions based on the data. A program must be interpreted by the language interpreter, and then on the 2nd level the interpreted program interprets the data.
If you think leetcode-like problems are always about the algo, "BFS or DFS" etc., then at best you are not realizing the data structure choices you are making; at worst you may not be so good at solving them or haven't progressed that much through leetcode-like challenges.
I've been thinking about it a lot. I want to still believe it, but then my experience tells me that the "right data structure" is usually "the data structure most convenient for the operations/algorithms you want to run". Which makes this principle either backward, or circular.
It only convinces me further that Pike's take is wrong. Specifically:
- "Data dominates" and "Data structures, not algorithms, are central to programming" are just plain wrong;
- "If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident" is correct only in the descriptive sense, not in the prescriptive sense. That is, it only works when you read it as restatement of Brooks's "tables vs. flowcharts" line (quoted elsewhere in this thread): it's easier to guess what your code is doing from how you organized your data, than it is to guess how you organized your data by looking at the code. I.e. it's an advice on where to start understanding someone's existing solution; it has nothing to say about solving new problemms / writing new code.
Which of course is self evident. Though I suppose it doesn't hurt to remind people to step back and look at the full picture.
All of the sudden, the interviewer relaxes and you can just see on their face the, "oh, we've got an actual senior engineer on the call here." They open up about their technical problems and stop caring about "proving if you can even code".
Similarly, the teams I've had the most problems creating positive change, meeting milestones, or collaborating together properly as a team with is where no one seems to have a good grasp of data structures and code architecture. It seems to have become more common too. Many folks are quite used to having a framework that can just do everything, and if it can't, someone "smarter than them" has made a plugin or middleware to save them from having to think too hard.
I personally find engineers who avoid data structures to be shooting themselves in the foot, they're giving up on one of the most useful tools in their arsenal and I witness how it limits them day-to-day. It's tough to watch.
Being able to choose the appropriate data structure and explain trade-offs etc is much more valuable than simply knowing how to reverse a binary tree sorta stuff.
As I noted elsewhere in the thread, I haven't interviewed in a while where I needed to grind out DS&A questions, but at the time when I did, I often found myself given a particular structure and asked to traverse it in some way, which is pretty heavy on the algorithmic thinking but not really testing my knowledge of data structures themselves.
Sounds to me like things have become a little more even handed out there
Indeed. The simplest way to invert a binary tree is to include an "invert" flag in the data structure. If it's set, walk the tree right to left instead of left to right.
What isn't covered in this flow is how composition of multiple data-structure/algorithm pairings work together amidst many others. It's often not an algorithm on its own that defines an overall solution. It's the connecting of many of them together, to build a whole software stack, domain language, etc.
When you understand the algorithms, the corresponding data structures, when to use them, similarities in various frameworks in your chosen domain, how that analogues to your current problem set, etc you really start getting somewhere.
Often? Always! An algorithm always only works on a certain data structure.
I tend to use softer words like "often" as it makes folks feel less defensive towards the discussion. If someone came up to you and told you that your outlook on something is 100% definitively wrong, you might balk at the statement merely because I said "100%". Just as you have stated "Always!" and corrected me so emphatically.
Since I've found this to be a difficult topic for some, and given this is a public forum, I chose to be cautious in my wording.
`list.get_nth(n)` has O(N) runtime, as does `list.length()`, so binary search is actually completely possible, with runtime O(N log N) (aka "works very badly").
(Fair point that all four data structures need to be sorted, though, although ideally that would go without saying, since it's kind of inherent in what binary search is.)
" Show me your tables, and I won't usually need your flowcharts;
they'll be obvious."
But to be fair, you need to be a pretty damn good comp sci person (and
actively thinking in algs and DS) to quickly look at DS and see the
"obvious" processing implied.Instead they wrote a horrible buggy mess that tries (and usually fails at least a little) to do the same thing as the obvious solution but with much more, and more convoluted, code.
I helped my nephew preparing some competitive coding event, and transforming the data into the right data structure was a huge part of solving most problems. For example most of the code you might end up writing is for finding the longest path through a weighted DAG, but the trick was realising that you could represent the problem as a weighted DAG, and if you did then the length of the longest path through that graph was your answer. If you didn't spot that then you might very well still be able to solve the problem, but your solution would end up being much slower and more complicated.
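Once you've spotted the DAG, the code side is short; a Go sketch of longest path by memoised DFS over adjacency lists (types and names invented):

    type Edge struct{ To, Weight int }

    // longestPath returns the maximum total weight of any path in a DAG whose
    // nodes are numbered 0..len(adj)-1. Memoised DFS; assumes no cycles.
    func longestPath(adj [][]Edge) int {
        best := make([]int, len(adj)) // best[v] = longest path starting at v
        done := make([]bool, len(adj))
        var dfs func(v int) int
        dfs = func(v int) int {
            if done[v] {
                return best[v]
            }
            done[v] = true
            for _, e := range adj[v] {
                if d := e.Weight + dfs(e.To); d > best[v] {
                    best[v] = d
                }
            }
            return best[v]
        }
        ans := 0
        for v := range adj {
            if d := dfs(v); d > ans {
                ans = d
            }
        }
        return ans
    }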
- build the DAG with heavy nodes (e.g. lead split-shot)
- use string or fishing line for edges, with length proportional to 'weights'
- hold the root node(s) high in the air, if more than one, hold them together, at the same height
- the lowest node, or the length of taut line (drop distance below your hand), is the answer
     /-3-> D
    A -1-> B -1-> C
     \------4------^
In this case, the longest path is A-C, but C will be held at level 2 by B, leaving D lowest at level 3. This works for a fully-acyclic directed graph (e.g. a tree), but not for a directed-acyclic directed graph. It can also find the node whose shortest path is the longest (so longest shortest path, minimax style). (It also works for an acyclic undirected graph, as illustrated at [0], but that's a bit tangential.)

It only works for fruit trees.
I think maybe the former follows from the latter?
90% of leetcode is choosing the right data structure. If your algorithm doesn’t wanna come together, you probably missed a better way to model the data.
hash table, stack, queue, linked list, sorted array, binary tree, graph.
Many problems seemed difficult until I just considered shoving the data into one of those.
The typical leetcode questions also focus on "data structures" because the candidate needs to have the mental catalog of data structures in his brain to pull from when he "pattern matches" on the problem/solution.
The interviewer isn't going to volunteer whether the candidate needs a "priority queue", or "adjacency matrix", or "trie", etc to solve various algorithm questions. If the candidate is stuck, then the interviewer might give a hint to the data structure to use. But of course, too much hand-holding will not make the candidate come across as a strong hire signal.
I concur; in my experience, premature optimization is one of the most expensive pitfalls. This primarily stems from the fact that circumventing potential issues too early leaves them unchallenged, resulting in the next team having to devise expensive solutions to address the unnecessary complexity.
The approach that was instilled in me is this: optimizations rely on presumptions, and these presumptions are often incorrect in the beginning.
Additionally, I've discovered that managing ego and understanding psychology play crucial roles in dissuading individuals from creating overly complex code.
E.g. starting out with a microservice architecture even though you only have 100 users, because you think it will be too difficult to re-architect a monolith the day you hit a million user.
So you should address why it feels like the code becomes less malleable over time.
https://www.techradar.com/news/move-over-chrome-macos-monter...
https://www.tomsguide.com/news/google-chrome-reportedly-dest...
A great majority of those using these quotes act like no engineering design work is allowed whatsoever.
I have seen that mentality permeate everything, even hitting concurrency decisions for obvious things. (E.g. we need to respond to HTTP requests while doing stuff in the background, accessing IO, and syncing state across multiple servers, but let's have purely synchronous, single-threaded code and we will figure it all out later. That went well.)
And for some reason there is a subgroup that thinks security architecture is "optimizations."
It depends on the product but when you have https://news.ycombinator.com/item?id=38029479 or https://news.ycombinator.com/item?id=38076636 it is not over-engineering: it is management and business forcing poor product.