Dijkstra is a wonderful source of memorable quotes and hot takes from the early days of software.
A sampling:
“The competent programmer is fully aware of the limited size of his own skull. He therefore approaches his task with full humility, and avoids clever tricks like the plague.” (Dijkstra, 1972)
“The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offense.” (Dijkstra, 1982)
“Object-oriented programming is an exceptionally bad idea which could only have originated in California.” (Dijkstra, 1989)
What's wrong with object-oriented programming?
Most real-world problems don't fit neatly into hierarchical structures. OOP pushes you towards modelling everything in the world as objects with strictly defined operations that can be performed on them, where those operations are determined by the data type itself.
You end up being forced to co-mingle your data-structure design with your data-processing design, which tends to be rather unhelpful.
Keeping your data structures fairly separate from the processes/functions that operate on them makes it much easier to build composable software. Data structures tend to be very hard to change over time, because the migration process is always tricky. Any structure that's exposed outside your program's address space, whether via an API, storage on disk, or a DB, needs a migration path so new code can deal with old structures correctly. On the other hand, algorithms operating on that data are trivial to change: there's no migration risk, and indeed we should expect those algorithms to change often as the requirements of the software change.
In an OOP world, because you're strongly encouraged to tightly bind your data structures to the algorithms that operate on them, you quickly end up in a horrible situation where changing your algorithms is very difficult without also being forced to change your data structures. Suddenly what should be a simple and easy change (introducing a new way of processing data) becomes difficult, because the coupling created by objects makes it hard to change the processing without also changing the data structures.
In a simple “write-once” world, OOP is fine. But once you want to write software that's expected to evolve and adapt over years as business requirements change, OOP quickly becomes more a hindrance than a help.
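A hedged sketch of that separation in C++ (the Order type and both functions are invented purely for illustration):

    #include <vector>

    // Plain data: stable, versioned, migration-sensitive.
    struct Order {
        int id;
        double amount;
    };

    // Behavior lives in free functions: cheap to add, change, or delete,
    // because nothing about Order itself has to move.
    double total(const std::vector<Order>& orders) {
        double sum = 0.0;
        for (const auto& o : orders) sum += o.amount;
        return sum;
    }

    // A new way of processing the same data is just another function; the
    // data structure (and anything persisted in its shape) is untouched.
    std::vector<Order> above(const std::vector<Order>& orders, double threshold) {
        std::vector<Order> out;
        for (const auto& o : orders)
            if (o.amount > threshold) out.push_back(o);
        return out;
    }

    int main() {
        std::vector<Order> orders{{1, 10.0}, {2, 25.0}};
        return (total(orders) == 35.0 && above(orders, 15.0).size() == 1) ? 0 : 1;
    }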
In OOP you can have data structures and algorithms separated. You can use composition over inheritance without issues.
The fact that a language is strongly typed and you get compilation errors if you forget something is a big plus.
OOP is fundamentally about having no static variables.
Don't conflate 'objects' with 'object-oriented design'. If you split your data structures from your algorithms, then you're just using an OOP language to do programming, not doing OOP.
Strong typing has nothing to do with OOP. Static variables are replaced by singletons and other similar-in-spirit objects.
OOP today is a 'No true Scotsman' concept, much like communism, agile, and other vague-by-design terms. You can't argue against it, because every individual has at least one definition in their head, and it shifts through the conversation toward whichever one isn't refuted by the claims at hand.
The intent of such concepts is to hit you in the feels and trigger some ideals inside you, so that you associate the positive feelings you have toward those ideals with the concept.
If you argue about it as it is used in practice, you will inevitably have someone bring up that Java-style OOP is not true OOP and that you should look into Smalltalk or some other language that implements "Real OOP".
Since arguing about specifics is a losing battle, let's argue about the bigger picture. If we take the goals of Alan Kay as he described them in many of his talks, his goal was to make systems more like biology. But as a system designer, that is the last thing you want: we don't know much about biology, and people in that field are still reverse-engineering pre-existing designs to this day rather than designing much of their own from scratch. If you design a system, you want the most control and foresight over its dynamics; uncontrolled and unintended emergent effects are your source of problems.
Entangling data and behavior makes the state space of your conceptual design explode. When you go full OOP, you become an ontologist philosophizing about what is an X and what is an X-Manager, X-Provider, ..., and less of a system designer trying to make sure your system does not land in the wrong states.
Aren't you then just attacking a programming style that nobody actually uses or advocates?
One of the most influential books about OOP - "Design Patterns: Elements of Reusable Object-Oriented Software" - talks about composition, separating interface from implementation, etc., not about hierarchies (other than to prefer composition).
Much of the Design Patterns phenomenon involves convincing Java to do things that are idiomatic in other languages.
Composition and "separating interface from implementation" are not specific to OOP languages.
Composition has been used since LISP I and ALGOL 60 in almost all programming languages.
"Separating interface from implementation" is also the main feature of the programming languages based on abstract types, starting with Alphard and CLU, which are not OOP languages and which predate the time when Smalltalk has become known to the public, launching the OOP fashion.
An abstract type is defined by an interface, i.e. by a set of functions that have arguments of that type and this is a programming language feature that is completely independent of the OOP features like member functions, virtual functions and inheritance.
All OOP languages have some kind of abstract types, though with a different point of view on the relationships between individual objects and types a.k.a. classes and the functions that operate on them, but there have been many languages with abstract data types without the OOP features.
Moreover "separating interface from implementation" has also been the main feature of all programming languages based on modules, starting with Mesa, Modula and Ada.
The features that identify an OOP language are: the idea that functions belong to individual objects, not to types, hence member functions; the replacement of union types (of the right kind, as in Algol 68, not the pathetic kinds encountered in Pascal, C, and derived languages) with virtual functions, which provide an alternative form of dynamic polymorphism that is preferable for closed-source software because it allows changes without recompilation, unlike tagged unions, whose select/case/switch statements must be recompiled whenever the union is extended; and inheritance in a class hierarchy.
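To make that trade-off concrete, here is a rough C++ sketch (the toy Shape types are my own, invented just for illustration) contrasting the two forms of dynamic polymorphism:

    #include <iostream>
    #include <variant>

    // Virtual-function style: a new shape can be added in another translation
    // unit without touching or recompiling existing callers of area().
    struct Shape {
        virtual double area() const = 0;
        virtual ~Shape() = default;
    };
    struct Circle : Shape {
        double r;
        explicit Circle(double r) : r(r) {}
        double area() const override { return 3.14159265 * r * r; }
    };
    struct Square : Shape {
        double s;
        explicit Square(double s) : s(s) {}
        double area() const override { return s * s; }
    };

    // Tagged-union style: the set of alternatives is closed; extending it
    // means revisiting (and recompiling) every switch/visit over the union.
    struct CircleD { double r; };
    struct SquareD { double s; };
    using ShapeD = std::variant<CircleD, SquareD>;

    double area(const ShapeD& sh) {
        if (const auto* c = std::get_if<CircleD>(&sh)) return 3.14159265 * c->r * c->r;
        return std::get<SquareD>(sh).s * std::get<SquareD>(sh).s;
    }

    int main() {
        Circle c{2.0};
        std::cout << c.area() << " " << area(ShapeD{SquareD{3.0}}) << "\n";
    }

With the virtual-function version, a third party can add a new Shape subclass without recompiling the code above; with the variant, every function like area() has to be revisited whenever the union grows.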
There are many languages that are multi-paradigm, like C++ or D, where you can choose whether to write a program in an OOP style or in a totally different style, but there are also languages where it is difficult to avoid using the OOP features, or even impossible, because all the available data types may be derived from some base "object" type, inheriting its properties.
That, as I have been told, is "object-based programming", not object-oriented programming.
That is not at all unique to OOP, and in fact OOP makes the problem undecidable due to ad-hoc subtyping. More structured languages like ML, Haskell, and Rust are much easier to reason about and have much stronger type systems.
So can you tell me which programming paradigm actually solves these problems that OOP has?
And also, can you give me a huge non-OOP codebase that shows in practice how it is better than the potential OOP implementation?
Paradigms don't solve each other's problems. They're just another approach that may be better in specific contexts, and you can even mix them. Although today, some languages are veering towards using structs and the like for data models and classes as logic containers (Swift, Kotlin).
As for huge codebases, everyone knows that lines of code do not equal quality.
So basically, OOP is still the best for big iterative projects?
I think the main problem with OOP is that the classic animal/cat/dog type class hierarchy examples that are used in teaching are in fact hardly ever used in the real world - giving a very misleading view of how it's actually used in practice.
Most people using OOP languages prefer composition over hierarchies, and I almost never see those complex class structures modelled on data (for some of the reasons you give).
In terms of evolvability, encapsulation (a key feature of OOP) is a key tool in enabling it.
You're right that animal/cat/dog hierarchies are hardly ever used, but exposing students too early to DogFactoryFactorySingleton might cause lasting mental damage.
Too many distractions. Software architecture not transparent enough. Design pattern hell where it's not necessary. Too many devs who do not know what they are doing.
I can't speak for Dijkstra, but in my view it is mostly about how it is taught.
When you're learning OOP, you deal with questions like "is a Circle an Ellipse or is an Ellipse a Circle?" F'in' neither, actually. You end up "modelling" "business objects" in "code", leading to monstrosities like Hibernate.
In reality, you have the real-world domain and you have the in-the-computer domain, and trying to make one look like the other is a mistake. OOP is a dandy way of managing some forms of complexity, but it's not often an especially good way of looking at most problems, much less the only way.
https://en.wikipedia.org/wiki/Object-oriented_programming
You can find some criticism points in there
If you want to write a script, OOP "kind of" nudges you towards making multiple data structures and "function" classes instead of having a simple 100 line function.
And it is the wrong choice, if you need a script and can make that function in python.
OOP, particularly Java, is good when you have a lot of functionality, especially if you can use already existing massive ecosystem.
If I had to write something from scratch, I would not use OOP, but if you need something fairly complicated, chances are there is already a generic solution available that you need to customize, and Java is pretty good for that.
OOP is great for some applications, e.g. GUIs, but very bad for others, e.g. scientific computing. There, abstract types are very useful, but dynamic polymorphism and inheritance are harmful. Moreover, the view that functions belong to data, instead of thinking of data as the things on which functions operate, is prone to inefficiencies whenever the amount of data is huge. One of the reasons inheritance is harmful in scientific computing is that most operations on physical quantities have 2 or more arguments, and more often than not the arguments are of different types; any attempt to define such operations as member functions of some class belonging to a hierarchy therefore makes no sense, whereas all such operations are best defined as overloaded functions where a specific implementation is selected at compile time, based on the types of the arguments.
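To illustrate that last point, a hedged C++ sketch with made-up physical-quantity types (not a real units library):

    // Toy quantity types, invented for illustration only.
    struct Force    { double newtons; };
    struct Length   { double meters; };
    struct Velocity { double mps; };
    struct Energy   { double joules; };
    struct Power    { double watts; };

    // Free overloaded functions: the implementation is chosen at compile time
    // from the types of *both* arguments; neither type "owns" the operation.
    Energy operator*(Force f, Length d)   { return {f.newtons * d.meters}; }
    Power  operator*(Force f, Velocity v) { return {f.newtons * v.mps}; }

    int main() {
        Energy w = Force{10.0} * Length{3.0};    // 30 J
        Power  p = Force{10.0} * Velocity{2.0};  // 20 W
        return (w.joules == 30.0 && p.watts == 20.0) ? 0 : 1;
    }

Trying to make one of these a member function forces an arbitrary choice of which operand's class "owns" it, which is the awkwardness described above.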
Unfortunately, when OOP became fashionable, its proponents tried to convince everybody that OOP is the best paradigm for absolutely all programming problems, not only for those where OOP is indeed the best, and they were rather successful for some time, which was bad for the software industry in general and resulted in many sub-optimal programs.
Nothing. It's just one of many useful tools we can use to develop software. It's like with food. Sometimes some people loudly claim that a certain food is unhealthy, and then a few months or years later you read the exact opposite. Fortunately, there are low-pass filters.
Like everything, it has its pros and cons. One of the cons is that it favors strong coupling between data and operations, which means that a change in one place is likely to change/break things elsewhere.
He's the GOAT of hot takes. His critique of the GOTO statement in a 1968 letter to the ACM, which they retitled "Go-to statement considered harmful", is one of the best remembered critiques in programming history.
The phrase "considered harmful" has become a meme and is the go-to phrase (pun intended) for essayists looking to criticise some aspect of computing
https://www.cs.utexas.edu/users/EWD/ewd02xx/EWD215.PDF
https://en.wikipedia.org/wiki/Considered_harmful
One of the best remembered, and one which led to harmful prejudice against BASIC. While the foundation of the criticism makes sense, it led to silly notions such as "there is never a case where GOTOs are useful" and "people who start with BASIC are broken programmers forever".
The problem with unrestricted GOTOs isn't that they're never useful or that bad people use them. The problem is specifically that they make predicate transformer semantics (and probably other formalisms) pragmatically useless.
For example, in a language without GOTO, the guarding condition of an if statement is known to hold inside the body (at least until another operation changes it), whereas in a language with unrestricted GOTO there is no guarantee whatsoever that the guard holds, since execution could have jumped past it.
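A contrived C++ sketch of exactly that failure mode (goto still exists there; the variable names are made up):

    #include <iostream>

    // With unrestricted goto, reaching a statement that sits lexically
    // "inside" the if gives no guarantee that the guard was ever true.
    int main() {
        int x = -5;
        goto inside;            // jumps straight past the guard
        if (x > 0) {
    inside:
            // Without the goto we could rely on x > 0 here; with it we can't.
            std::cout << "x = " << x << "\n";   // prints x = -5
        }
        return 0;
    }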
Whenever GOTO comes up, I have to mention Knuth's beautiful paper that was part of that argument: "Structured Programming with go to Statements" (1974). Some quotes:
"At the I F I P Congress in 1971 I had the pleasure of meeting Dr. Eiichi Goto of Japan, who cheerfully complained that he was always being eliminated."
"For many years, the go to statement has been troublesome in the definition of correctness proofs and language semantics....Just recently, however, Hoare has shown that there is, in fact, a rather simple way to give an axiomatic definition of go to statements; indeed, he wishes quite frankly that it hadn't been quite so simple."
I cannot find anything written by Hoare about it, but Knuth goes on to describe it: the idea is that labels have associated preconditions and GOTOs have the semantics { precondition(L) } goto L { false }.
"Informally, a(L) [my "precondition(L)"] represents the desired state of affairs at label L; this definition says essentially that a program is correct if a(L) holds at L and before all "go to L" statements, and that control never "falls through" a go to statement to the following text. Stating the assertions a(L) is analogous to formulating loop invariants. Thus, it is not difficult to deal formally with tortuous program structure if it turns out to be necessary; all we need to know is the "meaning" of each label."
It is a very nice paper. Some sources:
http://www.kohala.com/start/papers.others/knuth.dec74.html
https://pic.plover.com/knuth-GOTO.pdf
https://dl.acm.org/doi/10.1145/356635.356640
Indeed, which is why I offered the pragmatic qualifier. It easily follows that the weakest precondition for L is that at least one of the weakest preconditions of the gotos targeting L holds. One can possibly do even better with some kind of flow analysis. The issue isn't that it's impossible; it's that programming is hard enough already, and this makes it much harder to be sure the code is correct. Or, put another way, "not difficult" for Knuth and for the typical working programmer (or me) aren't identical.
Thanks for sharing that paper! As an aside, it's really remarkable just how readable the state of the art papers such as the one you shared there from that era are. Either computing science has greatly advanced to the point where clarity is no longer achievable without years of specialized preparatory study or the quality of writing has regressed.
He was also against LISP, but turned around in 1999:
https://www.cs.utexas.edu/users/EWD/transcriptions/EWD12xx/E... "I must confess that I was very slow on appreciating LISP’s merits. My first introduction was via a paper that defined the semantics of LISP in terms of LISP, I did not see how that could make sense, I rejected the paper and LISP with it."
The GOTO-paper is widely misunderstood though. It is making a case for blocks and scopes and functions as structures which makes it easier to analyze and reason about the execution of complex programs. The case against unconstrained GOTO follows naturally from this since you can't have those structures in combination with unconstrained GOTO.
And the person who changed the title, inventing the now-iconic trope "considered harmful", was none other than the recently deceased Niklaus Wirth.
Dijkstra's take was significantly less spicy. The original title was "A Case Against the Goto Statement". Niklaus Wirth is actually the one who changed it (your wikipedia link covers this).
Ironclad deduction and logic
" a) 2 ≤ i < 13
b) 1 < i ≤ 12
c) 2 ≤ i ≤ 12
d) 1 < i < 13
There is a smallest natural number. Exclusion of the lower bound —as in b) and d)— forces for a subsequence starting at the smallest natural number the lower bound as mentioned into the realm of the unnatural numbers. That is ugly, so for the lower bound we prefer the ≤ as in a) and c).
Consider now the subsequences starting at the smallest natural number: inclusion of the upper bound would then force the latter to be unnatural by the time the sequence has shrunk to the empty one. That is ugly, so for the upper bound we prefer < as in a) and d). We conclude that convention a) is to be preferred."
I despise this guy
Why? Seems completely correct and good.
You either have an empty range be -1 .. 0, which is ugly, or 0 .. -1, which is also ugly. Thus, start-inclusive, end-exclusive.
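A small C++ illustration (toy example of my own) of why the half-open convention composes so cleanly:

    #include <cstddef>
    #include <vector>

    // Half-open ranges [begin, end): the length is end - begin, adjacent
    // ranges [a, b) and [b, c) abut without overlap or gap, and the empty
    // range is simply begin == end (no -1 or n+1 anywhere).
    long long sum(const std::vector<long long>& v, std::size_t begin, std::size_t end) {
        long long s = 0;
        for (std::size_t i = begin; i < end; ++i) s += v[i];
        return s;
    }

    int main() {
        std::vector<long long> v{1, 2, 3, 4, 5};
        std::size_t mid = 2;
        // Splitting [0, v.size()) at mid needs no off-by-one adjustment,
        // and mid == 0 or mid == v.size() just yields an empty half.
        bool ok = sum(v, 0, mid) + sum(v, mid, v.size()) == sum(v, 0, v.size());
        return ok ? 0 : 1;
    }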
"That is ugly" is what separates math (1-indexing in Fortran, R, SAS, SPSS, Matlab, Mathematica, Julia) from the apes
This and column-major order for matrices (a vector is a column, not a row)
I'm not seeing the superior ironcladness in this statement. I must be an ape.
Those languages are made for writing throwaway code, which sometimes is made to suffer in agony for years even though it was never written to be revised more than a week later.
We can be thankful that this applied physicist in particular decided to dedicate time to thinking carefully about the needs of those who would be his colleagues and successors, back when he was pretty much inventing programming as a profession. He told the anecdote that when he got married, he wasn't able to enter "programmer" as his occupation in the Dutch paperwork, because it didn't exist as a job category yet.
1 < i < 1 is also ugly. Better to write false, or remove that conditional entirely.
And then, once you remove empty ranges, there's no reason to choose a range style based on which one you prefer for an empty range.
Perhaps even more relevant nowadays
My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.
https://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/E...
I'm not sure if it's due to having come across this quote long ago, but this idea is ingrained in my bones.
Except I think people have often overly focused on "lines of code" as the unit for this metric, leading many to overrate terse code, even when it is very dense.
But I'm not sure what wording would capture this idea succinctly enough to include it in a pithy quote.
There's a benefit to terseness. Go's naming conventions are superior to Java's.
APL (and its many cousins) is a bridge too far for me, but I can see the appeal of having the entire program on a single screen.
Agree to (not entirely, but largely) disagree!
It's funny that you used Go as an example. Like 60% of its raison d'être is noticing that more lines of code is often better if each line is simple. And I think it's mostly right about that. Its inscrutable naming conventions are, on the other hand, a poor choice. (Notwithstanding that java's are a poor choice in the opposite direction.)
Dijkstra on APL: https://www.jsoftware.com/papers/Dijkstra_Letter.htm
In one short letter he makes an insightful critique yet misses the point so hard I can practically hear the woosh 42 years later.
Every program should be as simple as possible, but no simpler.
I didn't know the last one. That explains why Alan Kay spent a few minutes of one of his OOPSLA talks roasting Dijkstra[0]
"I don't know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras"
Which, I dunno, feels kind of funny coming from him of all people.
https://youtu.be/oKg1hTOQXoY?t=342
See https://medium.com/@acidflask/this-guys-arrogance-takes-your...
Thanks for posting this. I really enjoyed reading it. It'd be worth a top-level post (hint) - there's lots in it to discuss without side-tracking this thread.
Would Dijkstra have tolerated the expression "amour-propre" better than "ego"?
Judge not, lest ye be judged.
Half of his clever quotes are completely bonkers though, and have been disproven by history. How many of you are proving your programs correct before entering them into the computer? Because that is the only correct way to program. And remember, he disparaged Margaret Hamilton's software methodology. Sure, she helped put a man on the moon, but apparently she did it the wrong way.
I suspect geeks like Dijkstra because he is "edgy" more than because he is correct.
Also, Object-Oriented programming was actually invented in Norway, even though Alan Kay of Smalltalk fame tried to take credit.
He was right about GOTO though, but many developers did not even understand his argument but just read the headline and concluded "GOTO bad".
Not enough of us, that's how many.
Also, if he said this back when the majority of programs were written in assembly I think it makes a lot more sense.
Alan Kay didn’t claim to invent it. He coined the term. He’ll be the first to tell you he was inspired by Simula.
https://www.quora.com/What-did-Alan-Kay-mean-by-I-made-up-th...
I like him because he is outspoken rather than because he is edgy.
I don’t know how amenable he was to his arguments possibly being incorrect, but I love working with/knowing people who are that combination of outspoken and not-overly-stubborn. People like that are a firehose of ideas and knowledge, even if not everything they say is correct. They also usually are “passionate” about their work and at least competent enough to have unorthodox opinions that don’t just sound blatantly stupid.
Most people are too timid or low-ability to be outspoken at all.
I really think that it is a disservice to Dijkstra to remember him as the Don Rickles of computer science.
I think of him more as the Truman Capote of computer science. Backus' characterization of one particular EWD as "bitchy" is right on the money.
Is it? Have you read his EWDs? They're quite interesting and he comes across as far more humble than many people seem to imagine him as. Blunt in his criticism, certainly, but not self-agrandizing or boastful.
Yes, I read numerous EWDs, starting roughly 40 years ago. Many of them are indeed interesting (even, or maybe particularly the gossipy parts).
The problem is not that he self-aggrandizes or is boastful; it's that he keeps tearing other people down. And that he was very willing to offer sweeping pronouncements on matters that he had no practical experience in. He basically stopped touching computers in the early 1970s, and certainly stopped writing practical programs. What basis then, did he have for dismissing object oriented programming, which for all its flaws has done decidedly more for human progress than program verification has?
And as the tense exchange with Backus shows, he was quick to dismiss the relevance of functional programming, which certainly has done a lot for the kind of mathematical reasoning he advocates for programming (Apparently he taught his students Haskell later. Did he ever apologize to Backus for his wrong headed initial assessment?).
Would computer science miss ANYTHING relevant if he had stopped working in the field in 1975? His output basically consisted of proving propositions over cherry picked toy problems. I don't think anybody doubts that this is possible, but is this a viable approach to build real systems (not only large ones, but also ones whose requirements evolve over time)? I consider this, at best, highly unproven, and yet he advocates this as the sole approach to be taught in CS education. Knuth noted, correctly, that neither Dijkstra's education nor any of the practical systems he built (Algol compiler, etc) were based on this approach.
Not sure what he meant by "object-oriented", but he - as the other members of the IFIP - was well aware of Simula 67, and he belonged to the faction that rejected van Wijngaarden's proposal and regarded Simula 67 as "a more fruitful development" [1]. So - if he said or wrote that at all - it's likely related to what Kay understood by the term, or the implementation as a dynamically typed, originally interpreted language done at Xerox PARC, not to the kind of "object-orientation" for which Simula 67 is recognized today (remember that the term "object-oriented" was originally not applied to Simula 67 by the public, but to Smalltalk).
[1] https://www.researchgate.net/publication/2948437_Edsger_Dijk...
See “TUG LINES”, Issue 32 [1]
I think the comment was referring to what they were doing at Xerox PARC - yes.
There are more recent writings of him taking jabs at Californian universities for teaching Java to freshmen instead of what he was doing in Austin - teaching functional programming via Haskell. [2]
[1] Dijkstra, E.W. Quoted by Bob Crawford. TUG lines, Journal of the Turbo User Group, 32, Aug.–Sep. (1989)
[2] https://www.cs.utexas.edu/users/EWD/OtherDocs/To%20the%20Bud... (2001)
[2] doesn’t actually seem to call out California by name, though. It would have been a little unfair if he had, since California’s most famous public university was teaching intro CS with Scheme and SICP at the time.
Thanks for the references.
Unfortunately I wasn't able to find [1] anywhere on the web; the CHM only seems to have issues up to 22. Do you have a link where I can access it?
The other one [2] is also interesting, but the claim is not against OO, but "that functional programs are much more readily appreciated as mathematical objects than imperative ones, so that you can teach what rigorous reasoning about programs amounts to", and that "Java [..] is a mess", which again applies to the language quality (see also Brinch Hansen's paper https://dl.acm.org/doi/10.1145/312009.312034), not the paradigm.
I will get downvoted for this, but I will ask anyway. So what exactly are his achievements?
After reading the article it seems that he mostly produced hot takes, while others did the actual heavy lifting. Goto considered harmful? And some algorithm that would probably have been found by someone else?
Will Linus get a monument made of pure gold when he dies?
For one thing, he invented semaphores (https://www.cs.utexas.edu/~EWD/translations/EWD35-English.ht..., https://en.wikipedia.org/wiki/Semaphore_(programming))
I mean, look up his Wikipedia page.
He won a Turing Award for advocating structured control flow, which is the modern standard style of programming. He also contributed a number of algorithms, including the shortest-path algorithm.
One of my favorite quotes from Dijkstra is his not-so-subtle dig at the arrogance of MIT professors for refusing to adopt his solutions to their problems in OS design. Given the time of the remarks, they were most probably the original designers of the ill-fated Multics OS.
"You can hardly blame M.I.T. for not taking notice of an obscure computer scientist in a small town in the the Netherlands."
Is that the Smalltalk kind of OO or the C++ kind?