Edsger Dijkstra carried computer science on his shoulders (2020)

rcbdev
64 replies
1d8h

Dijkstra is a wonderful source of memorable quotes and hot takes from the early days of software.

A sampling:

“The competent programmer is fully aware of the limited size of his own skull. He therefore approaches his task with full humility, and avoids clever tricks like the plague.” (Dijkstra, 1972)

“The use of COBOL cripples the mind; its teaching should, therefore, be regarded as a criminal offense.” (Dijkstra, 1982)

“Object-oriented programming is an exceptionally bad idea which could only have originated in California.” (Dijkstra, 1989)

Poliorcetes
20 replies
1d7h

What's wrong with object-oriented programming?

avianlyric
12 replies
1d6h

Most real-world problems don't fit neatly into hierarchical structures. OOP pushes you towards trying to model everything in the world as objects with strictly defined operations that can be performed on them, where those operations are determined by the data type itself.

You end up being forced to co-mingle your data structure design with your data processing design, which tends to be rather unhelpful.

Keeping your data structures fairly separate from the processes/functions that operate on them makes it much easier to build composable software. Data structures tend to be very hard to change over time, because the migration process is always tricky. Any structure that's exposed outside your program's address space, whether that be via an API, storage on disk or in a DB, needs a migration path so new code can deal with old structures correctly. On the other hand, algorithms operating on that data are trivial to change, there's no migration risk, and indeed we should expect those algorithms to change often, as the requirements of the software change.

In an OOP world, because you're strongly encouraged to tightly bind your data structures to the algorithms that operate on them, you quickly end up in a horrible situation where changing your algorithms is very difficult without also being forced to change your data structures. Suddenly what should be a simple and easy change (introducing a new way of processing data) becomes difficult, because the coupling created by objects makes it hard to change the processing without also changing the data structures.

In a simple "write-once" world, OOP is fine. But once you want to write software that's expected to evolve and adapt over years, as business requirements change, OOP quickly becomes more of a hindrance than a help.
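
To make that concrete, here's a minimal sketch (C++, with all the names invented for the example): the data stays a plain structure, the part with migration risk, and each new way of processing it is just a new free function.

    #include <string>
    #include <vector>

    // The data structure: stable, serialized to disk or sent over an API,
    // and therefore hard to change.
    struct Customer {
        std::string name;
        int         orders = 0;
    };

    // Algorithms as free functions over that structure. Adding a new way
    // of processing the data is a new function; no migration needed.
    bool is_active(const Customer& c) { return c.orders > 0; }
    bool is_vip(const Customer& c)    { return c.orders > 100; } // added later, struct untouched

    int count_vips(const std::vector<Customer>& cs) {
        int n = 0;
        for (const auto& c : cs)
            if (is_vip(c)) ++n;
        return n;
    }

    int main() {
        std::vector<Customer> cs = { {"ada", 150}, {"bob", 3} };
        return count_vips(cs); // 1
    }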

arein3
6 replies
1d5h

In OOP you can have data structures and algorithms separated. You can use composition over inheritance without issues.

The fact that a language is strongly typed and you get compilation errors if you forget something is a big plus.

OOP is fundamentally about avoiding static variables.
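
For what it's worth, a small sketch of that separation inside an OOP language (C++ here, all types invented for the example): data as a plain struct, the algorithm as a free function, and composition rather than inheritance for reuse.

    #include <iostream>
    #include <vector>

    // Plain data structure with no behavior attached.
    struct Order {
        double amount;
        bool   rush;
    };

    // The algorithm lives in a free function that takes the data as input,
    // so it can change without touching the struct.
    double shipping_cost(const Order& o) {
        return o.rush ? 15.0 : 5.0;
    }

    // Composition instead of inheritance: a Report *has* a Formatter
    // rather than *being* one.
    struct Formatter {
        void print(double v) const { std::cout << '$' << v << '\n'; }
    };

    struct Report {
        Formatter fmt; // composed, not inherited
        void show(const std::vector<Order>& orders) const {
            for (const auto& o : orders) fmt.print(shipping_cost(o));
        }
    };

    int main() {
        Report r;
        r.show({ {12.0, false}, {30.0, true} });
    }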

gryn
3 replies
1d4h

Don't conflate 'objects' with 'object-oriented design': if you split your data structures from your algos, then you're just using an OOP language to do programming, not doing OOP.

Strong typing has nothing to do with OOP. Static variables are replaced by singletons and other similar-in-spirit objects.

OOP today is a 'no true Scotsman' concept, much like communism, agile, and other vague-by-design terms. You can't argue against it, because every individual has at least one definition in his head and it shifts through the conversation toward the one that's not refuted by the claims.

The intent of such concepts is to hit you in the feels and trigger some ideals inside you, so that you associate the positive feelings you have toward those ideals with the concept.

If you argue about it as it is used in practice, you will inevitably have someone bring up that Java-style OOP is not true OOP and that you should look into Smalltalk or some other language that implements "real OOP".

Since arguing about specifics is a losing battle, let's argue about the bigger picture. If we take the goals of Alan Kay, as he described them in many of his talks, his goal was to make systems more like biology. But as a system designer, that's the last thing you want: we don't know much about biology; people in that field are still reverse-engineering pre-existing designs to this day, not designing much of their own from scratch. If you design a system, you want to have the most control of and foresight into its dynamics; uncontrolled & unintended emergent effects are your source of problems.

Entangling data and behavior makes your conceptual design's state-space size explode. When you go full OOP you become an ontologist, philosophizing about what is an X and what is an X-Manager, X-Provider, ... and less of a system designer trying to make sure your system does not land in the wrong states.

DrScientist
2 replies
1d4h

Aren't you then just attacking a programming style that nobody actually uses or advocates?

One of the most influential books about OOP - "Design Patterns: Elements of Reusable Object-Oriented Software" - talks about composition, separating interface from implementation etc - not about hierarchies ( other than to prefer composition ).

mcguire
0 replies
20h7m

Much of the Design Patterns phenomenon involves convincing Java to do things that are idiomatic in other languages.

adrian_b
0 replies
1d3h

Composition and "separating interface from implementation" are not specific to OOP languages.

Composition has been used since LISP I and ALGOL 60 in almost all programming languages.

"Separating interface from implementation" is also the main feature of the programming languages based on abstract types, starting with Alphard and CLU, which are not OOP languages and which predate the time when Smalltalk has become known to the public, launching the OOP fashion.

An abstract type is defined by an interface, i.e. by a set of functions that have arguments of that type; this is a programming language feature that is completely independent of the OOP features like member functions, virtual functions and inheritance.

All OOP languages have some kind of abstract types, though with a different point of view on the relationships between individual objects, types (a.k.a. classes), and the functions that operate on them; but there have been many languages with abstract data types that lack the OOP features.

Moreover "separating interface from implementation" has also been the main feature of all programming languages based on modules, starting with Mesa, Modula and Ada.

The features that identify an OOP language are: the idea that functions belong to individual objects, not to types, hence member functions; the substitution of union types (of the right kind, as in Algol 68, not the pathetic kinds encountered in Pascal, C, and derived languages) with virtual functions, which provide an alternative form of dynamic polymorphism that is preferable for closed-source software, because it allows changes without recompilation, unlike tagged unions, whose select/case/switch statements must be recompiled when the union is extended; and inheritance in a class hierarchy.

There are many languages that are multi-paradigm, like C++ or D, where you can choose whether to write a program in an OOP style or in a totally different style, but there are also languages where it is difficult to avoid using the OOP features, or even impossible, because all the available data types may be derived from some base "object" type, inheriting its properties.
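
A rough sketch of the two dispatch styles contrasted above (C++; the shape types are invented for the example). The tagged-union version requires every dispatch site to be recompiled when a new alternative is added; the virtual-function version lets new subclasses be added without recompiling existing callers, which is the closed-source-friendly property described above.

    #include <iostream>
    #include <variant>

    // Tagged-union style: the set of alternatives is closed. Extending
    // Shape with a new type means updating and recompiling every visitor.
    struct Circle { double r; };
    struct Square { double side; };
    using Shape = std::variant<Circle, Square>;

    struct AreaVisitor {
        double operator()(const Circle& c) const { return 3.14159265 * c.r * c.r; }
        double operator()(const Square& s) const { return s.side * s.side; }
    };

    // OOP style: dispatch goes through a vtable. A new subclass can be
    // added in a separate translation unit without recompiling callers.
    struct VShape {
        virtual ~VShape() = default;
        virtual double area() const = 0;
    };
    struct VCircle : VShape {
        double r;
        explicit VCircle(double r_) : r(r_) {}
        double area() const override { return 3.14159265 * r * r; }
    };

    int main() {
        Shape s = Circle{1.0};
        VCircle v{1.0};
        std::cout << std::visit(AreaVisitor{}, s) << ' ' << v.area() << '\n';
    }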

mcguire
0 replies
20h17m

That, as I have been told, is "object-based programming", not object-oriented programming.

anon291
0 replies
21h10m

The fact that a language is strongly typed and you get compilation errors if you forget something is a big plus.

That is not at all unique to OOP, and in fact OOP makes the problem undecidable due to ad-hoc subtyping. More structured languages like ML, Haskell, and Rust are much easier to reason about and have much stronger type systems.

koonsolo
2 replies
1d3h

So can you tell me which programming paradigm actually solves these problems that OOP has?

And also, can you give me a huge non-OOP codebase that shows in practice how it is better than the potential OOP implementation?

skydhash
1 replies
1d2h

Paradigms don't solve each other's problems. They're just another approach that may be better in specific contexts, and you can even mix them. Although today, some languages are veering towards using structs and the like for data models and classes as logic containers (Swift, Kotlin).

As for huge codebases, everyone knows that lines of code do not equal quality.

koonsolo
0 replies
23h15m

So basically, OOP is still the best for big iterative projects?

DrScientist
1 replies
1d4h

I think the main problem with OOP is that the classic animal/cat/dog class hierarchy examples used in teaching are in fact hardly ever used in the real world, giving a very misleading view of how OOP is actually used in practice.

Most people using OOP languages prefer composition over hierarchies and I almost never see those complex class structures modelled on data ( for some of the reasons you give ).

In terms of evolvability, encapsulation (a key feature of OOP) is a key tool in enabling it.

microtherion
0 replies
21h39m

You're right that animal/cat/dog hierarchies are hardly ever used, but exposing students too early to DogFactoryFactorySingleton might cause lasting mental damage.

zwnow
0 replies
1d7h

Too many distractions. Software architecture not transparent enough. Design pattern hell where it's not necessary. Too many devs who do not know what they are doing.

mcguire
0 replies
20h8m

I can't speak for Dijkstra, but in my view it is mostly about how it is taught.

When you're learning OOP, you deal with questions like "is a Circle an Ellipse or is an Ellipse a Circle?" F'in' neither, actually. You end up "modelling" "business objects" in "code", leading to monstrosities like Hibernate.

In reality, you have the real-world domain and you have the in-the-computer domain, and trying to make one look like the other is a mistake. OOP is a dandy way of managing some forms of complexity, but it's not often an especially good way of looking at most problems, much less the only way.

jokoon
0 replies
1d5h

https://en.wikipedia.org/wiki/Object-oriented_programming

You can find some points of criticism in there.

arein3
0 replies
1d5h

If you want to write a script, OOP "kind of" nudges you towards making multiple data structures and "function" classes instead of having a simple 100 line function.

And that is the wrong choice if all you need is a script and can write that function in Python.

OOP, particularly Java, is good when you have a lot of functionality, especially if you can use the already existing massive ecosystem.

If I had to write something from scratch, I would not use OOP, but if you need something fairly complicated, chances are there is already a generic solution available and you just need to customize it, which is where Java is pretty good.

adrian_b
0 replies
1d7h

OOP is great for some applications, e.g. for GUIs, but very bad for other applications, e.g. for scientific computing. There, abstract types are very useful, but dynamic polymorphism and inheritance are harmful. Moreover, the view that functions belong to data, instead of thinking about data as the things on which functions operate, is prone to inefficiencies whenever the amount of data is huge. One of the reasons why inheritance is harmful in scientific computing is that most operations on physical quantities have 2 or more arguments, and more often than not they are of different types; therefore any attempt to define those operations as member functions of some class that belongs to a hierarchy of classes makes no sense. All such operations are best defined as overloaded functions, where a specific implementation is selected at compile time based on the types of the arguments.
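
A minimal sketch of that last point (C++; the quantity types are invented for the example): the operations below belong to neither argument's type, so making them member functions of either class, let alone placing those classes in a hierarchy, would be arbitrary, while free overloaded operators selected at compile time are natural.

    #include <iostream>

    // Hypothetical physical-quantity types, just for illustration.
    struct Length { double m;   };
    struct Time   { double s;   };
    struct Speed  { double mps; };
    struct Area   { double m2;  };

    // Free overloaded functions: the implementation is selected at compile
    // time from the types of *both* arguments; no class "owns" the operation.
    Area  operator*(Length a, Length b) { return { a.m * b.m }; }
    Speed operator/(Length d, Time t)   { return { d.m / t.s }; }

    int main() {
        Length d{100.0};
        Time   t{10.0};
        std::cout << (d / t).mps << " m/s\n"; // 10 m/s
        std::cout << (d * d).m2  << " m^2\n"; // 10000 m^2
    }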

Unfortunately, when OOP became fashionable, its proponents tried to convince everybody that OOP is the best paradigm for absolutely all programming problems, not only for those where OOP is indeed the best. They were rather successful for some time, which was bad for the software industry in general, resulting in many sub-optimal programs.

Rochus
0 replies
1d6h

What's wrong with object-oriented programming?

Nothing. It's just one of many useful tools we can use to develop software. It's like with food. Sometimes some people loudly claim that a certain food is unhealthy, and then a few months or years later you read the exact opposite. Fortunately, there are low-pass filters.

HenriTEL
0 replies
1d6h

Like everything, it has its pros and cons. One of the cons is that it favors strong coupling between data and operations, which means that a change in one place is likely to change/break things elsewhere.

amiga386
8 replies
1d7h

He's the GOAT of hot takes. His critique of the GOTO statement in a 1968 letter to the ACM, which they retitled "Go To Statement Considered Harmful", is one of the best-remembered critiques in programming history.

The phrase "considered harmful" has become a meme and is the go-to phrase (pun intended) for essayists looking to criticise some aspect of computing.

https://www.cs.utexas.edu/users/EWD/ewd02xx/EWD215.PDF

https://en.wikipedia.org/wiki/Considered_harmful

glimshe
4 replies
1d6h

One of the best remembered, and one which led to harmful prejudice against BASIC. While the foundation of the criticism makes sense, it led to silly notions such as "there is never a case where GOTOs are useful" and "people who start with BASIC are broken programmers forever".

User23
2 replies
1d5h

The problem with unrestricted GOTO isn’t that they’re never useful or that bad people use them. The problem is specifically that they make predicate transformer semantics (and probably other formalisms) pragmatically useless.

For example, in a language without GOTO, inside an if statement the guarding condition is known to be a predicate that holds (at least until another operation changes it), whereas in a language with unrestricted GOTO there is no guarantee whatsoever that the guard holds, since execution could have jumped past it.
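
A minimal sketch of that point (C++, which inherits C's goto semantics; the label is invented for the example): the goto jumps straight into the if-body, so the guard above it proves nothing about the state at that point.

    #include <iostream>

    int main() {
        int x = 0;
        goto inside; // legal: jumps past the guard below

        if (x != 0) {
    inside:
            // With structured control flow only, x != 0 would be a valid
            // precondition here. The goto voids that guarantee.
            std::cout << "guard 'x != 0' assumed, but x = " << x << '\n';
        }
    }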

mcguire
1 replies
20h45m

Whenever GOTO comes up, I have to mention Knuth's beautiful paper that was part of that argument: "Structured Programming with go to Statements" (1974). Some quotes:

"At the I F I P Congress in 1971 I had the pleasure of meeting Dr. Eiichi Goto of Japan, who cheerfully complained that he was always being eliminated."

"For many years, the go to statement has been troublesome in the definition of correctness proofs and language semantics....Just recently, however, Hoare has shown that there is, in fact, a rather simple way to give an axiomatic definition of go to statements; indeed, he wishes quite frankly that it hadn't been quite so simple."

I cannot find anything written by Hoare about it, but Knuth goes on to describe it: the idea is that labels have associated preconditions and GOTOs have the semantics { precondition(L) } goto L { false }.

"Informally, a(L) [my "precondition(L)"] represents the desired state of affairs at label L; this definition says essentially that a program is correct if a(L) holds at L and before all "go to L" statements, and that control never "falls through" a go to statement to the following text. Stating the assertions a(L) is analogous to formulating loop invariants. Thus, it is not difficult to deal formally with tortuous program structure if it turns out to be necessary; all we need to know is the "meaning" of each label."

It is a very nice paper. Some sources:

http://www.kohala.com/start/papers.others/knuth.dec74.html

https://pic.plover.com/knuth-GOTO.pdf

https://dl.acm.org/doi/10.1145/356635.356640

User23
0 replies
18h49m

Indeed, which is why I offered the pragmatic qualifier. It easily follows that the weakest precondition for L is the disjunction of the weakest preconditions of all the gotos targeting L. One can possibly do even better with some kind of flow analysis. The issue isn't that it's impossible, it's that programming is hard enough already, and this makes it much harder to be sure the code is correct. Or, put another way, "not difficult" for Knuth and for the typical working programmer (or me) aren't identical.

Thanks for sharing that paper! As an aside, it's really remarkable just how readable state-of-the-art papers from that era, such as the one you shared, are. Either computing science has greatly advanced to the point where clarity is no longer achievable without years of specialized preparatory study, or the quality of writing has regressed.

personalityson
0 replies
1d5h

He was also against LISP, but turned around in 1999:

https://www.cs.utexas.edu/users/EWD/transcriptions/EWD12xx/E...

"I must confess that I was very slow on appreciating LISP’s merits. My first introduction was via a paper that defined the semantics of LISP in terms of LISP, I did not see how that could make sense, I rejected the paper and LISP with it."

goto11
0 replies
1d4h

The GOTO paper is widely misunderstood, though. It makes a case for blocks, scopes, and functions as structures which make it easier to analyze and reason about the execution of complex programs. The case against unconstrained GOTO follows naturally from this, since you can't have those structures in combination with unconstrained GOTO.

cyco130
0 replies
1d6h

And the person who changed the title, inventing the now-iconic trope "considered harmful", was none other than the recently deceased Niklaus Wirth.

Kamq
0 replies
1d6h

Dijkstra's take was significantly less spicy. The original title was "A Case against the GO TO Statement". Niklaus Wirth is actually the one who changed it (your Wikipedia link covers this).

personalityson
5 replies
1d6h

Ironclad deduction and logic

" a) 2 ≤ i < 13

b) 1 < i ≤ 12

c) 2 ≤ i ≤ 12

d) 1 < i < 13

There is a smallest natural number. Exclusion of the lower bound —as in b) and d)— forces for a subsequence starting at the smallest natural number the lower bound as mentioned into the realm of the unnatural numbers. That is ugly, so for the lower bound we prefer the ≤ as in a) and c).

Consider now the subsequences starting at the smallest natural number: inclusion of the upper bound would then force the latter to be unnatural by the time the sequence has shrunk to the empty one. That is ugly, so for the upper bound we prefer < as in a) and d). We conclude that convention a) is to be preferred."

I despise this guy

FeepingCreature
4 replies
1d6h

Why? Seems completely correct and good.

You either have an empty range be -1 .. 0, which is ugly, or 0 .. -1, which is also ugly. Thus, start-inclusive, end-exclusive.
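
A quick illustration of why convention a) is the one that survives in code (a minimal C++ sketch): with half-open [lo, hi) ranges the length is simply hi - lo, adjacent ranges [a, b) and [b, c) tile with no overlap or gap, and the empty range at any position is just [n, n), with no negative bound needed.

    #include <cstdio>

    int main() {
        int lo = 0, hi = 5;
        for (int i = lo; i < hi; ++i)  // visits 0, 1, 2, 3, 4
            std::printf("%d ", i);
        std::printf("\nlength = %d\n", hi - lo); // 5

        // Empty range: lo == hi, the loop body never runs,
        // and no bound has to go to -1.
    }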

personalityson
2 replies
1d6h

"That is ugly" is what separates math (1-indexing in Fortran, R, SAS, SPSS, Matlab, Mathematica, Julia) from the apes

This and column-major order for matrices (a vector is a column, not a row)

fosforsvenne
0 replies
1d5h

I'm not seeing the superior ironcladness in this statement. I must be an ape.

forgetfulness
0 replies
1d3h

Those languages are made for writing throwaway code that is sometimes made to suffer in agony for years, even though it was never written to be revised more than a week later.

We can be thankful that this theoretical physicist in particular decided to dedicate time to thinking carefully about the needs of those who would be his colleagues and successors, when he was pretty much inventing programming as a profession. He told the anecdote of when he got married: he wasn't able to write in "programmer" as his occupation in the Dutch paperwork, because it didn't exist as a job category yet.

leereeves
0 replies
1d6h

1 < i < 1 is also ugly. Better to write false, or remove that conditional entirely.

And then, once you remove empty ranges, there's no reason to choose a range style based on which you prefer for an empty range.

a_imho
5 replies
1d6h

Perhaps even more relevant nowadays

My point today is that, if we wish to count lines of code, we should not regard them as "lines produced" but as "lines spent": the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger.

https://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/E...

sanderjd
4 replies
1d4h

I'm not sure if it's due to having come across this quote long ago, but this idea is ingrained in my bones.

Except I think people have often overly focused on "lines of code" as the unit for this metric, leading many to overrate terse code, even when it is very dense.

But I'm not sure what wording would capture this idea succinctly enough to include it in a pithy quote.

nordsieck
2 replies
1d3h

There's a benefit to terseness. Go's naming conventions are superior to Java's.

APL (and its many cousins) is a bridge too far for me, but I can see the appeal of having the entire program on a single screen.

sanderjd
0 replies
17h8m

Agree to (not entirely, but largely) disagree!

It's funny that you used Go as an example. Like 60% of its raison d'être is noticing that more lines of code is often better if each line is simple. And I think it's mostly right about that. Its inscrutable naming conventions are, on the other hand, a poor choice. (Notwithstanding that Java's are a poor choice in the opposite direction.)

hoosieree
0 replies
1d3h

Dijkstra on APL: https://www.jsoftware.com/papers/Dijkstra_Letter.htm

In one short letter he makes an insightful critique yet misses the point so hard I can practically hear the woosh 42 years later.

idontwantthis
0 replies
1d4h

Every program should be as simple as possible, but no simpler.

vanderZwan
4 replies
1d7h

I didn't know the last one. That explains why Alan Kay spent a few minutes of one of his OOPSLA talks roasting Dijkstra[0]

"I don't know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras"

Which, I dunno, feels kind of funny coming from him of all people.

[0] https://youtu.be/oKg1hTOQXoY?t=342

baruchel
2 replies
1d5h

eszed
0 replies
1d

Thanks for posting this. I really enjoyed reading it. It'd be worth a top-level post (hint) - there's lots in it to discuss without side-tracking this thread.

cafard
0 replies
1d

Would Dijkstra have tolerated the expression "amour-propre" better than "ego"?

norir
0 replies
1d1h

Judge not, lest ye be judged.

goto11
3 replies
1d4h

Half of his clever quotes are completely bonkers though, and have been disproven by history. How many of you are proving your programs correct before entering them into the computer? Because that is the only correct way to program. And remember he disparaged Margaret Hamilton's software methodology. Sure, she helped put a man on the moon, but apparently she did it the wrong way.

I suspect geeks like Dijkstra because he is "edgy" more than because he is correct.

Also, Object-Oriented programming was actually invented in Norway, even though Alan Kay of Smalltalk fame tried to take credit.

He was right about GOTO though, but many developers did not even understand his argument but just read the headline and concluded "GOTO bad".

vanderZwan
0 replies
19h43m

How many of you are proving your programs correct before entering them into the computer?

Not enough of us, that's how many.

Also, if he said this back when the majority of programs were written in assembly I think it makes a lot more sense.

steego
0 replies
1d1h

Alan Kay didn’t claim to invent it. He coined the term. He’ll be the first to tell you he was inspired by Simula.

https://www.quora.com/What-did-Alan-Kay-mean-by-I-made-up-th...

opportune
0 replies
21h36m

I like him because he is outspoken rather than because he is edgy.

I don’t know how amenable he was to his arguments possibly being incorrect, but I love working with/knowing people who are that combination of outspoken and not-overly-stubborn. People like that are a firehose of ideas and knowledge, even if not everything they say is correct. They also usually are “passionate” about their work and at least competent enough to have unorthodox opinions that don’t just sound blatantly stupid.

Most people are too timid or low-ability to be outspoken at all.

cafard
3 replies
1d4h

I really think that it is a disservice to Dijkstra to remember him as the Don Rickles of computer science.

microtherion
2 replies
21h42m

I think of him more as the Truman Capote of computer science. Backus' characterization of one particular EWD as "bitchy" is right on the money.

Jtsummers
1 replies
21h37m

Is it? Have you read his EWDs? They're quite interesting, and he comes across as far more humble than many people seem to imagine. Blunt in his criticism, certainly, but not self-aggrandizing or boastful.

microtherion
0 replies
20h18m

Yes, I have read numerous EWDs, starting roughly 40 years ago. Many of them are indeed interesting (even, or maybe particularly, the gossipy parts).

The problem is not that he self-aggrandizes or is boastful; it's that he keeps tearing other people down. And that he was very willing to offer sweeping pronouncements on matters in which he had no practical experience. He basically stopped touching computers in the early 1970s, and certainly stopped writing practical programs. What basis, then, did he have for dismissing object-oriented programming, which for all its flaws has done decidedly more for human progress than program verification has?

And as the tense exchange with Backus shows, he was quick to dismiss the relevance of functional programming, which certainly has done a lot for the kind of mathematical reasoning he advocated for programming. (Apparently he taught his students Haskell later. Did he ever apologize to Backus for his wrong-headed initial assessment?)

Would computer science miss ANYTHING relevant if he had stopped working in the field in 1975? His output basically consisted of proving propositions over cherry-picked toy problems. I don't think anybody doubts that this is possible, but is it a viable approach to building real systems (not only large ones, but also ones whose requirements evolve over time)? I consider this, at best, highly unproven, and yet he advocated it as the sole approach to be taught in CS education. Knuth noted, correctly, that neither Dijkstra's education nor any of the practical systems he built (the Algol compiler, etc.) were based on this approach.

Rochus
3 replies
1d6h

Object-oriented programming is an exceptionally bad idea which could only have originated in California.

Not sure what he meant by "object-oriented", but he, like the other members of the IFIP, was well aware of Simula 67, and he belonged to the faction that rejected van Wijngaarden's proposal and regarded Simula 67 as "a more fruitful development" [1]. So, if he said or wrote that at all, it's likely related to what Kay understood by the term, or to the implementation as a dynamically typed, originally interpreted language done at Xerox PARC, not to the kind of "object-orientation" for which Simula 67 is recognized today (remember that the term "object-oriented" was originally not applied to Simula 67 by the public, but to Smalltalk).

[1] https://www.researchgate.net/publication/2948437_Edsger_Dijk...

rcbdev
2 replies
1d6h

if he said or wrote that at all

See “TUG LINES”, Issue 32 [1]

it's likely related to what Kay understood by the term, or the implementation as a dynamically typed, originally interpreted language done at Xerox PARC

I think the comment was referring to what they were doing at Xerox PARC - yes.

There are more recent writings of his taking jabs at Californian universities for teaching Java to freshmen instead of what he was doing in Austin: teaching functional programming via Haskell. [2]

[1] Dijkstra, E.W. Quoted by Bob Crawford. TUG lines, Journal of the Turbo User Group, 32, Aug.–Sep. (1989)

[2] https://www.cs.utexas.edu/users/EWD/OtherDocs/To%20the%20Bud... (2001)

jprival
0 replies
1d1h

[2] doesn’t actually seem to call out California by name, though. It would have been a little unfair if he had, since California’s most famous public university was teaching intro CS with Scheme and SICP at the time.

Rochus
0 replies
1d6h

Thanks for the references.

Unfortunately I wasn't able to find [1] anywhere on the web; the CHM only seems to have issues up to 22. Do you have a link where I can access it?

The other one [2] is also interesting, but the claim is not against OO, but "that functional programs are much more readily appreciated as mathematical objects than imperative ones, so that you can teach what rigorous reasoning about programs amounts to", and that "Java [..] is a mess", which again applies to the language quality (see also Brinch Hansen's paper https://dl.acm.org/doi/10.1145/312009.312034), not the paradigm.

rvba
2 replies
1d4h

I will get downvoted for this, but I will ask anyway: what exactly are his achievements?

After reading the article, it seems that he mostly produced hot takes while others did the actual heavy lifting. Goto considered harmful? And some algorithm that would probably have been found by someone else?

Will Linus get a monument made of pure gold when he dies?

Someone
0 replies
1d2h

DontchaKnowit
0 replies
1d2h

I mean, look up his Wikipedia page.

He won a Turing Award for advocating structured control flow, which is the modern standard style for programming. He contributed a number of algorithms, including the shortest-path algorithm.

teleforce
0 replies
1d4h

One of my favorite quotes from Dijkstra is his not-so-subtle dig at the arrogance of MIT professors for refusing to adopt his solutions to their problems in OS design. Given the time of the remarks, they were most probably the original designers of the ill-fated Multics OS.

"You can hardly blame M.I.T. for not taking notice of an obscure computer scientist in a small town in the Netherlands."

quickthrower2
0 replies
1d5h

Is that the Smalltalk kind of OO or the C++ kind?

archixe
47 replies
1d6h

"Whether written using a fountain pen or typewriter, Dijkstra’s technical reports were composed at a speed of around three words per minute. “The rest of the time,” he remarked, “is taken up by thinking.”9 For Dijkstra, writing and thinking blended into one activity. When preparing a new EWD, he always sought to produce the final version from the outset."

"He also never purchased a computer. Eventually, in the late 1980s, he was given one as a gift by a computer company, but never used it for word processing. Dijkstra did not own a TV, a VCR, or even a mobile phone. He preferred to avoid the cinema, citing an oversensitivity to visual input. By contrast, he enjoyed attending classical music concerts."

This is very fascinating to me; as a generally unsure person, I can't imagine writing anything this way, let alone scientific papers. It certainly requires deep focus and great knowledge of what you are writing about. I will try to give it a go next time I write out my thoughts.

There is an interesting interview where he goes deeper into this: https://www.youtube.com/watch?v=P0w1MJHxStg

AndrewKemendo
22 replies
1d2h

There’s no other way to do it for this type of a brain. I know because I have the same type of brain.

I spend 90% of my time formulating descriptions of the problem and the desired end state

Hallucinating futures where the world is in a state that I either want it to be in or that somebody's asking me to build.

Once you know your final end state, you need to evaluate the current state of the things that need to change in order to transition to the final state.

Once you have your S' and S respectively, the rest of the time is choosing between hallucinations based on the sub-component likelihood of being able to move from S to S' within the time window.

So the process is basically trying to derive the transition function and the sequencing of creating the systems and components that are required to successfully transition from state S to state S'.

So the more granularly and precisely you can define the systems at S and S', the easier it is to discover the likelihood pathway for transitional variables, and also to discover gaps, where systems don't exist, that would be required for S'.

Said another way: treat everything, both existing and potential futures, as though they are, or are within, an existing state machine that can be modeled. Your task is to understand the Markov process that would result in such a state and then implement the things required to realize it.

The religious call this "Prayer"

Others call it "Manifesting"

biggestbrain1
12 replies
21h8m

This is perhaps the most verbose and ridiculous way of saying "I think about how to solve the problem". It feels like a parody with the prompt of: how a person who scored 170 on an online IQ test would describe how their brain works.

Twisol
4 replies
20h4m

It would be kinder to assume that "I think about how to solve the problem" doesn't capture the nuances of actually doing that thinking to the satisfaction of the commenter, and this is their attempt to articulate it to match their experience. The process of "understand the starting point, the desired end point, and identify the path between them" closely matches the way I approach problems. If you don't feel inclined to meet someone where they're at, you can say so without condescension.

biggestbrain1
3 replies
19h31m

"understand the starting point, the desired end point, and identify the path between them" That is the definition of problem-solving. If there was any nuance in the original comment, we both failed to find it.

deanCommie
2 replies
17h31m

Twisol's point that you failed to find is that you could choose to give AndrewKemendo the benefit of the doubt.

Clearly there is something novel about the way Dijkstra thought and worked. Most of us don't do things that way: formulate our thinking, then work towards one perfect first draft.

AndrewKemendo saw that and said "I identify, I'm the same way", and tried to describe what his way of thinking sounds like.

If to you it sounds like the exact way of thinking and problem solving as yours or mine, well, then perhaps AndrewKemendo did not describe it well enough. Perhaps it's impossible to describe it to someone else. But the necessary context to his description is that his brain works different from yours or mine. So the short textual description also means something different than what it does for you and me.

Is there the possibility that AndrewKemendo is self-grandiose and not aware that he is not special and his verbose descriptions of his unique way of thinking are nothing but? Sure. But even if we had no other evidence that this is not the case, it would cost you nothing to be curious and assume best intention. But we do have other evidence - and it's in the first line - "There’s no other way to do it for this type of a brain. I know because I have the same type of brain."

rramadass
1 replies
6h29m

I know you mean well, but I am somewhat with biggestbrain1 on this.

The comment just comes off as mere posturing (e.g. "I know because I have the same type of brain"), and if you actually think about what is written down, all I see is empty verbiage and something which could have been said more simply and directly (for example, there is no need to bring in "Markov processes" here).

Dijkstra's (and Floyd's/Hoare's) programming techniques are hard enough to learn that such comments merely obfuscate the essential ideas and push people away from trying to study them because they "appear too hard". Things should be made as simple as possible to motivate people to study and learn.

HPMOR
0 replies
35m

These commentaries have such nuance and focus it reads like a work of art. I appreciate the many people who contributed to this thread. A true critical reading of each person's comments. It's absolutely beautiful.

AndrewKemendo
2 replies
18h49m

Why does my description threaten you??

augustk
1 replies
7h0m

I don't think it threatens anyone, it may simply come across as pretentious.

AndrewKemendo
0 replies
1h18m

I guess I can understand that.

I'm not making any bold claims about inventing anything, or anything like that; in fact, there are multiple times where I say I'm not doing anything special.

I'm just describing that my mental process resembles, and steps through states the same way as, a Markov process (or OODA or SPA if you prefer), that not everybody thinks like that, and that I've found that certain types of scientific thinkers also have that same type of thinking. The Dijkstra explanation there lined up with my experience.

I can't control how people react to that, so :shrug:

cangeroo
1 replies
18h29m

Whereas "AndrewKemendo" provided first-hand testimony, which added value in the form of a datapoint, I struggle to see how your opinion of his testimony added any value, other than negativity.

It's fine to dislike something - and just move on.

foofie
0 replies
10h22m

Whereas "AndrewKemendo" provided first-hand testimony, which added value in the form of a datapoint, (...)

So did the "In this moment I am euphoric" guy.

I am with biggestbrain here. The post reads like "I'm just like Dijkstra, I solve problems".

coldtea
0 replies
19h9m

Yeah, in the same way that Moby Dick is a verbose and ridiculous way of saying "a guy holds a grudge against a whale", and Birth of the Cool is the same as the 8-bit chiptune cover.

The cliff notes summary is not the same as what the parent tries to convey. The nuance in what the parent wrote was the thing that went wooosh.

Hayatabad
0 replies
18h6m

+1.

vacuity
6 replies
1d

Interesting. I try to do something like this, but way simpler and my productivity is low right now. Maybe more practice will help. Would you say you're usually good at doing this and getting results?

rramadass
2 replies
6h24m

Are you trying to learn Dijkstra/Hoare/Floyd-style programming techniques, or are you trying to understand the use of "Markov processes" (totally unnecessary) in the comment?

vacuity
1 replies
2h41m

The general problem solving approach. What the two sibling replies to yours are talking about.

rramadass
0 replies
2h7m

Ah, this is a more general question on which it is hard to give specific suggestions because "problem-solving" can mean anything/everything without knowing your exact context.

However you might want to start with The Thinker's Toolkit by Morgan Jones. This gives you a catalog of models which teaches you to structure your requirements born from problem analysis in various ways to aid problem-solving.

Denzel
1 replies
21h34m

The Minto Pyramid Principle (MPP) offers an analogous, perhaps more accessible, process to problem solving similar to Andrew’s description.

You use a problem solving process built on structured analysis by first defining the problem in terms of an Undesired Result (R1), Desired Result (R2) — the S and S’ in Andrew’s process. Then, you determine the Starting Point in terms of the logical structures that generate your R1; these structures can be a sequence of cause-effect, a structural decomposition (e.g. of organization, geography, etc.), a classification, or some combination of the three. From this structure you can hypothesize experiments to confirm/disconfirm where the causes are. With these causes in hand, you can generate possible solutions or corrective actions. Finally, you’d evaluate your alternatives and arrive at your solution to move from R1->R2.

MPP’s problem solving process has the additional advantage of structuring your actions/results in a way that makes writing a document or presentation simple and straightforward, to convince others for example.

Check out the book if you’re interested in improving your problem solving and analysis skills.

AndrewKemendo
0 replies
14h52m

Thanks for this. Very interesting indeed

AndrewKemendo
0 replies
22h35m

At this point, I consider it deterministic in terms of efficacy

So you can basically hand me any problem and I will implement this process, and I have a high success rate for delivering desired outcomes.

And again, this isn't really a process I invented. It boils down to a practical implementation of a Markov process in planning, as applied to any set of tasks that could be discretely described as a state machine.

The key challenge IMO is in describing the state machine, and that is what takes a lot of elucidation.

In many cases we don't have the ability to precisely describe a process as a state machine, because we haven't defined the boundaries of the system and then measured it enough, in enough different dimensions, across enough time, to be able to give that level of understanding to inputs and outputs.

j7ake
1 replies
11h59m

Why are you using a Markov process, though, to model time-dependent likelihood pathways?

Doesn’t make sense. Your next step depends on much more than just knowing where you are at S. One needs to account for the history of where you were before.

Or maybe you’re just using technical words with precise meanings to describe a vague imprecise heuristic?

rramadass
0 replies
7h45m

Your question is valid. I think the person is just using bombastic words for something already well-known and simpler. A Markov chain is just an FSM with probabilistic transition functions, and in the limit it is just a deterministic FSM when the transition probabilities become 1.

hardasspunk
17 replies
1d6h

3 WPM really shook me as well. Today we complain about not having enough time to work on our projects, but back in the good old days Dijkstra, as successful as he was, found the time to write at 3 WPM and still produced close to 7k articles! Astonishing.

anonzzzies
9 replies
1d5h

At least that's what I see with my colleagues who buy special keyboards to type code faster and are still hellishly unproductive, deleting and rewriting, while I prefer to think before typing and have a solution that works when I type it in, instead of iterating over solutions that cannot work in the first place. When I type it's fast, I just don't do it as much as my peers, but I am more productive at delivering working code.

I tried writing with a pen, but no one, including me, can read it afterwards...

umanwizard
7 replies
1d3h

People buy special keyboards because they enjoy using them, not because they’re faster to type on.

doubled112
2 replies
1d

What baffles me is that some people aren't willing to buy nice peripherals.

I don't know about anybody else, but I spend 8-18 hours a day attached to a mouse and keyboard. I'm not going to use one that doesn't feel good to me.

foofie
0 replies
10h17m

What baffles me is that some people aren't willing to buy nice peripherals.

Buying nice peripherals is like buying nice clothes.

a1369209993
0 replies
22h24m

Like with most things, the problem is usually that we can't find peripherals that are actually nice rather than overpriced (and endorsed by shills). If the options are paying $40 for a shitty keyboard versus paying $400 for a still equally (if not more) shitty keyboard, the choice is pretty obvious.

anonzzzies
1 replies
1d2h

If they tell me they buy them to type faster on then I tend to assume that's the reason.

umanwizard
0 replies
1d2h

I think that’s an incorrect assumption. People are often bad at articulating why they enjoy something.

dmoy
0 replies
1d2h

Or sometimes 'cus it's less pain-inducing for the wrists

crabbone
0 replies
1d

Not if you play videogames, but yeah, for writing code keyboard speed doesn't make a difference.

toss1
0 replies
1d2h

I'm generally with you in terms of think—then—write, and wondering how people can write/type rapidly then endlessly edit. But I do have some modes where it works to write out incomplete thoughts, getting them out of my head, and then edit them into something workable.

Perhaps some people just really need to take that to extremes, getting out of their head all their fragmentary or preliminary thoughts, and filtering and working them down on paper or the screen?

It generally seems (to me) that a good-heavy amount of thinking first works best and/or is most productive, but overall, we all have different brains, training, and experiences, so I'd go with: "whatever works for you" (but try multiple methods).

bluGill
4 replies
1d4h

Are you really any faster on average? Sure, I can type 60 wpm (I haven't measured, but this is a reasonable speed for anyone to attain with a little practice; maybe I'm only at 40, or I might even get to 100), but that is in typing situations where I don't have to think. If your job is to enter some document into a word processor (human OCR), or transcribe a recording (human speech-to-text), then typing speed is your limit. However, for most of us the limit is thinking. I can type one sentence fast, but then I need to stop to think about what the next sentence will say, and so my speed goes down. When writing code I need even more time to think, and thus my total speed is lower. My fingers can still move at 60 wpm, though.

hardasspunk
1 replies
1d1h

I've never measured my thinking speed before. This measurement is intriguing. I feel that speed of thinking will be inversely proportional to the quality of content we write. Also, even if I can think fast and write quickly, I might have to go back and make amendments which can reduce my average thinking speed.

bluGill
0 replies
1d

Linguists have noted that some people talk faster than others, and some languages/cultures speak faster than others. However, this is only a measure of word count: where people talk faster, they are not actually saying more, they are just using more words to say it. (Note that I could have eliminated "just" in the above sentence; to my ear it seems needed, but the sentence doesn't change meaning without it.)

Human thought speed seems to be a reflection of how fast you need to think for normal conversations. Thus orders like "run" or "jump" are likely to be fast, since if you need to tell someone to escape a lion, speed is important, while if you are planning a hunt you are allowed a lot more time to get the details right. Engineering is often a very slow activity, as you need to think about a lot of details, but you have time. Typing a message like this, I want to be fast enough to follow my thought speed, which is much faster. (I won't claim to type this fast, but I get close, and can believe that with more practice I would be this fast.)

deanCommie
0 replies
17h25m

I don't know why people are arguing with you. Clearly the question here is whether you think, then type, or type while you think.

I can type at >100 WPM (just tested). I recently had to write a 3-page document for work, the summary of weeks of research and interviewing. It wound up 1959 words (just checked). I wrote it over the course of one 8 hour day, after having procrastinated on it for the previous 2 days.

I came home at the end of that day exhausted because I felt like I finally did weeks of work in one day.

But, this 8 hour day had 2 hours of meetings and 1 hour for lunch. Say another 1 hour for random slacking. So pure concentrated writing was 4 hours or 240 minutes, which winds up to about 8 words per minute. That's it!

And I would consider this fairly fast, and again I'm not including the 2 days where I tried to write the doc at various times but failed to get started. SOMETHING was spinning in my brain in the background, bringing the ideas clarity.

IggleSniggle
0 replies
23h32m

Back when I worked at an office my coworker commented to me that he could always tell when I was coding vs corresponding.

When corresponding, I type at around 120 wpm. When coding, it's got to be closer to 5 or 10. But I really like this idea that it would be useful to push that number lower (while still focusing on the task at hand).

Thing is, I got into this field because I really enjoy interacting with computers via keyboard. It's super satisfying to punch some buttons and make things happen. This is what I love about vim: right off the bat, a single button press feels like it does a lot. Then you punch a whole bunch of buttons to make a new button make something happen. Nevermind that it doesn't achieve product goals. I'm here for the button pressing, and for the creating of new buttons to be pressed.

leereeves
0 replies
1d5h

I'm sure not owning a TV, let alone social media or YouTube, gave him a lot more time to work.

coldtea
0 replies
19h3m

An 800 word opinion article can absolutely take 4 hours for a journalist to write, and that's 3.3 WPM.

And we're talking journalistic opinion pieces, which are a dime a dozen, not deeply nuanced scientific writing. If anything, Dijkstra was writing his notes too fast and loose!

tbalsam
1 replies
1d

I am rather autistic and this is how I think, toosies. 90-95% of my time working on a project is just sitting, staring into space, completely unaware of the world around me. I think it's because while you can "explore the field" with compute, as it were, the informational shapes of whatever the problem is at hand are always there, and usually they are _much_ more simple than people seem to let on, IMPO.

I'm not really sure how to describe it for me, as I have partial aphantasia and a kind of weird shape-synesthesia which also blends with my sensibilities around physical taste. It can really only be described as "feeling" a shape, where a shape represents some quantity of information for a topic; as if, rather than my using receptors in my hands to touch something, it's the shape feeling its own shape relative to other shapes, and it's not so much a touch-like sensation as a visceral, semi-intuitive experience of the information of whatever that shape-object-whatever represents.

I do not think that thinking in this way was fully formed from a young age (i.e. it used to be an extremely active story-based imagination), it seems to have developed in some manner as an (informationally) dissociative coping mechanism because my autism is strong enough that the world does not stop overstimulating me. When I am in this headspace, I am generally unaware of sensory input, there is only me, and the (generally mathematical) "problem" at hand. Sometimes I can go for hours, working on a problem, moving shapes and concepts around, and fitting them together in a way that explains more while taking less information to do so. This can be both a gift and a hindrance.

I generally have to limit my budget for sensory stimulation and for 'new' things because my brain is so sensitive to information. This generally is more of a disability than a help, on the whole, though it is nice to be able to solve problems in the way that I do, that is one comfort to me.

I do have a tendency to want to fit things in my brain before thinking about them, but the 'onboarding' process can take a while. Before I developed this sense, I used to get in trouble all the time as a kid for doing mental math all of the time: I would sit, think for a long time about a problem, write an answer down, and move on. Unfortunately, instead of being encouraged, it was something I got punished for (I was homeschooled), and there wasn't much in the way of what I would consider appropriate adaptivity in that regard.

Problems are fun, and the hardest part about them I think is that we are generally crap at phrasing them and looking for solutions, most of my work seems to just be cutting through the cruft of whatever trends-du-jour or symbolic obfuscation shenanigans are going on for a given topic. Once you sort out the structure of a given mathematical problem, the answer seems to fall into place well, and one does not always need compute for that.

So, all of that to say: each person thinks and processes things in a different way, and if you're into information theory, one might say the KL divergence (in a manner of speaking) between how you encode and solve problems and Dijkstra's methods of thinking might be very high! It depends upon what unique strengths and weaknesses you have, and it can take years to find whichever special problem-solving mode fits best with your sensibilities and gifts. For example, I have significant trouble with symbols that have potentially multiple meanings (as is often the case with many formal mathematical definitions), and since those take a disproportionate amount of "processing time" for me, my problem-solving methods have evolved to replace and/or avoid them unless necessary. That is likely a silly solution for many people!

So, while you may not entirely fit Dijkstra's natural proclivities, there might be elements that fit you. But whatever fits you best as a problem-solving technique is likely, frustratingly, and beautifully, entirely unique and specific to you. <3 :'))))

Twisol
0 replies
19h54m

You've described something extraordinarily similar to my own mental topology for processing problems. I'm also autistic, although I think our sensitivities differ somewhat.

I also try to (and mostly succeed at) hold a whole problem in my head at once. This made me fairly successful at architecting software systems with many moving parts, but it's an incredible drain on my energy, and I would sometimes find myself so exhausted I would go to sleep for hours (i.e. not a simple nap) in the middle of the day. I credit this ability for my current burnout, honestly! It's deeply satisfying to do, but maintaining it week after week amidst all the other responsibilities of my previous job (especially the interpersonal ones) was crippling.

zubairq
0 replies
13h25m

I agree with this comment. My typing speed was sometimes the bottleneck, but usually it was my speed of thought; still, I often fell into the cargo-cult thinking that the ability to type fast matters.

kayodelycaon
0 replies
1d1h

It doesn't seem weird to me, because 3 words per minute is 180 words per hour. The fastest I've ever typed as a fiction writer is around 800 words per hour. I think my average is around 300~400 words per hour.

hedora
0 replies
1d1h

he always sought to produce the final version from the outset

This is definitely a learnable skill. However, it might be a lost art with younger generations. I had a year of high school English that taught this. Most of us kids did fine.

The assignments worked as follows:

- Write an multipage essay on this topic; you have a week. (first quarter, repeat 2-3 times)

- Here is your exam question. You can bring one 3x5" notecard with you tomorrow, but you'll need to write the actual multipage essay in 55 minutes. (second quarter; repeat 2-3 times)

- You will be given a question about the book we read, and will have 55 minutes to write a multipage essay answering the question. No supplementary materials allowed (except the book). (last quarter; repeat 2-3 times)

Grading for things like the overall structure of the essay and sentence flow got stricter as the course progressed. So did grading on reading comprehension.

crabbone
0 replies
1d

In high school, I had to write an exposé... something about the "Silver Age" of Russian poetry (that's Blok, Bely, Akhmatova, Gippius, Gumilyov, etc., if you are interested). It was about 20 pages long, on a typewriter.

It's physically more straining than typing on any computer keyboard, even one with very stiff keys. And I didn't know how to touch-type at the time, so it was pecking with two fingers. With all that said... three words per minute isn't a lot. At all. By the time I was finished typing, I could probably do better than that.

To me, this quote describes someone who rather hates / struggles with technology. I had a bunch of math professors like this: they couldn't draw diagrams to save their lives, they couldn't write formulas in any editor, and so often we'd have very poorly hand-drawn formulas photographed and inserted as images into MS Word documents or similar incomprehensible junk.

This doesn't necessarily mean they are bad at the very narrow field they specialized in, but it usually means they are very bad at virtually everything else adjacent to that field. Also, this often means that they've made their career in academia by being the first person in that narrow field they specialize in.

This is also my impression of Dijkstra: he had some novel, but also kind of low-hanging-fruit, ideas... at a time when he was almost alone in that field. Had he started today, he'd be very lucky to get a tenured position, but most likely would end up being a TA / dropping out to do something else. Even though there's still a lot of politics and all sorts of un-meritocratic ways of getting into prestigious positions in academia, you need to be very good. A lot of the time it's being very good at cheating, but there isn't really any more room for greenfield academics who instantaneously propel themselves to the top of the academic hierarchy.

sspiff
21 replies
1d10h

I read this title and automatically assumed this was going to be about Dijkstra.

While perhaps less directly impactful on the industry, in computer science teaching and academia few names loom as large as his.

He has touched so many aspects of computer science that we now consider fundamental.

NhanH
18 replies
1d10h

Structured programming practically birthed the industry, though, so it's hard to say he had less direct impact on it.

Animats
17 replies
1d8h

It was controversial at the time.

Dijkstra advocated "single entry, single exit" for each control block. Programs should be composed of such blocks. Makes for very neat flowcharts. Good for entry and exit conditions.

Single entry wasn't that controversial. Single exit, though, means no "break", or "continue" for loops, and no early returns from functions. This forces a rather convoluted style. Try writing a loop of the form "get thing, if done quit, process thing, put thing" in that style. You have to have two calls to "get thing", or extra flags.
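To see the cost concretely, here is a minimal C sketch of that loop shape, with character copying standing in for "get thing" / "process thing" / "put thing":

    #include <stdio.h>

    /* Single-exit style: the "get thing" call must appear twice. */
    void copy_single_exit(void) {
        int c = getchar();          /* get thing */
        while (c != EOF) {          /* if done, quit */
            putchar(c);             /* process and put thing */
            c = getchar();          /* get thing, again */
        }
    }

    /* With an early exit, one call site suffices. */
    void copy_with_break(void) {
        for (;;) {
            int c = getchar();      /* get thing */
            if (c == EOF)           /* if done, quit */
                break;
            putchar(c);             /* process and put thing */
        }
    }

    int main(void) {
        copy_with_break();          /* either function behaves the same */
        return 0;
    }

The break version states the exit condition exactly once; the single-exit version has to duplicate the read (or, as noted, introduce a flag).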

Turns out you don't need that for program proofs, once you have machine assistance to make sure all the cases were covered. It makes for cleaner hand proofs, though.

(I never met Dijkstra. Knew people who worked with him.)

Munksgaard
9 replies
1d7h

Single exit, though, means no "break", or "continue" for loops, and no early returns from functions.

Sounds like pretty standard functional programming a la Standard ML or Haskell, or am I missing something?

amiga386
6 replies
1d7h

a rather convoluted style

It may be standard for them, but it's why functional languages are niche while C, C++, C#, Java, JavaScript and Python dominate.

Munksgaard
3 replies
1d6h

That's a bold claim; do you have any evidence to back it up?

To be clear, I agree that it could be a reason, along with a multitude of others. I think that discerning which is the most substantial reason (if any such exists) is hard if not impossible.

amiga386
2 replies
1d6h

I don't, I just have the feelings and impressions of various programmers, which is anecdata.

I think the pattern that emerges is that the popular languages I listed are multi-discipline, they can be adapted to whichever paradigm you prefer, even if it's a little cumbersome, and over time they adopt the key features of other languages, while retaining their existing benefits. In other words... you can get the best features of Haskell in Python, but you can't get the best features of Python in Haskell?

odyssey7
0 replies
6h9m

To add to the anecdata of programmer impressions: Haskell can be extremely performant compared to Python. In terms of developer experience, it almost causes me physical pain to use some of the poorly designed Python libraries out there.

nyssos
0 replies
23h57m

you can get the best features of Haskell in Python

Haskell's best features relative to more mainstream languages are HM-style type inference, higher-kinded types, and typeclasses - none of which are possible in a language without real static types.

neonsunset
1 replies
1d5h

Rust is quite a functional language, and C# has been incorporating (aka stealing from F# :)) functional features for years, starting with LINQ and continuing to advanced forms like pattern matching on sequences (list patterns).

I believe all kinds of Python frameworks too like to incorporate FP into their APIs, and of course, there are list comprehensions.

louthy
0 replies
1d2h

LINQ is arguably stolen from Haskell, not F#. It is closer to 'do' notation than to Computation Expressions (which have more features).

The fact that Erik Meijer was very active in the Haskell community, I think, pretty much confirms that.

bmicraft
1 replies
1d4h

Guard clauses are considered good practice in many languages, as opposed to nested ifs.

I'm not saying either approach is generally the right one, but I think it's interesting that best practice can be the polar opposite of his "single exit" recommendation.
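To make the contrast concrete, a hedged C sketch (the validation logic here is invented for illustration):

    #include <stddef.h>

    /* Single exit via nested ifs: one return, deeper nesting. */
    int process_nested(const char *s, size_t len) {
        int result = -1;
        if (s != NULL) {
            if (len > 0) {
                result = 0;     /* ... the actual work goes here ... */
            }
        }
        return result;
    }

    /* Guard clauses: reject bad input up front, multiple returns. */
    int process_guarded(const char *s, size_t len) {
        if (s == NULL) return -1;
        if (len == 0) return -1;
        return 0;               /* ... the actual work goes here ... */
    }

The guarded version fails fast and keeps the happy path at the left margin, which is exactly what single-exit style gives up.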

drivers99
0 replies
22h49m

Dijkstra came up with "guard". Well, "guarded command" as a formal part of the logic of correct programs: https://en.wikipedia.org/wiki/Guarded_Command_Language

The guard is a proposition, which must be true before the statement is executed. At the start of that statement's execution, one may assume the guard to be true. Also, if the guard is false, the statement will not be executed. The use of guarded commands makes it easier to prove the program meets the specification. The statement is often another guarded command.
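For illustration, the textbook example in Dijkstra's notation computes a maximum; when both guards are true, either branch may be taken, nondeterministically:

    if x >= y -> m := x
    [] y >= x -> m := y
    fi

(Rendered in ASCII here; Dijkstra wrote the separator and arrow with special symbols.)
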
rprospero
1 replies
1d5h

I had always heard that the "single exit" focus on "break" and "continue" was a misunderstanding. When wild GOTOs roamed the earth, it was common for a subroutine to end by jumping to a new part of the code, with the jump location differing depending on conditionals within the code. The "single exit" commandment was that a subroutine should always exit back into the routine that had called it. That makes it less an injunction against a function having early returns and more a rejection of continuation-passing style.

AnimalMuppet
0 replies
1d4h

"Break" and "continue" have been (accurately) described as "structured goto". They are, at heart, goto. But they are constrained to operate in ways the make sense within the structured programming approach.

norir
1 replies
21h15m

The single exit style works great in a language with TCO, nested functions and no looping primitive at all. No such language is widely available. But the style works really well and can be simulated in languages like Scala if you ignore the while keyword and express all loops as tail-recursive functions. This style does not work well in C (particularly due to the lack of nested functions) and similar languages.

It is a different way of thinking though and it takes a while before your mind stops reaching for while loops.

trealira
0 replies
3h17m

The single exit style works great in a language with TCO, nested functions and no looping primitive at all. No such language is widely available.

There's Scheme. There may be constructs like DO and WHILE (though I don't remember if these are standard or just common extensions of Scheme), but they're often just macros implemented using inner functions and tail calls.

mcguire
0 replies
19h48m

The key point of structured programming is that you don't need any control structures other than sequential composition, a conditional construct, and a (typically while) loop construct.
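As a sketch of why nothing more is needed, the early-exit copy loop from upthread reduces to exactly those three constructs once the exit becomes a flag:

    #include <stdio.h>

    /* Only sequencing, a conditional, and a while loop: the early
       exit is simulated with a boolean flag. */
    int main(void) {
        int running = 1;
        while (running) {
            int c = getchar();
            if (c == EOF)
                running = 0;
            else
                putchar(c);
        }
        return 0;
    }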

marcosdumay
0 replies
1d2h

Well, anybody that completely redefines an area of knowledge is doomed to get one or two details wrong on the first try.

amiga386
0 replies
1d7h

Was there not the workaround (at least by the time Pascal arrived) of intra-function gotos, and being able to assign the return value from anywhere? In effect, the "return" keyword of other languages.

e.g.

   function Foo (Value : integer) : boolean;
   label return;
   begin      
      if Value < 0 then
      begin
          Foo := false;
          goto return;
      end;
      Foo := Bar(Value);
   return:
   end;

hiAndrewQuinn
1 replies
1d10h

Same here. He's probably the closest thing this field has to an Euler, or maybe an Erdős.

KuriousCat
0 replies
1d10h

I too thought about EWD!

sambalbadjak
21 replies
1d9h

I like Dijkstra's quote: "The question of whether machines can think is about as relevant as the question of whether submarines can swim."

litenboll
9 replies
1d9h

Maybe I'm stupid, but does he mean that as long as the task at hand is solved, it doesn't matter how we categorize it? In the submarine case it would be "move through water", for example. Or is it deeper than that?

chongli
4 replies
1d5h

Yes. He was interested in problem-solving, not philosophizing. The debates about AI going on right now are the kind he’d prefer to avoid.

deltaburnt
3 replies
1d4h

To be fair, 10+ years ago this conversation definitely would have been pretty silly. Maybe about as interesting as asking "is there other life in the universe".

No one knows the answer, it's an incredibly over-discussed topic, and we won't know for sure for many years.

I think those points still apply to AI intelligence today. However, the power of today's AI greatly outstrips anything Dijkstra would have seen in his day.

eli
0 replies
1d1h

He's objecting to the question. How well you feel you can answer it doesn't matter.

bmoxb
0 replies
1d2h

The improvement of AI lately doesn't invalidate his point, though. I'm sure submarine technology has similarly improved, but it's still irrelevant whether or not a submarine can be said to 'swim'.

DrScientist
0 replies
1d4h

The point isn't about whether it is unknowable or not; rather, does having the answer have any practical value? I.e., does the attribution of 'thinking' add any value to understanding a program?

meiraleal
0 replies
1d5h

I think it is about the relevance of anthropomorphizing machines; submarines don't swim.

cmiles74
0 replies
1d1h

Is there something here about the terms being sloppy and unscientific, making the answer somewhat useless? Whatever "swimming" or "thinking" might be, it's not something clearly defined.

Uehreka
0 replies
1d9h

does he mean that as long as the task at hand is solved it doesn't matter how we categorize it.

Yes

ThomasBHickey
0 replies
1d1h

I think he meant it depends on how you define 'swim'

amelius
9 replies
1d8h

Relevant to what exactly?

rstuart4133
4 replies
1d7h

He's a computer scientist speaking to other computer scientists, so I'd say he's talking about relevancy to computer science.

But I think perhaps you've missed his subtle use of language.

Fish swim, obviously. What submarines do in water isn't usually described as swimming.

That's a bit odd when you think about it, as in both cases the objective is to get from A to B while in the water. In fact, if you look only at the outcomes, what fish and submarines do when swimming looks almost identical. They move (or perhaps stay stationary in a current), they tend to be efficient about it, they try not to make a lot of noise, and they even use very similar mechanisms to move vertically.

Despite that, it almost seems that to swim you have to be a fish, or a sea snake, or a bluebottle or water bug, but not a submarine. And going by the discussions here, to think you have to be a human, or a dog, or just about anything but a computer. And that's true no matter how closely a computer can emulate tasks we say require thinking in a human.

A conclusion you might draw, then, is that deciding whether a submarine swims belongs in the domain of linguistics, not computer science. And he's saying that's true for whether computers "think" too.

dsco
3 replies
1d7h

See, what's strange here is that we would call a dancing robot "dancing" and not think twice about it.

throwawaaarrgh
0 replies
1d6h

We do have a dance called 'the robot'

mjburgess
0 replies
1d7h

Because robots are built to perform the illusion of being animal-like, and often human-like more specifically.

So there's a theatrical game being played when interacting with these devices that makes them valuable to the people playing that game.

More generally, when the adtech companies selling the current round of AI do the same, without any irony, it's usually a mixture of charlatanism, selling and legal avoidance.

(Eg., that "ChatGPT wrote X" is a kind of theatrical game wherein OpenAI are the material beneficiaries, and most others, are the loosers).

User23
0 replies
1d4h

I would call dancing lights “dancing” too. As has already been said, it’s a linguistic issue.

And it’s perfectly reasonable to say a machine “thinks.” It’s just good to understand that it’s a metaphor and not a literal description of what the machine is doing. I avoid saying machines think because it’s confusing, but in principle it’s fine.

namaria
2 replies
1d7h

The quote claims it's irrelevant, not relevant (to something).

amelius
1 replies
1d2h

Well, it is relevant to the current discussion.

Jtsummers
0 replies
1d2h

https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/E...

Here's the context since it's been quoted without providing it.

_a_a_a_
0 replies
1d7h

Relevant to the question of whether machines can think.

avgcorrection
0 replies
1d2h

There is also a different approach to the [unification] problem, which is highly influential though it seems to me not only foreign to the sciences but also close to senseless. This approach divorces the cognitive sciences from a biological setting, and seeks tests to determine whether some object “manifests intelligence” (“plays chess,” “understands Chinese,” or whatever). […]

There is a great deal of often heated debate about these matters in the literature of the cognitive sciences, artificial intelligence, and philosophy of mind, but it is hard to see that any serious question has been posed. The question of whether a computer is playing chess, or doing long division, or translating Chinese, is like the question of whether robots can murder or airplanes can fly — or people; after all, the “flight” of the Olympic long jump champion is only an order of magnitude short of that of the chicken champion (so I’m told). These are questions of decision, not fact; decision as to whether to adopt a certain metaphoric extension of common usage.

https://chomsky.info/prospects01/

psychoslave
14 replies
1d9h

Ok, the first thought reading the title was "ah, surely this is about Dijkstras."

The immediate second thought that came was the famous quote “I don't know how many of you have ever met Dijkstra, but you probably know that arrogance in computer science is measured in nano-Dijkstras”, from Alan Kay, if I'm to believe https://www.goodreads.com/quotes/69528-i-don-t-know-how-many...

photonthug
7 replies
1d9h

Hmm, time will tell on this rivalry. OOP may not be as harmful as GOTO, but it does seem to be falling out of favor. Dijkstra's point of view may be as vindicated on the one as on the other.

tialaramex
2 replies
1d1h

Note that Dijkstra's letter is about the "go-to statement", a then-common but now very rare language feature akin to the machine-code absolute jump: whatever you were doing, now you're doing this, with no context.

BASICs are perhaps the languages HN readers are most likely to have seen which (in some cases) have the GOTO feature Dijkstra wrote the letter about. If you've seen goto in C, for example, or C++, that resembles the go-to statement superficially, but in practice it's not very dangerous because it's constrained. Suppose my recursive C++ function f() adds a "goto": it's not allowed to jump into a different function g(), and it's not even allowed to jump into a different invocation of the same function f(). It's only allowed to change which part of the current invocation of the current function happens next, and it can't skip the destruction or initialization of variables, or other bookkeeping.
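To make the constraint concrete, a sketch of the one place goto survives in idiomatic C, the error-path cleanup (the file and buffer details are invented for illustration):

    #include <stdio.h>
    #include <stdlib.h>

    /* Every jump here is a forward jump within this one function. */
    int load(const char *path) {
        int rc = -1;
        char *buf = NULL;
        FILE *f = fopen(path, "rb");
        if (f == NULL)
            goto out;
        buf = malloc(4096);
        if (buf == NULL)
            goto cleanup;
        /* ... read into buf and use it ... */
        rc = 0;
    cleanup:
        free(buf);              /* free(NULL) is a harmless no-op */
        fclose(f);
    out:
        return rc;
    }

    int main(void) {
        /* "example.dat" is a made-up path for this sketch. */
        return load("example.dat") == 0 ? 0 : 1;
    }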

kevin_thibedeau
1 replies
19h42m

C/C++ has the dangerous goto under the guise of longjmp().

tialaramex
0 replies
15h40m

Although longjmp is more dangerous than C's goto, it's not that close to what Dijkstra was warning about. A longjmp is only allowed to move execution back to somewhere it has been previously, and in the process all the appropriate bookkeeping is done; it can't send you somewhere you've never been or evade initialization/destruction (at least any more so than these languages manage by themselves).
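A minimal sketch of that back-only behavior, using nothing beyond the standard setjmp/longjmp API:

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf env;

    static void deep_failure(void) {
        /* Unwinds to the still-active setjmp call site: control can
           only go back to a point already executed, never forward to
           code that hasn't run yet. */
        longjmp(env, 1);
    }

    int main(void) {
        if (setjmp(env) == 0) {
            puts("first pass");
            deep_failure();     /* does not return normally */
        } else {
            puts("recovered where setjmp was called");
        }
        return 0;
    }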

rfrey
2 replies
1d3h

Being correct or incorrect is orthogonal to arrogance.

photonthug
1 replies
1d2h

I agree, but this can also be tactical since for whatever reason it's rare in certain cultures that quiet humility is listened to. Also, confidence or bluntness may easily be mistaken for arrogance. IDK about Kay, but lots of Americans who think they like a "straight shooter" are still unprepared for a Dutch-style discussion.

akira2501
0 replies
20h15m

it's rare in certain cultures that quiet humility is listened to.

There's the old adage that you "can lead a horse to water but you can't make him drink." I wonder what those who give up on humility think they're actually gaining?

mayoff
0 replies
18h38m

Michael Snoyman has a great talk about this, “FP is the new OOP”. He argues that some major concepts that OOP popularized are now mainstream and accepted, and OOP-bashing ignores those concepts and just acts like OOP only means the bad parts.

https://youtu.be/to8ISIQjETk

jhp123
1 replies
1d1h

Did Dijkstra ever write down a personal insult like this against anyone? I haven't come across it...

LAC-Tech
0 replies
23h2m

Object-oriented programming is an exceptionally bad idea which could only have originated in California

tristramb
0 replies
1d8h

I think Peter Medawar once said of James Watson and Francis Crick that "they have done something worth being arrogant about".

The same applies to Dijkstra.

You could start with this account of the Dijkstra-Zonneveld Algol 60 compiler written by Kruseman Aretz: https://ir.cwi.nl/pub/4155.

marcosdumay
0 replies
1d3h

I always wonder if he was different in person from how he was in his essays and interviews.

We are talking about the person who institutionalized the "we people aren't smart enough to do X" line of thinking in software engineering.

Of course, he was also quick to call a spade a spade, but I haven't seen any case of him denouncing something that wasn't obviously bad.

LAC-Tech
0 replies
23h5m

Apparently Alan Kay knew Dijkstra, and it was meant as a joke rather than some big insult.

Gibbon1
0 replies
22h11m

I feel like it's not just Dijkstra: the abrasive arrogance computer science inherited from mathematics culture has been a huge negative. It's why the workplaces are toxic and why women have abandoned the field. It's worse where computer science culture and marketing culture intersect.

But Dijkstra is the poster child for that sort of thing.

photonthug
6 replies
1d9h

Lots of interesting threads to chase down but this biographical piece was cool:

together with his wife, he purchased a Volkswagen bus, dubbed the Touring Machine, which they used to explore national parks

Maybe not so surprising for a guy who would not indulge in a VCR, cinema, smartphone, or even a computer :) More info would be neat if anyone knows of a longer biography. Photographer? Bird watcher? Hiker? Camper?

Hendrikto
3 replies
1d8h

Smartphones were not even a thing during his lifetime.

photonthug
2 replies
1d8h

OP does say mobile phone. But you're wrong according to Wikipedia: there were some 5-10 years of overlap, depending on what qualifies as smart.

meiraleal
1 replies
1d5h

OP said smartphone tho

arjonagelhout
0 replies
1d2h

OP in this case was referring to the author of the article, where the term mobile phone was used.

eesmith
1 replies
1d6h

You can see the vehicle at https://youtu.be/mLEOZO1GwVc?t=1220 . He and his wife are driving to some sort of camping area.

It's part of a longer interview with him. He also does crossword puzzles.

mcguire
0 replies
21h7m

As well as extensively investigating the longevity of fountain pen inks. He used vintage Montblanc 149s, if it's important.

borlanco
3 replies
1d3h

There are more anecdotes about Dijkstra in this collection of testimonials written by his friends, colleagues, and students: https://arxiv.org/abs/2104.03392.

These are some of my favorites:

From Tony Hoare:

  They removed recursion from Algol 60, on the grounds of its alleged inefficiency. Edsger's answer was that recursion was a useful programming tool, and every workman should be allowed to fall in love with their tools. 
From J.R. Rao:

  He would repeatedly stress the importance of choosing words carefully: "If you have to use your hands, then there is something wrong with your words", he would say, adding "imagine there is a blind man in your audience, how would you speak?" 

  Over time, I came to learn and appreciate that Prof. Dijkstra's class was operating at two different planes. There was the lesson and then, the lesson within the lesson. At one level, the goal was to solve the presented problem. At the second and richer meta-level was the approach for arriving at the solution. 

  Prof. Dijkstra taught us that in computing science, complexity comes for free; one has to work hard for simplicity. 
From Alain J. Martin:

  In his technical writing, he used language like a precision tool. For him, precision did not necessarily imply ease of reading, and he stated that the reader also had to make an effort to understand. 
From Christian Lengauer:

  Professionally, Edsger's impact on me is best summarized by his ATAC Rule 0: "Don't make a mess of it." It made me strive for simplicity in notation and modelling throughout my working life and take unpleasant complexity as an indication of a possible lack of comprehension. 

  I decided against writing with a fountain pen. He never took issue with this, except in a letter that he posted seven weeks before his death: "I hope you will overcome your resistance and learn how to fill a pen without soiling your fingers, or otherwise you are denying yourself one of the joys of life." These were his last written words to me. 
From Lex Bijlsma:

  A famous quote of EWD is the following: 'I mean, if 10 years from now, when you are doing something quick and dirty, you suddenly visualize that I am looking over your shoulders and say to yourself "Dijkstra would not have liked this", well, that would be enough immortality for me' (EWD1213). I can testify that this actually works. 
From K. Mani Chandy:

  When I write papers, even now, I still see Edsger over my shoulder going "Tsk! Tsk!" [...] I found that being less sloppy not only helped my readers understand what I had written, but most importantly, helped me from confusing myself.
From David Turner:

  I have an invisible Edsger inside my head which looks over my shoulder when I am writing and quietly goes "Tut tut" if I write something that is muddled or not accurate. I don't always listen to that voice but know I should. 
From J Strother Moore:

  At faculty meetings when Edsger was not present it was common for someone to say "If Edsger were here he'd say such-and-such." 

  He came to work in the summer wearing a big Texas cowboy hat - they are made to keep you cool in the Texas sun. Often he would have on a cowboy's string tie. So from the waist up, he looked more Texan than I did. But he almost always wore shorts and sandals, which ruined the cowboy image completely. 
From David Gries:

  Edsger critiqued not the person but only what they said, and later one could drink a beer and laugh as if nothing happened. Technical differences and shortcomings should be treated this way. 
From Maarten van Emden, edited for brevity:

  Douglas Engelbart aroused Dijkstra's ire so much that he needed a whole page of vituperative prose to offload his emotions (EWD387). What has Engelbart done to provoke this outburst? One only has to refer to "Engelbart's Law", see Wikipedia. Its reasoning seems to run as follows: look at what mere printing has done as a tool for thought; the system demonstrated is so much more powerful than printing that it must quickly lead to Intellect Augmentation. This way Engelbart showed no appreciation for the rich culture developed over centuries. What makes printing a powerful tool for thought is mostly due to other things than technology. Much of the power of this culture comes from publishers and editors, who sniff out what is worth printing and hold back what is not. Another important component of this culture is provided by libraries and librarians. Much is due to scholarly societies, which started printing their proceedings and to commercial publishers, which created journals, each with their editorial board and unseen bevy of reviewers. Most of all it is due to the idea of a university.

airstrike
1 replies
1d1h

From EWD387:

> The other two speakers that gave three one-hour lectures, Dr. McKay from IBM, Yorktown Heights and Professor Engelbart, SRI, Menlo Park, were both terrible. McKay spoke undiluted IBMerese for three full hours and I am not going to give any further comments; I only heard the first hour —like many participants— and that was enough (too much). Because I had an urgent letter to write I missed Engelbart's first lecture —it was not really a lecture, he showed a movie— but I attended his next two performances. He was not only terribly bad, he was dangerous as well, not so much on account of the product he was selling —a sophisticated on-line text-editor that could be quite useful— as on account of the way in which he appealed to mankind's lower instincts while selling it. The undisguised appeal to anti-intellectualism and anti-individualism was frightening. He was talking about his "augmented knowledge workshop" and I was constantly reminded of Manny Lehman's vigorous complaint about the American educational system that is extremely "knowledge oriented", failing to do justice to the fact that one of the main objects of education is the insight that makes quite a lot of knowledge superfluous. (Sentences like "the half-life of a fresh university graduate is five years" are only correct if you have crammed the curriculum with volatile knowledge, erroneously presented as stuff worth knowing.) His anti-individualism surfaced when he recommended his gadget as a tool for easing the cooperation between persons and groups possibly miles apart, more or less suggesting that only then you are really "participating": no place for the solitary thinker. (Vide the sound track of the Monsanto movie showing some employees: "No geniuses here: just a bunch of average Americans, working together."!) The two talks I heard were absolutely insipid, he had handed out a paper "An augmented knowledge workshop.": the syntactical ambiguity in the title is characteristic for the level of the rest of the article. As a result of his presentations I have told a few of the participants that I had found, thanks to this seminar, a new software project. "Because in the years to come there will be a crippling shortage of competent programmers, I shall develop a software package, called "The Instruction Interpreter". From the moment of its completion, users do no longer need to program, they just give their instructions to the system." (This is only an edited version of one of the paragraphs of the Engelbart article!) I would have liked to start a discussion with him but I knew that my lack of mastery of the understatement would have made me too rude for English ears if I had spoken. Finally —after a more than two-hour effort in the middle of the night in sorting out his muddle— I decided that he was not worth the trouble. (One of the most offending conclusions I ever came to!)

ontouchstart
0 replies
19h4m

Wow

hamburga
0 replies
1d2h

These are great.

bgribble
3 replies
1d3h

I was a grad student in Austin in the 1990s. If Dijkstra showed up for one of the regular department-wide "lunch and learn" type talks, there was a bit of a buzz among the grad students in the room... you knew he was going to pop off with a total left-field question and the results would be pretty entertaining. Free cookies AND comic humiliation from a giant in the field! It was standing-room-only.

syndicatedjelly
1 replies
1d2h

Any stories that stand out/are worth sharing?

mcguire
0 replies
21h15m

1. Asking prospective faculty if the colors (i.e. some of the phrases or expressions were highlighted in different colors) in their slides meant anything. I seem to remember him doing this to several new PhDs the dept. was thinking about hiring and none had a good answer. Apparently nobody warned them, either.

2. Asking a networking guy about the use of the term "stack" in his talk. A little unfair, given that the two uses (push/pop stack vs. network protocol stack) are unrelated, but still a bit funny.

pradn
0 replies
1d2h

Did they have samosas from Ken’s at grad tea time back then? :)

Almondsetat
3 replies
1d9h

I recently realized, after talking to a friend of mine who studied Dijkstra's works, that for as much as he was interested in writing correct programs, his "Structured Programming" approach makes absolutely zero guarantees of correctness or even verifiability. A program written in a structured way is not inherently more correct, nor easier to automatically check for bugs.

rstuart4133
1 replies
1d7h

his "Structured Programming" approach makes absolutely zero guarantees on correctess or even verifiability

That was never the claim. The claim was that humans using "structured programming" produced code with fewer bugs than humans using other methods. "Humans" is the key word here: a compiler will happily produce perfect programs every time using just gotos (because there is no choice in assembler). But when humans use gotos, it's usually a disaster.

History has vindicated him big time. Many modern languages don't have goto at all, precisely for this reason.

throwawaaarrgh
0 replies
1d6h

To add to that: a "correct and verifiable" program isn't necessarily better. If I used chemistry to make food that is "correct and verifiable" as the healthiest food in the world, it would probably taste terrible.

mcguire
0 replies
20h1m

On the other hand, a program written without structured programming is insanely hard to check for bugs, modify, or get anywhere near right in the first place.

math_dandy
2 replies
1d2h

Dijkstra wasn’t IO-bound.

hedora
0 replies
1d1h

He was until he started communicating asynchronously.

hamburga
0 replies
1d2h

Exactly, almost nobody is, except for stenographers.

krunck
1 replies
1d1h

I just found this. First paragraph says it all.

https://www.cs.utexas.edu/users/EWD/ewd13xx/EWD1316.PDF

CaffeinatedDev
0 replies
21h58m

Haha just a genuinely humorous guy. It's refreshing to see this side of researchers

throwawaaarrgh
0 replies
1d6h

Where does one go today to find cutting edge research, outside of joining a corporation?

throw1234651234
0 replies
1d2h

He carried the Witcher spy network too.

rpigab
0 replies
1d4h

How could I spend 3 months in an internship at the Math&CS building of the TU of Eindhoven without knowing he had been so close to this place, after having studied CS, including of course Dijkstra's algorithm for finding shortest paths in graphs?

Beautiful place, btw. Of course there are many buildings that didn't exist in his time; I remember the huge old Atlas building, whose construction started in 1958.

You can go anywhere by bicycle, from the residential areas of Eindhoven to the TU/e, the city centre, supermarkets, the train station, the Philips Stadium; even neighboring towns like Nuenen or Oirschot can be reached on roads dedicated to bikes.

richrichie
0 replies
10h52m

If one man carried CS on his shoulders, is that proof that CS is overhyped as a scientific field of its own?

It may be big because of industrial applications, but as a discipline does it deserve to be bigger than a section in an applied math or electrical engineering dept in universities?

neves
0 replies
19h37m

What a delicious text and personality. I always like to read about Dijkstra, but this is top notch.

make3
0 replies
1d8h

huskywall
0 replies
1d1h

My favorite quote of his: "Computer science is no more about computers than astronomy is about telescopes"

hardasspunk
0 replies
1d6h

Just read a few of these EWDs. Some of my favorite quotes came from the ends of these reports.

The question of whether a machine can think is the same as the question of whether a submarine can swim.

In this respect the programmer does not differ from any other craftsman: unless he loves his tools it is highly improbable that he will ever create something of superior quality.

The article is very well written and is full of insights into Dijkstra's thought process.

Looking forward to reading more EWDs.

derelicta
0 replies
1d8h

I will always be impressed by folks who dare become academicians. It requires a lot of self-sacrifice

cushpush
0 replies
22h28m

One of the greatest. I love that he would have his e-mails printed out so he could read them on physical parchment. I believe he said that CS is as much about actual computers as astronomy is about telescopes. Very cool.

cpeterso
0 replies
1d2h

My favorite Dijkstra quote:

The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise.

User23
0 replies
1d5h

EWD is one of my intellectual heroes. His reasoning exemplifies the philosophical goal of making ideas both clear and distinct. His equational proof style is a standout example of the tools he used to do so.

About a decade ago, over the course of a few months I skimmed through the archive of his notes[1] and at least perused the majority and dug deeper into the more tantalizing ones. I dare say that it was one of the most educational periods of my professional life.

Also, I find his more acerbic observations amusing, but they're mostly confined to non-technical notes and easily skipped. And who among us hasn't vented frustration at this or that unfortunate fad in our field?

[1] https://www.cs.utexas.edu/users/EWD/transcriptions/transcrip...

Jtsummers
0 replies
1d

Discussed at the time:

https://news.ycombinator.com/item?id=24942671 - Oct 30, 2020 with 196 comments

A lot of good discussion in there.

DijkstrasPath
0 replies
1d5h

This is why I chose my username

ChrisMarshallNY
0 replies
1d5h

Computer Science's H. L. Mencken.

https://en.wikipedia.org/wiki/H._L._Mencken

AnimalMuppet
0 replies
1d4h

That title is a bit much. Did Dijkstra do so more than Wirth? Hoare? Backus? Knuth? von Neumann? Maybe, maybe, maybe, no, and definitely no.

Dijkstra had some real achievements. He was one of the giants, no doubt. But he wasn't the giant. He was highly opinionated. He had his way of working, and anybody who did it any other way was wrong (and probably either stupid or lazy). That's not the optimal route to improving a field, especially when you aren't 100% right.