Programming Is Mostly Thinking (2014)

brabel
50 replies
8h1m

Great article. I just want to comment on this quote from the article:

"Really good developers do 90% or more of the work before they ever touch the keyboard;"

While that may be true sometimes, I think that ignores the fact that most people can't keep a whole lot of constraints and concepts in their head at the same time. So the amount of pure thinking you can do without writing anything at all is extremely limited.

My solution to this problem is to actually hit the keyboard almost immediately once I have one or more possible ways to go about a problem, without first fully developing them into a well specified design. And then, I try as many of those as I think necessary, by actually writing the code. With experience, I've found that many times, what I initially thought would be the best solution turned out to be much worse than what was initially a less promising one. Nothing makes problems more apparent than concrete, running code.

In other words, I think that rather than just thinking, you need to put your ideas to the test by actually materializing them into code. Only then can you truly appreciate all the consequences your ideas have on the final code.

This is not an original idea, of course; I think it's just another way of describing the idea of software prototyping, or the idea that you should "throw away" your first iteration.

In yet different words: writing code should actually be seen as part of the "thinking process".

indigoabstract
11 replies
6h44m

I had the same thought as I read that line. I think he's actually describing Linus Torvalds there, who, legend has it, thought about Git for a month or so and when he was done thinking, he got to work coding and in six days delivered the finished product. And then, on the seventh day he rested.

But for the rest of us (especially myself), it seems to be more like an interplay between thinking of what to write, writing it, testing it, thinking some more, changing some minor or major parts of what we wrote, and so on, until it feels good enough.

In the end, it's a bit of an art, coming up with the final working version.

tsss
7 replies
6h23m

Well that explains why Git has such a god awful API. Maybe he should've done some prototyping too.

patmcc
1 replies
24m

I'm going to take a stab here: you've never used cvs or svn. git, for all its warts, is quite literally a 10x improvement on those, which is what it was (mostly) competing with.

commandar
0 replies
5m

It's really hard to overstate how much of a sea change git was.

It's very rare that a new piece of software just completely supplants existing solutions as widely and quickly as git did in the version control space.

QuercusMax
1 replies
2h8m

What do you mean by API? Linus's original git didn't have an API, just a bunch of low level C commands ('plumbing'). The CLI ('porcelain') was originally just wrappers around the plumbing.

Maxatar
0 replies
1h20m

Those C functions are the API for git.

ranger_danger
0 replies
1h27m

baseless conjecture

mejutoco
0 replies
4h11m

On the other hand, the hooks system of git is very good API design imo.

indigoabstract
0 replies
5h36m

Yeah, could be.. IIRC, he said he doesn't find version control and databases interesting. So he just did what had to be done, did it quickly and then delegated, so he could get back to more satisfying work.

I can relate to that.

alfagre
1 replies
4h4m

I tend to see this as a sign that a design is still too complicated. Keep simplifying, which may include splitting into components that are separately easy to keep in your head.

This is really important for maintenance later on. If it's too complicated now to keep in your head, how will you ever have a chance to maintain it 3 years down the line? Or explain it to somebody else?

mannykannot
0 replies
3h12m

This is the only practical way (IMHO) to do a good job, but there can be an irreducibly complex kernel to a problem which manifests itself in the interactions between components even when each atomic component is simple.

throw1234651234
0 replies
2h56m

I don't do it in my head. I do diagrams, then discuss them with other people until everyone is on the same page. It's amazing how convoluted "get data from the DB, do something to it, send it back" can get, especially if there is a queue or multiple consumers in play, when it's actually the simplest thing in the world, which is why people get over-confident and write super-confusing code.

ChrisMarshallNY
5 replies
7h30m

I tend to iterate.

I get a general idea, then start writing code; usually the "sticky" parts, where I anticipate the highest likelihood of trouble.

I've learned that I can't anticipate all the problems, and I really need to encounter them in practice.

This method often means that I need to throw out a lot of work.

I seldom write stuff down[0], until I know that I'm on the right track, which reduces what I call "Concrete Galoshes."[1]

[0] https://littlegreenviper.com/miscellany/evolutionary-design-...

[1] https://littlegreenviper.com/miscellany/concrete-galoshes/

JKCalhoun
4 replies
5h31m

I do the same, iterate. When I am happy with the code I imagine I've probably rewritten it roughly three times.

Now I could have spent that time "whiteboarding" and it's possible I would have come close to the same solution. But whiteboarding in my mind is still guessing, anticipating - coding is of course real.

I think that as you gain experience as a programmer you are able to intuit the right way to begin to code a problem; the iterating is still there, but it's more incremental.

vbezhenar
1 replies
5h5m

Yeah, the same. I rewrite code until I'm happy with it. When starting a new program, a lot of time can be lost because I might need to spend weeks rewriting and re-tossing everything until I feel I've got it good enough. I've tried to do it faster, but I just can't. The only way is to write working code and reflect on it.

My only optimization of this process is to use Java and not throw out everything, but keep refactoring. IDEA allows for very quick and safe refactoring cycles, so I can iterate on the overall architecture or any selected component.

I really envy people who can get it right the first time. I just can't, despite having 20 years of programming under my belt. And when time is tight and I need to accept an obviously bad design, that's what burns me out.

shuvuvt5
0 replies
3h18m

Nobody gets it right the first time.

Good design evolves from knowing the problem space.

Until you've explored it you don't know it.

I've seen some really good systems that have been built in one shot. They were all ground up rewrites of other very well known but fatally flawed systems.

And even then, within them, much of the architecture had to be reworked or also had some other trade off that had to be made.

noisy_boy
0 replies
4h48m

I think once you are an experienced programmer, beyond being able to break down the target state into chunks of tasks, you are able to intuit pitfalls/blockers within those chunks better than less experienced programmers.

An experienced programmer is also more cognizant of the importance of architectural decisions, hitting the balance between keeping things simple vs abstractions and the balance between making things flexible vs YAGNI.

Once those important bits are taken care of, the rest is more or less personal style.

ChrisMarshallNY
0 replies
5h17m

I don’t think this methodology works, unless we are very experienced.

I wanted to work that way, when I was younger, but the results were seldom good.

Good judgment comes from experience. Experience comes from bad judgment.

-Attributed to Nasrudin

ben_w
4 replies
7h8m

> While that may be true sometimes, I think that ignores the fact that most people can't keep a whole lot of constraints and concepts in their head at the same time.

Indeed, and this limitation specifically is why I dislike VIPER: the design pattern itself was taking up too many of my attention slots, leaving less available for the actual code.

(I think I completely failed to convince anyone that this was important).

kragen
3 replies
6h28m

are you talking about the modal text editor for emacs

kragen
0 replies
5h15m

surely. thanks

ben_w
0 replies
3h41m

Correct, that.

The most I've done with emacs (and vim) is following a google search result for "how do I exit …"

makerdiety
2 replies
4h8m

Mathematics was invented by the human mind to minimize waste and maximize work productivity, by allowing reality-mapping abstractions to take precedence over empirical falsification of propositions.

And what most people can't do, such as keeping in their heads absolutely all the concepts of a theoretical computer software application, is an indication that real programmers exist on a higher elevation where information technology is literally second nature to them. To put it bluntly and succinctly.

For computer software development to be part of thinking, a more intimate fusion between man and machine needs to happen. Instead of the position that a programmer is a separate and autonomous entity from his fungible software.

The best programmers simulate machines in their heads, basically.

marcosdumay
1 replies
2h29m

> The best programmers simulate machines in their heads, basically.

Yes, but they still suck at it.

That's why people create procedures like prototyping, test-driven design, type-driven design, paper prototypes, API mocking, etc.
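
For instance, the "API mocking" idea in miniature: exercise a design against a stand-in before the real dependency exists. A minimal Python sketch; the payment_gateway and checkout names are hypothetical:

    # Check an idea against a mocked dependency instead of simulating
    # it in your head. "payment_gateway" and "checkout" are made up.
    from unittest.mock import Mock

    payment_gateway = Mock()
    payment_gateway.charge.return_value = {"status": "ok", "id": "txn-1"}

    def checkout(gateway, amount):
        result = gateway.charge(amount=amount)
        return result["status"] == "ok"

    assert checkout(payment_gateway, 42) is True
    payment_gateway.charge.assert_called_once_with(amount=42)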

makerdiety
0 replies
30m

The point is that there are no programmers who can simulate machines in their heads. These elite engineers only exist in theory. Because if they did exist, they would appear so alien and freakish to you that they would never be able to communicate their form of software development paradigms and patterns. This rare type of programmer only exists in the future, assuming we're on a timeline toward such a singularity. And we're not, except for some exceptions that cultivate a colony away from what is commonly called the tech industry.

EDIT: Unless you're saying that SOLID, VIPER, TDD, etc. are already alien invaders from a perfect world and only good and skilled humans can adhere to the rules with uncanny accuracy?

kragen
2 replies
6h31m

yeah, i just sketched out some code ideas on paper over a few days, checked and rechecked them to make sure they were right, and then after i wrote the code on the computer tonight, it was full of bugs that i took hours and hours to figure out anyway. debugging output, stepping through in the debugger, randomly banging on shit to see what would happen because i was out of ideas. i would have asked coworkers but i'm fresh out of those at the moment

i am not smart enough this week to debug a raytracer on paper before typing it in, if i ever was

things like hypothesis can make a computer very powerful for checking out ideas
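
for example, a property-based check with hypothesis might look like this (a minimal sketch; the properties here are illustrative, not from the raytracer):

    # hypothesis generates hundreds of random inputs and shrinks any
    # failing case down to a minimal counterexample.
    from hypothesis import given, strategies as st

    @given(st.lists(st.integers()))
    def test_reversing_twice_is_identity(xs):
        assert list(reversed(list(reversed(xs)))) == xs

    @given(st.lists(st.floats(allow_nan=False)))
    def test_sorting_is_idempotent(xs):
        once = sorted(xs)
        assert sorted(once) == once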

porksoda
1 replies
6h7m

I've no coworkers either, and over time both I and my code suffer for it. Some say thinking is at its essence a social endeavor.

porksoda
0 replies
6h3m

I should say rather, Thinking.

devsda
2 replies
5h59m

I agree this is how it often goes.

But this also makes it difficult to give accurate estimates, because you sometimes need to prototype 2, 3, or even more designs to work out the best option.

> writing code should actually be seen as part of the "thinking process".

Unfortunately, most of the time leadership doesn't see things this way. For them, the hard work of thinking ends with architecture, or one layer further down. The engineers are then responsible only for translating those designs into software by typing away at a keyboard.

This leads to a mismatch in delivery expectations between leadership and developers.

marcosdumay
0 replies
2h33m

If you know so little that you have to make 3 prototypes to understand your problem, do you think designing it by any other process would make it possible to produce an accurate estimate?

lwhi
0 replies
5h21m

In my opinion, you shouldn't need to prototype all of these options... but you will need to stress test any points where you have uncertainty.

The prototype should provide you with cast iron certainty that the final design can be implemented, to avoid wasting a huge amount of effort.

swat535
1 replies
4h56m

Right, I always thought this is what TDD is for: very often I design my code in tests and let that guide my implementation.

I kind of imagine what the end result should be in my head (given value A and B, these rows should be X and Y), then write the tests in what I _think_ would be a good api for my system and go from there.

The end result is that my code is testable by default and I go through multiple cycles of Red -> Green -> Refactor until I end up happy with it.

Does anyone else work like this?
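
A minimal sketch of what that loop can look like in practice (pytest conventions; apply_discount is a hypothetical example, not anyone's real code):

    # Red: the test is written first, against the API I *think* I want.
    def test_apply_discount():
        assert apply_discount(100.0, 0.2) == 80.0
        assert apply_discount(19.99, 0.0) == 19.99

    # Green: the simplest implementation that makes the test pass;
    # refactoring comes in later cycles.
    def apply_discount(price, rate):
        return round(price * (1 - rate), 2)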

shuvuvt5
0 replies
3h15m

TDD comes up with some really novel designs sometimes.

Like, I expect it should look one way, but after I'm done with a few TDD cycles I'm at a state where my expected design is either hard to get to or unnecessary.

I think this is why some people don't like TDD much: sometimes you have to let go of your ideas, or, if you're stuck on them, you need to go back much earlier and try again.

I kind of like this though, makes it kind of like you're following a choose your own adventure book.

jbverschoor
1 replies
7h34m

Pen, paper, diagrams.

lwhi
0 replies
5h20m

Xmind and draw.io

wredue
0 replies
2h3m

I don’t like that line at all.

Personally, I think good developers get characters onto the screen and update as needed.

One problem with so much upfront work is how missing even a tiny thing can blow it all up, and it is really easy to miss things.

vorticalbox
0 replies
7h40m

This is what I do: right off the bat I write down a line or two about what I need to do.

Then I break that down into smaller and smaller steps.

Then I hack it together to make it work.

Then I refactor to make it not a mess.

tippytippytango
0 replies
1h13m

I also find it critical to start writing immediately. Just my thoughts and research results. I also like to attempt to write code too early. I'll get blocked very quickly or realize what I'm writing won't work, and it brings the blockage to the forefront of my mind. If I don't try to write code there will be some competing theories in my mind and they won't be prioritized correctly.

tibbar
0 replies
2h25m

The secret to designing entire applications in your head is to be intimately familiar with the underlying platform and the gotchas of the technology you're using. And the only way to learn those is to spend a lot of time in hands-on coding and active study. It also implies that you're using the same technology stack over and over and over again instead of pushing yourself into new areas. There's nothing wrong with this; I actually prefer sticking to the same tech stack, so I can focus on the problem itself; but I would note that the kind of 'great developer' in view here is probably fairly one-dimensional with respect to the tools they use.

theptip
0 replies
1h21m

Agree with your point. I think “developers that do 90% of their thinking before they touch the keyboard are really good” is the actual correct inference.

Plenty of good developers use the code as a notepad / scratch space to shape ideas, and that can also get the job done.

prerok
0 replies
7h24m

I could not agree more; it's rare to write a program where you know by heart all the dependencies, the libraries you will use, and the overall effect on other parts of the program. So a gradual design process is best.

I would point out, though, that that part also touches on understanding requirements, which is often a very difficult process. We might have a technical requirement conjured from a customer requirement by someone less knowledgeable about the inner workings, and the resolution of that technical requirement may not even come close to addressing the end-users' use case. So a lot of time also goes into understanding what it is that the end-users actually need.

nick__m
0 replies
4h32m

I completely agree with you. This article is on the right track but it completely ignores the importance of exploratory programming in guiding that thinking process.

mihaic
0 replies
3h26m

I'd like to second that, especially if combined with a process where a lot of code should get discarded before making it into the repository. Undoing and reconsidering initial ideas is crucial to any creative flow I've had.

lwhi
0 replies
7h23m

I think first on a macro level, and use mind maps and diagrams to keep things linked and organised.

As I've grown older, the importance of architecture over micro decisions has become blindingly apparent.

The micro can be optimised. Macro level decisions are often permanent.

layer8
0 replies
1h7m

That quote is already a quote in the article. The article author himself writes:

What is really happening?

• Programmers were typing on and off all day. Those 30 minutes are to recreate the net result of all the work they wrote, un-wrote, edited, and reworked through the day. It is not all the effort they put in, it is only the residue of the effort.

So at least there the article agrees with you.

jayd16
0 replies
1m

I'd take the phrase with a grain of salt. What's certainly true is that you can't just type your way to a solution.

Whether you plan before pen meets paper, or noodle, jam and iterate, is a matter of taste.

bakuninsbart
0 replies
4h38m

Same, and since we are on the topic of measuring developer productivity: usually my bad-but-kinda-working prototype is not only much faster to develop, it also has more lines of code, maximizing my measurable productivity!

andsmedeiros
0 replies
3h24m

I don't think the quote suggests that a programmer would mentally design a whole system before writing any code. As programmers, we are used to thinking of problems as steps needing resolution, and that's exactly the 90% there. When you're quickly prototyping to see what fits better as a solution to the problem you're facing, you must already have thought about what the requirements are, what the constraints are, and what a reasonable API would be for your use case. Poking around until you find a reasonable path forward means you have already defined which way is forward.

Aerroon
0 replies
5h35m

The way I read that is that only 10% of the work is writing out the actual implementation that you're sticking with. How you get there isn't as important. I.e., someone might want to take notes on paper and draw graphs while others might want to type things out. That's all still planning.

Almondsetat
33 replies
11h51m

Developers need to learn how to think algorithmically. I still spend most of my time writing pseudocode and making diagrams (before with pen and paper, now with my iPad). It's the programmers' version of Abraham Lincoln's quote: "Give me six hours to chop down a tree and I will spend the first four sharpening the axe."

n4r9
11 replies
11h36m

The question in my head is: can LLMs think algorithmically?

Kwpolska
7 replies
10h14m

LLMs can't think.

tasuki
4 replies
8h41m

Source?

Kwpolska
3 replies
7h56m

LLMs string together words using probability and randomness. This makes their output sound extremely confident and believable, but it may often be bullshit. This is not comparable to thought as seen in humans and other animals.
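
A toy sketch of that mechanism (the vocabulary and scores are made up; real models do this over tens of thousands of tokens with far richer context):

    # Pick the next word by sampling from a probability distribution.
    import math, random

    vocab = ["cat", "dog", "sat", "ran"]
    logits = [2.0, 1.5, 0.3, 0.1]          # model's raw score per word

    exps = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]   # softmax -> probabilities

    next_word = random.choices(vocab, weights=probs, k=1)[0]
    print(next_word)  # fluent-sounding output, chosen at random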

kragen
2 replies
6h20m

unfortunately that is exactly what the humans are doing an alarming fraction of the time

yifanl
1 replies
5h51m

One of the differences is that humans are very good at not doing word associations if we think they don't exist, which makes us able to outperform LLMs even without a hundred billion dollars worth of hardware strapped into our skulls.

kragen
0 replies
5h16m

that's called epistemic humility, or knowing what you don't know, or at least keeping your mouth shut, and in my experience actually humans suck at it, in all those forms

FeepingCreature
1 replies
9h40m

LLMs can think.

tasuki
0 replies
8h40m

Source?

datascienced
1 replies
9h29m

Like a bad coder with a great memory, yes

f1shy
0 replies
7h19m

The problem is the word “producing” in the parent comment, where it should be “reproducing”.

deepnet
0 replies
7h49m

Interesting question.

LLMs can be cajoled into producing algorithms.

In fact this is the Chain-of-Thought optimisation.

LLMs give better results when asked for a series of steps to produce a result than when just asked for the result.

To ask if LLMs “think” is an open question and requires a definition of thinking :-)

dclowd9901
8 replies
11h23m

I don’t really know what “think algorithmically” means, but what I’d like to see as a lead engineer is for my seniors to think in terms of maintenance above all else. Nothing clever, nothing coupled, nothing DRY. It should be as dumb and durable as an AK47.

weatherlite
2 replies
10h40m

We need this to be more prevalent. But the sad fact is most architects try to justify their position and high salaries by creating "robust" software. You know what I mean - factories upon factories, microservices and what not. If we kept it simple I don't think we would need many architects. We would just need experienced devs who know the codebase well and help with PRs and design processes; no need to call such a person an 'architect', as there's not much to architect in such a role.

Tade0
1 replies
10h22m

I was shown what it means to write robust software by a guy with a PhD in... philosophy, of all things (so a literal philosophiae doctor).

Ironically enough it was nothing like what some architecture astronauts write - just a set of simple-to-follow rules, like organizing files by domain, using immutable data structures and pure functions where reasonable, etc.

Also, I never saw him use dependent types in the one project we worked on together, and generics appeared only when they really made sense.

Apparently it boils down to using the right tools, not everything you've got at once.

namaria
0 replies
7h14m

I love how so much of distributed systems/robust software wisdom is basically: stop OOP. Go back to lambda.

OOP was a great concept initially. Somehow it got equated with the corporate driven insanity of attaching functions to data structures in arbitrary ways, and all the folly that follows. Because "objects" are easy to imagine and pure functions aren't? I don't know but I'd like to understand why corporations keep peddling programming paradigms that fundamentally detract from what computer science knows about managing complex distributed systems.

tasuki
2 replies
8h42m

> Nothing clever, nothing coupled

Yes, simple is good. Simple is not always easy though. A good goal to strive for nevertheless.

> nothing DRY

That's interesting. Would you prefer all the code to be repeated in multiple places?

f1shy
0 replies
7h21m

Not OP, but probably means "no fancy silver-bullet acronyms".

dclowd9901
0 replies
1h57m

Depends. I haven’t come up with the rubric yet, but it’s something like “don't abstract out functionality across data types”. I see this all the time: “I did this one thing here with data type A, and I’m doing something similar with data type B; let’s just create some abstraction for both of them!” Invariably it ends up collapsing, and if the whole program is constructed this way, it becomes monstrous to untangle, growing exponentially complicated in the number of abstractions. I think it’s just a breathtaking misunderstanding of what DRY means. It’s not literally “don’t repeat yourself”. It’s “encapsulate behaviors that you need to synchronize.”

Also, limit your abstractions’ external knowledge to zero.
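
A toy illustration of that rubric (the types and functions here are hypothetical):

    # Tempting premature abstraction: one formatter straddling two
    # unrelated data types, coupled forever after.
    def format_entity(entity, kind):
        if kind == "invoice":
            return f"Invoice #{entity['id']}: ${entity['total']:.2f}"
        return f"Report {entity['id']} ({entity['pages']} pages)"

    # The dumb, durable version: two functions that can evolve
    # independently when invoices and reports inevitably diverge.
    def format_invoice(invoice):
        return f"Invoice #{invoice['id']}: ${invoice['total']:.2f}"

    def format_report(report):
        return f"Report {report['id']} ({report['pages']} pages)"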

jpc0
0 replies
10h33m

In my mind this is breaking down the problem into a relevant data structure and algorithms that operate on that data structure.

If, for instance, you used a tree but were constantly looking up an index in the tree, you likely needed a flat array instead. The most basic example of this is sorting, obviously, but the same basic concepts apply to many, many problems.

I think the issue that happens in modern times, especially in webdev, is we aren't actually solving problems. We are just gluing services together and marshalling data around, which fundamentally doesn't need to be algorithmic... Most "coders" are glorified secretaries who now just automate what would have been done by a secretary before.

Call service A (database/ S3 etc), remove irrelevant data, send to service B, give feedback.

It's just significantly harder to do this in a computer than for a human to do it. For instance, if I give you a list of names but some of them have letters swapped around, you could likely easily see that and correct it. To do that "algorithmically" is likely impossible, and hence ML and NLP became a thing. And data validation on user input.

So algorithmically in the modern sense is more, follow these steps exactly to produce this outcome and generating user flows where that is the only option.

Humans do logic much, much better than computers, but I think the conclusion has become that the worst computer program is probably better at it than the average human. Just look at the many niche products catering to each wealth group. I could have a cheap bank account and do exactly what is required by that bank account, or I can pay a lot of money and have a private banker that I can call, and they will interpret what I say into the actions that actually need to happen... I feel I am struggling to actually write what's in my mind, but hopefully that gives you an idea...

To answer your "nothing clever": well, clever is relative. If I have some code which is effectively an array and an algorithm to remove index 'X' from it, would it be "clever" code to you if that array was labeled "Carousel" and I used the exact same generic algorithms to insert or remove elements from the carousel?

Most developers these days expect a class of some sort with a .append and .remove function, but why isn't it just an array of structs which uses the exact same functions as every single other array? People generally will complain that that code is "clever", but in reality it is really dumb: you can see it's clearly an array being operated on, but OOP has caused brain rot and developers actually don't know what that means... Wait, maybe that was OP's point... People no longer think algorithmically.

---

Machine learning, Natural Language Processing
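
To make the carousel point concrete, a sketch (the slide data is made up):

    # No Carousel class: just a list of structs, manipulated with the
    # same generic operations as every other list.
    carousel = [
        {"title": "slide one"},
        {"title": "slide two"},
    ]

    carousel.insert(1, {"title": "inserted slide"})  # generic insert
    removed = carousel.pop(0)                        # generic remove

    print([slide["title"] for slide in carousel])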

jffhn
0 replies
8h51m

> I don’t really know what “think algorithmically” means,

I would say thinking about algorithms and data structures so that algorithmic complexity doesn't explode.

> Nothing clever

A lot of devs use nested loops and List.remove()/indexOf() instead of maps, etc.; the terrible performance gets accepted as the state of the art, and then you have to do complex workarounds to avoid calling some operations too often, increasing the complexity.

Performance yields simplicity: a small increase in cleverness in some code can allow for a large reduction in complexity in all the code that uses it.

Whenever I do a library, I make it as fast as I can, for user code to be able to use it as carelessly as possible, and to avoid another library popping up when someone wants better performance.
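
A small illustration of the nested-loops-versus-maps point (the data and names are made up):

    orders = [("alice", 30), ("bob", 25), ("alice", 12)]
    customers = ["alice", "bob"]

    # Quadratic: list.index() rescans the customer list per order.
    totals_slow = [0] * len(customers)
    for name, amount in orders:
        totals_slow[customers.index(name)] += amount

    # Linear: one map from name to running total.
    totals = {name: 0 for name in customers}
    for name, amount in orders:
        totals[name] += amount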

throwaway9021
3 replies
11h23m

Do you have any resources for this? Especially for the ADHD kind - I end up going down rabbit holes in the planning part. How do you deal with information overload and overwhelm, or the exploration/exploitation dilemma?

f1shy
1 replies
8h49m

There are 2 bad habits in programming: people who start writing code the 1st second, and people who keep thinking and investigating for months without writing any code. My solution to that: just force yourself to do the opposite. In your case: start writing code immediately, no matter how bad or good. Look at the YouTube channel "Tsoding Daily" - he just goes ahead. The code is not always the best, but he gets things done. He does research offline (you can tell), but if you find yourself doing just research, reading and thinking, force yourself to actually start writing code.

lkuty
0 replies
6h53m

Or his Twitch videos. That he starts writing immediately and that we're able to watch the process is great. Moreover the tone is friendly and funny.

fifilura
0 replies
10h9m

I wonder if good REPL habits could help the ADHD brain?

It still feels like you are coding so your brain is attached, but with rapid prototyping you are also designing, moving parts around to see where they would fit best.

Barrin92
3 replies
10h31m

it's an odd analogy, because programs are complex systems and involve interaction between countless people. With large software projects you don't even know where you want to go or what's going to happen until you work. A large project doesn't fit into some pre-planned algorithm in anyone's head; it's a living thing.

Diagrams and this kind of planning are mostly a waste of time, to be honest. You just need to start working, and rework if necessary. This article is basically the peak of the bell curve meme. It's not 90% thinking; it's 10% thinking and 90% "just type".

Novelists for example know this very well. Beginners are always obsessed with intellectually planning out their book. The experienced writer will always tell you, stop yapping and start typing.

The_Colonel
1 replies
10h3m

Part of your comment doesn't fit with the rest. With complex projects, you often don't even know exactly what you're building, so it doesn't make sense to start coding. You first need to build a conceptual model, discuss it with the interested parties and only then start building. Diagrams are very useful to solidify your design and communicate it to others.

rocqua
0 replies
7h36m

There's a weird tension between planning and iterating. You can never foresee anywhere close to enough with just planning. But if you just start without a plan, you can easily work yourself into a dead end. So you need enough planning to avoid the dead ends, while starting early enough to get the reality checks that give you enough information to reach an actual solution.

Relevant factors here are how cheaply you can detect failure (in terms of time, material, political capital, team morale) and how easily you can backtrack out of a bad design decision (in terms of political capital, how much other things need to be redone due to coupling, and other limitations).

The earlier you can detect bad decisions, and the easier you can revert them, the less planning you need. But sometimes those are difficult.

It also suggests that continuous validation and forward looking to detect bad decisions early can be warranted. Something which I myself need to get better at.

mkl
0 replies
5h51m

> Novelists for example know this very well. Beginners are always obsessed with intellectually planning out their book. The experienced writer will always tell you, stop yapping and start typing.

This is not true in general. Brandon Sanderson for example outlines extensively before writing: https://faq.brandonsanderson.com/knowledge-base/can-you-go-i...

dorkwood
2 replies
11h25m

Does it really take four hours to sharpen an axe? I've never done it.

xandrius
0 replies
5h7m

10/20 minutes to sharpen a pretty dull kitchen knife with some decent whetstones.

Also, as someone famous once said: if I had 4 hours to sharpen an axe, I'd spend 2 hours preparing the whetstones.

misswaterfairy
0 replies
11h21m

Doing it right, with only manual tools, I believe so, remembering back to one of the elder firefighters that taught me (who was also an old-school forester).

Takes about 20 minutes to sharpen a chainsaw chain these days though...

raincole
0 replies
8h17m

I still use pen and paper. Actually as I progress with my career and knowledge I use pen and paper more and digital counterparts less.

It might be me not taking my time to learn Mathematica/Julia tho...

AlotOfReading
9 replies
11h31m

    I'm confident enough to tout this number as effectively true, though I should mention that no company I work with has so far been willing to delete a whole day's work to prove or disprove this experiment yet.
Long ago when I was much more tolerant, I had a boss that would review all code changes every night and delete anything he didn't like. This same boss also believed that version control was overcomplicated and decided the company should standardize on remote access to a network drive at his house.

The effect of this was that I'd occasionally come in the next morning to find that my previous day's work had been deleted. Before I eventually installed an illicit copy of SVN, I got very good at recreating the previous day's work. Rarely took more than an hour, including testing all the edge cases.

scotty79
2 replies
11h5m

Was your work better or worse second time around?

AlotOfReading
1 replies
10h53m

Probably a bit of both, but hindsight helped. It doesn't usually end up exactly the same though. Regardless, whatever I wrote worked well enough that it outlived the company. A former client running it reached out to have it modified last year.

bernardlunn
0 replies
10h14m

With writing, the second version is definitely better; it sucks having to redo it, but the improvement makes it worthwhile.

devsda
2 replies
6h20m

The bigger problem here is the manager getting involved with code.

Even when done with good intentions, managers being involved in code/reviews almost always ends up being net negative for the team.

paulryanrogers
1 replies
5h33m

Why?

devsda
0 replies
4h55m

There are many reasons. First, a manager is not a peer; they bring a sense of authority into the mix, so the discussions will not be honest. A manager's inputs have a sense of finality, and people will hesitate to comment on or override them even when they are questionable.

There are human elements too. Even if someone has honest inputs, any (monetary or otherwise) rewards or lack of them will be attributed to those inputs (or lack of them). Overall, it just encourages bad behaviours among the team and invites trouble.

These should not happen in an ideal world but as we are dealing with people things will be far from ideal.

smackeyacky
0 replies
11h27m

Crikey what a sociopath to work for. I’m sorry this happened to you.

datascienced
0 replies
9h35m

Bad boss or zen teacher, we will never know!

dailykoder
0 replies
10h59m

I don't have a big sample size, but both of my first two embedded jobs used network shares and copy+paste to version their code. Because I had kind-of PTSD from the first job, at the second job I asked the boss right away if they had a git repository somewhere. He thought that git was the same as GitHub and told me they don't want their code to be public.

When they were bought out by some bigger company, we got access to their intranet. I dug through that and found a GitLab instance. So then I just versioned my own code (which I was working on mostly on my own), documented all of it on there, even installed a GitLab runner and had step-by-step documentation on how to get my code working. When they kicked me out (because I was kind of an asshole, I assume), they asked me to hand over my code. I showed them all of what I did and told them how to reproduce it. After that the boss was kinda impressed and thanked me for my work. Maybe I had a little positive impact on a shitty job by being an asshole and doing stuff the way that I thought was the right way to do it.

Edit: Oh, before I found that GitLab instance I just initialized bare git repositories on their network share and pushed everything to those.

skilled
8 replies
11h45m

This is laid out pretty early on by Bjourne in his PPP book[0],

> We do not assume that you — our reader — want to become a professional programmer and spend the rest of your working life writing code. Even the best programmers — especially the best programmers — spend most of their time not writing code. Understanding problems takes serious time and often requires significant intellectual effort. That intellectual challenge is what many programmers refer to when they say that programming is interesting.

Picked up the new edition[1] as it was on the front page recently[2].

[0]: https://www.stroustrup.com/PPP2e_Ch01.pdf

[1]: https://www.stroustrup.com/programming.html

[2]: https://news.ycombinator.com/item?id=40086779

silisili
3 replies
10h50m

I think this is mostly right, but my biggest problem is that it feels like we spend time arguing the same things over and over. Which DB to use, which language is best, nulls or not in code and in DB, API formatting, log formatting, etc.

These aren't particularly interesting, and sure, it's good to revisit them now and again, but these are the types of time sinks I've found myself in at the last 3 companies I've worked for, and they feel like they should be mostly solved.

In fact, a company with a strong mindset, even if questionable, is probably way more productive. If it were set in stone that we use Perl, MongoDB, CGI... I'd probably ultimately be more productive than I've been lately, despite the stack.

zrm
0 replies
10h9m

What you're referring to is politics. Different people have different preferences, often because they're more familiar with one of them, or for other possibly good reasons. Somehow you have to decide who wins.

pavlov
0 replies
9h31m

> “If it was set in stone we use Perl, MongoDB, CGI... I'd probably ultimately be more productive than I've been lately despite the stack.”

Facebook decided to stick with PHP and MySQL from their early days rather than rewrite, and they’re still today on a stack derived from the original one.

It was the right decision IMO. They prioritized product velocity and trusted that issues with the stack could be resolved with money when the time comes.

And that’s what they’ve done by any metric. While nominally a PHP family language, Meta’s Hack and its associated homegrown ecosystem provides one of the best developer experiences on the planet, and has scaled up to three billion active users.

fifilura
0 replies
10h14m

I disagree! These decisions are fundamental in the engineering process.

Should I use steel, concrete or wood to build this bridge?

The mindless coding part starts one year later, when you find that your MongoDB does not do joins, and you start implementing joins as an extra layer on the client side.

jeffreygoesto
2 replies
11h31m

The hardest part is finding out what _not_ to code, either before (design) or after (learn from prototype or the previous iteration) having written some.

thunderbong
1 replies
11h0m

No code is faster than no code!

layer8
0 replies
56m

Sometimes you have to write it to understand why you shouldn’t have written it.

mkl
0 replies
6h7m

*Bjarne

MartijnBraam
8 replies
10h59m

Who hasn't accidentally thrown away a day's worth of work with the wrong rm or git command? It is indeed significantly quicker to recreate a piece of work, and usually the code quality improves for me.

nickff
4 replies
10h56m

I’ve often found it alarming to see how much better the re-do is. I wonder whether I should re-write more code.

f1shy
2 replies
7h11m

Absolutely. Parallel to thinking LOC is a good metric comes "we have to reuse code", because lots of people think writing the code is very expensive. It is not!

kragen
1 replies
6h12m

writing it is not expensive. however, fixing the same bug in all the redundant reimplementations, adding the same feature to all of them, and keeping straight the minor differences between them, is expensive

BossingAround
0 replies
3h48m

Not only fixing the same bug twice, but also fixing bugs that happen because of using the same functionality in different places. For example, possible inconsistency that results from maintaining state in multiple different locations can be a nightmare, esp. in hard-to-debug systems like highly parallelized or distributed architecture.

jll29
0 replies
6h9m

In the software engineering literature, there is something known as "second system effect": the second time a system is designed it will be bloated, over-engineered and ultimately fail, because people want to do it all better, too much so for anyone's good.

But it seems this is only true for complete system designs from scratch after a first system has already been deployed, not for the small "deleted some code and now I'm rewriting it quickly" incidents (for which there is no special term yet?).

timvdalen
0 replies
10h56m

Yes, that can often result in a better-designed refactored version, since you can start with a fully-formed idea!

kragen
0 replies
6h14m

i've never lost work to a wrong git command because i know how to use `git reflog` and `gitk`. it's possible to lose work with git (by not checking it in, `rm -r`-ing the work tree with the reflog in it, or having a catastrophic hardware failure) but it is rare enough i haven't had it happen yet

datascienced
0 replies
9h27m

Not for ages, and definitely not since GitHub - just keep committing and pushing as a backup.

lordnacho
6 replies
10h31m

This is why domain knowledge is key. I work in finance, I've sat on trading desks looking at various exchanges, writing code to implement this or that strategy.

You can't think about what the computer should do if you don't know what the business should do.

From this perspective, it might make sense to train coders a bit like how we train translators. For example, I have a friend who is a translator. She speaks a bunch of languages, it's very impressive. She knows the grammar, idioms, and so on of a wide number of languages, and can pick up new ones like how you or I can pick up a new coding language.

But she also spent a significant amount of time learning about the pharmaceutical industry. Stuff about how that business works, what kinds of things they do, different things that interface with translation. So now she works translating medical documents.

Lawyers and accountants are another profession where you have a language gap. What I mean is, when you become a professional, you learn the language of your profession, and you learn how to talk in terms of the law, or accounting, or software. What I've always found is that the good professionals are the ones who can give you answers not in terms of their professional language, but in terms of business.

Particularly with lawyers, the ones who are less good will tell you every possible outcome, in legalese, leaving you to make a decision about which button to press. The good lawyers will say "yes, there's a bunch of minor things that could happen, but in practice every client in your positions does X, because they all have this business goal".

---

As for his thought experiment, I recall a case from my first trading job. We had a trader who'd created a VBA module in Excel. It did some process for looking through stocks for targets to trade. No version control, just saved file on disk.

Our new recruit lands on the desk, and one day within a couple of weeks, he somehow deletes the whole VBA module and saves it. All gone, no backup, and IT can't do anything either.

Our trader colleague goes red. He calms down, but what can you do? You should have backups, and what are you doing with VBA anyway?

He sits down and types out the whole thing, as if he were a terminal screen from the 80s printing each character after the next.

Boom, done.

anal_reactor
2 replies
9h20m

> This is why domain knowledge is key.

Yeah, but in my country all companies have a non-compete clause, which makes it completely useless for me to learn any domain-specific knowledge because I won't be able to transfer it to my next job if my current employer fires me. Therefore I focus on general programming skills, because those are transferable across industries.

globular-toast
0 replies
9h11m

The transferable skill is learning and getting on top of the business, then translating that to code. Of course you can't transfer the actual business rules; every business is different. You just get better and better at asking the right questions. Or you just stick with a company for a long time. There are many businesses that can't be picked up in a few weeks. Maybe a few years.

_dain_
0 replies
1h1m

cripes what country is that

sotix
0 replies
4h29m

> This is why domain knowledge is key.

> Lawyers and accountants are another profession where you have a language gap.

I fully agree with you. However, my experience as a software engineer with a CPA is that, generally speaking, companies do not care too greatly about that domain knowledge. They'd rather have a software engineer with 15 years working on accounting-related software than someone with my background or similar who they then stick into a room to chat with an accountant for 30 minutes.

shkkmo
0 replies
6h3m

> This is why domain knowledge is key

In the comment thread, I keep seeing prescriptions over and over for the one way that programming should work.

Computer programming is an incredibly broad discipline that covers a huge range of types of work. I think it is incredibly hard to make generalizations that actually apply to the whole breadth of what computer programming encompasses.

Rather than trying to learn or teach one single perfect methodology that applies across every subfield of programming, I think one should aim to build a toolbag of approaches and methodologies, along with an understanding of where they tend to work well.

otar
0 replies
10h21m

> This is why domain knowledge is key.

Very true. There’s a huge difference between developing in a well-known domain vs. a new one. My mantra is that you first have to be experienced in a domain to be able to craft a good solution.

Right now I am pouring most of my time into a fairly new domain, just to gain experience. I sit next to the domain experts (my decision) to quickly accumulate the needed knowledge.

ludston
4 replies
9h45m

Agree and disagree. Certain programming domains and problems are mostly thinking. Bug fixing is often debugging, reading and comprehension rather than thinking. Shitting out CRUD interfaces after you've done it a few times is not really thinking.

Other posters have it right I think. Fluency with the requisite domains greatly reduces the thinking time of programming.

makeitdouble
1 replies
9h26m

I'd wager the more technically fluent people get, the more time they spend thinking about the bigger picture or the edge cases.

Bug fixing is probably one of the best examples: if you're already underwater, you'll want to band-aid a solution. But the faster you can implement a fix, the more leeway you'll have, and the more durable you'll try to make it, including trying to fix root causes or prevent similar cases altogether.

ludston
0 replies
9h6m

Fluency in bug fixing looks like, "there was an unhandled concurrency error on write in the message importing service therefore I will implement a retry from the point of loading the message" and then you just do that. There are only a few appropriate ways to handle concurrency errors so once you have done it a few times, you are just picking the pattern that fits this particular case.

One might say, "yes but if you see so many concurrency related bugs, what is the root cause and why don't you do that?" And sometimes the answer is just, "I work on a codebase that is 20 years old with hundreds of services and each one needs to have appropriate error handling on a case by case basis to suit the specific service so the root cause fix is going and doing that 100 times."

guax
1 replies
9h33m

Debugging is not thinking? Reading, understanding and reasoning about why something is happening is THE THING thinking is about.

Fluency increases the speed with which you move to other subjects, but it does not reduce your thinking; you're just getting to more complex issues more often.

ludston
0 replies
9h3m

It's not just thinking though. You're not sitting at your desk quietly running simulations in your head, and if a non-programmer was watching you debug, you would look very busy.

hubraumhugo
4 replies
10h22m

> how can you experiment with learning on-the-job to create systems where the thinking is optimized?

The best optimization is fewer interruptions, as research shows their devastating effect on programming [0]:

- 10-15 min to resume work after an interruption

- A programmer is likely to get just one uninterrupted 2-hour session in a day

- Worst time to interrupt: during edits, searches & comprehension

I've been wondering if there's a way to track interruptions to showcase this.

[0] http://blog.ninlabs.com/2013/01/programmer-interrupted/

phreack
0 replies
9h42m

This is why I work at night 80% of the time. It's absolutely not for everyone, it's not for every case, and the other 20% is coordination with daytime people, but the amount of productivity that comes from good uninterrupted hours long sessions is simply unmatched. Once again, not for everyone, probably not for most.

jll29
0 replies
6h43m

I once led a project to develop a tool that tracks how people use their time in a large corporation. We designed it to be privacy-respecting, so it would log that you are using the Web browser, but not the specific URL, which is of course relevant (e.g. Intranet versus fb.com). Every now and then, a pop up would ask the user to self-rate how productive they feel, with a free-text field to comment. Again, not assigned to user IDs in order to respect privacy, or people would start lying to pretend to be super-human.

We wrote a Windows front end and a Scala back end for data gathering and rolled it out to a group of volunteers (including devs, lawyers and even finance people). Sadly the project ran out of time and budget just as things were getting interesting (after a first round of data analysis), so we never published a paper about it.

We also looked at existing tools such as Rescue Time ( https://www.rescuetime.com/ ) but decided an external cloud was not acceptable to store our internal productivity data.

devsda
0 replies
5h42m

If you ask a manager to hold an hour's meeting spread across 6 hours in 10 min slots you will get the funniest looks.

Yet developers are expected to complete a few hours of coding tasks in between an endless barrage of meetings and quick, short pings & syncups over Slack/Zoom.

For the few times I've had to work on the weekends at home, I've observed that the quality of work done over a (distraction-free) weekend is much better than that of a hectic weekday.

atoav
0 replies
10h13m

This, and the high demand for my time, is why I am roughly an order of magnitude more productive when I work from home. Nobody bothers me there, and if they do, I can decide myself when to react.

If you want to tackle particularly hard problems and you get an interruption every 10 to 20 minutes, you can just shelve the whole thing, because chances are you will just produce bullshit code that causes headaches down the line.

willrftaylor
3 replies
10h13m

Funnily enough this happened to me.

Earlier in my career I had a very intense, productive working day and then blundered a rebase command, deleting all my data.

Rewriting took only about 20 minutes.

However, like an idiot, I deleted it again, in the exact same way!

This time I had the muscle memory for which files to open and where to edit, and the whole diff took about 5 minutes to re-add.

On the way out to the car park it really made me pause to wonder what on earth I had been doing all day.

thunfischtoast
1 replies
9h40m

Sometimes you really wonder where your time went. You can spend 1 hour writing a perfect function and then the rest of the day figuring out why your import works in dev and not in prod.

I also once butchered the result of 40 hours of work through a careless git history rewrite. I spent a good hour trying different recovery options (to no avail) and then 2 hours typing everything back in from memory. Maybe it turned out even better than before, because all kinds of debugging clutter were removed.

BossingAround
0 replies
3h53m

Sometimes, I spend an hour writing a perfect function, and then spend another hour re-reading the beauty of it, only to have it pointed out in the PR review how imperfect the function is :))

alex_smart
0 replies
8h6m

In case you didn’t know, a botched rebase doesn’t delete the commits. You can use `git reflog` to find the commits you were recently on and recover your code.

sibeliuss
3 replies
9h20m

I certainly notice folks who code about 30 minutes a day line-wise, but that's just because they're distracted, or don't care.

Also, very very rarely is someone just sitting around and pondering the best solution. It happens, and yes it's necessary, but that's forgetting that for so much work the solution is already there, because one has already solved it a thousand times!

This article is straight gibberish except for perhaps a small corner of the industry, or beginners.

teekert
1 replies
9h18m

It's also: "Damn I just wrote this whole new set of functions while I could have added some stuff to this existing class and it would have been more elegant... Let me start over..."

Writing (code) is thinking.

sibeliuss
0 replies
1h57m

Exactly. The code is the feedback loop.

sph
0 replies
9h15m

To me, it's the exact opposite. It's beginners who spend a lot of time coding, because of inexperience, and bad planning. The first thing they do when they have a problem, is to open their editor and start coding. [1]

I have been in this career for 20 years, I'm running my solo company now, and I'd say I spend on average 2 hours a day coding. I spend 10 hours a day just thinking and strategizing, but also planning major features and how to implement them efficiently. Every time I sit down to code something without having planned it, played with it, or left it to simmer in my subconscious for a couple of days, I over-engineer or spend time trying an incorrect approach that I will have to delete and start again. When I was an employee, the best code was created when I was allowed to take a notepad and a cup of coffee and play with a problem away from my desk, for however long I needed.

One hour of thinking is worth ten hours of terrible code.

---

1: If our programming languages were better, I would do the same. But apart from niche languages like Lisp, modern languages are not made for exploratory programming, where you play and peel a problem like an onion, in a live and persistent environment. So planning and thinking are very important simply because our modern approach to computing is suboptimal and unrefined.

kstenerud
3 replies
9h45m

This is why I just don't care about my keyboard, mouse, monitor etc beyond a baseline of minimum comfort.

Typing an extra 15 wpm won't make a lick of difference in how quickly I produce a product, nor will how often my fingers leave the keyboard or how often I look at the screen. Once I've ingested the problem space and parameters, it all happens in my head.

t43562
1 replies
9h0m

I often feel that having a "comfortable" keyboard/mouse/monitor is more important than a fast CPU or a fancy graphics card - just because of that slight extra feeling of pleasure/ease that lasts all day long :-).

The advantage of them is that my monitors and keyboards usually last a long time so putting money into them is not as wasteful as putting it into some other components.

One thing that surprised me, though, is that I recently bought a KVM to switch between desktop and laptop instead of getting a second monitor, and this turned out to be both better and much cheaper. I gave away an older monitor to a relative and found that not having to turn to look at a 2nd monitor was actually nicer. Initially I really didn't want to do this and really wanted another screen, but I had to admit afterwards that 1 screen + KVM was better for me.

RAM and disk space just matter up to the point of having enough that I'm not wasting time trying to manage them to get work done.

ripe
0 replies
5h51m

May I ask which brand of KVM you selected? I have a Dell laptop and want a "docking" configuration for my desk, the simpler the better.

sanitycheck
0 replies
9h26m

It probably depends on the project?

When I'm writing something from scratch in a few months I can bash it all out on a small laptop - it is (as you say) all in my head, I just need to turn it into working code.

If I'm faced with some complicated debugging of a big existing system, or I've inherited someone else's project, that gets much easier with a couple of giant monitors to look at numerous files side by side - plus a beefier machine to reduce compile/run times, as I'll need to do that every few mins.

You may care more about picking a keyboard & mouse/trackpad/trackball/etc if/when you start to experience pain in your wrists/hands and realise the potential impact on your career if it worsens! Similar situation with seating and back pain.

divan
3 replies
7h41m

Great article! I've posted it in other comments before, but it's worth repeating:

The best explanation I've seen is in the book "The Secret Life of Programs" by Jonathan E. Steinhart. I'll quote that paragraph verbatim:

---

Computer programming is a two-step process:

1. Understand the universe.

2. Explain it to a three-year-old.

What does this mean? Well, you can't write computer programs to do things that you yourself don't understand. For example, you can't write a spellchecker if you don't know the rules for spelling, and you can't write a good action video game if you don't know physics. So, the first step in becoming a good computer programmer is to learn as much as you can about everything else. Solutions to problems often come from unexpected places, so don't ignore something just because it doesn't seem immediately relevant.

The second step of the process requires explaining what you know to a machine that has a very rigid view of the world, like young children do. This rigidity in children is really obvious when they're about three years old. Let's say you're trying to get out the door. You ask your child, "Where are your shoes?" The response: "There." She did answer your question. The problem is, she doesn't understand that you're really asking her to put her shoes on so that you both can go somewhere. Flexibility and the ability to make inferences are skills that children learn as they grow up. But computers are like Peter Pan: they never grow up.

shkkmo
2 replies
6h34m

Edit: After writing this long nitpicky comment, I have thought of a much shorter and simpler point I want to make: programming is mostly thinking, and there are many ways to do the work of thinking. Different people and problems call for different ways of thinking, and learning to think/program in different ways gives you more tools to choose from. Thus I don't like arguments that there is one right way that programming happens or should happen.

I'm sorry, but your entire comment reads like a list of platitudes about programming that don't actually match reality.

Well, you can't write computer programs to do things that you yourself don't understand.

Not true. There are many times where writing software to do a thing is how I come to understand how that thing actually works.

Additionally, while an understanding of physics helps with modeling physics, much of that physics modeling is done to implement video games and absolute fidelity to reality is not the goal. There is often an exploration of the model space to find the right balance of fidelity, user experience and challenge.

Software writing is absolutely mostly thinking, but that doesn't mean all or even most of the thinking should always come first. Computer programming can be an exploratory cognitive tool.

So, the first step in becoming a good computer programmer is to learn as much as you can about everything else. Solutions to problems often come from unexpected places, so don't ignore something just because it doesn't seem immediately relevant.

I'm all for generalists and autodidacts, but becoming one isn't a necessary first step to being a good programmer.

The second step of the process requires explaining what you know to a machine that has a very rigid view of the world, like young children do.

Umm... children have "rigid" world views? Do you know any children?

Let's say you're trying to get out the door. You ask your child, "Where are your shoes?" The response: "There." She did answer your question.

Oh, you don't mean rigid, you mean they can't always infer social subtexts.

Flexibility and the ability to make inferences are skills that children learn as they grow up. But computers are like Peter Pan: they never grow up.

Computers make inferences all the time. Deriving logical conclusions from known facts is absolutely something computers can be programmed to do, and is arguably one of their main use cases.
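
For a trivial illustration of what I mean (a made-up Python sketch of one rule applied to known facts; the names are placeholders):

    # Known facts, plus one rule: parent(x, y) and parent(y, z) => grandparent(x, z).
    parent = {("alice", "bob"), ("bob", "carol"), ("bob", "dave")}

    grandparent = {(a, c)
                   for (a, b) in parent
                   for (b2, c) in parent
                   if b == b2}

    print(sorted(grandparent))  # [('alice', 'carol'), ('alice', 'dave')]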

I have spent time explaining things to children of various ages, including 3-year-olds, and find the experience absolutely nothing like programming a computer.

divan
0 replies
2h59m

Are you replying to me or to the author of the quote? :)

Software writing is absolutely mostly thinking, but that doesn't mean all or even most of the thinking should always come first. Computer programming can be an exploratory cognitive tool.

Absolutely, explaining something to a child can also be an exploratory cognitive tool.

MichaelZuo
0 replies
2h35m

I would say very young children up until they acquire concepts like a theory of mind, cause and effect happening outside of their field of observation, and so on, are pretty rigid in many ways like computers. It's a valuable insight.

Or at least they don't make mistakes in exceptionally novel and unusual ways until they're a bit older.

demondemidi
3 replies
10h54m

Programming is mostly planning.

When you work for companies that take programming seriously (e.g., banks, governments, automotive, medical equipment, etc.), a huge development cycle occurs before a single line of code is written.

Here are some key development phases (not all companies use all of them):

1. high level definition, use cases, dependencies; traceability to customer needs; previous correction (aka failures!) alignment

2. key algorithms, state machines, flow charts (especially when modeling fail-safety in medical devices)

3. API, error handling, unit test plan, functional test plan, performance test plan

4. Alignment with compliance rules; attack modeling; environmental catastrophe and state actor planning (my experience with banks)

After all of this is reviewed and signed-off, THEN you start writing code.

This is what code development looks like when people's, businesses', and governments' lives/money/security are on the line.

So. Much. Planning.

hgomersall
1 replies
10h24m

And it's a terrible way to make anything, much less software. It's more forgivable when the cost of outer iteration is high because you're making, say, a train, but even then you design around various levels of simulation, iterating in the virtual world. The idea that you can nail down all the requirements and interfaces before you even begin is why so many projects of the type you describe often have huge cost overruns as reality highlights all the changes that need to be made.

Falmarri
0 replies
9h39m

I see this so often. It's how terrible software is written because people are afraid to change direction or learn anything new mid project.

I rewrite most of my code 2-3 times before I'm done and I'm still 5x faster than anyone else, with significantly higher quality and maintainability as well. People spend twice as long writing the ugliest, hackiest code as it would have taken them to just learn to do it right.

tasuki
0 replies
8h22m

companies that take programming seriously (e.g., banks, governments, automotive, medical equipment, etc.)

(Some) banks (sometimes) hire armies of juniors from Accenture. I wouldn't say they take programming seriously.

My government had some very botched attempts at creating computer systems. They're doing better these days, creating relatively simple systems. But I wouldn't say they're particularly excellent at programming.

bradley13
3 replies
11h31m

I would absolutely agree, for any interesting programming problem. Certainly, the kind of programming I enjoy requires lots of thought and planning.

That said, don't underestimate how much boilerplate code is produced. Yet another webshop, yet another forum, yet another customization of that ERP or CRM system. Crank it out, fast and cheap.

Maybe that's the difference between "coding" and "programming"?

qwery
1 replies
10h57m

Maybe that's the difference between "coding" and "programming"?

I know I'm not alone in using these terms to distinguish between each mode of my own work. There is overlap, but coding is typing, remembering names, syntax, etc. whereas programming is design or "mostly thinking".

runesoerensen
0 replies
10h46m

I usually think of coding and programming as fairly interchangeable words (vs “developing”, which I think encapsulates both the design/thinking and typing/coding aspects of the process better)

marginalia_nu
0 replies
8h4m

Implementing known solutions is less thinking and more typing, but on the other hand it feels like Copilot and so on is changing that. If you have something straightforward to build and you know the broad strokes of how it's going to come together, the actual output of the code is so greatly accelerated now that whatever thinking is left takes a proportionally higher chunk of time.

... and "whatever is left" is the thinking and planning side of things, which even in its diminished role in implementing a known solution, still comes into play every once in a while.

anon115
3 replies
11h41m

re·con·nais·sance (noun): military observation of a region to locate an enemy or ascertain strategic features.

darby_eight
2 replies
11h38m

Did you want to grace us with any particular relevance of this knowledge or do you just wanna keep it to yourself?

Sadly, after all these years programming, I have yet to discern any real vulnerability in the US government.

qwery
0 replies
11h23m

Just guessing/reading: it's a metaphor.

You could see the recon as the thinking from the article. The enemy and terrain are the code and various risks and effects associated with changing it.

082349872349872
0 replies
11h25m

I think it was meant as a parallel; historically light cavalry (explorers) and heavy cavalry (exploiters) had different ideals*, different command structures, and when possible even used different breeds of horses.

Compare the sorts of teams that do prototyping and the sorts of teams that work to a Gantt chart.

* the ideal light cav trooper was jockey-sized, highly observant, and was already a good horseman before enlisting; the ideal heavy cav trooper was imposing, obedient, and was taught just enough horsemanship to carry out orders but not so much that he could go AWOL.

spc476
2 replies
9h14m

At my previous job, I calculated that over the last year I worked there, I wrote 80 lines of non-test, production code. 80. About one line per 3-4 days of work. I think I could have retyped all the code I wrote that year in less than an hour.

The rest of the time? Spent in two daily stand up meetings [1], each at least 40 minutes long (and just shy of half of them lasted longer than three hours).

I should also say the code base was C, C++ and Lua, and had nothing to do with the web.

[1] My new manager hated the one daily standup with other teams, so he insisted on our team having its own.

BossingAround
1 replies
3h50m

Were the intense daily meetings any help? I can imagine that if there's a ticket to be solved, and I can talk about the problem for 40 minutes to more experienced coworkers, that actually speeds up the necessary dev time by quite a lot.

Of course, it will probably just devolve into a disengaged group of people that read emails or Slack in another window, so there's that.

xvilka
0 replies
1h51m

80% of meetings are useless. Especially long ones.

pqwEfkvjs
2 replies
6h50m

So the waterfall model is the best after all?

thom
0 replies
6h46m

I'm not sure that follows. If most programming is thinking, then it makes sense to minimise the amount of thinking that is wasted when circumstances later change.

machine_coffee
0 replies
6h29m

Wouldn't say so.

What works better is an iterative model that has been front-loaded with a firm architecture, feature elaboration, a rough development and testing plan, allocated resources and some basic milestones to hit, so that upper mgmt. can get an idea of when useful stuff might land.

The development _process_ can be as agile-y as you like, so long as the development of features moves incrementally with each iteration towards the desired end-goal.

But you have to be strict about scope.

mock-possum
2 replies
11h32m

Wait… you have the diffs… why are you retyping the lost code by hand? What am I missing?

qwery
0 replies
11h22m

I think the diffs are evidence for the author's claim that the retyping would be a relatively easy job.

9dev
0 replies
11h7m

They specifically mentioned the diffs being physically printed, like, on paper. Also, it’s just a convoluted example to highlight the core idea.

matthewsinclair
2 replies
6h29m

This is also relevant in the context of using LLMs to help you code.

The way I see it, programming is about “20% syntax and 80% wisdom”. At least as it stands today.

LLMs are good (perhaps even great) for the 20% that is syntax related but much less useful for the 80% that is wisdom related.

I can certainly see how those ratios might change over time as LLMs (or their progeny) get more and more capable. But for now, that ratio feels about right.

hnthrow289570
1 replies
6h6m

Sadly like half of that wisdom has to do with avoiding personal stress caused by business processes.

matthewsinclair
0 replies
3h28m

Well, exactly. That’s kinda my point. Programming is so much more than just writing code. And sadly, some of that other stuff involves dealing with humans and their manifest idiosyncrasies.

ken47
2 replies
11h11m

Good programming is sometimes mostly thinking, because "no plan survives first contact with the enemy." Pragmatic programming is a judicious combination of planning and putting code to IDE, with the balance adapting to the use case.

wruza
0 replies
8h36m

This. Programming is mostly reconnaissance, not just thinking. If you don't write code for days, you're either fully aware of the problem surface or just guessing at it. There's not much to think about in the latter case.

datascienced
0 replies
9h31m

The first run with the IDE is like completing a level of a game the first time. The second time it will be quicker.

I agree we can expand thinking to “thinking with help from tools”.

uraura
1 replies
8h20m

That's also why it is difficult for me to say in the next day's standup meeting what I've done. If I tell people I just spent time "thinking", they will say it is too vague.

f1shy
0 replies
7h15m

If you say "I was analysing the problem, and evaluating different solutions, and weighting the pros and cons, for example, I thought about [insert option A] but I see [insert contra here]. So I also thought about [option B]... I'm still researching in [insert source]". I do not think that should be a problem. Is what I say constantly in our daily's. And of course, eventually I deliver something.

t43562
1 replies
9h26m

This is a good article to send to non-programmers. Just as programmers need domain knowledge, those who are trying to get something out of programmers need to understand a bit about it.

I recognise that the tiny diffs I commit can be the ones that take hours to create because of the debugging, design or learning involved. It's all too easy to be unimpressed by the quantity of output, and having something explained to you is quite different from bashing your head against a brick wall for hours trying to work it out yourself.

figassis
0 replies
9h13m

This. The smallest pieces of code I’ve put out were usually by far the most time-consuming, most impactful and most satisfying after you “get it”. One-line commits that improve performance by 100x but took days to find, alongside having to explain during syncs why a ticket is not moving.

lofaszvanitt
1 replies
8h32m

Everyone who is a programmer knows this. So this article is for other people (managers?) who don't know how to measure developers' effectiveness...

BossingAround
0 replies
3h46m

Also junior developers, who might not realize that they are paid to think, not write LOCs.

khaledh
1 replies
6h52m

"Weeks of coding can save you hours of planning." - Unknown.

runald
0 replies
6h29m

"Everyone has a meal plan until they get a fruit punch in the face." -Tyson?

kevindamm
1 replies
5h6m

The author listed a handful of the thinking aspects that make up the 11/12 of non-motion work... but left out naming things! The amount of time spent in conversation about naming, or even renaming the things I've already named... there's even a name for it in the extreme: bikeshedding. Sometimes I'll even fixate on how to phrase the comments for a function, or reformat things so that line lengths fit.

Programming is mostly communicating.

ivanjermakov
0 replies
5h2m

Yep, with seniority programming gradually goes from problem solving to product communication and solution proposition

doganugurlu
1 replies
9h30m

In the 2020s, we still have software engineering managers that think of LOC as a success metric.

“How long would it take you to retype the diff from 6 hours of work?” is a great question to force the cognitively lazy software manager to see how naive that is.

Nowadays I feel great when my PRs have more lines removed than added. And I really question if the added code was worth the added value if it’s the opposite.

masswerk
0 replies
6h37m

Conversely, how long would it take the average manager to re-utter any directions they gave the previous day?

cranium
1 replies
9h44m

Now with GitHub Copilot it's even worse/better – whether you like to type or not. Now it's 1) think about the problem, 2) sketch types and functions (leave the bodies empty), 3) supervise Copilot as it generates the rest, 4) profit.
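
Step 2 in practice might look like this (a made-up Python sketch; the invoice domain and all the names are just placeholders):

    from dataclasses import dataclass

    @dataclass
    class LineItem:
        description: str
        quantity: int
        unit_price_cents: int

    @dataclass
    class Invoice:
        customer_id: str
        items: list[LineItem]

    def invoice_total_cents(invoice: Invoice) -> int:
        """Sum of quantity * unit price over all line items."""
        ...  # body left empty on purpose; the types and docstring steer the generation

    def apply_discount(total_cents: int, percent: float) -> int:
        """Discounted total, rounded down to whole cents."""
        ...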

marginalia_nu
0 replies
7h23m

I think this is true for certain categories of coding more so than others.

Copilot is above all fantastic for outputting routine tasks and boilerplate. If you can, through the power of thinking, transform your problem into a sequence of such well-formed already-solved tasks, it can basically be convinced to do all the key-pressing for you.

avodonosov
1 replies
6h8m

If so, is it a good idea to base a programmer interview on a live coding session?

porksoda
0 replies
5h49m

I love this type of interview. I code with the goal of showing how it's done, not how much I can do, and they very soon realize that they won't see finished code on "this" call. It's an opportunity to teach non-coders that hidden intricacies exist in even the smallest ticket. Usually just a few minutes in they're fatigued, and they know who I am.

zubairq
0 replies
11h11m

I liked "Code is just the residue of the work"

ysofunny
0 replies
4h32m

same as writing

xvilka
0 replies
2h16m

they have to recover from the previous day's backup

This is how I know it's a work of fiction. Sadly, even these days, daily backups in most organizations, even IT ones, are an impossible dream.

xondono
0 replies
5h56m

“Programming is mostly thinking” is one of those things we tell ourselves as if it were some deep truth, but it’s the most unproductive of observations.

Programming is thinking in the same exact way all knowledge work is thinking:

- Design in all its forms is mostly thinking

- Accounting is mostly thinking

- Management in general is mostly thinking

The meaningful difference is not the thinking, it’s what are you thinking about.

Your manager needs to “debug” people-problems, so they need lots of time with people (i.e. meetings).

You are debugging computer problems, so you need lots of time with your computer.

There’s an obvious tension there and none of the extremes works; you (and your manager) need to find a way to balance both of your workloads to minimize stepping on each other’s toes, just like with any other coworker.

vladsiv
0 replies
9h22m

Great article, thanks for sharing!

travisgriggs
0 replies
2h52m

IMHO Programming is mostly iterating (which involves thinking).

The title, read alone, with only a brief skim of the article, might give PHBs the notion that you can just sit developers in a room, let them “think” for 5.5 hours, then hand them their keyboards for the last 30 minutes and get the product you want.

tmilard
0 replies
8h12m

...and getting stuck.

thatjoeoverthr
0 replies
7h47m

I’ve faced this viscerally with copilot. I picked it up to type for me after a sport injury. Last week, though, I had an experience; I was falling ill and “not all there.” None of the AI tools were of any help. Copilot is like a magical thought finisher, but if I don’t have the thought in the first place, everything stops.

tengbretson
0 replies
3h47m

This really tracks for me. Personally, I have abysmal typing skills and speed. Not only has this never limited my productivity as a software developer, in some ways it has forced me to be a better developer.

sesm
0 replies
6h30m

I thought so too, but now I have a different opinion.

Programming has both analytical and creative part, which should be balanced. If you are over-analytical you don't get stuff done quickly.

sam_goody
0 replies
7h34m

Famously described in the tale of the two programmers - the one who spends their time thinking will produce exponentially better work, though because good work looks obvious, they will often not receive credit.

Eg. https://realmensch.org/2017/08/25/the-parable-of-the-two-pro...

richrichie
0 replies
7h45m

It is an iterative process, unless “to code” is narrowly defined as “entering instructions” with a keyboard. Writing influences design and vice versa.

A good analogy that works for me is writing long form content. You need a clear thought process and some idea of what you want to say before you start writing. But then thinking also gets refined as you write. This stretches further: an English lit major who specialises as a writer (journalist?) writing about a topic with notes from a collection of experts, and a scientist writing a paper, are two different activities. Most professional programming is of the former variety, admitting templates / standardisation. The latter case requires a lot more thinking work on the material before it gets written down.

progx
0 replies
10h4m

And most of the time thinking about things that should not be done.

perrygeo
0 replies
2h27m

Writers get a bit dualistic on this topic. It's not "sitting in a hammock and dreaming up the entire project" vs. "hacking it out in a few sprints with no plan". You can hack on code in the morning, sit in a hammock that afternoon, and deliver production code the next day. It's not either-or. It's a positive feedback loop between thinking and doing that improves both.

openrisk
0 replies
9h19m

Various programming paradigms (modular programming, object-oriented, functional, test-driven etc) have developed to reduce precisely this cognitive load. The idea being that it is easier to reason and solve problems that are broken down into smaller pieces.

But it's an incomplete revolution. If you look at the UML diagram of a fully developed application, it's a mess of interlocked pieces.

Things get particularly hard to reason about when you add concurrency.

One could hypothesize that programming languages that "help thinking" are more productive / popular but not sure how one would test it.

msephton
0 replies
5h44m

I agree that programming is mostly thinking. But I don't agree with how the post justifies it.

macpete42
0 replies
7h15m

Code is more flexible than text for drafting the idea. Later on it will expand into a fully working solution. Sketching non-functional interface definitions and data objects to "scribble" the use case works best for me.

lysecret
0 replies
9h4m

So is good writing actually.

knorker
0 replies
6h32m

If I give you the diff, how long will it take you to type the changes back into the code base and recover your six-hours' work?

The diff will help, but it'll be an order of magnitude faster to do it the second time, diff provided or not.

For the same reason.

jshobrook
0 replies
39m

In other words: the more useful Copilot is, the easier your job is.

interpol_p
0 replies
4h51m

In my experience quite a lot of programming is thinking, though quite a lot is also communication, if you work in a team

Typing out comments on PRs, reproducing issues, modifying existing code to test hypotheses, discussing engineering options with colleagues, finding edge cases, helping QA other tickets, and so on. I guess this is all "thinking," too, but it often involves a keyboard, typing, and other people

A lower but still significant portion is "overhead." Switching branches, syncing branches, rebasing changes, fixing merge conflicts, setting up feature flags, fixing CI workflows, and so on

Depending on the size of the team, my gut feel for time consumption is:

Communication > Thinking > Writing Code > Overhead

hgyjnbdet
0 replies
10h44m

Off topic. I'm not a developer but I do write code at work, on which some important internal processes depend. I get the impression that most people don't see what I do as work, engaged as they are in "busy" work. So I'm glad when I read things like this that my struggles are those of a real developer.

f1shy
0 replies
9h57m

A good explanation of this is given in SICP. It's about solving the problem, not getting the computer to do something.

danielovichdk
0 replies
6h50m

Well of course it is. With progress in knowledge, there comes a stage before that. That stage we can call learning. I don't care how you bend it, learning involves thinking at its base.

So thinking does not merely make you a better programmer. It makes you a better programmer at every given moment. Your progression never stops, and like I just said, it's simply called learning.

cmrdporcupine
0 replies
5h52m

Most software developer jobs are not really programming jobs. By that I mean the amount of code written is fairly insignificant to the amount of "other" work which is mainly integration/assembly/glue work, or testing/verification work.

There definitely are some jobs and some occasions where writing code is the dominant task, but in general the trend I've seen over the past 20 years I've been working has been more and more to "make these two things fit together properly" and "why aren't these two things fitting together properly?"

So in part I think we do a disservice to people entering this industry when we emphasize the "coding" or "programming" aspects in education and training. Programming language skills are great, but systems design and integration skills are just as important.

chadmulligan
0 replies
8h42m

That’s an iteration of Peter Naur’s « Programming as Theory Building » that has been pivotal in my understanding of what programming really is about.

Programming is not about producing programs per se; it is about forming certain insights about affairs of the world, and eventually outputting code that is nothing more than a mere representation of the theory you have built.

cess11
0 replies
6h4m

I tend to spend a lot of time in REPL or throwaway unit tests, tinkering out solutions. It helps me think to try things out in practice, sometimes visualising with a diagramming language or canvas. Elixir is especially nice in this regard, the language is well designed for quick prototypes but also has industrial grade inspection and monitoring readily available.
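
Translated into Python for illustration, a throwaway test of the kind I mean might look like this (the chunk helper is invented, and the whole file gets deleted once the design settles):

    # scratch_test.py - a thinking tool, not a deliverable
    def chunk(xs, n):
        """Split xs into consecutive runs of length n (the last may be shorter)."""
        return [xs[i:i + n] for i in range(0, len(xs), n)]

    def test_chunk_scribble():
        assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
        assert chunk([], 3) == []
        # open question I'm still tinkering with: should n <= 0 raise, or return []?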

Walks, hikes and strolls are also good techniques for figuring things out, famously preferred by philosophers like Kierkegaard and Nietzsche.

Sometimes I've tried to explain to non-technical bosses how development is actually done; it didn't work, they just dug in about monitoring and control and whatnot. Working remote is the way to go, if one can't find good management to sell to.

burgerrito
0 replies
6h58m

* (2014)

ajuc
0 replies
8h12m

Sadly, overnight the version control system crashes and they have to recover from the previous day's backup. You have lost an entire day's work. If I give you the diff, how long will it take you to type the changes back into the code base and recover your six-hours' work?

I accidentally deleted about 2 weeks of my work once. I was a junior dev and didn't really understand how svn worked :). Recreating it took about 2 days, but I had no diff and the work hadn't been 100% ready, so after recreating what I'd had I still had to spend about a day to finish it.

TL;DR: I think 11/12th is a reasonable estimate.

agentultra
0 replies
2h39m

Too true.

I was introduced to a quote from the cartoonist Guindon, via Leslie Lamport: "Writing is nature’s way of letting you know how sloppy your thinking is."

Although in my experience, thinking is something that is not always highly valued among programmers. There are plenty of Very Senior developers who will recommend, "just write the code and don't think about it too hard, you'll probably be wrong anyway." The whole, "working code over documentation and plans," part of the Agile Manifesto probably had an outsized influence on this line of reasoning. And honestly it's sometimes the best way to go: the task is trivial and too much thought would be wasted effort. By writing the code you will make clear the problem you're trying to solve.

The problem with this is when it becomes dogma. Humans have a tendency to avoid thinking. If there's a catchy maxim or a "rule" from authority that we can use to avoid thinking we'll tend to follow it. It becomes easy for a programmer to sink into this since we have to make so many decisions that we can become fatigued by the overwhelming number of them we have to make. Shortcuts are useful.

Add to this conflict the friction we have with capital owners and the managerial class. They have no idea how to value our work and manage what we do. Salaries are games of negotiation in much of the world. The interview process is... varied and largely ineffective. And unless you produce value you'll be pushed out. But what do they value? Money? Time? Correctness? Maintainability?

And then there's my own bone to pick with the industry. We gather requirements and we write specifications... sometimes. And what do we get when we ask to see the specifications? A bunch of prose text and some arrows and boxes? Sure, someone spent a lot of time thinking about those arrows and boxes but they've used very little rigor. Imagine applying to build a skyscraper and instead of handing in a blueprint you give someone a sketch you drew in an afternoon. That works for building sheds but we do this and expect to build skyscrapers! The software industry is largely allergic to formalism and rigor.

So... while I agree, I think the amount of thinking that goes into it varies quite a lot based on what you're doing. 11/12? Maybe for some projects. Sometimes it's 1/2. Sometimes less.

JKCalhoun
0 replies
5h23m

Somehow the post seemed to miss the amount of time I seem to spend on "busy work" in coding. Not meetings and that sort of thing, but just boring, repetitive, non-thinking coding.

Stubbing in a routine is fast, but then comes adding param checking, propagating errors, cleaning up, adding comments, refactoring sections of code into their own files as the code balloons, etc.

There's no thinking involved in most of the time I am coding. And thank god. If 11/12 of my time was spent twisting my brain into knots to understand the nuances of concurrency or thread-locking I would have lost my hair decades earlier.
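
For instance (a made-up Python example), the gap between the five-second stub and the busy-work version it balloons into:

    import json
    import logging
    from pathlib import Path

    logger = logging.getLogger(__name__)

    # The stub that took five seconds to write:
    def load_config_stub(path):
        return json.load(open(path))

    # The version after the non-thinking busy work: param checking,
    # error propagation, cleanup, comments.
    def load_config(path: str | Path) -> dict:
        """Load a JSON config file, or fail with a useful error."""
        path = Path(path)
        if not path.is_file():
            raise FileNotFoundError(f"config not found: {path}")
        try:
            with path.open(encoding="utf-8") as fh:  # file closed even on error
                config = json.load(fh)
        except json.JSONDecodeError as exc:
            logger.error("malformed config %s: %s", path, exc)
            raise
        if not isinstance(config, dict):
            raise TypeError(f"expected a JSON object in {path}")
        return config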