How I Program in 2024

aetherspawn
92 replies
15h28m

When you have no tests your problems go away because you don’t see any test failures.

Never have I tested anything and NOT found a bug, and most things I tested I thought were already OK to ship.

Thus, when you delete your tests, the only person you are fooling is probably yourself :(

From reading your page I get the impression you are more burnt out from variation/configuration management which is completely relatable… I am too. This is a hard problem. But user volume is required to make $$. If the problem was easy then the market would be saturated with one size fits all solutions for everything.

IgorPartola
29 replies
13h58m

I think this is highly domain dependent. I'm currently working on a codebase that has tests for one part of it, and they are an incredibly useful tool for helping me refactor that particular part. Other parts are so much UI behavior that it is significantly faster to catch bugs by manual testing, because the UI/workflow either changes so fast that you don’t write tests for it (knowing they’ll be useless when the workflow is redesigned in the next iteration) or changes so slowly that that particular UI/workflow just doesn’t get touched again, so refactors never happen to it to introduce more bugs.

I have never found tests to be universally necessary or helpful (just like types). They are a tool for a job, not a holy grail. I have also never met a codebase that had good test coverage and yet was free of bugs that aren’t then found with either manual testing or usage.

Somewhat hyperbolically and sarcastically: if you are good enough to write perfect tests for your code, just write perfect code. If you aren’t perfect at writing tests, how do you know the tests are complete, bug free, and actually useful? :)

saithound
6 replies
11h33m

If you aren’t perfect at writing tests, how do you know the tests are complete, bug free, and actually useful? :)

I did like the rest of the post, but this is not hyperbole. It's just a disingenuous argument, and one that looks orthogonal to your point that "tests are a tool for a job".

If you aren't perfect at magnetizing iron, and you need a working compass, you better magnetize two needles and use one to test the other. The worse you are at magnetizing iron, the more important it is that you do this if you want to end up with a working compass.

lelanthran
4 replies
10h54m

If you aren't perfect at magnetizing iron, and you need a working compass, you better magnetize two needles and use one to test the other. The worse you are at magnetizing iron, the more important it is that you do this if you want to end up with a working compass.

This is modern testing in a nutshell - it's ineffective but the author of the test can't actually tell that!

Using this analogy, if you created 10 magnetised needles using the wrong process and got the wrong result, then all 10 would agree with each other and your test passes, but your needle is still broken.

saithound
3 replies
10h15m

I don't think you understand how magnets work.

Hint: if you think the way to test whether a needle is magnetized using another possibly magnetized needle is by building both needles into two separate compasses, you're nowhere close.

lelanthran
2 replies
10h5m

Hint: if you think the way to test whether a needle is magnetized using another magnetized needle is by building both needles into two separate compasses, you're nowhere close.

I thought it was clear from my post that I do not think this.

I also think you are missing the point.

saithound
1 replies
9h41m

You wrote:

if you created 10 magnetised needles using the wrong process and getting the wrong result, then all 10 would agree with each other and your test passes

This suggests that you do think something like this. Again, the way you test whether you successfully magnetized a needle using another potentially magnetized needle is not by checking whether they "agree with each other" in the end application.

latexr
0 replies
8h53m

This suggests that you do think something like this.

Or it suggests they’re continuing the analogy (which isn’t perfect) to make a different point.

Again, the way you test (…) is not (…)

Twice you’ve spent the majority of words in your post telling someone they’re wrong without explaining the correct methodology. That does not advance the conversation, it’s the equivalent of saying “nuh-uh” and leaving. If you disagree, it’s good form to explain why.

It doesn’t take long to say the failed magnetisation would leave all needles pointing in disparate directions, not the same consistent wrong direction. Unless there’s something else in your test that is so strong and wrong that it causes that problem, in which case the analogy starts working again.

rickdeckard
0 replies
9h32m

I don't get this analogy.

Apart from the fact that in your example the product is validated using the exact same product, you are actually describing the perfect test:

Two magnetized needles will validate each other, so both the product (needle #1) and the test setup (needle #2) will be confirmed as valid in one step. If one is not working, the other will self-validate by pointing at the Earth's magnetic field instead...

Cthulhu_
4 replies
11h19m

IMO if your implementation is that unstable (you mentioned the UI/workflow changes fast) it isn't worth writing a test for it, but also, I don't think it should be released to end-users because (and this is making a big assumption, granted) it sounds like the product is trying to figure out what it wants to be.

I am a proponent of having the UI/UX design of a feature be done before development gets started on it. In an ideal XP/agile environment the designers and developers work closely together and constantly iterate, but in practice there are so many variables involved in UX design and so many parties that have an opinion, that it'll be a moving target in that case, which makes development work (and automated tests) an exercise in rework.

happysadpanda2
1 replies
7h2m

Chiming in as an end-user of software: please try to minimize the amount of times I need to re-learn the user interface you put in front of me.

happysadpanda2
0 replies
7h0m

Aaaaaaand I replied to the wrong comment, mea culpa!

softfalcon
0 replies
2h19m

I agree with you that the ideal is to have UI/UX work resolved before starting dev work.

In my experience, this has never happened. I’ve moved around hoping that somewhere, leadership has fixed this problem and nope. It never happens.

There are just too many unknowns and never enough time for design to stabilize. It’s always a mad dash to collect whatever user info you can before you slap together a final mock-up of the interface and expected behaviour.

rtpg
0 replies
10h24m

I think there's a great balance here in these environments:

- write tests as part of figuring out the implementation (basically: automate the random clicking you're doing to test things anyways)

- Make these tests really loose

- Just check them in

Being unprecious about test coverage means you just write a handful of "don't blow up" tests for features, that help you get the ball rolling and establish at least a baseline of functionality, without really getting in the way.
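
A minimal sketch of what such a "don't blow up" test can look like, assuming pytest and a Flask-style app; the create_app factory and the routes are made up:

    import pytest
    from myapp import create_app   # hypothetical application factory

    @pytest.fixture
    def client():
        app = create_app(testing=True)   # assumed to exist; adjust to your setup
        return app.test_client()

    @pytest.mark.parametrize("path", ["/", "/dashboard", "/settings"])
    def test_page_does_not_blow_up(client, path):
        # Deliberately loose: no assertions about content, just "no server error".
        assert client.get(path).status_code < 500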

bubblebeard
3 replies
12h7m

Types are there to guard against human error and reduce the amount of code we need to write.

Tests exist to guarantee functionality and increase productivity (by ensuring intended functionality remains as we refactor/change our code/UI).

There may be cases where some tests are too expensive to write, but I have never come across this myself. For example, in functional tests you would try to find a stable way to identify elements regardless of future changes to that UI. If your UI changes so much between iterations that this is impossible, it sounds like you need to consider the layout a little more before building anything. I’m saying that based on experience, having been involved in several projects where this was a problem.

Having said that, I’m horrible at writing tests for UI myself, an area I’m trying to improve; it really bothers me :)
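
A minimal sketch of what I mean by identifying elements in a stable way (pytest-playwright assumed; the URL, labels, and test id are made up):

    from playwright.sync_api import Page, expect

    def test_save_button_submits_form(page: Page):    # `page` fixture from pytest-playwright
        page.goto("http://localhost:3000/settings")   # hypothetical app under test
        # Select by label, role and test id rather than CSS position or class names,
        # so the test survives layout and styling changes.
        page.get_by_label("Display name").fill("Ada")
        page.get_by_role("button", name="Save").click()
        expect(page.get_by_test_id("toast")).to_contain_text("Saved")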

whstl
2 replies
10h3m

Tests can be expensive to write if they are an afterthought, and/or the code is not written in a way that is easy to test.

UI tests can be cheap but they require some experience in knowing how to write a testable UI. One way of achieving that is writing them as early as possible, of course. Which is not always possible :/

protomolecule
1 replies
7h52m

the code is not written in a way that is easy to test

Which isn't devoid of downsides either

whstl
0 replies
2h58m

That's a good point. Sometimes more ergonomic APIs can be harder to test.

256_
3 replies
13h19m

Somewhat hyperbolically and sarcastically: if you are good enough to write perfect tests for your code, just write perfect code. If you aren’t perfect at writing tests, how do you know the tests are complete, bug free, and actually useful? :)

Well obviously, you just write tests for the tests. :3

It's called induction.

wilgertvelinga
0 replies
2h5m

It's actually called mutation testing. And luckily it's almost fully automated.
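
The idea in miniature, done by hand before reaching for a tool (a sketch; clamp is just an example function): mutate the code under test, re-run the tests, and a good suite must fail.

    def clamp(x, lo, hi):
        return max(lo, min(x, hi))

    def clamp_mutant(x, lo, hi):
        return max(lo, x)             # mutation: the upper bound is dropped

    def run_suite(impl):
        assert impl(5, 0, 10) == 5
        assert impl(-1, 0, 10) == 0
        assert impl(99, 0, 10) == 10  # this case "kills" the mutant

    run_suite(clamp)                  # passes
    try:
        run_suite(clamp_mutant)       # a suite that also passes here is too weak
        print("mutant survived: the tests missed the change")
    except AssertionError:
        print("mutant killed: the tests are pulling their weight")

Tools like mutmut (Python) or Stryker (JavaScript) generate these mutants for you and report which ones survive.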

derf_
0 replies
5h45m

> Well obviously, you just write tests for the tests. :3

I had a friend whose first job out of school (many years ago) was part of a team at Intel assigned to write tests for their testing tools. When it matters enough economically, it will happen. As you can see from recent news, that is still not enough to guarantee a good result.

MikeDelta
0 replies
12h48m

Qui testet ipsos tests?

watwut
1 replies
11h26m

if you are good enough to write perfect tests for your code, just write perfect code.

I have yet to see anyone claim they write perfect tests.

If you aren’t perfect at writing tests, how do you know the tests are complete, bug free,

I never claimed to produce complete tests, nor have I seen any. I never claimed to produce bug-free tests, nor have I seen any.

and actually useful? :)

I know that whenever I fix something or refactor, a test fails and I find a bug in the code. I know that when we do not have have the same bag again and then again the same bug and again the same bug.

I know when testers' time is saved and they don't have to test repetitive basic stuff anymore and can focus on more complicated stuff.

lelanthran
0 replies
10h2m

watwut wrote:

I know that when we do not have have the same bag again and then again the same bug and again the same bug.

Well, username checks out :-)

scott_w
1 replies
7h26m

if you are good enough to write perfect tests for your code, just write perfect code. If you aren’t perfect at writing tests, how do you know the tests are complete, bug free, and actually useful?

This sentence makes no sense. Tests are infinitely more straightforward than code. I always go back to my dad's work as a winder before he retired:

After repairing a generator, they'd test it can handle the current that it's expected to take by putting it in a platform and... running electricity through it. They'd occasionally melt all the wiring on the generator and have to rewind it.

By your logic, since they weren't "good enough" to fix it perfectly, how could they know their test even worked? Should they have just shipped the generator back to the customer without testing it?

adamc
0 replies
2h54m

No, they often aren't, and UI can be complex to test.

klyrs
1 replies
13h32m

the UI/workflow either changes so fast that you don’t write tests for it

This is my number one pet peeve in software. Every aspect of every interface is subject to change always; not to mention the bonanza of dickbars and other dark patterns. Interfaces are a minefield of "operator error" but really it's an operational error.

niemandhier
0 replies
12h51m

People are building multimodal transformers that try to simulate users.

No matter how stupid the AI, if it can break your code, you have a bug.

bregma
1 replies
7h46m

Tests are just a way of providing evidence that your software does what it's supposed to. If you're not providing evidence, you're just saying "trust me, I'm a programmer."

Think back to grade school math class and your teacher has given you a question about trains with the requirement "show your work." Now, I know a lot of kids will complain about that requirement and just give the answer because "I did it in my head" or something. They fail. Here's the fact: the teacher already knows the trains will meet in Peoria at 12:15. What they're looking for is evidence that you have learned the lesson of how to solve a certain class of problems using the method taught.

If you're a professional software developer, it is often necessary to provide evidence of correctness of your code. In a world where dollars or even human lives are on the line, arrogance is rarely a successful defense in a court of law.

randomdata
0 replies
5h29m

Not quite. Tests are just a way to document your software's behaviour, mostly so that future people (including future you) working with the software know what the software is intended to do – to not leave them to guess based on observation of how undefined behaviour plays out.

That the documentation is self-validating is merely icing on the cake.

ffsm8
0 replies
9h22m

I feel like our industry kinda went the wrong way wrt UI frontend tests.

It should be much less focused on unit testing and more about flow and state representation, both of which can only be tested visually. And if a flow or state representation changed, that should equate to a simple warning which automatically approves the new representation as the default.

So a good testing framework would make it trivial to mock the API responses to create such a flow, and then automatically do a visual regression of the process.

Cypress component tests do some of this, but it's still a lackluster developer experience, honestly

This is specifically about UI frontend tests. Code that doesn't end up in the DOM is great for unit tests.
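
A rough sketch of the mock-the-API-then-visually-diff idea (Playwright's Python API assumed; the URL, endpoint, and naive byte comparison are placeholders for a real visual diff):

    import json
    from pathlib import Path
    from playwright.sync_api import sync_playwright

    BASELINE = Path("preferences_flow.png")   # previously approved representation

    def fake_preferences(route):
        # Canned API response so the flow under test is deterministic.
        route.fulfill(status=200, content_type="application/json",
                      body=json.dumps({"theme": "dark", "lang": "en"}))

    def test_preferences_flow_visual():
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.route("**/api/preferences", fake_preferences)
            page.goto("http://localhost:3000/preferences")   # hypothetical app
            current = page.screenshot()
            browser.close()
        if not BASELINE.exists():             # first run: record and auto-approve
            BASELINE.write_bytes(current)
            return
        # A real setup would do a perceptual diff and offer one-click approval.
        assert current == BASELINE.read_bytes(), "flow changed: review and re-approve"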

whatever1
14 replies
12h15m

A test will only catch an edge case you already thought of. If you thought of it anyway, why not just fix the bug instead?

Tests have burned out software engineers who waste the majority of their time deriving tests that will pass anyway. And then a significant code change will render them useless, at which point they have to be rewritten from scratch.

No, your program will not be more correct with more tests. Deal with it.

wesselbindt
3 replies
11h17m

A test will only catch an edge case you already thought of. If you thought of it anyway, why not just fix the bug instead?

The reason I do this is to prevent the bug from re-occurring with future changes. The alternative is to just remember for every part of the system I work on all edge cases and past bugs, but sadly I simply do not have the mental capacity to do this, and honestly doubt if anyone does.

whatever1
2 replies
10h53m

If a future change is relevant to an existing piece of code then the logic needs to be rethought from scratch. There's no guarantee your past tests will still be relevant or comprehensive.

So skip the tests and work more on the code instead.

wesselbindt
0 replies
8h56m

To me, advice like "just write your code in a way that you will only ever extend it, not change it" is about as realistic as "just don't write bugs".

quectophoton
0 replies
9h41m

If a requirement changes, the test for that requirement obviously has to change. These tests breaking is normal (you had a requirement that "this is red", and a test ensuring "this is red", but now suddenly higher ups decide that "this is not red", so it's obvious why this test breaking is normal).

If a requirement doesn't change, the test for those requirements should not change, no matter what you change. If these tests break, it likely means they are at the wrong abstraction level or just plainly wrong.

Those are the things I look at. I don't even care if people call stuff "unit tests", "integration tests". I don't care about what should be mocked/faked/stubbed. I don't care about whatever other bikeshedding people want to go on.

E.g. if your app is an HTTP API, then you should be able to change your database engine without breaking tests like "user shouldn't be able to change the email of another user". And you should also be able to change your programming language without breaking any tests for user-facing behavior (e.g. "`GET /preferences` returns the preferences for the authenticated user").

E.g. if your code is a compiler, you should be able to add and remove optimizations without changing any tests, other than those specific to those optimizations (e.g. the test for "code with optimizations should behave the same as code without optimizations" shouldn't change, except for specific cases like compiling only with that optimization enabled or with some specific set of optimizations that includes this optimization).
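
To make the HTTP API example concrete, here's a sketch of such a behaviour-level test (the requests library assumed; base URL, token, and endpoints are placeholders). Because it only talks HTTP, swapping the database engine or the implementation language can't break it:

    import requests

    BASE = "http://localhost:8000"                        # hypothetical test deployment
    ALICE = {"Authorization": "Bearer alice-test-token"}  # placeholder credentials

    def test_cannot_change_another_users_email():
        r = requests.patch(f"{BASE}/users/bob",
                           json={"email": "evil@example.com"}, headers=ALICE)
        assert r.status_code in (403, 404)   # forbidden, or not even visible

    def test_preferences_require_authentication():
        assert requests.get(f"{BASE}/preferences").status_code == 401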

bubblebeard
2 replies
11h43m

For me at least, designing a test will usually let me discover problems with my code which may otherwise have gone unnoticed.

Leaving the tests there once written to help us in future refactoring costs nothing.

Granted, in some languages tests are more complicated to write compared to others. In PHP it’s a nightmare, in Rust it’s so easy it’s hard to avoid doing.

I hear what you are saying though, sometimes writing tests consumes more time than is necessary.

yard2010
1 replies
10h40m

I completely agree with what you're saying - tests help me ensure nothing breaks and let me change stuff fast. But keeping EVERY piece of code around is a liability. In the best case it's free; otherwise it's another point of failure that other engineers might spend time understanding.

Code is a liability. It has to have a good reason to be there in the first place - in the case of tests, it's worth it because it saves more time on bugs, but this can easily turn into a premature optimization.

bubblebeard
0 replies
6h51m

Very well put! Couldn’t have said it better myself

quectophoton
0 replies
10h4m

Will all your team members also think about those edge cases when changing that part of the code? Will they ensure the behavior is the same when a library dependency is updated?

So, tests catch edge cases that someone else thought of but not everyone might have. This "not everyone" includes yourself, either yourself from the future (e.g. because some parts of the product are not so fresh in your mind), or yourself from now (e.g. because you didn't even know there was a requirement that must be met and your change here broke a requirement over there).

To put an easy to understand example, vulnerability checkers are still tests (and so are linters and similar tools, but let's focus on vulnerabilities). Your post implies you don't need them because you can perfectly prevent a vulnerability from ever happening again once you know about it, both because you write code that doesn't have that vulnerability and because you check that your dependencies don't have that vulnerability.

So, think of tests more like assertions or checksums.

drewcoo
0 replies
16m

A test will only catch an edge case you already thought of.

Property-based tests and model-based tests can catch edge cases I never thought of.
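
A minimal sketch of a property-based test (pytest plus the hypothesis library assumed; run-length encoding just stands in for whatever is under test):

    from hypothesis import given, strategies as st

    def rle_encode(s):
        out, i = [], 0
        while i < len(s):
            j = i
            while j < len(s) and s[j] == s[i]:
                j += 1
            out.append((s[i], j - i))
            i = j
        return out

    def rle_decode(pairs):
        return "".join(ch * n for ch, n in pairs)

    @given(st.text())
    def test_decode_inverts_encode(s):
        # The property holds for generated inputs I never enumerated myself.
        assert rle_decode(rle_encode(s)) == s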

Tests have burned out software engineers who waste the majority of their time deriving tests that will pass anyway.

Burn, baby, burn! We don't need programmers who can't handle testing.

creesch
0 replies
10h2m

I assume you are talking about unit tests here.

Thinking of edge cases is exactly what unit tests are for. They are, when used properly, a way to think about various edge cases *before* you write your code, and then, once you have written your code, to validate that it indeed does what you expected it to do beforehand.

The issue I am seeing, more often than not, is that people try to write unit tests after the fact. Which means that a lot of the value of them will be lost.

In addition to that, if you rewrite your code so often that it renders many of your tests invalid I'd argue that there is a fundamental issue elsewhere.

In more stable environments, unit tests help document the behavior of your code, which in turn helps when rewriting your code.

Basically, if you are just writing tests because people told you to write tests, it is no surprise you burn out over them. To be fair, this happens all too often. Certainly with the idiotic requirement added to it that you need 80% coverage without any other context.

If you write tests while understanding where they fit in the process, they can actually be valuable for you.

codr7
0 replies
8h58m

Writing a test is often the best way to reproduce and make sure you fixed a bug.

Keeping them for a while lets you make sure it doesn't pop up again.

10 years later, they probably don't add much value.

Tests are tools. That's like saying 'No, your food won't taste better with more salt.' It depends.

becquerel
0 replies
11h45m

You write the test to prevent the bug from being accidentally reintroduced in the future. I have seen showstopper bugs reintroduced into production multiple times after they were fixed.

ahartmetz
0 replies
9h30m

There are things that are easier to verify than to do correctly. Almost anything that vaguely looks like a proper algorithm has that property. Sorting, balanced trees, hashtables, some kinds of data splicing, even some slightly more complicated string processing.

Sometimes it's also possible to do exhaustive testing. I once did that with a state machine-like piece of code, testing transitions from all states to all other states.
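
A sketch of the "easier to verify than to do" point (my_sort stands in for a real, more intricate implementation; the random-input loop is just for illustration):

    import random
    from collections import Counter

    def my_sort(xs):
        return sorted(xs)   # stand-in for the intricate implementation under test

    def is_sorted_permutation(original, result):
        # The checker is a couple of obvious lines, however clever the sort itself is.
        return Counter(original) == Counter(result) and \
               all(a <= b for a, b in zip(result, result[1:]))

    def test_sort_on_random_inputs():
        for _ in range(1000):
            xs = [random.randint(-50, 50) for _ in range(random.randint(0, 30))]
            assert is_sorted_permutation(xs, my_sort(xs))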

7bit
0 replies
11h57m

Do you think the test is written and the bug left in? What a weird take.

And then, you write the test so that future changes (small or big) that cause regressions get noticed before the regression is put into production again. Especially in complex systems, you can define the end result and test whether all your cases are covered. You do this manually anyway, so why not just write a test instead?

YZF
14 replies
13h54m

The focus on automated unit/integration tests is a relatively modern thing (late 90's?). There was some pretty large and extremely reliable software shipped before that focus. A random example: the Linux kernel didn't have many tests (I think these days there is more testing). Unix likely didn't have a lot of "tests". Compilers tended to have them. Operating systems less so. Games (e.g. I'm sure Doom) didn't tend to have tests.

You need to find a balance point.

I think we know that (some) automated tests (unit, integration, end to end) can help build quality software. We also know good tests aren't always easy to write, bad tests make for harder refactoring, and flaky tests can suck a lot of time on large projects. At the same time it's always interesting to try different things and find out what works, especially for you if you're a solo developer.

galaxyLogic
4 replies
13h49m

"Unit-Testing" became popular about the time of Extreme Programming. The reason I think it became so popular was that its proponents programmed in dynamically typed languages like Smalltalk, and later JavaScript. It seems to me that synamic languages needs testing more than statically typed ones.

Jtsummers
3 replies
13h42m

Beck's first xUnit framework was SUnit, for Smalltalk, but Beck's second take was JUnit, which is for Java. Java was and still is a statically typed language.

Tests are there to check the logical correctness of the unit under test; very few type systems can catch errors like using - instead of + in a mathematical formula, for instance. You either need to go into dependently typed languages or languages that otherwise permit embedding proofs (SPARK/Ada).

YZF
2 replies
12h57m

In dynamic languages, tests also tend to fill the role of the compiler, which is I think the parent's point. Dynamic/interpreted language code might have syntax errors or be otherwise incorrect (including type errors), and you often don't find those until the code is run.

igouy
0 replies
1h57m

When this buggy method is compiled (not run) with Smalltalk, errors and warnings are shown. The code cannot be run because it failed to compile.

    hnQuestion
        | list max |
        list := #(1 8 4 5 3).
        ! Syntax Error: Nothing more expected
        1 to: list size do: [:i |
            max < (list at: i)
                ? Uses ifTrue:/ifFalse: instead of min: or max:
                ifTrue: [max := (list at: i)].
                ifFalse: [max := max].
        ].

dgb23
0 replies
11h1m

That’s a fair point. However dynamic languages tend to have very good linting that catch many basic type errors.

They can also run way more often during development, down to the function/expression level.

RandomThoughts3
4 replies
10h48m

Most video games have a full team of QA testers doing functional testing on the games as they go along.

Same thing for the kernel, plus some versions are fully certified for various contexts, so you can be sure fully formalised test suites exist. And that’s on top of all the testing tools which are provided (KUnit, tests from user space, an array of dynamic and static testing tools).

But I would like to thank all the people here who think testing is useless for their attitude. You make my job easier while hiring.

scarygliders
3 replies
10h28m

But I would like to thank all the people here who think testing is useless for their attitude. You make my job easier while hiring.

That's fine.

I've never written a test in my life. Have my programs ever had bugs? Sure. But I sleep very well at night knowing that I spent all my brain power and time writing actual code that Does Useful Work rather than have wasted significant lengths of my time on this planet on writing test code to test the code that does the Useful Work.

You speak of attitude and smugly "thank" those who don't write tests, as that acts as your hire-or-not filter. With an attitude like that, I'd 100% not want to work for you anyway.

RandomThoughts3
2 replies
10h20m

I've never written a test in my life. Have my programs ever had bugs? Sure. But I sleep very well at night knowing that I spent all my brain power and time writing actual code that Does Useful Work rather than have wasted significant lengths of my time on this planet on writing test code to test the code that does the Useful Work.

And that’s why I never want to have to work with you on anything shipping to a user ever.

Don’t get me wrong, the field is riddled with people who think testing is beneath them and wash their hands of the quality of what they ship and what they put their users through. That’s an issue to fix, not a situation we should tolerate.

scarygliders
1 replies
9h27m

Don’t get me wrong, the field is riddled with people who think testing is beneath them and wash their hands of the quality of what they ship and what they put their users through. That’s an issue to fix, not a situation we should tolerate.

See, this is my point. It's not that testing is beneath me, it's that my stuff gets tested anyway.

Here's the test: Does it fucking work or not?

You do that by running the thing. If it explodes, find out why and fix it. Job done. No thought or line of code was wasted in writing tests, all brain power was used to initially write a piece of code - which initially had a bug of course - and then said bug was fixed.

My code gets tested. By people using it. Or by me testing it as I write it ("does it fucking work").

There is really only one test.

You can choose to expend your brainpower and time on this planet writing code that will never actually be run by an end-user, or you can just write the fucking code that the end-user will run. That's how I work. Write it and run it. That's the test.

Test code written to test Useful Working Code is time wasted. It's like putting stabiliser wheels on bicycles - you're either gonna be stuck forever riding a bike with stabilisers, or you grow up and rip them off and have a few falls on the bike then become confident and competent enough to ride that bike without them. And have more freedom and time to experiment and go places you couldn't when they were put on.

So yeah. I definitely wouldn't work with people who like wasting my and their time on this Earth.

Write it. Run it. It either does what it's supposed to or not. If it doesn't, find out why and fix it. Or discover that your function/code abstraction/thought was shit in the first place and write it differently - oh, and that's the worst part about writing code that tests the Code That Does The Work; say you discover that the function you're writing was a load of bollocks and needs to be highlighted and simply erased - there goes all that test code you spent brainpower and time on along with it. And now you have to spend even more time writing new test code to test the Code That Actually Does Useful Work.

No thanks. And goodbye.

RandomThoughts3
0 replies
6h13m

My code gets tested. By people using it.

Users are not guinea pigs. They deserve better.

Write it. Run it. It either does what it's supposed to or not. If it doesn't, find out why and fix it

That's called functional testing, and that's actually testing. You are one step removed from actually formalising what you do and getting non-regression testing for free. At that point, I think you are either arguing for the sake of it and do actually realise that testing is important, or you somehow confuse testing with unit testing, which is only a narrow subset of it.

amluto
1 replies
13h50m

Random example is that the Linux kernel didn't have much tests (I think these days there is more testing).

As the author of many of Linux’s x86 tests: many of those tests would fail on old kernels, and a decent number of those failures are related to very severe bugs. Linux has worked well for many years, but working well didn’t mean it wasn’t buggy.

YZF
0 replies
13h2m

As was said in another comment, tests don't prove the lack of bugs. There is no software of enough complexity without bugs.

Working is something ;) Lots of software barely does that and there is certainly plenty of software with tests that doesn't meet the no-test Linux quality bar.

That said, tests certainly have their place in the world of software quality, so thanks for your work!

dgb23
0 replies
11h7m

My old man who will always gladly mention that „we did this already in the 80‘s and it was called frobniz“ whenever I bring up a technique, architecture etc. would beg to differ.

When I asked him about TDD he said they did practically the same thing. Forgot what it was called though.

One recent gem was when he shared a video where they explained the recent crowdstrike debacle: „Look they’re making the same mistakes as 40 years ago. I remember when we dynamically patched a kernel and it exploded haha…“.

In any case, writing tests before writing the implementation was a thing during the 80‘s as well for certain projects.

Ma8ee
0 replies
9h34m

Of course there were tests, just not automated tests!

In better run organisations they had test protocols, that is, long lists of tests that had to be run by manual testers before any new version could be released. Your manager had to make sure these testers were scheduled well in advance before the bi-annual release date of your latest version of the software.

So listing old software and claiming that it didn't have many tests is misleading, to say the least.

akkartik
10 replies
14h51m

When you have no tests your problems go away because you don’t see any test failures.

The flip side of this is the quote that "tests can show the presence of bugs, but never their absence". It better fits my experience here; every few months I'd find a new bug and diligently write a test for it. But then there was a new bug in a few months, discovered by someone in the first 10 minutes of using it.

I'm sure I have bugs to discover in the new version. But the data structures I chose make many of the old tests obsolete by construction. So I'm hopeful that I'm a few bugs away from something fairly stable at least for idle use.

Tests are definitely invaluable for a large team constantly making changes to a codebase. But here I'm trying to build something with a frozen feature set.

monkpit
8 replies
14h33m

If your tests break or go away when your implementation changes, aren’t those bad tests by definition?

Jtsummers
5 replies
13h57m

A lot of tests don't survive implementation changes, that doesn't make them "bad tests by definition". It means their value came and went. Think of it like scaffolding. You need it for a time, then the time is up, and it's removed. That doesn't make it bad, it was still necessary (or at least useful) for a time.

When there's an implementation change you'll likely end up discarding a fair number of unit tests and creating new ones that reflect the new implementation details. That's just natural.

seanmcdirmid
3 replies
13h8m

A lot of tests, especially unit tests, are just change detectors and get updated or go away when change happens; that is just working as intended. It is fairly hard to write tests that aren't change detectors; it requires you to really reason abstractly about the contract of your module, or to write integration tests that are moving a bunch of things at once.

zem
1 replies
12h29m

Small, fine-grained black box tests can be really good for this. In my last project, a type checker, the vast majority of the test suite was code snippets and assertions about expected errors the checker needed to catch, and it was an invaluable aid when making complex changes to the implementation.

seanmcdirmid
0 replies
29m

Anything that transforms or processes text, like a compiler or type checker, is pretty easy to test. You get into trouble with user interfaces, however.

watwut
0 replies
11h24m

If that is the case too often, I ditch them and write integration tests for that part.

codr7
0 replies
9h1m

Yeah, especially when you're exploring new ground.

Unit tests are awesome for fleshing out APIs; but once the fundamentals are in place, the tests no longer add any value.

akkartik
1 replies
14h17m

I have two answers:

1. Yes. To the same extent that we are all bad people by definition, made of base material and unworthy urges.

I'd love to have better programmers show me how I can make my tests better. The code is out there.

2. Even if I have good tests "by definition", a radical rewrite might make old tests look like "assert(2x1 == 2), assert (2x2 == 4)". Tests exist in a context, and radically changing the context can change the tests you need.

---

This is not in OP, but I do also have a problem of brittle tests in my editor. In this case I need to test a word-wrapping algorithm. This depends intimately on pixel-precise details of the font. I'd love for better programmers than me to suggest how I can write tests that are robust and also self-evidently correct without magic constants that don't communicate anything to the reader. "Failure: 'x' started at x=67 rather than x=68." Reader's thought: "Why is this a problem?" etc. Comments appreciated on https://git.sr.ht/~akkartik/lines.love/tree/main/item/text_t.... The summary at https://git.sr.ht/~akkartik/lines.love/tree/main/item/text_t... might help orient readers.

AdieuToLogic
0 replies
13h26m

> If your tests break or go away when your implementation changes, aren’t those bad tests by definition?

1. Yes. To the same extent that we are all bad people by definition, made of base material and unworthy urges.

Good and bad are forms of judgement, so let's eschew judgement for the purposes of this reply :-).

I'd love to have better programmers show me how I can make my tests better.

Better is also a form of judgement and, so, I will not claim I am or am not. What I will claim to do is offer my perspective regarding:

This is not in OP, but I do also have a problem of brittle tests in my editor.

Unfortunately, brittle tests are the result of being overly specific. This is usually due to tests enforcing implementation knowledge instead of verifying a usage contract. The example assertions above are good examples of this (consider "assert (canMultiply ...)" as a conceptual alternative). What helps mitigate this situation is use of key abstractions relevant to the problem domain along with insulating implementation logic (note that this is not the same as encapsulation, as insulation makes the implementation opaque to collaborators).
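
A small sketch of that distinction (the Cart class is made up): the first test pins an internal detail and breaks on any re-representation, the second only exercises the usage contract.

    class Cart:
        def __init__(self):
            self._items = {}                  # internal detail: sku -> quantity
        def add(self, sku, qty=1):
            self._items[sku] = self._items.get(sku, 0) + qty
        def total_quantity(self):
            return sum(self._items.values())

    def test_brittle():
        cart = Cart()
        cart.add("apple", 2)
        assert cart._items == {"apple": 2}    # enforces implementation knowledge

    def test_contract():
        cart = Cart()
        cart.add("apple", 2)
        cart.add("apple")
        assert cart.total_quantity() == 3     # survives any internal re-representation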

In your post, you posit:

Types, abstractions, tests, versions, state machines, immutability, formal analysis, all these are tools available to us in unfamiliar terrain.

I suggest they serve a purpose beyond when "in unfamiliar terrain." Specifically, these tools provide confidence in system correctness in the presence of change. They also allow people to reason about the nature of a system, including your future-self.

Perhaps most relevant to "brittle tests" are the first two you enumerated - types and abstractions. Having them can allow test suites to be defined against the public contract they provide. And as you rightly point out in your post, having the wrong ones can lead to problems.

The trick is, when incorrect types and/or abstractions are identified, this presents an opportunity to refine understanding of the problem domain and improve key abstractions/collaborations accordingly. Functional testing[0] is really handy to do this fairly rapidly when employed early and often.

HTH

0 - https://en.wikipedia.org/wiki/Functional_testing

creesch
0 replies
10h11m

Automated tests ideally don't entirely replace manually executed tests. What they do replace is repetitive regression tests that don't need to be executed manually.

In an ideal world this opens up room for exploratory testing where someone goes "off-script" and focuses specifically on those areas that are not covered by your automated tests.

The thing is that automated tests aren't really tests, even though we call them that. They are automated checks at specified points, so they only check the outcome at those points in time. So yeah, they are also completely blind to the sort of thing a human* might easily spot while using the application.

*Just to be ahead of the AI bros, we are not there yet, hold your horses.

h1fra
4 replies
9h13m

I'm puzzled by people debating tests. Why such hate? They catch bugs, prevent breaking changes, and ensure API stability. I have never seen tests prevent me from refactoring anything. I guess it depends on the company and the processes :thinking:

swat535
0 replies
1h48m

Because writing good tests is very hard and many engineers are simply mediocre, so they write brittle tests that require a lot of time to fix and don't actually test the right things (e.g. too many mocks), or they are simply overconfident (like some people in the thread) that their code will always work.

Also the TDD cultists are partially to blame for this attitude as well. Instead of focusing on teaching people how to write valuable tests, they decided to preach dogma and that frustrated many engineers.

I'm firmly in the circle of writing tests of course, I don't think a system that is not tested should ever be in production (and no, you opening your browser on a local machine to see if it works is not sufficient testing for production..).

eithed
0 replies
8h14m

Tests are tools - you won't be using a screwdriver for everything, even though it's a tool that's useful for many things.

Having said that - tests, codebase and data consistency, static types are things I'd not want to be without

codr7
0 replies
9h7m

There are different kinds of tests.

Integration tests at the outer edges often give you the most bang for the buck.

Granular, mocked unit tests often add little value and will become a maintenance burden sooner or later.

And some of it is unconscious; maybe having that big, comfy test suite is preventing the software from evolving in optimal directions, because it would just be too much work and risk.

HelloNurse
0 replies
9h0m

I think there is a mostly psychological "problem": tests are not perceived as progress (unless you are mature enough to treat quality assurance as an objective) and finding them fun to write or satisfying to run is an unusual acquired taste.

lelanthran
3 replies
10h56m

When you have no tests your problems go away because you don’t see any test failures.

Never have I tested anything and NOT found a bug, and most things I tested I thought were already OK to ship.

It's a trade-off. Most of the business world ran on, and to some extent still runs on, Excel programs.

There are no tests there, but for the non-tech types who created these monsters, spending time on writing a test suite has a very real cost - there's less time to do the actual job they were hired for!

So, yeah, each test you write means one less piece of functionality you add. You gotta make the trade-off between "acceptably (in frequency and period) buggy" and "absolutely bullet-proof no matter what input is thrown at it".

With Excel programs, for example, if the user sees an error in the output, they fix the input data, they don't typically fix the program. It has to be a dealbreaker bug before they will dive into their code again to fix the program.

And that is acceptable to them.

Ma8ee
2 replies
9h22m

There are no tests there, but for the non-tech types who created these monsters, spending time on writing a test suite has a very real cost - there's less time to do the actual job they were hired for!

Not spending time on writing tests has a very real cost - a lot of time is spent on figuring out why your forecast was way off, or your year end figures don't add up.

Not to mention how big parts of the world were thrown into austerity, causing hundreds of thousands of deaths, due to errors in your published research [0].

[0] https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt#Metho...

lelanthran
1 replies
8h42m

> It's a trade-off.

> spending time on writing a test suite has a very real cost

Not spending time on writing tests has a very real cost

Yes. That's what "trade-off" means.

Ma8ee
0 replies
6h28m

My point is that there isn't a tradeoff between getting "real work" done and writing tests. Either you write tests, or you spend even more time mitigating the consequences of not writing tests. You can't save time by not writing tests (except for the most trivial cases).

osigurdson
2 replies
13h8m

Tests written for pure functions are great. Tests written for everything else may be helpful but might not be.

Ma8ee
1 replies
9h29m

You need tests for all parts of the functionality you care about. I write tests to make sure that what is persisted is what we get back. Just the other day I found a bug because our database didn't care about the timezone offset for our timestamps.
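
A minimal sketch of that persist-and-read-back style of test (sqlite3 stands in for the real database; timestamps are stored as ISO-8601 strings here):

    import sqlite3
    from datetime import datetime, timezone, timedelta

    def test_timestamp_roundtrip_keeps_offset():
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, at TEXT)")
        original = datetime(2024, 7, 1, 9, 30, tzinfo=timezone(timedelta(hours=2)))
        db.execute("INSERT INTO events (at) VALUES (?)", (original.isoformat(),))
        (stored,) = db.execute("SELECT at FROM events").fetchone()
        restored = datetime.fromisoformat(stored)
        # Fails if the storage path silently drops the +02:00 offset.
        assert restored == original and restored.utcoffset() == original.utcoffset()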

osigurdson
0 replies
4h36m

Not suggesting that testing other things isn't useful, but it's not as straightforward and not as obviously beneficial as pure function testing. It is easy to just dogmatically pile on tests, but they may not be helpful.

6510
2 replies
8h11m

When you have no tests your problems go away because you don’t see any test failures.

Never have I tested anything and NOT found a bug, and most things I tested I thought were already OK to ship.

I wasn't a very fast typist, I could do about 180 strokes per minute. My teacher, a tiny 80 year old lady, talked the whole time to intentionally distract her 5-6 students. It was a hilarious experience. One time, when I had an extra slow day, the monologue was about her learning to type: the teaching diploma required 300 strokes per minute, from print, handwriting and dictation. Not on such a fancy electronic typewriter! We had mechanical typewriters! And no correction lint! She was not the fastest in her class by far and many had band-aids around smashed fingers. Trying to read the text, not listen, and not burst out in laughter, I think she forced me down to 80 strokes per minute. Sometimes she had me sit next to a girl doing 450 strokes per minute. Sounded like a machine gun. They would have casual conversation with eye contact. I should not have noticed it, I was supposed to be typing.

When writing code and think about those "inevitable" bugs I always think of the old lady, who had 1000 ways of saying: you only think you are trying hard enough... and: we had no correction lint....

Take a piano, there is no backspace. You are supposed to get it right without mistakes.

If you have all of those fancy tools to find bugs, test code, the ability to quickly go back and forwards, of course there will be plenty of mistakes.

Whether they need to be there, no one knows.

Yossarrian22
1 replies
4h19m

World-class, best-in-the-world gymnasts still fall off a balance beam from time to time.

Mistakes are inevitable; it’s why whiteout and then word processors were made

6510
0 replies
1h33m

Pain is a great teacher.

ChrisMarshallNY
1 replies
9h53m

> Never have I tested anything and NOT found a bug, and most things I tested I thought were already OK to ship.

I have found that, in my own case, every time I’ve written a unit test, it has exposed bugs.

I don’t usually do the TDD thing, where I write failing tests first (but I do it, occasionally), so these tests are usually against code that I already think works.

That said, I generally prefer test harnesses to unit tests[0]. They still find bugs, but the workflow is less straightforward. They also cause me to do more testing, as I develop, so the bugs are fixed in situ, so to speak.

[0] https://littlegreenviper.com/testing-harness-vs-unit/

drewcoo
0 replies
20m

That said, I generally prefer test harnesses to unit tests[0].

That's a strange redefinition of harness.

The larger-scoped tests are more often called integration or even system tests.

And while I'm here, those are slow tests that are harder to debug and require more maintenance (often maintenance of an entire environment to run them in!). Unit tests are closer to what they test, fast, and aren't tied to an environment - they can be run on every push.

jjice
0 replies
4h56m

Completely agree on tests. It's much more enjoyable for me to write some automated tests (unit or integration) and be able to re-run them over and over again than it is for me to manually run some HTTP requests against the server or something. While more work up front, they stay consistent and I can feel more comfortable with my code when I release.

It's also just more fun to write code (even a test) than it is to manually run some tests over and over again, at which point I eventually get lazy and skip it for that last "simple, inconsequential" commit.

Coming from a place where we never wrote tests, I introduce way fewer bugs and feel way more confident every day, especially when I change code in an existing place. One trick is to not go overboard and to strike an 80/20 balance for tests.

dgb23
0 replies
8h48m

I watched a video by Russ Cox that was recommended in a recent thread, Go Testing By Example:

https://www.youtube.com/watch?v=X4rxi9jStLo

There's _a lot_ of useful advice in there. But what I wanted to mention specifically is this:

One of the things he's saying is that you can sometimes test against a simpler (let's say brute force) implementation that is easier to verify than what you want to test.
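
A sketch of that "check against a simpler implementation" idea (maximum subarray sum used as a stand-in example):

    import random

    def max_subarray_fast(xs):
        # Kadane's algorithm: the clever version we'd actually ship.
        best = cur = xs[0]
        for x in xs[1:]:
            cur = max(x, cur + x)
            best = max(best, cur)
        return best

    def max_subarray_brute(xs):
        # Obviously correct, hopelessly slow: the easy-to-verify reference.
        return max(sum(xs[i:j]) for i in range(len(xs))
                                for j in range(i + 1, len(xs) + 1))

    def test_fast_matches_brute_force():
        for _ in range(500):
            xs = [random.randint(-10, 10) for _ in range(random.randint(1, 20))]
            assert max_subarray_fast(xs) == max_subarray_brute(xs)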

There's a deeper wisdom implied in there:

The usefulness of tests is dependent on the simplicity of their implementation relative to the simplicity of the implementation of what they are testing.

Or said more strongly, tests are only useful if they are simpler than what they test. No matter how many tests are written, in the end we need to reason about code. Something being a "test", doesn't necessarily imply anything useful by itself.

This is why I think a lot of programmers are wary of:

- Splitting up functions into pieces, which don't represent a useful interface, just so the tests are easier to write.

- Testing simple/trivial functions (helpers, small queries etc.) just for coverage. The tests are not any simpler than these functions.

- Dependency inversion and mocking, especially if they introduce abstractions just in order to write those tests.

I don't think of those things in absolute terms though, one can have reasons for each. The point is to not lose the plot.

devjab
0 replies
8h52m

It depends a lot on what you work on and how you program. Virtually none of our software has actual coding errors, and when developers write new parts or change them, it’s always very obvious if something breaks. Partly because of how few abstractions we use, partly because of how short we keep our chains, letting every function live in isolation and almost never be used by multiple parts of the software.

Both the lack of abstractions and the lack of reuse go against a lot of principles, and it’s not exactly like we refuse to do either religiously, but the only real principle we have is YAGNI, and if you build an abstraction before you need it you’re never going to pass a code review. As far as code reuse goes, well, in a perfect world it’s sort of stupid to have a lot of duplicate code. But in a world where a lot of code is written on a Thursday afternoon by people who are tired, their babies kept them awake, the meetings were horrible, management doesn’t do the right things and so on - in that world it’s almost always better to duplicate code so that it doesn’t eventually become a complicated abstract mess. It shouldn’t, and I’m sure it doesn’t in some places, I’ve just never worked in such a place.

I have worked with a lot of people who followed things like clean code religiously, and the results were always unwieldy code where even small changes would take weeks to implement. Which is completely counterproductive to what the actual business needs. The benefit of YAGNI is that it mostly applies to tests as well, exactly because it’s basically impossible to make changes without knowing exactly what impact you’re having on the entire system.

What isn’t easy is business logic, and here I think tests are useful. Or at least they can be. Because far too often, the business doesn’t have a clue what they want up front. Even more often the business logic will change so rapidly that automated tests become virtually useless, since you’re going to rely on acceptance tests anyway.

Like I said, I’m not religious about it. I sometimes write tests, but in my anecdotal experience things like full test-coverage is an insane waste of time over a long period.

datavirtue
0 replies
5h17m

He was basically starting over. Definitely need to delete the tests. One of the issues with enterprise development is choking the project with tests and other compliance shit as soon as people start coding. Any project should be in a workable/deployable state before you commit to tests.

munchler
42 replies
15h33m

Giving up tests and versions, I ended up with a much better program.

I can’t understand how anyone would willingly program without using source code control in 2024. Even on a single-person project, the ability to work on multiple machines, view history, rollback, branch, etc. is extremely valuable, and costs almost nothing.

Maybe I’m misunderstanding what the author means by “versions”?

xelxebar
12 replies
14h59m

As programmers we are inundated with choice and options. Our tooling and whatever the zeitgeist considers "best tooling" tends to err on the side of making $THING easier to do.

But having 1000 easy options always available introduces severe cognitive burden to pick the correct choice. That's part of the reason why we as an industry have enshrined all sorts of Best Practices and socially shame the non-adherents.

Don't get me wrong, bad architecture and horrible spaghetti code is terrible to work with. However, questioning the things that feel Obviously Correct and exploring different and austere development environments that narrow our set of available choices and tools can sincerely operate to sharpen our focus on the end goal problem at hand.

As for version control, branching encourages cutting a program into "independent features"; history encourages blind usage of potentially out-of-date functional units; collaborative work reifies typically-irrelevant organizational boundaries into the code architecture (cf Mel Conway); etc.

Version control's benefits are also common knowledge, but there are real tradeoffs at the level of "solving business problem X". It's telling that such tradeoffs are virtually invisible to us as an industry.

sethherr
11 replies
14h53m

branching encourages cutting a program into "independent features"

But, you can choose not to branch then?

I’m really confused about the trade offs of version control. I can understand trade offs of branching strategies, but at its most fundamental (snapshots of your code at arbitrary times), I can’t think of any drawbacks?

tmn
8 replies
14h31m

I’m working on a feature that is a moderate refactoring and extension of an existing feature. I’m in some sense taking on extra burden by ‘sculpting’ my change out of the existing code and then working backwards to come up with the logically contained and discrete commits to get from where I started to where I want to go.

It would be nice to just make my change without having to show it in a series of discrete steps.

I’m not actually opposed to this standard, but trying to show one perceivable downside that op may be alluding to (I’m not actually sure?)

sethherr
7 replies
14h27m

That’s not version control, that’s something you’ve chosen to do with version control.

You could just check in your code every night. And, vs not having those commits (even without messages) - what could possibly be the downside?

tmn
4 replies
14h26m

Maybe your confusion is in your assumption of what’s being discussed

sethherr
3 replies
14h19m

I’m discussing version control.

tmn
2 replies
14h5m

And everyone else is discussing behaviors that are downstream of version control

drawfloat
0 replies
11h56m

But only if you choose to use them. I agree with the other commenter: it's very hard to see what trade-offs there are to pressing a button to initialise a repo at the start, then committing any changes at the end of each session (or intermittently) so there's a copy of current progress somewhere.

If the OP is referring to version control because they need to handle multiple branch types, switching between versions, etc., that is much more involved... but it also makes it even harder to see how you can manage that by simply dropping version control entirely.

From the article, it does seem like it's not about any sort of specific feature they use, but rather the sheer basic "save versions of code" aspect of VC:

"Version control kept me attached to the past"

To go back to an earlier comment, this honestly sounds like burnout to me if you're having temporal anxiety from saving code.

arthens
0 replies
5h2m

If it's your personal project, you are in charge of deciding which "behaviors that are down stream of version control" you want to adopt. If you are applying unnecessarily complex processes for a given project, that's on you.

tmn
1 replies
14h23m

This is all in a professional environment requiring code review for actual submission. I need to follow this process to actually deliver

sethherr
0 replies
14h17m

This sounds like you’re discussing code review and coding standards, not version control.

xelxebar
1 replies
11h17m

You're, perhaps unintentionally, moving the goalposts a bit. "Version control" doesn't just mean database of code snapshots. It simultaneously connotes all the related functions and development processes we have around version control.

Are you familiar with the artistic practice of adding "artificial" constraints in order to promote creativity and productivity? See Gadsby, the novel written without using the letter "e", or anything produced by Oulipo.

The point is that we have a superabundance of choice with software architecture and programming tools. One subset of those tools comprises things provided by version control. Give yourself a version control-less, limited development environment and see how it influences the way you think about and practice coding. There will be sharp edges, but if you give it an honest attempt, you will also very likely discover novel and better ways of doing more with less.

There are many things you can try. Disable syntax highlighting in your editor; try exclusively using a line editor such as ed; flatten your codebase into a single directory; code everything in a single file; organize your data structures to minimize pointer chasing; support extreme cross-platform compatibility (10 OSes?); write platform-independent code using "only assembly" (a la Forth, sectorlisp, or whatever); write a thing and then nuke and rewrite 5 times; etc.

IMHO, value in the above is most easily discovered by retaining a strong introspective eye throughout your personal development process. Where are the pain points? What processes force you to think about non-end goal issues? When does coding feel the most glorious? When did you have the deepest insights? Blah blah blah.

arthens
0 replies
5h6m

You're, perhaps unintentionally, moving the goalposts a bit. "Version control" doesn't just mean database of code snapshots. It simultaneously connotes all the related functions and development processes we have around version control.

Not OP, but I'd argue you are the one moving the goalpost here.

If someone says they are not using "version control", I'm going to assume that they are not using git (or similar) at all. Any other meaning would be so arbitrary to be almost useless. No one can guess where you draw the line in the sand between "I'm not using any version control tool" to "I'm technically using a version control tool but I'm not doing version control because I don't do X,Y,Z".

I personally can't imagine writing any non trivial piece of code without using git. Even in its more basic form, the advantages are overwhelming. But at no point of my 20+ years of development I've ever applied the same rigorous version control rules of professional environments to my personal projects. At best I've used branches to separate features (rarely, and mostly when I got tired of working on a problem and wanted to work on a different one for some time), and PRs to have an opportunity to review the changes I made to see if I forgot to do something. At "worst" I simply used it as a daily snapshot tool (possibly with some notes about what's left to do) or as a checkpoint after getting something complicated working.

If the author has finally figured out rigorous source control can be unnecessary and counterproductive on small projects - good on them! But if that's the case then say that. Calling the fine tuning of which process you want (or don't want) to use "no version control" is just misleading.

shermanyo
11 replies
15h25m

I think in this case, the author means coding version logic into the app itself, e.g. versioned API endpoints for backwards compatibility

resonious
5 replies
15h21m

He specifically mentions version control and avoiding merge conflicts, so I'm pretty sure it's stuff like git that he's finding himself cautious about.

jay_kyburz
3 replies
11h34m

How do you get a merge conflict with yourself?

lnenad
0 replies
8h36m

By trying really hard

codr7
0 replies
8h54m

Thanks a bunch, now the coffee is on my keyboard.

wtetzner
0 replies
3h41m

That makes sense, but then why not just work on trunk and don't worry about branching?

shepherdjerred
3 replies
15h22m

I don't think so:

Back in 2015 I was suspicious of abstractions and big on tests and version control. Code seemed awash in bad abstractions, while tests and versions seemed like the key advances of the 2000s.

In effect I stopped thinking about version control. Giving up tests and versions, I ended up with a much better program.

Version control kept me attached to the past. Both were counter-productive. It took a major reorientation to let go of them.
hetman
1 replies
15h2m

Your quotes seem to reinforce parent's assertion he's not talking about version control in the form of tooling but some kind of versioning in the code itself: "...while tests and versions..."

g15jv2dp
0 replies
14h45m

Holy cherry-picking batman.

mattacular
0 replies
15h18m

I don't get what they mean by "Version control kept me attached to the past."

You don't have to look at the history to use other features of version control. Typically everything is moving forwards in a repository.

layer8
0 replies
15h19m

This is about a desktop text editor built with Lua on a C++-based native framework for writing 2D games: https://git.sr.ht/~akkartik/lines2.love Very unlikely to have versioned API endpoints involved.

raincole
3 replies
15h22m

The author is probably experiencing mental fatigue or even burnout about programming.

If version control bothers you that much I'd say it's a good sign that you need to take a break.

akkartik
2 replies
14h29m

This seems very far from my subjective experience. The little platform-independent programs I write for myself and publish are a source of spiritual rejuvenation that support my day job in a more conventional tech org with a large codebase, large team and constantly changing requirements.

I'm not "bothered" by version control. I've not even stopped using it. As I say in the post, I just don't think about it much, worrying about merge conflicts and so on, when I'm programming. I've stopped leaning on version history as a tool for codebase comprehension. (More details: https://akkartik.name/post/wart-layers)

This comment may also help clarify what I mean: https://news.ycombinator.com/item?id=41158040

lnenad
1 replies
8h37m

All of your comments are without any arguments against VC. It also seems there is a misunderstanding of your state: you seem to use it, but you aren't focused/disciplined in its use?

I'm not "bothered" by version control. I've not even stopping using it. As I say in the post, I just don't think about it much, worrying about merge conflicts and so on

How is using VC, especially in a solo project, "bothering"? It really does seem you just hate the tooling around modern software development and you just want to spit out code that does something for you and yourself. Which, again, is fine, but it's usually not a good idea if you are making something for other people/users.

akkartik
0 replies
2h33m

But I said VC is not "bothering"!

Perhaps I should replace the word "versions" in my post with "workflows". In some situations the workflows I settle into contribute to a feeling of being stuck in a local optimum. Throwing away familiar and comfortable workflows can help find a global optimum. It's just the first step, though. It takes hard work to build everything all at once. But it can be valuable for some projects if you aren't happy with where you are.

bugbuddy
3 replies
15h19m

Could this person be intentionally giving bad advice?

bubblebeard
2 replies
11h22m

I think it’s just an alternative way of thinking. It’s not one I agree with, but I can see where the author is coming from. Think he’s just tired of spending time on useless tasks around his projects. For all we know they may be, but I do have a hard time viewing testing and version control as overhead xD

codr7
1 replies
8h51m

I'm pretty sure he's trying to find his balance, because it is always a balance and we tend to err big on the other side.

bubblebeard
0 replies
6h58m

Yeah exactly

nine_k
2 replies
14h49m

The author does not seem to have to support any professional / paying users, and wants freedom to experiment more than a guarantee of a known working version. The author also does not seem to work on large systems, or do significant teamwork (that is, not being the only principal author).

In such a situation, all these tools may not provide a lot of value. A flute player in a large orchestra playing a complex symphony needs notes and/or a conductor; a flute player playing solo against a drum machine, or, playing free jazz, does not much need notes, and would likely be even hindered by them.

imiric
1 replies
11h56m

Tests and version control still have immense value when working solo.

Tests help with ensuring that you don't introduce regressions, and that you can safely refactor. It's likely that you test changes manually anyway, so having automated tests simply formalizes this, and saves you time and effort in the long run.

Version control helps you see why a change was done, and the ability to revert changes, over longer periods of time. We tend to forget this even after a few weeks, so having a clean version control history is also helpful for the future version of you.

Not having the discipline to maintain both, and choosing to ignore them completely, is just insane to me. But, hey, whatever works for OP. I just wouldn't expect anyone else to want to work with them.

The only scenario where I could conceive not using either is in very small projects with a short lifespan: throwaway scripts, and the like. The author is writing their own language and virtual machine, which don't really align with this. Knowing their philosophy, I would hesitate to use anything they made, let alone contribute to it.

akkartik
0 replies
1h43m

Whatever floats your boat, but just to be clear my own language and virtual machine do have tests. The value of tests depends on the domain. Graphics and games benefit less from tests. My graphical text editor straddles the worlds.

I'm still using version control as I've clarified elsewhere. I wasn't expecting this post to gain such a broad audience; I realize now it is really about how one's workflows can keep one stuck in a rut, a local optimum.

akkartik
2 replies
14h42m

I'm trying to build something small with a quickly frozen feature set. I've chosen to build on a foundation that changes infrequently. There is more background at https://akkartik.name/freewheeling.

You're absolutely right that this approach doesn't apply to most programs people build today, with large teams and constantly mutating requirements.

I do still have source control. As I say in OP, I just stopped worrying about causing merge conflicts with other forks. (And I have over 2 dozen of them now; again, see the link above for details.) So I have version control for basic use cases like backups or "what did I just change?" or getting my software on new machines. I've just stopped thinking of version control, narrowly for this program, as a way to help _understand_ and track what changed. (More details on that: https://akkartik.name/post/wart-layers) One symptom of that, just as an example of what I mean: I care less about commit message hygiene. So version control still exists, but it's lower priority in my mind as a part of "good programming practice" for the narrow context of programs like this with frozen feature sets, intended to turn into durable artifacts that last decades.

pseudoramble
0 replies
6h23m

This context helps me understand more what you're getting at quite a bit. I dunno if I could manage the same approach but I at least appreciate how you're thinking about it. Thanks!

galaxyLogic
0 replies
13h45m

O the joys of solo-programming! I do it too and the thing I find interesting about it is I think a lot about how to program better like you are. If I was working on a team I would probably not think much about it, I would be doing just what my boss tells me to do.

shepherdjerred
1 replies
15h23m

Yeah, this is not good advice for the average person, even for solo projects.

codr7
0 replies
8h52m

I agree, and the author probably does as well.

I didn't get the feeling it was meant as general advice.

voiper1
0 replies
10h51m

Yep, commit your code when it "works". Then I can safely go off on a harebrained experiment, knowing I can easily throw away the changes to get back to what worked.

behnamoh
22 replies
15h36m

Types, abstractions, tests, versions, state machines, immutability, formal analysis, all these are tools available to us in unfamiliar terrain. Use them to taste.

How did people program in Lisp for decades? I like types and such, and have even gone so far as to write Python like it's Rust. But in the end I realized dynamic languages have an appeal for a reason, and by using types all over the place, I was not getting the benefits of a dynamic language like Python.

When context is mostly static, dynamic languages shine. Context could be, for example, the structure of the directory. If I want to read a file and I know that the file exists, throwing a bunch of type checks around the file-reading operation is just overkill and slows down development.

seer
13 replies
15h13m

Hmm, rarely have I thought types were a burden, rather than help, maybe I’m weird.

Maybe I spend effort in making sure my types are useful and easy to work with, but on one previous TypeScript project I got to a state where _all_ of my database queries were automatically typed, and all of my requests and responses too, so both input and output were guaranteed to be correct by the compiler, and any bugs or errors that were left were in the business logic.
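
To illustrate the kind of end-to-end typing described above, here is a minimal TypeScript sketch; all the names and types in it are made up for the example, not taken from the project in question:

  // Illustrative only: shapes the compiler can check end to end.
  type GetUserRequest = { userId: string };
  type GetUserResponse = { id: string; name: string; email: string };

  // Pretend this row type is generated from the database schema.
  type UserRow = { id: string; name: string; email: string; password_hash: string };

  declare function queryUserById(id: string): Promise<UserRow>;

  // The compiler holds the handler to exactly GetUserResponse: forgetting a
  // field, or adding password_hash to the returned literal, is a compile-time error.
  async function getUser(req: GetUserRequest): Promise<GetUserResponse> {
    const row = await queryUserById(req.userId);
    return { id: row.id, name: row.name, email: row.email };
  }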

It was incredibly liberating - like pairing with someone who was junior but very pedantic. I ended up writing almost no unit tests and only having integration-level tests, because the job of those tests was mostly covered by types.

And writing the code itself was such a pleasure - you get immediate feedback on whether your program is correct _as you type it_. The most bizarre consequence of all this to me was writing a program for almost 2 hours, hundreds of lines of code, and then executing it and having it do exactly what you wanted on the first compile. That was both scary and exciting!

One can get over-constrained with types for sure, where you're spending more time "fighting the types" rather than writing your code. But this is all just learning; once you understand how the type system works it all becomes easy to work with.

It was the same story with tests - once I started testing everything, it wasn't easy to adapt my code to be testable, it took effort, but then I learned how to make code pure, move state to the edges, manage dependencies etc, and all of those are useful practices in their own right, regardless of whether you write the tests or not.
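
As a tiny, made-up illustration of "make code pure, move state to the edges": instead of letting a function read the clock itself, the caller passes the current time in at the edge, which makes the function trivially testable:

  // Hard to test: reaches out to the clock internally.
  function renewalNoticeImpure(user: { expiresAt: Date }): string {
    const daysLeft = Math.ceil((user.expiresAt.getTime() - Date.now()) / 86_400_000);
    return `Your plan renews in ${daysLeft} day(s).`;
  }

  // Easy to test: the caller supplies "now" at the edge; the function stays pure.
  function renewalNotice(user: { expiresAt: Date }, now: Date): string {
    const daysLeft = Math.ceil((user.expiresAt.getTime() - now.getTime()) / 86_400_000);
    return `Your plan renews in ${daysLeft} day(s).`;
  }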

Same with types - schema design and invariants, state machines, edge cases in type conversion and how to lock and manage external dependencies. Sets, unions etc - it became the way I reason about code with or without types, and my code is better for it.

I also assume that types would make AI-generated code much better _and more reliable_ because of the additional information and structure that they provide, so I reckon they are here to stay.

fendy3002
11 replies
15h0m

Have you worked with Java or C#? Both are languages where types sometimes become a burden.

No, you cannot just do something like this:

``` return { data, message: "OK" }; ```

you need to declare a class or struct that matches the definition. There are mappers and builders everywhere. Adding / deleting 1 column from a datatype can force you to make changes in 5 places due to mappers / builders.

seer
4 replies
14h49m

As a matter of fact - no I haven’t. My experience with types is TypeScript, Elixir and a bit of Scala.

I always try to have "clojure style" types where I don't try to be too prescriptive - don't lock things down if you don't need to, just the minimum possible types to make sure the code I'm writing is correct, and nothing more - Rich Hickey's talks on Clojure's Spec were an eye opener.

I have been told by some Java devs though that it is a matter of style - while unconventional, it is possible to write Java code with much less boilerplate if one uses newer language features and actually tries to hold the cruft at bay. Is that true?

behnamoh
2 replies
14h32m

My experience with types is TypeScript, Elixir and a bit of Scala.

Dude, Elixir only recently introduced some sort of a type system...

seer
0 replies
10h42m

While not part of the compiler, the dialyzer’s types are quite nice, even if sometimes it is a bit clunky - I’ve noticed that most of the time when I thought it was “wrong” it actually wasn’t and had picked up on some bug / misunderstanding in my code, though error messages could have been better

ianleeclark
0 replies
11h41m

Elixir has had dialyzer + type hints for years

fendy3002
0 replies
14h45m

idk, I haven't used newer Java or C# after 2014-ish, though I think I've read that C# has support for some dynamic typing; never explored that. Never touched Elixir or Scala either, so I cannot comment on those.

TS though, the type declarations are amazing with its union, optional and intersection types!

kmoser
1 replies
14h15m

The flip side of this "burden" is that it prevents you from returning an arbitrary data structure that is not expected by the caller and may cause errors further down the call stack if you return the wrong type. The whole point of static typing in the first place is to eliminate this kind of footgun.

fendy3002
0 replies
12h8m

Have you tried TypeScript? The code above will make the function declaration become like this:

function foo (): {data: MyCustomObject, message: string}

so foo().data and foo().message are valid, while foo().bar will produce a build error
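
Spelled out a little more fully, a small sketch (MyCustomObject is assumed to be defined elsewhere, so it is stubbed here):

  type MyCustomObject = { value: number };

  // The return type is inferred from the object literal; no separate class is needed.
  function foo() {
    const data: MyCustomObject = { value: 42 };
    return { data, message: "OK" };
  }

  const result = foo();
  result.data.value;  // OK
  result.message;     // OK
  // result.bar;      // build error: Property 'bar' does not exist on the inferred type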

AdieuToLogic
1 replies
14h37m

have you worked with java or C#? both are where types sometimes become a burden.

No, you cannot just do something like this: > ``` return { data, message: "OK" }; ```

Yes, in Java you can:

  return new Object() {
    String message = "OK";
    Object data = itsValue;
  };
There are many reasons to dislike Java and it is nowhere near my programming language of choice. The specific semantic deficiency you chose happens to be an invalid one.

dasyatidprime
0 replies
10h54m

And how does the caller extract those fields?

qball
0 replies
12h7m

Yes you can. C# has had anonymous types since language version 3.0, released nearly 20 years ago; everyone who has used .Select() at the end of a LINQ query has used this.

phito
0 replies
12h9m

You can do that in C# with dynamic types. But realistically it's almost never used because it's kinda dirty.

To each their own I guess, I've never felt burdened by types but I feel like I'm programming in the dark when using a non typed language.

AdieuToLogic
0 replies
14h59m

Regarding programming with a mature static type system:

It was incredibly liberating - like pairing with someone who was junior but very pedantic.

This is exactly what is happening. To achieve the same level of semantic confidence across a code-base in a dynamically typed language (such as JavaScript, Perl, Python, Ruby, etc.) would take the effort of a diligent junior programmer.

Which, in a way, is what a strongly-typed language compiler does IMHO.

BigJono
6 replies
15h25m

The very next dot point is agreeing with you: "The ideal quantity to use these tools is tiny, much more miniscule than any of us is trained to think".

And I agree tbh. The TS community has turned me off static typing for life. It's just wall to wall bikeshedding from the least productive people I've ever had the displeasure of working with.

trustinmenowpls
5 replies
15h21m

Honestly, that explains perfectly why all MS apps have gone from usable if a bit slow or buggy to just unusable buggy messes that crash constantly and make my machine feel like it's running on molasses.

nl
4 replies
15h13m

It doesn't "explain" it at all (and indeed many of us reject the idea that MS apps - in general - are worse. They've had a bad reputation for decades, and that isn't based on nothing).

behnamoh
3 replies
15h1m

Teams + the entire Office suite are two examples of Microsoft products going downhill over time.

miffy900
1 replies
14h9m

That's true but that has absolutely nothing to do with types in programming languages; it has more to do with MS's culture of always favouring backward compatibility and never culling any features in any version.

behnamoh
0 replies
13h5m

it has more to do with MS's culture of always favouring backward compatibility and never culling any features in every version.

The new Office suite is unbelievably buggy and slow not because of backward compatibility, but because of the numerous useless "features" Microsoft added which made the software essentially bloatware. By the time Excel opens up on my Mac, I can open several Google Sheets and start working.

nl
0 replies
5h33m

Has Teams ever been good? From the first time I used it it has always been one of the worst pieces of software I use.

Office seems fine to me - not great software, but I haven't noticed any particular decline (not a heavy user, but have been using it since Word v2 in the 1990s)

And in any case what does this have to do with Typescript and the (over?) use of types?

tmtvl
0 replies
10h18m

I quite like Lisp 'cause I can do silly things like:

  (defun sum-triplet? (list)
    (declare (type List list))
    (and (= (length list) 3)
         (destructuring-bind (a b c)
             list
           (and (numberp a)
                (numberp b)
                (numberp c)
                (= (+ a b) c)))))

  (deftype Sum-Triplet ()
    '(and List
          (satisfies sum-triplet?)))

layer8
12 replies
15h23m

This is quite a confused article.

I really wonder what about it made it be upvoted to first place.

rectang
8 replies
15h3m

I keep trying to figure out the joke.

namaria
7 replies
10h54m

Author successfully drove engagement with psychological baits like bashing commonly accepted tools and practices and being intentionally obscure so a lot of people would comment about it.

codr7
6 replies
8h45m

Or, author has so much more experience than you, that his conclusions can't possibly make sense in your world. Not saying that's the case, but it's certainly possible. The more wisdom, the less need for rules and conventions.

That being said, I do feel like we have to learn to communicate over these boundaries if we want to evolve faster, as opposed to mostly repeating the same mistakes over and over.

namaria
1 replies
8h25m

Yeah I am sure author has transcended such pedestrian things as versioning and testing code.

codr7
0 replies
7h47m

No one has claimed that.

It was simply suggested that in some situations, maybe they're not as important as we tend to assume. And it takes experience to see those patterns.

layer8
1 replies
3h59m

Even if that’s the case, the exposition is quite poor and hard to follow. It doesn’t exhibit a lot of clarity of thinking on the author’s part, or at least it doesn’t translate to his writing. That’s what I meant by “confused”.

akkartik
0 replies
12m

It's not really designed for a broad audience, so I share your surprise that it got upvoted so much. Writing for a broad audience takes me a lot of effort, which isn't always worthwhile.

FWIW this trail might help fill in context:

https://akkartik.name/freewheeling (this was designed for a broad audience, so is like a snapshot backup where the links below are incremental backups)

https://akkartik.name/post/2022-03-31-devlog

https://akkartik.name/post/2024-06-05-devlog

https://akkartik.name/post/2024-06-07-devlog

https://akkartik.name/post/2024-06-09-devlog

https://akkartik.name/post/2024-07-10-devlog

https://akkartik.name/post/2024-07-22-devlog

Sorry to throw a bunch of links at you :)

swat535
0 replies
1h44m

Making strong assertions without any evidence or data to back them up is not "wisdom". I agree with other people: the author is simply burnt out by software (which is fine) and is just YOLOing his code.

lnenad
0 replies
7h32m

Strong opinion that there is no wisdom in not using anything other than code to produce software for yourself. It's a personal choice. Selling it like it's an epiphany is definitely kind of a weird move.

For your personal projects you can choose any language, define any constraints, do whatever you like which is what I think the author is trying to communicate here, and that is fine. But sprinkling a bit of huge discovery/realization on top is not so much.

bubblebeard
2 replies
11h2m

On the one hand this may be an article from a developer experimenting with different tools and techniques to advance themselves in life.

On the other hand it may just be the author wanted to gaslight ppl into a debate xD

082349872349872
1 replies
8h43m

Given that the author has been exploring these themes* throughout the years since I first encountered them, I've got a strong weighting for the former.

* with varied approaches; I even recall a "test all the things" experiment

bubblebeard
0 replies
7h20m

Yes I think so too, I was just trying to inject a little comic relief :)

curry798
5 replies
15h35m

Is it recommended to learn shell if you are a beginner?

leptons
1 replies
15h31m

What you should learn, is knowing when shell is the right tool for the job.

curry798
0 replies
15h26m

It dawned on me, thank you.

seer
0 replies
14h58m

There is a saying that technology usually has inertia: if it was used for 30 years, and is actively used now, it will probably still be in use 30 years in the future.

I learned vim by necessity after shying away from this weird old tech for years, when I was forced to work on a Solaris server where there was no other way to edit the code at all. It was pain and suffering for a few hours - we really wanted to fix something that day, as I was working on a machine that we were "not allowed to ssh into", having been driven to a different city in order to sit in front of it.

But after that day I’ve been using vim almost every day. It is not my daily driver, always felt more productive in TextMate, SublimeText and now VS Code, but it is still incredibly useful.

On any remote server I ssh into there is no question what I can or cannot do - I can easily edit everything I want to. And I use it for various quick edit tasks in the shell.

Now learning shells wasn’t so dramatic for me but same rules apply, I don’t feel uncomfortable anywhere - that pod that is misbehaving in your cluster - well just ssh into it and poke around! You need to tie a few commands together as there isn’t something that does _exactly_ what your company needs - just whip up a quick bash script! - zero dependencies and can be deployed anywhere - your mac, the server the ci is running on, even windows machines!

So general rule is - if it was used for 50 years and is used now, it is probably worth learning.

fragmede
0 replies
15h26m

I won't comment on which shell to learn, but you'll end up spending a lot of time in it, so learning your chosen shell well will pay dividends for the rest of your life.

arendtio
0 replies
6h23m

Yes and no.

Shell scripting is incredibly powerful and omnipresent. So you want to know the basics about pipes, loops and the like.

But the language itself is broken by design (error handling is a mess; whitespaces create headaches daily; sub-shells can be a pain; ...). So, creating reliable scripts can be a challenge, and you do not want to become an expert on how to write large programs with the shell. Other languages, e.g. Python, are much better at this.

My favorite site in this context is https://shellhaters.org. It has a list of links to the POSIX standard so that you can easily look up functionality that is part of it (and should be present on all POSIX-compliant operating systems).

If you know everything on https://learnxinyminutes.com/docs/bash/ you most likely know more than you need.

brokegrammer
4 replies
8h53m

There are some interesting ideas in this article. Not using source control and removing tests resulting in a better program is quite fascinating.

It's a shame that there are so many rude comments. It seems like there are many close minded folks lurking here, forgetting that experimentation is essential in tech.

lnenad
2 replies
8h33m

Not using source control and removing tests resulting in a better program is quite fascinating.

Can you clarify what is exactly fascinating here? They seem to be writing simple programs, used only by themselves. In these scenarios of course you don't *have* to use good eng practices.

brokegrammer
0 replies
6h31m

I don't know because no studies have been done about the so called good engineering practices.

If a big company with 10 teams of 20 engineers each blogs about how they're able to ship good code with testing or source control, I won't be any more fascinated than I am here, because it sort of makes sense, since no one can prove that source control or testing improves the end product.

akkartik
0 replies
1h7m

You seem to think of writing simple programs used only by myself (and people I have a relationship with, and people who want to have a relationship with me) as some sort of special situation that doesn't require "good engineering practices." I think of it as the most basic situation of all.

The most foundational engineering practice of all: tailor interventions to the context.

082349872349872
0 replies
8h34m

It's also a shame that Kartik explicitly states his goals and his problem domain, yet folks react as if he'd been making comments about their goals and their problem domain.

shepherdjerred
3 replies
15h7m

At first glance I thought the author was plain wrong, but I think there is some good insight here.

This workflow works very well for the author. Most of us can probably think of a time when Git or automated tests frustrated us or made us less productive. There are similar solutions that are simpler and get out of the way, e.g. backing up code with Dropbox, FTP, whatever.

The above works well because the author is optimizing for their productivity on a passion project where they collaborate with few others.

Automated tests are useful, but it sounds like the author likes creating programs so small that the value might not surface. I think that automated tests still have value even in this context, but I think we can all agree that automated tests slow you down (though many would argue that you see eventual returns).

Version control and automated tests solve real problems. It would be insane to start a project without VC today, and automated tests are a best practice for a reason. But, for the author's particular use case, this sounds reasonable.

---

Aside from the controversial bits around VC/tests, I think items 7/8/9 perfectly capture my mindset when writing/refactoring a large program. Write, throw it away, write again.

fendy3002
1 replies
14h54m

Disagree on VC, even for a solo project with no multiple-version branching. Humans make mistakes; knowing what you changed in the last 3 weeks of a >100k LOC project is a godsend. It helps to find and fix issues. The better feature is branching out, because you can do what you want while still having a way to go back to the previous stable version.

As for automated tests? That's fine.

yellowapple
0 replies
1h48m

I think it's still worth asking "which VC?" through that lens, though. Git was designed for developing the Linux kernel - with countless LOC and contributors and commits pouring in constantly. It happened to also be readily suitable for GitHub's model of "social" FOSS development, with its PRs and such (a model that most other Git hosting systems have adopted).

...but that ain't applicable to all projects, or possibly even most projects. The vast majority of my FOSS contributions have been on projects with one or maybe two primary authors, and without all that many PRs. What is Git, or any particular Git repository host (GitHub included), really offering me?

I need to track changes (so I can revert them if necessary), I need to backup the code I'm writing, and I need to distribute said code (and possibly builds thereof). Just about any VCS can do those things. I ended up trying Fossil for various new projects, and I'm liking it enough that I plan on migrating my existing projects into Fossil repos (with Git mirroring) at some point, too. It's unsurprisingly more optimized toward the needs of the SQLite development team - a small cathedral rather than a Linux-style giant bazaar - and considering that all my projects' development "teams" are tiny cathedrals it ain't terribly surprising that Fossil would be the right fit.

fragmede
0 replies
14h1m

imo taking the time to learn enough git to set up an ignore file, then be able to run git init; git add -A; git commit -a -m "before I changed the foo function to use bar" and then go back to older revisions is well worth it. you don't have to master it, but just having a commit message and a version to get back to has saved my bacon more times than I can remember, never mind more advanced operations.

miffy900
3 replies
14h43m

Giving up tests and versions, I ended up with a much better program.

This is one of those sentences that is clearly an opinion but stated as if it were some undeniable, incontrovertibly true statement of fact.

In your opinion, you have a better program - but give the code or repository to another dev or a group of devs and I'm sure you'll hear very different things...

inimino
2 replies
12h41m

The person who wrote both the original and new versions isn't qualified to say one is better than the other?

cocok
0 replies
11h11m

I'm stealing this for all my future code reviews.

PeeMcGee
0 replies
9h15m

If they are the only user or developer, sure. Otherwise they are the least qualified to say it's better -- like how I'd be the least qualified to declare myself winner of a handsome contest.

mattlondon
3 replies
7h20m

I have noticed a few articles recently on HN that talk about dropping tests because they are too slow, or are holding them back, or are just extra cognitive load.

This kinda beggars belief for me. I wonder who these people are - do they have the "battle scars" from working on complex or big systems? Are they reasonably junior or new to the profession with less than 10 years experience?

Next up? Fuck structural engineers, it's just going to slow us down building this bridge...

If you are doing something for fun, sure, do whatever you want. I write zero tests for my own pet projects. But in professional environments, please don't ignore hard-won lessons in reliability and engineering velocity because you don't want to have to do the extra work to update your tests. Your customers and colleagues (potentially years in the future) will thank you.

septimus111
0 replies
7h13m

There is adverse selection at play. The top/world-class programmers are too busy to write blogs.

beezlebroxxxxxx
0 replies
6h51m

Tech is a relatively immature industry. And a lot of time and effort and money in it is devoted to non-critical products.

I'm not directing this at the OP, because they have actually thought about it even if I disagree with them, but there are a lot of people working in tech and in software who do not care about product quality at all. They're paid a lot of money and exclusively focus on shipping ASAP, quality be damned, so they keep their metrics looking good and the $$$ flowing. Add in the industries tendency for very short term tenure at jobs and you end up in a situation where people think what they're doing is "optimal" simply because it keeps them getting $$$ --- product quality is just secondary. Their "craftsmanship" is their job-hopping. (I don't have a problem with job-hopping if the products and code are still good --- they usually aren't.)

They usually don't need to care about a bridge lasting 6 decades, but then they're writing critical software for infrastructure or airplanes and, unfortunately, they can actively resist a lot of the hard learned lessons people had to make in those industries because they just want to move fast (and leave after ~2 years).

The culture isn't there yet.

akkartik
0 replies
1h14m

It's my failure as a writer, because this is not one of those articles.

OP is about how I thought I had the answers in the past but was wrong, and how I have new answers and am still wrong in ways I will find out about. So what beggars belief for me is anyone reading it and thinking I'm offering any sort of advice for others in all situations. What here gives you a sense it's at all related to professional environments? My first bullet was, "building for others is hard so don't even try." If you have ideas for what I can reword to make it even clearer, definitely let me know.

AdieuToLogic
3 replies
15h10m

In 2022 I started working on Freewheeling Apps. I started out with no tests, got frustrated at some point and wrote thorough tests for a core piece, the text editor.

This is a primary motivation for having a reasonable test suite - limiting frustration. Test suites give developers confidence to evolve a system. When done properly, contributors often form an opinion similar to:

But I struggled to find ways to test the rest, and also found I was getting by fine anyway.

This is also a common situation. As functional complexity increases, the difficulty to test components or the system as a whole can become prohibitive.

Now it's 2024, and a month ago I deleted all my tests. ... In effect I stopped thinking about version control. Giving up tests and versions, I ended up with a much better program.

This philosophy does not scale beyond one person and said person having recent, intimate, memory of all decisions encoded in source code (current or historical). Furthermore, given intimate implementation knowledge, verifying any change by definition must be performed manually.

gavinhoward
1 replies
15h3m

This philosophy does not scale beyond one person and said person having recent, intimate, memory of all decisions encoded in source code (current or historical). Furthermore, given intimate implementation knowledge, verifying any change by definition must be performed manually.

As a one-man programming team, you are correct. And quite frankly, I shudder to think of not programming with a test suite or version control, even though I work alone!

Docs, tests, and version control reduce what I have to remember about the code context. Yes, I have to remember the details of the code in front of me, but if I document it, test it, and check it in with a good commit message describing the why and how and whatever, then I can discard that code from my memory and move on to the next thing.

AdieuToLogic
0 replies
14h53m

All of the tools and artifacts you reference as important contribute to the same goal, whether it is for me or a future-you:

Understanding.

082349872349872
0 replies
8h38m

This philosophy does not scale beyond one person ... having recent, intimate, memory of all decisions encoded in source code

Some time ago on HN, I ran across a tale of someone who never merged code unless they'd written it all that day. If they got to the end of the day without something mergeable, well, that just meant they didn't understand the problem well enough to express it in under a day, and they tried afresh the following morning.

Anyone else remember this, or am I confusing sites/anecdotes again?

zombiwoof
2 replies
13h59m

I worked with a guy who was so obsessed with testing he never even bothered to ask what feature or problem the code was supposed to solve

He happily and condescendingly told everyone else how much they sucked because he had 1000% test coverage

When he released he had tons of bugs because his code wasn't doing what it was supposed to

His answer: yelling at product and tech leads for not being clear

The rest of us had tests but spent as much time asking clarifying questions

The guy above is one of the reasons I just lost all interest in software. This was a major FAANG company and his smooth talking continues today, with management none the wiser because "he has the tests"

qngcdvy
0 replies
11h48m

Seems to intersect with my experience. The best guys I've worked with had tests... to some extent... especially in places that did some work you could easily get wrong by not thinking about a small edge case. Yet, none of them had or pursued 100% coverage, as they were all clearly aware that there is no actual benefit in that number, but that it can also do harm by heavily slowing down dev speed and tying down your feature set because you're too lazy to always port some useless tests.

aulin
0 replies
12h34m

Our field is burdened by complexity. Some people cannot function properly without deluding themselves into thinking they can tame it. So they cling to rules, best practices, and tools, hoping that adopting them to the letter will protect them from the uncertainties of our job.

I've seen the opposite too, devs not only not writing any test, but not trying to run a single line of the code they wrote. Reason being I cannot test all the edge cases so I won't test it at all. QA will open a bug. And somehow getting praised by management for being faster than others to ship changes.

slowmovintarget
2 replies
2h34m

Can I just say... I love the return of the term "programming," "to program," and "programmer." "Coder" and "coding" were popular for a while, and before Steve Ballmer put his stamp on it, "developers" and "development." But when I started, before 32-bit Windows was a thing, I was a programmer.

If the Primeagen has helped popularize the term again, great, thank you.

akkartik
1 replies
1h25m

I've always been a programmer. Because it was good enough for Dijkstra.

slowmovintarget
0 replies
1h18m

I like that take, and wholeheartedly agree.

ramzez
2 replies
11h33m

The author should really set up SSL on his website and make it secure to browse to begin with.

strken
0 replies
11h18m

Looks fine to me. TLS 1.3 with a cert from Let's Encrypt.

akkartik
0 replies
11h25m

I do have SSL. It's just optional and it seems the submitter chose http.

codr7
2 replies
9h9m

I too keep wondering where this path leads.

One thing is clear to me though, creating (software) by yourself is a completely different activity from doing it in a team.

About testing: tests are means, not ends. What we're looking for is confidence, I think. So when I feel confident about an implementation, I'll test less. And if I desperately need to make sure something keeps working, I'll add a few integration tests at the outer edges that are not so affected by refactorings and thus won't slow me down as much. E.g. poking a web backend from the outside, as opposed to testing the internals. Unit tests are good for fleshing out the design of new APIs, but those tests are pretty much useless once you know where you're going.
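
A rough sketch of what such an outer-edge test can look like, assuming a Node/TypeScript setup with the built-in test runner; the URL and endpoint are invented for the example:

  import test from "node:test";
  import assert from "node:assert/strict";

  // Poke the backend from the outside; the internals can be refactored freely
  // as long as this observable behaviour keeps holding.
  test("health endpoint answers", async () => {
    const res = await fetch("http://localhost:8080/health");
    assert.equal(res.status, 200);
    const body = await res.json();
    assert.equal(body.status, "ok");
  });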

sebstefan
0 replies
8h52m

Plus there are so many good reasons to have tests in a single-person project

* Hotwiring if statements with "true ||" to go straight to the feature you're building takes time, and you're gonna have to tear it down later. Just build a test and run it, that way you get to keep it for regression testing

* If you're shipping something big, or slow, (which can just mean 'I use qt' sometimes) and launching the app/building the app takes ages, just make a test. A single test loads quicker and runs quicker

* If you're debugging and reproducing the bug takes 45 seconds, just write a test. It automates away the most boring part of the job, keeps your flow going, allows you to check the status of the bug as often as you want without having to think about whether it's worth it or not, and, same as #1, you get to keep the test for regression testing (a sketch of this follows below)
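
A sketch of what that last point can look like in practice, using TypeScript with Node's built-in test runner; parsePrice and its input are hypothetical, purely for illustration:

  import test from "node:test";
  import assert from "node:assert/strict";
  import { parsePrice } from "./parsePrice.js"; // hypothetical module under test

  // Reproduces the bug in milliseconds instead of 45 seconds of clicking through
  // the app, and stays around afterwards as a regression test.
  test("parsePrice handles a trailing currency symbol", () => {
    assert.equal(parsePrice("1 299,95 €"), 1299.95);
  });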

alentred
2 replies
9h30m

We are all a bit overwhelmed by the complexity of the field of software engineering. Arguably sometimes accidental. But I don't agree that rejecting all the ideas we have come up with over the decades is a solution. On the other hand, not all solutions should be taken to the letter or used "too much". "Overwhelming" is by definition what happens when something is used "too much". By all means, please, write tests, use the VCS, use abstractions, but *know why you use them*, and when the "why" doesn't hold - reassess.

devjab
0 replies
8h45m

I think a major source of the problem is academia. I’m an external examiner for CS students in Denmark, and they are basically still taught the OOP and onion architecture way of building abstractions up front. Which is basically one of the worst mantras in software development. What is even worse is that they are taught these things to a religious degree.

What is weird to me is that there has been a lot of good progression in how professionals write software over the years. As you state, abstractions aren’t inherently bad for everything. I can’t imagine not having some sort of base class containing “updated”, “updated_by” and so on for classic data which ends up in a SQL db. But in general I’ll almost never write an abstraction unless I’m absolutely forced to do so. Yet in academia they are still teaching the exact same curriculum that I was taught 25 years ago.

It’s so weird to sit there and grade their ability to build these wild abstractions in their fancy UML and then implement them in code. Knowing that like 90% of them are never going to see a single UML diagram ever again. At least if they work in my little area of the world. It is what it is though.

akira2501
0 replies
9h10m

The only reason I started to _actually_ use git was magit. I wish there were command line level "porcelains" for everything. A standard '--help=ui' output and 'dialog' style interface and it could be automatic.

It's not so much being overwhelmed by the complexity, it's just that there's a limit to the amount of active muscle memory I can utilize, and I have to make the cut somewhere.

pmontra
1 replies
9h40m

My favorite example for point number 3, "Small changes in context (people/places/features you want to support) often radically change how well a program fits its context.", is K9 Mail, which is becoming the Android version of Thunderbird now.

It started with an unconventional UI with a home page listing email accounts and for each account the number of unread and total messages. There was a unified inbox but it was not forced on users.

I remember that I explicitly selected this app because it fit my needs: one personal account, one work account, several work accounts that my customers gave me. I wanted those accounts to stay separated.

Probably a lot of K9 users picked that app precisely for the same reason, because there were many complaints when the developer migrated to a conventional Android UI with a list of accounts sliding in from the left and an extra tap to move from one account to another. If we had liked that kind of UI, chances are that we wouldn't have picked K9 to start with.

So one small change (but probably a lot of coding) destroyed the fitness of the app to its users. I keep using the old 5.600 version, the latest with the old UI, and I sideload it to any new device I buy.

Furthermore, to make things even more unusual, I only use POP3 to access my accounts (I preview on phone, delete stuff, possibly reply BCCing myself, eventually download on my laptop) and K9 fit perfectly that workflow. I don't need anything fancy. An app from the 90's would be good enough for me.

hiAndrewQuinn
1 replies
12h58m

Just dropping by to say I adore this author and Mu is one of my favorite projects. A modern Lisp machine, kinda! In QEMU! So much fun!

akkartik
0 replies
12h50m

Thank you so much, you made my day.

ein0p
1 replies
13h51m

If you don’t have tests you don’t know if your shit works, and your team size can be at most 1. I even write broad coverage tests in my private repo to have a modicum of assurance that when I change things the remaining code still works.

akkartik
0 replies
32m

1. It's only a modicum of assurance.

2. There are many ways to get a modicum of assurance. Types, tests, formal methods, cleanroom software engineering, NASA's IV&V, many others that I'm sure I'm forgetting.

So there are many ways to "know if your shit works" and none of them support turning off one's brain entirely (for the part beyond the "modicum"). What I did here is to explore some of the other approaches that I have long neglected.

andrewstuart
1 replies
13h18m

If you are writing tests but have no users then you are wasting your time and money.

AdieuToLogic
0 replies
12h46m

If you are writing tests but have no users then you are wasting your time and money.

If you have users but are not writing tests, then your users are your tests and you are wasting their time and money.

SethMurphy
1 replies
8h41m

I have always found integration tests most important in order to test business logic when your customers pay for your trust and especially when they rely on your code for revenue while interacting with a third party. However, they should be thrown away immediately after proving your coded logic matches business requirements as they are slow and lose value and become tech debt quickly. Unit tests, if needed, should be even more temporary in my opinion. Often a CLI can be sufficient as a "unit test" during the development process.

djeastm
0 replies
5h31m

However, they should be thrown away immediately after proving your coded logic matches business requirements as they are slow and lose value and become tech debt quickly.

Can you expand on why integration tests should be thrown away once validated? Isn't the idea that when you make a change later, these tests will ensure you haven't introduced a regression?

Jerry2
1 replies
15h9m

I wish someone would share their programming workflow when using LLMs... I feel like I'm falling behind in this area.

passion__desire
0 replies
12h15m

The important task for you, now that LLMs write code, is to know the theory very well and have a list of things to try out. The good thing about coding is we have a very fast and tight feedback loop. You should be in a position to cross-question LLM responses, and that is possible only when you know your stuff.

https://x.com/jdnoc/status/1791145173524545874

xelxebar
0 replies
15h18m

Data-orientation, abstraction avoidance, holistic rewrites. The values espoused by OP rhyme heavily with the stance I've begun to take after reading and writing significant amounts of APL.

The best code I've seen mercilessly elides anything that doesn't serve an architectural level, problem domain-relevant concern. GADTs and hash tables, and all our nice CS tools work much better when applied as cognitive tools in a domain-specific manner as opposed to reified language syntax or library APIs, as the latter necessarily introduces cross-domain concerns and commensurate incidental complexity.

The most blatant example of this in APL is using arrays and tables for everything. Trees? Can be efficiently encoded as arrays. Hash tables? Same. Tuples? Just a pair of vectors. Etc. APL's syntax really shines in this instance, since data interaction patterns become short, pithy APL expressions instead of zoos of library functions. Using direct expressions makes specialization much easier, by simply ignoring irrelevant concerns.

Anyway, APL aside, I'd really like to see our software engineering zeitgeist move more toward optimistically refining our understanding of the human practice of software engineering and away from pessimistic and programming-centric problem avoidance.

(The above really came out more treatisy than intended. Oh well.)

shove
0 replies
15h29m

I’m really curious whether my agree:disagree ratio will be higher in the article or in the comments.

richrichie
0 replies
8h51m

Building durably for lots of people is too hard, just don't even try. Be ruled by what you know well, who you know well and Dunbar's number.

Wikipedia on Dunbar:

A replication of Dunbar's analysis on updated complementary datasets using different comparative phylogenetic methods yielded wildly different numbers. Bayesian and generalized least-squares phylogenetic methods generated approximations of average group sizes between 69–109 and 16–42, respectively. However, enormous 95% confidence intervals (4–520 and 2–336, respectively) implied that specifying any one number is futile.

https://en.m.wikipedia.org/wiki/Dunbar%27s_number

nickelpro
0 replies
9h45m

Insanity-grade takes end-to-end, not a single word of this should be taken seriously

irjustin
0 replies
14h51m

I appreciate the honest answers by the OP. Even if we all think there's fundamental flaws with what was given up.

For me, ChatGPT saved a lot of my mental load. I don't think about individual lines NEARLY as much. Obviously you need to understand what the program is doing and be smart about it, but you can really focus on business problems.

It spits out something like 40% of my code and 70% of tests. I've started dropping whole files into it and telling it how to combine them with new code.

huijzer
0 replies
12h47m

Most software out there is incurably infected by incentives to serve lots of people in the short term.

Great quote! You can even replace “software” with “businesses” and the quote still works.

hellectronic
0 replies
6h43m

IMHO there are tests and there are tests. I had to work with codebases that had awful tests. They broke frequently because they were badly written. They used a lot of mocking when mocking was not appropriate. These tests were written for the sake of having tests, not to really test the domain. I do not write tests for simple cases, like a method in class A that just delegates to a method in class B.

For a one-man show - go on, do not write tests, especially if you do not know where you will end up with the software. But in teams I find a lot of value in (well-written) tests, preventing bugs and documenting bugs. Sure, you can over-engineer it, like everything else.

BUT working without version control? Good that it works for you. I think version control is one of the MUST USE tools.

ggm
0 replies
14h18m

There is a class of problem where you know the goal, and code which produces the goal which you can test independently is demonstrably ok. Of course the next run with different parameters may well be wrong, but if they aren't on the goal-path you don't much have to worry.

I do sometimes code in this pattern. I have high confidence in charts from Google and Akamai about some data I have exposure to (a variant of the inputs unique to my situation not in their hands) and when the curves I make conform in general trend to the ones they make over the time series, I am pretty sure I have this right. If the critique is in the fine differences I do some differential on it. If the critique is in the overall shape of the curve, if mine is like theirs, why do you think I am so wrong?

arendtio
0 replies
11h41m

My approach to programming in 2024 is a bit different: When I want to code a new module, I start by talking to an AI about the requirements and let the AI generate the tests and the code.

Sadly, the AI isn't capable of generating working code for all scenarios, so eventually, I take over the code and finish/fix it.

The workflow itself can be quite frustrating (who doesn't love fixing bugs in other people's code?), and the act of coding isn't as much fun as it used to be, but the AI also shows me new algorithms, which is a great way of learning new things.

Let's just say I am looking forward to 2025 ;-)

_gabe_
0 replies
3h50m

I can sympathize with the author's love/hate relationship with tests, but I can't help feeling like it's because we as developers so often test the completely wrong things.

I don’t typically write tests, but they do make sense for a few cases (specifically end to end tests that look for well defined outputs from well defined inputs). I was inspired by Andreas Kling’s method of testing Ladybird, where he would find visual bugs, recreate the bug in a minimum reproducible example, fix the bug, then codify the example into a test and make sure the regression was captured in his test suite[0]. This led to a seemingly large suite of tests that enabled him to continue modifying the browser without fear of regressing somewhere.

I used this method of testing while I was writing a code highlighter that used TextMate grammars. Since TextMate grammars have a well defined output for some input of code + grammar, I was able to mimic that output in my own code highlighter and then compare it to TextMate’s output for testing purposes. I wrote a bunch of general purpose tests, then ran into a bunch of bugs where I would have mismatched output. As I fixed those bugs, I would add the examples to my test suite.

Anyways, my code highlighter was slow, and I wanted to re-architect it to speed it up. I was able to completely change the way it worked with complete confidence. I had broken tests for a while in the middle of the refactor, but eventually I finished the refactor. As I started to fix the broken tests, there was a domino effect. I only had to fix a few tests and that ended up automatically correcting the rest. Now, I have a fast code highlighter and confidence that it’s at least bug for bug parity with the slow version :)
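
That "compare against a reference" style can be as simple as a loop over recorded input/expected pairs. A sketch in TypeScript; the fixtures layout and the highlight function are invented for illustration:

  import test from "node:test";
  import assert from "node:assert/strict";
  import { readFileSync, readdirSync } from "node:fs";
  import { highlight } from "./highlighter.js"; // hypothetical function under test

  // Each fixture pairs an input with the output captured from the reference
  // implementation; every fixed mismatch adds a new pair to the suite.
  for (const name of readdirSync("fixtures")) {
    if (!name.endsWith(".input")) continue;
    const base = name.slice(0, -".input".length);
    const input = readFileSync(`fixtures/${base}.input`, "utf8");
    const expected = readFileSync(`fixtures/${base}.expected`, "utf8");
    test(`matches reference output for ${base}`, () => {
      assert.equal(highlight(input), expected);
    });
  }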

[0]: https://youtu.be/W4SxKWwFhA0?si=PJs_7drb3zVxq0ub

GTP
0 replies
3h15m

I don't agree with point 3:

"Small changes in context (people/places/features you want to support) often radically change how well a program fits its context. Our dominant milieu of short-termism doesn't prepare us for this fact."

My opinion here is that short-termism is precisely a consequence of the hardness of predicting/keeping up with these small changes: businesses prefer to be able to adapt quickly to new scenarios rather than risking being stuck in the wrong direction.

0x008
0 replies
10h25m

this article must be written with the intention to troll HN