I started programming at 10 and now I'm 50, and it feels like I've reached a point where it's boring, I have trouble keeping up, and the things my job lets me work on don't feel important. The interesting work goes to younger colleagues.
The problem is, I have a family and finding fulfilling work that you have no experience in, in this country, at 50, is close to impossible.
So for now I consider myself lucky and try to rediscover the fun things in programming.
I'm a bit older, I don't really feel trouble keeping up but looking at the landscape it's just not that interesting anymore. So many "new" ideas are actually old ideas but the people pushing them are too young to know that.
I don't have any doubt in my ability to learn new languages and frameworks, but running in that hamster wheel just gets boring after a while.
What are some examples of new ideas that are old?
Lambdas (in the cloud): see CGI scripts and inetd.
Containers: see BSD jails, Solaris zones.
WASM: see JVM and Smalltalk VM.
Async / futures / actors: see Erlang, Lua, Oz.
The cool type system of Typescript: see OCaml and Haskell.
Numpy: see APL.
Throughout the list above, there's usually a 20- to 40-year gap between first availability and turning into the "new hotness".
Generally agreed.
About WASM: it is not the first sandboxed bytecode interpreter, but it is the first that runs in a browser and has usable toolchains for compiling languages that weren't designed browser-first. I'd argue that that's where the novelty is.
Did Java applets arguably not do this 20+ years ago?
I thought those were interpreted by the JVM, which was subject to security issues. WASM faces no such security issues, no?
The JVM is fairly good at sandboxing, as these things go. Turns out sandboxing arbitrary software is an extremely hard problem (as the WASM folks are starting to encounter in the wild)
WASM also has potential for security exploits, but those selling it are quite silent on those.
Everything Old is New Again: Binary Security of WebAssembly
https://www.usenix.org/conference/usenixsecurity20/presentat...
Just one of the many articles that are slowly surfacing, now that WebAssembly is interesting enough as a possible attack vector.
While there is a sandbox, you can attack WASM modules the same way as a traditional process via OS IPC: by misusing the public API in a way that corrupts internal memory state (linear memory accesses aren't bounds-checked), thus fooling future calls into following execution paths that they shouldn't. With enough luck, one gets an execution path that e.g. validates an invalid credential as good.
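To make that concrete, here's a toy simulation in plain JavaScript (the memory layout, offsets, and `writeInput`/`isCredentialValid` API are all invented for illustration; a real attack targets a compiled module, not JS): an unchecked write through a public entry point lands on an adjacent flag that later calls trust.

```javascript
// Toy model of a Wasm module's linear memory: one flat byte array holding
// all of the module's data, with no bounds checks *between* its own
// structures (only the total memory size is enforced by the VM).
const BUF_OFFSET = 0;    // a 16-byte input buffer lives here...
const FLAG_OFFSET = 16;  // ...and a "credential ok" flag sits right after it
const memory = new Uint8Array(32);

// Public API of the hypothetical module: copy caller input into the buffer.
// Like a memcpy compiled to Wasm, it doesn't check the buffer's logical
// 16-byte size, only that the write stays inside linear memory.
function writeInput(data) {
  memory.set(data, BUF_OFFSET);
}

function isCredentialValid() {
  return memory[FLAG_OFFSET] === 1;
}

console.log(isCredentialValid()); // false
// 17 bytes into a 16-byte buffer: the last byte lands on the flag.
writeInput(Uint8Array.from([...new Array(16).fill(0x41), 1]));
console.log(isCredentialValid()); // true -- state corrupted via the public API
```

The point is that the sandbox boundary protects the host from the module, but nothing inside the module's flat linear memory protects one of its data structures from another.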
The JVM implemented properly should not have security issues. The class library however... (i.e. it's a lot easier to sandbox things if you start without any classes that interact outside the sandbox).
At least in so far as the higher level (DOM, browser runtime) and lower level (memory access, to the extent that it's mediated by the WASM VM) have no security issues...
The VM itself is pretty tight, but abstractions have a nasty habit of being leaky.
Oh, sweet summer child
Maybe you know this better than me. Were non-JVM native languages available 20 years ago for Java applets?
My conception of it is that they were pretty much Java only (with Clojure and Scala also available in the later years before they got deprecated?). Is this conception wrong?
Yes, you could write applets in other languages. The choice was rather narrow, but you could use [Python], [Scala], or [JRuby].
[Python]: https://www.jython.org/jython-old-sites/archive/21/applets/i...
[Scala]: https://cs.trinity.edu/~mlewis/ScalaApplet/scalaWebApplet/We...
[JRuby]: https://www.jruby.org/getting-started — offers to run as an applet in the first few lines.
also JavaScript. See https://en.wikipedia.org/wiki/Nashorn_(JavaScript_engine)
Well, yeah, but the problem was that you still needed that runtime; WASM should solve this.
Java, Flash, Silverlight, ActiveX? There were loads of technologies to run different languages in a browser, but they were all proprietary to a point; none of them were a web standard, they all needed separate installation or a specific browser, and they were all basically black boxes in the browser. Whereas (from what I understand) wasm is a built-in browser standard.
There was (is?) also asm.js, which IIRC was a subset of JS that removed any dynamism so it could be a lot faster than vanilla JS. But again, it was never a broadly adopted W3C standard.
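For reference, an asm.js module was ordinary JavaScript whose `|0`-style coercions acted as static type annotations. This is a from-memory sketch (not run through an asm.js validator), and it simply degrades to plain JS in engines without asm.js support:

```javascript
// A minimal asm.js-style module. The "use asm" pragma and the |0 coercions
// told supporting engines that every value is an int32, so the body could be
// compiled ahead of time instead of going through the dynamic JS pipeline.
function AsmAdder(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;          // parameter type annotation: a is an int32
    b = b | 0;          // parameter type annotation: b is an int32
    return (a + b) | 0; // result coerced back to int32
  }
  return { add: add };
}

const adder = AsmAdder(globalThis, {}, new ArrayBuffer(0x10000));
console.log(adder.add(2, 3)); // 5 -- same semantics with or without asm.js
```

Because the subset is valid JavaScript, the fallback story was trivial, which is arguably what made it a viable stepping stone to Wasm.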
I'm a bit hesitant to describe $NEW_CONCEPT/TECH as just $OLD_CONCEPT/TECH. Echoes of older things in a new context can really amount to something different. Yes, VMware didn't create the idea of virtualization and Docker et al didn't create containerization but the results were pretty novel.
I'd rather say that good ideas keep on returning, no matter whether they are remembered or getting reinvented.
It's not that those who reapplied the old concept in new circumstances are not innovators; they are! Much like the guy who rearranged the well known thread, needle, and needle eye and invented the sewing machine, completely transforming the whole industry.
But seeing the old idea resurfacing again (and again) in a new form gives you that feeling of a wheel being reinvented, in a newer and usually better form, but still very recognizable.
The plumbing behind Docker is not particularly novel but the porcelain was imho a major advance.
There were plenty of ways to do "containers" (via vservers, jails, zones, etc.) but the concept of an image never caught on before Docker.
You could sling tarballs of chroots around and at times this did happen but it was a sort of sysadmin thing to do, there was no coherent "devex".
There's always a push and pull between old and new tech and I agree some of the hot new tech is regurgitated old tech, but most of your examples aren't really comparable.
I would say that my examples are rhymes, different developments of the same theme. They are not literal repetitions, of course; comparable, not identical.
About 15 years ago the joke was `cat /etc/services | mail apply@ycombinator`, as at the time it seemed like startups were just doing file transfer, email, network file systems, etc. It wasn't far off, as Unix is file based, and the internet is also file based.
And to a point they were correct; file transfer 15 years ago was closely linked to piracy and dodgy websites that scam you into pressing an ad instead of a download button. It's only thanks to e.g. Dropbox and other cloud file storage suppliers, WeTransfer, etc. that that bit has been resolved.
Dunno about email though, the last real innovation in that space that I can remember with lasting impact was gmail. There were a few more tidbits like inbox (RIP), the inbox zero methodology, and Airmail (?) but none of them really took off.
It's not every day that we see Oz mentioned here! I was very involved in writing the Mozart/Oz 2.0 VM.
I also wrote a "toy" (read: for school) dialect of Scala compiling to Oz and therefore turning every local variable or field into what Scala calls a Future, for free. Performance was abysmal, though! But in terms of language idioms, it was quite nice.
---
Unrelated: about Wasm, none of what it does is new, obviously. What's interesting about it is that
a) browser vendors agree to do it together, and
b) the design grows to accommodate many source languages. This didn't use to be the case, but the eventual arrival of WasmGC significantly reshuffled the deck.
Relevant background here: I'm the author of the Scala to JavaScript compiler, and now co-author of the Scala to Wasm compiler.
Couple more:
(1)
Garbage collection in every high-level language: Java, which was the first mainstream language to do it -- people were seriously using C++ for high-level business logic at the time, and were suspicious of GC for its performance.
But Java itself got it from LISP, which had introduced GC decades prior without it ever going mainstream.
(2)
NoSQL had already been tried, as hierarchical databases, in the '70s or '80s IIRC. The relational model won because it was far more powerful. Then in the early 2010s, due to a sudden influx of fresh grads and boot campers etc. who often had a poor grasp of SQL, schemaless stuff became very popular... And thankfully the trend died back down as people rediscovered the same thing. Today's highly scalable databases like Spanner and Cassandra don't ostentatiously abandon relational calculus; they reimplement a similar model even if it isn't officially SQL.
(3)
And then there's the entire cycle that's gone back and forth several times of client based vs server based:
First there were early ENIAC-type computers that were big single units. I would consider that similar to a thick client.
Then as those developed we had a long era of something more similar to cloud, in that a single computer developed processes to support many partitioned users who submitted punch card batches.
That developed even further into the apex, at the time, of cloud-style computing: terminal systems like ITS, Multics, and finally, in the 70s, UNIX.
Then the PC revolution of the 80s turned that totally on its head and we went back to very very thick client, in fact often no servers at all (having a modem was an optional accessory)
We stuck with that through the 90s, the golden age of desktop software.
A lot of attempts were made to go back to thinner clients but the tech wasn't there yet.
Then of course came the webapp revolution started by Gmail's decision to exploit a weird, little-used API called XMLHttpRequest. The PC rapidly transformed over the next decade from a thick client to a thin vessel for a web browser, as best exemplified by the Chromebook, where everything happens in the cloud -- just like it did in the mainframe and terminal days 50 years ago...
The trend could stay that way or turn around -- it has always depended on shifts in the hardware performance balance.
I have to say that all your “old” ideas (they are all from the 90s AFAICT) seem new to me ;)
For example, for Haskell [1990] (ok, not so much the type system bits, but…), see FP [1977] (https://en.m.wikipedia.org/wiki/FP_(programming_language))
Basically all of Tailwind CSS. Inline styles are nothing new, neither are utility classes, or the scalability issues of inline styles that led to Tailwind reinventing classes with their `@apply` macro for creating component classes.
Edit for another: RPC calls are really old and went out of style maybe 15 or 20 years ago in most codebases. Most of the modern JavaScript metaframeworks are now using RPC calls obscured by the build/bundling process.
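A hedged sketch of that idea (every name here is invented; real metaframeworks generate something equivalent at build time): an RPC stub turns what looks like a local function call into a serialized request to a server.

```javascript
// Minimal RPC-stub sketch: the caller writes what looks like a local call,
// but the stub serializes it into a request for a remote handler.
function makeStub(methodName, transport) {
  return async (...args) => {
    const request = JSON.stringify({ method: methodName, params: args });
    const response = await transport(request); // e.g. fetch() in a real app
    return JSON.parse(response).result;
  };
}

// A fake in-process "server" standing in for the network round-trip.
const fakeTransport = async (request) => {
  const { method, params } = JSON.parse(request);
  const handlers = { add: (a, b) => a + b };
  return JSON.stringify({ result: handlers[method](...params) });
};

const add = makeStub("add", fakeTransport);
add(2, 3).then((result) => console.log(result)); // 5
```

Swap `fakeTransport` for an HTTP call and hide the stub generation in the bundler, and you have roughly what the "server functions" in modern frameworks do -- which is also roughly what CORBA and SOAP stub generators did decades earlier.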
RPC calls à la SOAP may have been obsoleted, but things like gRPC were and are the building blocks of many large companies.
Sure, I'm not saying RPC isn't used today or that it doesn't solve specific problems.
It is a reinvention of an old idea though. There was around 15 years where RPC rotted on the vine until Google brought it back for (mostly) the enterprise scale, and another 6 or 7 years before JavaScript frameworks rediscovered it again for fullstack web applications.
… Eh? The predecessor to gRPC seems to have started internally at Google in 2001, and Google open-sourced it in 2015. In 2001, CORBA was all the rage; by the mid-noughties this had been replaced with SOAP, and maybe Thrift rpc in trendier places. I gather there was a whole parallel Microsoft ecosystem with DCOM and things, though that wasn’t my world and I don’t know much about it. But the point is that there hasn’t been a time where some form of RPC wasn’t in fairly common use since at least the early 90s.
The details change, and each one tries to solve the problems of the past (typically by inventing exciting new problems), but conceptually none of these things are _that_ different.
I may have completely missed a generation of RPC tooling. I was thinking specifically about web development in this context, but in general I don't remember hearing anything about RPC use between the early 2000s and mid to late teens (other than legacy systems maybe).
Thank you for mentioning Tailwind. Every time some young dev talks about how Tailwind is "forward thinking" I just want to scream into a pillow. This is also the case now that SSR is becoming popular again.
SSR is the most mindblowing of the lot; it's come full circle.
I mean granted, I've worked with e.g. Gatsby for a while which is SSR on the one side but a hydrated SPA with preloading etc on the other making for really fast and low bandwidth websites, but still.
I can deal with the SSR becoming hip again, but can we please settle on either back or front-end rendering? Either was good, but trying to combine the two is evil.
Mono-repos are now coming back with a "hipster" shine to them, with fancy in-repo build systems and what not.
What's funny about this example is that it's arguably not even that much of a time-difference between the two epochs of forgetting and re-learning.

It's just that everyone jumped on the microservices bandwagon so much that they couldn't deal with it in a mono-repo context, so they dumped it and convinced the world that many smaller repos was "better". Then they learnt the hard lessons of distributed and complicated version dependencies and coordinating that across many teams and deployments.

Their answer to this? Not back to mono-repos, no no no, semantic versioning dude, it's the hip new thing! When that was a bust and no one could get around to being convinced of using it "the right way", they were forced to begrudgingly acknowledge the value of mono-repos. But not before they made a whole little mini-industry of new build or dependency systems to "support" mono-repos as if they're just lots of little repos all under a single version-controlled repo.
These days I get this kind of stuff: "Hey you guys wrote this neat module as part of your project, can you separate it out and we can both share it as a dependency? Because, you know, it's a separate little mini-something inside of their codebase." ...Only to then be told that separating it out would "ruin" their "developer experience" and people would have to, gasp, manage it as a dependency instead of having it in their repo.
/rant. It's really hard not to be shocked and disgusted at this level of industry-level brain rot. I never thought I'd be "that guy" complaining about my lawn, but seriously, our industry is messed up and driven by way too many JS hipsters and their github-resume-based-development.
This is kinda why I really, really dislike the "social coding" meme that went around in the 2010s.
I get it, it's a team sport. It's just that the more people you put on your "team", the less agency everyone feels, because responsibility gets diffused and it becomes more about the "team" and less about actually doing the thing.
I hate having to do this, because then I have to get Nexus working with whatever the package manager in question is (Maven, npm, pip, and NuGet all have different ways of publishing packages), set up CI for the publishing, and god forbid I also need to manage the Nexus credentials for local installs and possibly even have a Git submodule somewhere in specific cases, which also confuses some tooling like GitKraken sometimes.
It does prove your point, but honestly dependency management is a pain and I wish it wasn’t so; separating a module from your main codebase and publishing it as a package should be no harder than renaming a class file.
Watching web tech evolve is a good example. So much churn rebuilding the same thing over and over.
And never once reaching parity with desktop UI frameworks. Not even close.
Web frameworks barely even abstract much. You still spend so much time marshaling things in and out of strings everywhere, and cramming information into URLs.
Mind-numbing makework, really.
AI. A lot of the things that are "new" were just waiting on hardware advances and cost reductions.
Similar age point. The problem is not keeping up; it's fighting the continuous push into management, which I don't plan to ever do, unless forced by life circumstances.
It appears that the only path left for us in many European countries is to go freelance, and I can vouch for the same problem regarding skills: forget about having GitHub repos or open source contributions if the technology company X is looking for isn't the one we have used in the last 5 years or so on the day job.
I'm a European in my late forties and have been a freelancer for the last five years but I find it harder and harder to motivate myself to continue working. What really takes all joy out of working as a software engineer these days for me are the endless Scrum ceremonies almost all companies in my area have embraced.
In the old days (say until ~7-8 yrs ago) I didn't have to attend very many meetings but of those I had to go to most were useful/necessary. These days I could probably count the useful meetings I attend in a year on one hand but the amount of Scrum-worship-meetings per week requires two hands.
The same amount of actual work I could do in a week in the old days would now take several months because it needs to be planned in detail. And no, not any technical detail, but rather discussions on how to divide it into stories but without doing any proper technical analysis and then straight ahead to story point guesstimates, yay! Then after a brief period of actual coding it's stuck in code review for weeks because no one will look at a PR unless prodded with a stick.
While I do think that code reviews can sometimes be beneficial, most of the time they are (in my experience, unfortunately) pretty useless. Most comments (and I have to admit I'm guilty of this as well) are more bike-shedding than bug-preventing. Complex bugs are rarely found in code reviews, in my experience.
While these are my experiences during the last 7-8 years or so, it's been more or less the same at all of the half a dozen companies (or so) I've worked for during that period (which is also a very big reason why I've worked at half a dozen companies in that period).
Doing software right will require a lot of planning, irrespective of whether that planning occurs up front or as you go. If you plan more up front, that will eliminate a whole lot of guesswork when the time to do the programming comes. You need systems analysts -- generalists who understand the business and work well with people -- to come in and characterize, in detail, how the business currently works in terms of systems and subsystems, and then propose and design new systems, again to a high level of detail. Once that's done, inasmuch as you need software, producing the software is a simple matter of translating the detailed requirements into language for the machine.
Unfortunately, modern methods are basically just institutionalized guesswork: this is what Agile is all about. It's a methodology designed by programmers for programmers, in order to bamboozle management and inflate the programmers' own sense of self-importance. The correct way to design a business's internal systems, including but not limited to its software, appears to have been forgotten, except a pastiche of it lives on as a strawman called "Waterfall" for Agilistas to take down.
I'm not opposed to planning, but I'm opposed to the kind of meta-planning game that tends to happen when Scrum is involved. I've been in meetings where the thing we're planning is literally to change one line of code, and we say as much, but the PO still insistently asks if it shouldn't be multiple stories. The whole thing eventually took man-days in meetings even though we insisted it was extremely quick. Turns out the whole thing was sold upstream to management as a big feature, so a single 1-point story wouldn't cut it.
As a contractor I can at least remind myself that I'm getting paid for sitting through all those meetings but as someone who likes to actually do things I feel like I slowly die inside.
This is actually the hardest part. I can write detailed requirements about the car I need. Create a PowerPoint presentation that shows a schema of the system and subsystems; the engine block, transmission and steering wheel etc. with lines how they are connected.
That's the easy part. Now you need the team of skilled engineers developing the actual car. And you need them to be experienced and good at it.
You need at least one guy who is able to load a complete mental map of everything that needs to be engineered. Who understands the business requirements and is able to create a vision for the product and the technical solution. He needs to understand databases, web services, authentication, authorization, security, performance, web standards, and back- and front-end solutions. Be smart about what logical components are needed and have a high-level idea of how they could be implemented technically. Ideally that guy can also open a repository and read what's going on.
Especially with larger corporations there's still so much potential for automation. Yet what we see is a big fragmented mess. Systems and subsystems that are poorly integrated. Exactly the car you'd expect that was designed in PowerPoint by non-engineers.
Tech became too profitable to be left to "those nerds" so now you have very bloated orgs. Though a freelancer should be able to sidestep the grifters unless you're selling yourself as an employee for some reason.
Yeah, my first year as a freelancer was quite sweet actually. Then came the pandemic and me and my spouse got ourselves a vacation house as we couldn't travel any more. While this was a great relief for our mental health during the pandemic, it meant a much higher mortgage so I needed the more stable income.
I've had similar experiences with Scrum. In the worst case there are one or more developers in the team, usually junior, who are very eager to improve processes. Eventually it's the tenth time you are forced to discuss the optimal way to define story points.
I am 63. Canada based. Have no problems keeping up with tech. I have been on my own since 2000 and mostly develop new products for various clients. Have a couple of products of my own that bring in some dosh. The range is very wide: microcontrollers, enterprise backends, desktop, multimedia, browser based, etc., etc. It is not programming per se that keeps me going (I find it boring enough) but designing systems and interactions from scratch and then watching them work.
I’m 55. Started as a 6502 cracker on the C64.
I still get enjoyment out of some coding - C++ on Linux for enterprise applications - but I do miss the “magic”.
Are you involved in the still-thriving C64 demo scene at all? Possible way to reconnect with the magic if not. Especially by attending (in-person, ideally) one of the many demo parties around the world.
There are also parallels with embedded device and FPGA work that I personally find thrilling.
Plus we on the VICE (open source Commodore emulator) team are always looking for devs.
I bought a C64 and SD card a few years ago. I enjoyed running up a few technical masterpieces - like DropZone - but the gaming interest has waned.
I don’t code 6502 nowadays but I’m active on r/c64.
FWIW I find the vscode + kickassembler + VICE toolchain a pretty fun way to iterate on C64 code.
These two perspectives are not incompatible. 49 here and still love programming. But only discovered that after quitting my Google job and spending a year working on my own things. Then housework wasn't getting done because I was writing code instead, and I realized I just love doing it, still, and I'm a far far better programmer than I ever was 20 years ago. I can do things I only dreamt of back then. And faster!
But that's not the same thing as enjoying writing the dreck that many employers want, and keeping up with their endless stack of messy JIRA issues, planning meetings, poor design docs, and management shenanigans...
I sometimes think that big corporations pay more because the actual work there sucks more for an engineer (and likely for a manager or a salesperson, too).
Same age. Got started on a ZX-81 and a university mainframe.
I still enjoy writing code or shall we say solving problems via code. I still get excited about new things. I'm also a manager and I enjoy helping others. What I enjoy less is the politics.
Building things is fun, I don't think this goes away, it was always fun and is still fun.
Good point. It’s the problem solving.
Thing is I’ve solved so many problems, over the years at different companies, that there aren’t many new ones. Obviously, I can knock out the code quickly to the surprise of many. It’s just experience and I’m not a magician.
Like yourself, I enjoy helping others, younger coders in my case, work through their problems.
I guess that’s why I keep having to switch teams to pull them out of the quagmire they’ve gotten themselves into.
The thing is ... The industry needs us. It's making a mess all over and valorizing complexity and novelty. Constantly. Programmers with experience in our age range have, I think, a better sense of how to manage this and encourage simplicity (partially out of necessity). But age and novelty bias in our industry means this knowledge doesn't pass on.
It's tough to tell younger engineers that have cut their teeth swimming in intricacies and edge cases and integration nightmares and constantly surfing on the edge of chaos, and managing it, that they're likely contributing to the problem, not fixing it. But someone needs to.
I can't remember details like I used to; things mark&sweep out of my brain much faster than they used to. (Probably not just because I'm older but because, as a parent, home owner, and spouse... I just have a lot to manage on top of it.) But... really... a good system, a well-built system, should be resilient to that, and people with experience... that's hopefully what we build.
On the other hand, you have to guard against being that person who is in a perpetual state of "Been there, done that. Didn't work the last 5 times we tried it." Because sometimes the circumstances/market/tech ecosystem genuinely are different.
The key is in understanding why the previous times failed. What constraints existed then which possibly no longer exist now.
Projects fail for many reasons. Technical, market, capital, time and so on. But things change. Building an add-on for electric cars would likely fail 20 years ago, again 10 years ago. But now? Or 10 years from now?
Only by -really- understanding what caused a project to fail can you determine if that barrier is no longer in place. Which means you can try again, and potentially find the next barrier or success.
I have been lucky enough to have been the youngest person on every team until my mid 30s. Over the course of my career I worked with some truly gifted engineers who had almost no ego; they just were much older than me.
When I reflect I do cringe a bit at what I was zealous about and things I took way too far. But, I do think the discussion, sometimes debate, around the fancy/new vs tried/true resulted in much better results.
Now that I am old, but not that old, the younger engineers who are passionately discovering new tools and “new” design patterns keep me interested in software development. Being able to share where things come from then we can compare/contrast together. It is rarely a straight copy and it’s fun to see how things get better/worse with reinvention.
So, I think trying to get a mix of ages on a team is really beneficial. Passionate young engineers help prevent the old engineers from getting too jaded.
I too was always the youngest person on every team... until I wasn't, and it seemed like I went from youngest to oldest in a blink and I still can't figure out how that can happen.
I got into the industry during the .com boom with no degree, without finishing university, so kind of jumped the queue, age-wise, I guess.
And yes, I often cringe in remembrance of past-self. I cringe at present self, too, though :-)
I'm in the retirement age bracket. My last experience as a consultant led to disgust with valorizing quantity of work as measured by an arbitrary metric susceptible to simple gaming.
That, and prioritizing "new" features over maintenance because the former were booked as CapEx work, thus amortizable, while maintenance was booked as OpEx, combined with companies wanting to minimize CapEx ratio for accounting purposes.
Notice how little of what is measured and managed has anything to do with building working software to satisfy user needs?
I'm in a similar position (in my 50s with a family to support). For the most part I can get my boring corporate work done fairly quickly. Then I spend some time each day working on personal programming projects where I get my true satisfaction.
For personal projects I make heavy use of LLMs now and coding is still fun when I do it with the latest and greatest tools. I'm about 5X as productive as I would be if I had to crank out code myself.
I'm used to verifying code with a compiler/interpreter and a unit test -- not by going through my code line by line and declaring to an interviewer "yes I think it's correct". My way of doing things is to just run the damn thing with the right tests and it will tell me if something is wrong.
Unfortunately job interviews these days are still hellbent on whiteboarding Leetcode problems. I'm past that. Unfortunately they aren't. It's this kind of BS -- not being allowed to use the best tools that exist -- that makes me not want to code for work anymore.