Student here: I legitimately cannot understand how senior developers can dismiss these LLM tools when they've gone from barely stringing together a TODO app to structuring and executing large-scale changes across entire repositories in 3 years. I'm not a singularitarian, but this looks like a brutal S-curve we're heading into. I also have a hard time believing that there is enough software need to keep such an extreme productivity multiplier from being catastrophic to labor demand.
Are there any arguments that could seriously motivate me to continue with this career outside of just blind hope that it will be okay? I'm not a total doomer, currently 'hopium' works and I'm making progress, but I wish my hopes could at least be founded.
Accountants thought spreadsheets would kill their profession, instead demand for them exploded.
Compilers made it much easier to code compared to writing everything in Assembly. Python made it much easier to code than writing C. Both increased the demand for coders.
Code is a liability, not an asset. The fact that less technical people and people who are not trained engineers can now make useful apps by generating millions of lines of code is also only going to increase the need for professional software engineers.
If you're doing an HTML or even React boot camp, I think you'd be right to be a bit concerned about your future.
If you're studying algorithms and data structures and engineering best practices, I doubt you have anything to worry about.
I've seen it already. A small-business owner (one-man show) friend of mine with zero developer experience was able to solve his problem (very custom, business-specific data -> calendar management) in a rough way using ChatGPT. But it got past about 300 lines and really started to get bad. He'd put dozens of hours of weekend time into getting it to where it was by using ChatGPT over and over, but eventually it stopped being able to make the highly specific changes he needed. He came to me for some help and I was able to work through it a bit as a friend, but the code quality was bad and I had to say "to really do this, I'd need to consult, and there's probably a better person to hire for that."
He's muddling along, but now that he's getting value out of it he's looking for low-cost devs to contract with on it. And I suspect that kind of story will continue quite a bit as the tech matures.
Don't you think that this tech can only get better? And that there will come a time in the very near future when the programming capabilities of AI improve substantially over what they are now? After all, AI writing 300 line programs was unheard of a mere 2 years ago.
This is what I think GP is ignoring. Spreadsheets couldn't do every task an accountant can do, so they augmented accountants' capabilities. Compilers don't have the capability to write code from scratch, and Python doesn't write itself either.
But AI will continually improve, and spread to more areas that software engineers were trained on. At first this will seem empowering, as it will aid us in writing small chunks of code, or code that can be easily generated like tests, which it already does. Then this will expand to writing even more code, improving in accuracy, debugging, refactoring, reasoning, and in general, being a better programming assistant for business owners like your friend than any human would be.
The concerning thing is that this isn't happening on timescales of decades, but of years and months. Unlike GP, I don't think software engineers will exist as they do today in a decade or two. Everyone will need to become a machine learning engineer and work directly on training and tweaking the work of AI; then, once AI can improve itself, it will become self-sufficient and humans will only program as a hobby. Humans will likely be forbidden from writing mission-critical software in the health, government and transport industries. Hardware engineers might be safe for a while after that, but not for long either.
You are extrapolating from the huge improvements we saw 1-2 years ago. Performance improvements have flatlined. Current AI predictions remind me of the self-driving car hype from the mid-2010s.
Multimodality, MoE, RAG, open source models, and robotics have all seen massive improvements in the past year alone. OpenAI's Sora is a multi-generational leap over anything we've seen before (not released yet, granted, but it's a real product). This is hardly flatlining.
I'm not even in the AI field, but I'm sure someone can provide more examples.
Ironically, Waymo's self-driving taxis were launched in several cities in 2023. Does this count?
I can see AI skepticism is as strong as ever, even amidst clear breakthroughs.
You make claims of massive improvements, but as an end user I have not experienced them. With the amount of fake and cherrypicked demos in the AI space, I don't believe anything until I experience it myself.
No because usage is limited to a tiny fraction of drive-able space. More cherrypicking.
Just because you haven't used text generation with practically unlimited context windows, insight extraction from personal data, massively improved text-to-image, image-to-image and video generation tools, and ridden in an autonomous vehicle, doesn't mean that the field has stagnated.
You're purposefully ignoring progress, and gating it behind some arbitrary ideals. That doesn't make your claims true.
No. The progress is not being ignored. Normal people just have a hard time getting excited for something that is not useful yet. What you are doing here is the equivalent of popular science articles about exciting new battery tech - as long as it doesn’t improve my battery life, I don’t care. I will care once it hits the shelves and is useful to me, I do not care about your list of acronyms.
I was arguing against the claim that progress has flatlined, and when I gave concrete examples of recent developments that millions of people are using today, you've now shifted the goalpost to "normal" people being excited about it.
But sure, please tell me more about how AI is a fad.
You are seeing enemies where there are none. I am merely commenting on AI evangelists insisting I have to be psyched about every paper and poc long before it ever turns into a useable product that impacts my life. I don’t care about the internals of your field, nobody does. Achieve results and I will gladly use them.
We've entered the acceleration age
I don't know about the rest of the developers in the world but my dream come true would be a computer that can write all the code for me. I have piles of notebooks and files absolutely stuffed with ideas I'd like to try out but being a single, measly human programmer I can only work on one at a time and it takes a long time to see each through. If I could get a prototype in 30 seconds that I could play with and then have the machine iterate on it if it showed promise I could ship a dozen projects a month. It's like Frank Zappa said "So many books, so little time."
If that turns out to be the case, then within a finite and fairly short amount of time all your ideas will already have a wide range of implementations and variations, because everybody will do the same as you.
It's like how LLMs are now on the way to taking over (or destroying) content on the web and posts on social media, letting anyone create anything so fast that the incentive to put manual labor into a piece of content is becoming irrelevant in some ways. You work for days to write and publish a blog post, and in the same time thousands of blog posts are published alongside yours, fighting for the attention of the same audience, who might just stop reading completely because there is so much similar material.
Honestly, I don't care if other people are creating similar things to what I am. Actually, I prefer it when more people are working on the same things, because it means there are other people I can talk to about them and collaborate with. Even if I don't want to work with others on a project, I'm not discouraged by other implementations existing; there's always something I would want to do differently from what's out there. That's the whole point of building my own things, after all: if I were happy with whatever bog-standard app I could find on the web, why would I need to build it? It isn't just about making it my own, either. It's also about the fun of diving into the guts of a system and seeing how things work. Having a machine capable of producing that code gives me the fantastic ability to choose only the parts I'm interested in for a deep dive, while skipping the boring stuff I've done 1000 times before. I don't have to type the code if I don't want to; I can just talk to the computer about the implementation details and explore various options with it. That in itself is worth the time spent, and the awesome side effect is that you get a new toy to play with too.
AI can't write its own prompts. You get 10k people using the same prompt who actually need 5,000 different things.
No improvements to AI will let it read vague speakers’ minds. No improvement to AI will let it get answers it needs if people don’t know how to answer the necessary questions.
Information has to come from somewhere to differentiate 1 prompt into 5000 different responses. If it’s not coming from the people using the AI, where else can it possibly come from?
If people using the tool don’t know how to be specific enough to get what they want, the tool won’t replace people.
s/the tool/spreadsheets
s/the tool/databases
s/the tool/React
s/the tool/low code
s/the tool/LLMs
What makes you say that? One model can write the prompts of another, and we have seen approaches combining multiple models, and models that can evaluate the result of a prompt and retry with a different one.
No, but it can certainly produce output until the human decides it's acceptable. Humans don't need to give precise guidance, or answer technical questions. They just need to judge the output.
I do agree that humans currently still need to be in the loop as a primary data source, and validators of the output. But there's no theoretical reason AI, or a combination of AIs, couldn't do this in the future. Especially once we move from text as the primary I/O mechanism.
I agree with your point, just want to point out that models have been trained on AI generated prompts as synthetic data.
There is a bit of a fallacy in here. We don’t know how far it will improve, and in what ways. Progress isn’t continuous and linear, it comes more in sudden jumps and phases, and often plateaus for quite a while.
Fair enough. But it's just as much of a fallacy to assert that it won't improve, no?
The rate of improvement in the last 5 years hasn't stopped, and in fact has accelerated in the last two. There is some concern that it's slowing down as of 2024, but there is such a historically high amount of interest, research, development and investment pouring into the field that it's more reasonable to expect further breakthroughs than not.
If nothing else, we haven't exhausted the improvements from just throwing more compute at existing approaches, so even if the field remains frozen, we are likely to see a few more generational leaps still.
I predict the rate of progress in LLMs will diminish over time, whereas the difficulty of an LLM writing an accurate computer program will go up exponentially with complexity and size. Is an LLM ever going to be able to do what, say, Linus Torvalds did? Heck, I've seen much less sophisticated software projects than that which are still hard to imagine an LLM doing.
On the lower end, while Joe Average is going to be able to solve a lot of problems with an LLM, I expect more bugs will exist than ever before because more software will be written, and that might end up not being all that terrible for software developers.
This makes sense to me. When updating anything beyond a small project, keeping things reliable and maintainable for the future is 10x more important than just solving the immediate bug or feature. Short-term hacks add up and become overwhelming at some point, even if each individual change seems manageable at the time.
I used to work as a solo contractor on small/early projects. The most common job opportunity I encountered was someone who had hired the cheapest offshore devs they could find, seen good early progress with demos and POCs, but over time things kept slowing down and eventually went off the rails. The codebases were invariably a mess of hacks and spaghetti code.
I think the best historical comp to LLMs is offshore outsourcing, except without the side effect of lifting millions of people out of poverty in the third world.
Then it means you can use the matured tech to build a superb service in one day. And improve it the next day.
A large amount of the problems in the world don't require computer programs over a few hundred lines long to solve, so LLMs will still see use by DIY types.
People may underestimate how difficult it is for an LLM to write a long or complex computer program, though. It makes sense that LLMs do very well at pumping out boilerplate, leetcode answers, or trivial programs, but it doesn't necessarily follow that they would be that good at writing complex, sophisticated, and unique custom software. That may in fact be much further away than a lot of people anticipate, in a "self-driving is just around the corner" kind of way.
People also forget that code is formal logic describing algorithms to a computer, which is just a machine. And because it's formal, it's rigid and not prone to manipulation. Instead of using LLMs you'd be better off studying a book and adding some snippets to your editor. What I like about Laravel is its extensive use of generators: they know that part of the code will be boilerplate. The nice thing about Common Lisp is that you can make the language itself generate the boilerplate.
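To illustrate what deterministic generation buys you (a toy Python sketch, not how Laravel's generators or Lisp macros actually work; the field spec and output shape are made up):

    # Boilerplate from a declarative spec: the same input always produces
    # exactly the same code, with nothing hallucinated.
    FIELDS = {"title": "str", "done": "bool"}

    def generate_model(name, fields):
        lines = [f"class {name}:"]
        params = ", ".join(f"{f}: {t}" for f, t in fields.items())
        lines.append(f"    def __init__(self, {params}):")
        for f in fields:
            lines.append(f"        self.{f} = {f}")
        return "\n".join(lines)

    print(generate_model("Todo", FIELDS))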
You start by talking about apples and finish talking about cars.
Could you explain what you mean by that idiom?
A way that I like to describe something like this is that code is long form poetry with two intended audiences: your most literally minded friends (machines), and those friends that need and want a full story told with a recognizable beginning/middle/end (your fellow developers, yourself in the future). LLMs and boilerplate generators (and linters and so many other tools) are great about the mechanics of the poetry forms (keeping you to the right "meter", checking your "rhymes" for you, formatting your whitespace around the poem, naming conventions) but for the most part they can't tell your poem's story for you. That's the abstract thing of allegory and metaphor and simile (data structures and the semantics behind names and the significance of architecture structures, etc) that is likely going to remain the unique skill set of good programmers.
This is what I've been predicting for over a year: AI-assisted programming will increase demand for programmers.
It may well change how they do their work though, just like spreadsheets did for accountants and compilers did for the earliest generation of hand-code-in-ASM developers. I can imagine a future where we do most of our coding at an even higher level than today and only dive down into the minutia when the AI isn't good enough or we need to fix/optimize something. The same is true for ASM today-- people rarely touch it unless they need to debug a compiler or (more often) to write something extremely optimized or using some CPU-specific feature.
Programming may become more about higher level reasoning than coding lower level algorithms, unless you're doing something really hard or demanding.
Exactly this.
When was the last time you wrote assembler?
A crucial difference from the other examples is this: compilers and spreadsheets are deterministic and repeatable, and are designed to solve a very specific task correctly.
LLMs, certainly in their current form, aren't.
This doesn't necessarily contradict what you and GP are writing, but it does give a flavor to it that I expect to be important.
This is such a well-written, thoughtful, and succinct comment. It is people like you and input like this that make HN such a wonderful place. Had OP (of the comment you responded to) posted this on Twitter or Reddit, they would probably have been flooded with FUD-filled nonsense.
This is what the newcomers need. I've been saying something similar to new software engineers over the past couple of years and could never put it the way you did.
Every single sentence is so insightful and to the point. I love it. Thank you so much for this.
I strongly agree with your points and sentiment, at least for as long as machine intelligence remains non-general.
Currently, LLMs summarize, earlier systems classified, and a new system might do some other narrow piece of intelligence. If a system is created that thinks and understands and is creative with philosophy and ideas, that is going to be different. I don't know if that is tomorrow or 100 years from now, but it is going to be very different.
Well said. Assistive technology is great when it helps developers write the "right" code, and tremendously destructive to company value otherwise.
Hard to know without the benefit of hindsight if a productivity improvement is:
1: An ATM machine - which made banks more profitable, so banks opened up more of them and drew people into the bank with the machines, then sold them insurance and investments.
2: Online banking - which simply obsoleted the need to go to the bank at all.
My inclination is that LLMs are the former, not the latter. I think the process of coding is an impediment to software development being financially viable, not a source of job security.
The hardest part of software development is not writing code, full stop. It never has been and it never will be. The hard part is designing, understanding, verifying, and repairing complex systems. LLMs do not do this, even a little bit.
I guess my worst fear is not "no more jobs because AI can code" but "no more junior jobs because AI can code under the supervision of a senior". SWE jobs will exist, but only seniors will have them and juniors are never hired. Maybe the occasional "apprentice" will be brought on, but in nowhere near the same amount.
Where my blind hope lies more specifically is in networking into one of those "apprentice" roles, or maybe a third tech Cambrian explosion enabled by AI allows me to find work in a new startup. I don't want to give up just yet.
You're posting under a thread where many seniors are discussing how they don't want this because it doesn't work.
You cannot make a model understand anything. You can help a person understand something. You can accomplish that with a simple conversation with a junior engineer.
I will never make GPT-4 or whatever understand what I want. It will always respond with a simulacrum that looks and sounds like it gets what I'm saying, but it fundamentally doesn't, and when you're trying to get work done, that can range from being annoying to being a liability.
Many artists and illustrators thought AI art would never threaten their livelihood because it did not understand form, it completely messed up perspective, it could never draw hands, etc. Look at the state of their industry now. It still doesn't "understand" hands but it can sure as hell draw them. We're even getting video generation that understands object permanence, something that didn't seem possible just over a year ago when the best we got were terrible low quality noisy GIFs with wild inconsistencies.
Many translators thought AI would never replace them, and then Duolingo fired their entire translation team.
I'm sure that GP isn't worried about being replaced by GPT-4. They're worried about having to compete with a potentially much better GPT-5 or 6 by the time they graduate.
On that note, for anyone who hasn't run into the "looks like it understands, but doesn't" issue, here's a simple test case to try it out: Ask ChatGPT to tell you the heights of two celebrities, and ask it which would be taller if they were side-by-side. Regenerate the response a few times and you'll get responses where it clearly "knows" the heights, but also obviously doesn't understand how a taller-shorter comparison works.
A tough pill to swallow that I think a lot of students and very junior engineers fail to realize is that bringing on a new grad and/or someone very junior is quite often a drain on productivity for a good while. Maybe ~6 months for the "average" new grad. Maybe AI exacerbates that timeline somewhat, but engineering teams have always hired new grads with the implicit notion that they're hiring to have a productive team member 6 months to a year down the line, not day one.
Might I suggest Andrei Alexandrescu's CppCon 2023 Closing Keynote talk: https://www.youtube.com/watch?v=J48YTbdJNNc
The C++ standard library has a number of generic algorithms that are very efficient, like decades of careful revision from the C++ community efficient. With the help of ChatGPT Andrei makes major improvements to a few of them. At least right now these machines have a truly impressive ability to summarize large amounts of data but not creativity or judgement. He digs into how he did it, and what he thinks will happen.
He isn't fearmongering; he's just one coder producing results. He does lay out some concerns, but at least for the moment the industry needs junior devs.
Unless there's some major breakthrough and AIs are able to gain judgement and reasoning capabilities, I don't think they'll be taking junior jobs any time soon.
Seniors don’t grow on trees, and they all were juniors at some point. And juniors won’t become seniors by only typing AI chat prompts. I wouldn’t fear.
I dropped out of school and went into startups with my first full-time gig in March, 2000. Managed to make that one last a few years, but whoooo boy that was a tough time to be a junior-to-mid developer looking for a job. I even went back to school with plans to go to medical school (yet I'm still a developer 20 years later.)
Being a junior is rough, landing those first few gigs, no doubt about it. It didn't get any better with the advent of code schools, which pretty much saturated the entry level market. But, if you stick it out long enough and keep working on learning, you'll acquire enough skills or network to land that first gig and build from there.
I wouldn't freak out about AIs—they're not going to take all the jobs. They're a tool (and a good one, sometimes.) Learn to use it that way. Learning a good tool can easily accelerate your personal development. Use it to understand by asking it to summarize unfamiliar code, to point you in the right direction when you're writing your own code, but don't have it write code you don't understand (and probably can't, because it doesn't work as written.)
Give it a few years, things will generally work out. Make a plan to be resilient in the meantime and keep learning and you'll be fine.
How many students / people early in career would benefit from having something to help them explore ideas?
How many don't have the advantages I had, of a four-year university, with professors and TAs and peers to help me stumble through something tricky?
How many have questions they feel embarrassed to ask their peers and mentors because they might make them look stupid?
Don't give up. This is a generational opportunity to lift up new developers. It's not perfect (nothing is). But if we sweat hard enough to make it good, then it is our chance to make a dent in the "why are there not more ______ people in tech" problem.
You can not have seniors without juniors.
Interacting with computers (and therefore creating software) will probably soon detach itself from the idea of single characters and the traditional QWERTY keyboard.
Computing is entering a fascinating phase, I'd stick around for it.
If you're a junior looking for a job, it's always a tough time. Getting your first gig is insanely brutal (college career fairs help a lot). That said, I wouldn't give up and blame AI for "taking our jerbs". I would say the current macroeconomic conditions with higher interest rates have reduced the amount of developer headcount companies can support. AKA companies are risk averse right now, and juniors are a risk (albeit a relatively low cost).
If I were in your shoes, I would just stop consuming the doom and gloom AI content, and go heads down and learn to build things that others will find useful. Most importantly, you should be having fun. If you do that you'll learn how to learn, have fun, build a portfolio, and generally just be setting yourself up to succeed.
IMHO juniors who rely on AI to write code will never learn. You need to make mistakes to learn, and AI never makes mistakes even when it’s wrong.
As a senior, I write code 25% of the time, and it’s always to understand the intent of what I should fix or develop. This is something that AI will not be able to do for a long time since it cannot speak and understand what customers want.
The other 75% of my time is spent refactoring this "intent" or making sure that the business is running, and I'm accountable for it. AI will never be accountable for anything, again for a long time.
I’m scared for juniors that don’t want to learn, but I work with juniors who outsmart me with their knowledge and curiosity.
That requires seniors to adopt these AI tools. So far, I only see juniors going hard on those tools.
Less time spent writing code is more time you can spend thinking about those hard parts, no?
Yes? But it's commonly understood that reading code is harder than writing code. So why force yourself into a reading-mostly position when you don't have to? You're more likely to get it wrong.
There are other ways to decrease typing time.
I don't know about that. Maybe for kernel code or a codec. But I think most people could read (and understand) a 100 line class for a CRUD backend faster than they could write one.
It's not harder unless you write hard to read code.
LLMs make exceptionally clean code in my opinion. They don't try to be fancy or "elegant", they just spit out basic statements that sometimes (or most of the time) do what you need.
Then you _read_ what it suggests, with a skilled eye you can pretty much glance and see if it looks good and test it.
Luckily, we have perfectly behaving, feature-rich software for every complex system already. /s
I like the Primeagen's example for the simple stuff. On stream he fires up an editor and Copilot, then writes the function signature for quicksort. Copilot gets it wrong: it creates a sort function, but one worse than quicksort, though not as bad as bubble sort.
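For reference, the thing being asked for is tiny. A minimal (not in-place) Python sketch, assuming a plain quicksort(xs) signature (the stream may well have been in another language):

    def quicksort(xs):
        # Pick a pivot, then recurse on the elements on either side of it.
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        smaller = [x for x in rest if x < pivot]
        larger = [x for x in rest if x >= pivot]
        return quicksort(smaller) + [pivot] + quicksort(larger)

    print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]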
These LLMs will get better. But today they are just summarizing. They screw up fairly simple tasks in fairly obvious ways right now. We don't know if they will do better tomorrow or in 20 years. I would wager it will be just a few years, but we have code that needs to be written today.
LLMs are great for students, because they are often motivated but lack broad experience, and a summarizer will take such a person very far.
can't wait until they're good enough to screw up complex tasks in subtle ways after undercutting the pay of junior developers such that no one is studying how to program computers anymore
In the future there won't be a Spacer Guild but a Greybeard Guild that mutates greybeard developers until they're immortal and forces them to sit at a terminal manipulating ancient Cobol, Javascript, and Go for all eternity, maintaining the software underpinnings of our civilization.
The electrons must flow.
LLMs remove the most annoying bits, making it possible for one person to do three people's work. LLMs are also good at fixing minor bugs. So version upgrades, minor maintenance, and finding the right API handshakes will soon be doable for big LLMs without user supervision. Lastly, LLMs are accelerating existing tailwinds toward software commoditization. If an LLM can create a good-enough website on a low-code platform, how likely are you to hire a front-end engineer for the last 10% of excellence?
Think about how many jobs are 'build a website', 'build an app' or 'manage this integration' style roles. They are all at risk of being replaced.
I agree, but you have to write a lot of code before you become good enough to think that clearly. If juniors don't get the opportunity to work their way up to senior, then they might just never pick up the right skills. What's more likely is that CS education will undergo drastic changes, and a master's or specialization might become more of a standard requirement. But those already on the market are in for a big shock.
or getting engineers to communicate properly with each other :)
Absolutely. Copilot Workspace might not seem like it, but it's very much our first step towards tools to aid in comprehension and navigation of a codebase. I think a lot of folks have conflated "generative AI" with "writes code" when reading and understanding is a much larger part of the job
I was looking for something like this. The only thing that might change is some of your toolset, but LLMs won't change the nature of the job (which is what people seem to be thinking about).
Every single time a change like this happens, it turns out that there is in fact that much demand for software.
The distance between where we are now and the punch card days is greater than where we are now and the post-LLM days and yet we have more software developers than ever. This pattern will hold and you would need much stronger evidence than “LLMs seem like an effective productivity multiplier” for me to start to doubt it.
Also don’t forget that 80% of software development isn’t writing code. Someone is still gonna have to convert what the business wants into instructions for the LLM so it can generate Java code so the JVM can generate byte code so the runtime can generate assembly code so the the processor can actually do something.
And lastly, there are a lot of industries that won't touch LLMs for security reasons for a long time, and even more that are still writing Java 8 or COBOL and have no intention of trying out fancy new tools any time soon.
So yeah, don’t be too down in the dumps about the future of software development.
It seems like what GitHub is aiming for is a future where "what the business wants" can just be expressed in natural language, the same way you might explain to a human developer what you want to build. I would agree that right now, LLMs generally don't do well with very high-level instructions, but I'm sure that will improve over time.
As for the security concerns, I think that's a fair point. However, as LLMs become more efficient, they become easier to deploy on-prem, which mitigates one significant class of concerns. You could also reasonably make the argument that LLMs are more likely to write insecure code. I think that's true with respect to a senior dev, but I'm not so sure with junior folks.
SQL and Python are arguably the languages closest to English, and even then getting someone to understand recursion is difficult. How do you specify that some values should be long-lived? How do you specify exponential retries? Legalese tries to be as specific as possible without being formal, and even then you need a judge on a case. Maybe when everyone has today's datacenter compute power in their laptop.
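To make that concrete: even "retry with exponential backoff" hides a pile of decisions once it has to be formal. A minimal Python sketch (the retryable exception type, attempt count, delays, and jitter are all assumptions a vague prompt wouldn't settle):

    import random
    import time

    def call_with_retries(fn, max_attempts=5, base_delay=1.0, max_delay=30.0):
        # Which errors are retryable? How many attempts? How long between them?
        # None of that is in the sentence "retry with exponential backoff".
        for attempt in range(max_attempts):
            try:
                return fn()
            except ConnectionError:
                if attempt == max_attempts - 1:
                    raise  # out of attempts: give up loudly
                delay = min(max_delay, base_delay * (2 ** attempt))
                time.sleep(delay * random.uniform(0.5, 1.5))  # jitter to avoid retry storms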
Yes, but they're not English. All the concerns that you mention are ones that I think LLM development tools are aiming to eliminate from explicit consideration. Ideally, a user of such a tool shouldn't even have to have ever heard of recursion. I think we're a long way off from that future, but it does feel possible.
Have you ever actually tried getting proper, non-contradictory requirements in plain natural language from anyone?
Good luck
This is absolutely a skill in itself. It could well be the case that such a plain expression of requirements in natural language is a valuable skill that enables use of such tools in the future.
We've been there before with 4GL in many forms, they all failed on the same principle: it requires reasoning to understand the business needs and translate that into a model made in code.
LLMs might be closer to that than other iterations of technology attempting the same but they still fail in reasoning, they still fail to understand imprecise prompts, correcting it is spotty when the complexity grows.
There's a gap that LLMs can fill but that won't be a silver bullet. To me LLMs have been extremely useful to retrieve knowledge I already had (syntax from programming languages I stopped using a while ago; techniques, patterns, algorithms, etc. that I forgot details about) but every single time I attempted to use one to translate thoughts into code it failed miserably.
It does provide a lot in terms of railroading knowledge into topics I know little about, I can prompt one to give me a roadmap of what I might need to learn on a given topic (like DSP) but have to double-check the information against sources of truth (books, the internet). Same for code examples for a given technique, it can be a good starting point to flesh out the map of knowledge I'm missing.
Any other case I tried to use it professionally, it breaks down spectacularly at some point. A friend who is a PM and quite interested in all GenAI-related stuff has been trying to hone prompts that could generate him some barebones application, to explore how it could be used to enhance his skills. It's been 6 months, and the furthest he got is two views of the app and saving some data through Core Data on iOS, something that could've been done in an afternoon by a mid-level developer.
I agree that we're far off from such a future, but it does seem plausible. Although I wouldn't be surprised to find that when and if we get there, that the underlying technology looks very different from the LLMs of today.
I think that's pretty powerful in itself (the 6 months to get there notwithstanding). I expect to see such use cases become much more accessible in the near future. Being able to prototype something with limited knowledge can be incredibly useful.
I briefly did some iOS development at a startup I worked at. I started with literally zero knowledge of the platform, and what I came up with barely worked, but it was sufficient for a proof of concept. Eventually, most of what I wrote was thrown out when we got an experienced iOS dev involved. I can imagine a future where I would have been completely removed from the picture and the business folks just built the prototype on their own. Failing that, I would have at least been able to cobble something together much more quickly.
I do agree that this is their goal but I expect that expressing what you want the computer to do in natural language is still going to be done by programmers.
Similar to how COBOL is closer to natural language than assembly and as such more people can write COBOL programs, but you still need the same skills to phrase what you need in a way the compiler (or in the future, the LLM) can understand, the ability to debug it when something goes wrong, etc.
“Before LLM, chop wood, carry water. After LLM, chop wood, carry water.”
As for the security stuff, on-premise or trusted cloud deployments will definitely solve a lot of the security issues, but I think it will be a long time before conservative businesses embrace them. For people in college now, most of those who end up working at non-tech companies won't be using LLMs regularly yet.
Software engineering is not a subset of computer science; they just intersect. And as a software engineer, your job can be summarized as gathering requirements and designing a solution, implementing and verifying said solution, and maintaining the solution as requirements change. The only thing AI does now is generate code snippets. In The Mythical Man-Month, Brooks recommends spending 1/3 of the schedule on planning, 1/6 on coding, and 1/2 on testing components and systems (a quarter each). And LLMs can't even do the coding part right: what an LLM adds, you still have to review and refactor, and it would often have been faster to just do it yourself.
False. Obviously this depends on the work, but an LLM is going to get you 80-90% of the way there. It can get you 100% of the way there, but I wouldn't trust it, and you still need to proofread.
In the best of times, it is about as good as a junior engineer. If you approach it like you're pair programming with a junior dev that costs <$20/mo then you're approaching it correctly.
No. No it can't.
However amazing they are (and they are unbelievably amazing), they are trained on existing data sets. Anything that doesn't exist on StackOverflow, or is written in a language slightly more "esoteric" than JavaScript, and LLMs start vividly hallucinating non-existent libraries, functions, method calls and patterns.
And even for "non-esoteric" languages they will wildly hallucinate at every turn apart from some heavily trodden paths.
Yes it can. When the project is yet another JavaScript CRUD app, 80% isn't brand-new, never-before-seen code, but almost-boilerplate that does exist on StackOverflow, on a heavily trodden path where the LLM will get you 80% of the way there.
You've literally repeated what I said
but with yes instead of no
Nothing changed in your description compared to what I wrote. It still remains "for a well-trodden path in a well-known language with SO-level solutions it will help you, for anything else, good luck"
I'm not contradicting what you're saying, no. I'm emphasizing that the well-trodden path is the majority of the work out there, as opposed to possibly being flippant about "anything else". If I'm reading you wrong, apologies.
Serious answer to a legitimate question:
1. Good senior developers are taking the tools seriously, and at least experimenting with them to see what's up. Don't listen to people dismissing them outright. Skepticism and caution is warranted, but dismissal is foolish.
2. I'd summarize the current state of affairs as having access to an amazing assistant that is essentially a much better and faster version of google and StackOverflow combined, which can also often write code for well specified problems. From what I have seen the current capabilities are very far from "specify high-level business requirements, get full, production app". So while your concern is rational, let's not exaggerate where we actually are.
3. These things make logical errors all the time, and (not an expert) my understanding is that we don't, at present, have a clear path to solving this problem. My guess is that until this is solved almost completely human programmers will remain valuable.
Will that problem get solved in the next 5 years, or 10, or 20? That's the million dollar question, and the career bet you'll be making. Nobody can answer with certainty. My best guess is that it's still a good career bet, especially if you are willing to adapt as your career progresses. But adapting has always been required. The true doom scenario of business people firing all or most of the programmers and using the AI directly is (imo) unlikely to come to pass in the next decade, and perhaps much longer.
I think latency is the biggest reason I killed my Copilot sub after the first month. It was fine at doing busy-work, like 40%~ success rate for very very standard stuff, which is a net win of like... 3-5%. If it was local and nearly instant, I'd never turn it off. Bonus points if I could restrict the output to finishing the expression and nothing more. The success rate beyond finishing the first line drops dramatically.
Interesting, latency has always been great for me. If it's going to work, it usually has suggestions within a second or two. I use the neovim plugin though so not on the typical VS-code based path.
I also used the neovim plugin. I'm on a fiber connection in the Midwest, so that is likely a factor. Latency was on the order of 2-5s consistently, which is way more than enough to interrupt my flow.
Interesting indeed! Do you normally experience high latencies (>1s) from your connection, or is Copilot more of an outlier? I have noticed that when I travel to the Midwest I will get latencies around 70 to 90 ms rather than my current 30 ms, but it's not something I really notice too much, though that tends to be in major cities.
Most people in the midwest likely have cable or DSL internet which adds 30-50ms~ of latency out of the gate. My high speed business fiber connection to my apartment gets 28ms to github.com and 36ms to api.github.com, so it's probably not that, though it depends on where the Copilot datacenters are.
For adding and refactoring it can be a great tool. For greenfield development, it's more tricky - yesterday I sat down and started writing something new, with no context to give Copilot. Mid sourcefile, I paused to think about what I wanted to write - it spit out three dozen lines of code that I then had to evaluate for correctness and just ended up throwing away. I could have probably helped the process by writing docs first, but I'm a code first, docs second kind of guy. Totally sold on LLM written unit tests though, they are a drag to write and I do save time not writing them by hand.
It's going to be a bit before LLMs can make an app or library that meets all requirements, is scalable, is secure, handles dependencies correctly, etc, etc. Having an LLM generate a project and having a human check it over and push it in the right direction is not going to be cheaper than just having a senior engineer write it in the first place, for a while. (I could be off-base here - LLMs are getting better and better)
I'm not worried about being replaced, my bigger worry is in the mean time the bottom end falling out of the engineering market. I'm worried about students learning to program now being completely dependent on LLMs and never learning how to build things without it and not knowing the context behind what the LLM is putting out - there's definitely a local maxima there. A whole new "expert beginner" trap.
So, part of the trickiness here is that there's a few different moving pieces that have to cooperate for success to happen.
There needs to be a great UX to elicit context from the human. For anything larger than trivial tasks, expecting the AI to read our minds is not a fruitful strategy.
Then there needs to be steerability: it's not enough just to get the human to cough up context, you have to get the human to correct the model's understanding of the current state and the job to be done. How do you do that in a way that feels natural?
Finally, all this needs to be defensive against model misses: what happens when the suggestion is wrong? Sure, in the future the models will be better and correct more often. But right now, we need to design for fallibility, and make it cheap to ignore when it's wrong.
All of those together add up to a complex challenge that has nothing to do with the prompting, the backend, the model, etc. Figuring out a good UX is EXACTLY how we make it a useful tool, because in our experience, the better a job we do at capturing context and making it steerable, the more it integrates the thinking you stopped to do, the thinking that a rigorous UX should have triggered.
Yeah to be clear I think Copilot Workspace is a great start. I wonder if the future is multi-modal though. Ignoring how obnoxious it would be to anyone near me, I could foresee narrating my stream of thoughts to the mic while using the keyboard to actually write code. It would still depend on me being able to accurately describe what I want, but it might free me from having to context switch to writing docs to hint the LLM.
I mean we explored that a little with Copilot Voice :D https://githubnext.com/projects/copilot-voice/
But yeah, the important part is capturing your intent, regardless of modality. We're very excited about vision, in particular. Say you paste a screenshot or a sketch into your issue...
I don't even trust AI for tests, except for generating test cases, but even then it usually does something idiotic and I have to think up a bunch of other test cases anyways
This is where I've landed, but I'm also skeptical of totally relying on them for this.
In my personal experience, it's worked out, but I can also see this resulting in tests that look correct but aren't, especially when the tests require problem domain knowledge.
Bad tests could introduce bugs and waste time in a roundabout way that's similar to just using LLMs for the code itself.
Because we've seen similar hype before and we know what impactful change looks like, even if we don't like the impact (See: Kubernetes, React, MongoDB).
Is this actually happening? I haven't seen any evidence of that.
You can look at SWE-Agent; it solved 12 percent of the GitHub issues in their test dataset. It probably depends on your definition of large-scale.
This will get much better, it is a new problem with lots of unexplored details, and we will likely get GPT-5 this year, which is supposed to be a similar jump in performance as from 3.5 to 4 according to Altman.
This is a laughable definition of large-scale. It's also a misrepresentation of that situation: it was 12% of issues in a dataset drawn from the repositories of the top 5,000 PyPI packages. Further, "solves" is an incredibly generous term, so I'm assuming you didn't read the source or any of the attempts to use this service. Here's one where it deletes half the code and replaces network handling with a comment to handle network handling: https://github.com/TBD54566975/tbdex-example-android/pull/14...
"this will get much better" is the statement I've been hearing for the past year and a half. I heard it 2 years ago about the metaverse. I heard it 3 years ago about DAOs. I heard it 5 years about block chains...
What I do see is a lot more lies. Turns out things are zooming along at the speed of light if you only read headlines from sponsored posts.
... Wait, that's not one that they considered a _success_, is it? Like, one of the 12%?
We unfortunately have no idea what they consider a success! That's just one of the most recent ones by some random user who wanted to use the program in the real world.
Don't forget that this is marketing.
"You'll be the best cook if you buy the Mega Master Automated Kitchen Appliance (with two knives included)"
That line is marketed at me, who does not know how to cook, they're telling me I'll be almost a chef.
You'll hear Jensen say that coding is now an obsolete skill, because he's marketing the capabilities of his products to shareholders, to the press.
It might well be that in 10 years these LLMs are capable of doing really serious stuff, but if you're studying CS now, this would mean for you that in 10 years you'll be able to use these tools much better than someone who will just play with it. You'll really be able to make them work for you.
All the famous chefs didn't become famous from their cooking. They became famous because of their charisma. Jamie Oliver looked really good on camera.
AI will never be able to bullshit the way humans can.
LLMs bullshit, or hallucinate, or lie, or confabulate all day long.
Whoa, don't quit your course because of a product announcement! That'd be overreacting by a lot. Please consider these points instead!
Firstly, it's not true that LLMs can structure and execute large scale changes in entire repositories. If you find one that can do that please let me know, because we're all waiting. If you're thinking of the Devin demo, it turned out on close inspection to be not entirely what it seemed [1]. I've used Claude 3 Opus and GPT-4 with https://aider.chat and as far as I know that's about as good as it gets right now. The potential is obvious but even quite simple refactorings or changes still routinely fox it.
Now, I've done some research into making better coding AIs, and it's the case that there's a lot of low hanging fruit. We will probably see big improvements ... some day. But today the big AI labs have their attention elsewhere, and a lot of ideas are only executable by them right now, so I am not expecting any sudden breakthroughs in core capabilities until they finish up their current priorities which seem to be more generally applicable stuff than coding (business AI use cases, video, multi-modal, lowering the cost, local execution etc). Either that or we get to the point where open source GPT-4+ quality models can be run quite cheaply.
Secondly, do not underestimate the demand for software. For as long as I've been alive, the demand for software has radically outstripped supply. GitHub claims there are now more than 100 million developers in the world. I don't know if that's true, because it surely captures a lot of people who are not really professional developers, but even so it's a lot of people. And yet every project has an endless backlog, and every piece of software is full of horrible hacks that exist only to kludge around the high cost of development. Even if someone does manage to make LLMs that can independently tackle big changes to a repository, it's going to require a very clear and precise set of instructions, which means it'll probably be additive. In other words the main thing it'd be applied to is reducing the giant backlog of tickets nobody wants to do themselves and nobody will ever get to because they're just not quite important enough to put skilled devs on. Example: any codebase that's in maintenance mode but still needs dependency updates.
But then start to imagine all the software we'd really like to have yet nobody can afford to write. An obvious one here is fast and native UI. Go look at the story that was on HN a day or two ago about why every app seems so inefficient these days. The consensus reason is that nobody can afford to spend money optimizing anything, so we get an endless stream of Electron apps that abuse React and consume half a gig of RAM to do things that Word 95 could do in 10MB. Well, porting a web app to native UI for Mac or Windows or Linux seems like the kind of thing LLMs will be good at. Mechanical abstractions didn't work well for this, but if you can just blast your way through porting and re-porting code without those abstractions, maybe you can get acceptably good results. Actually I already experimented with porting JavaFX FXML files to Compose Multiplatform, and GPT-4 could do a decent job of simple files. That was over a year ago and before multimodal models let it see.
There are cases where better tech does wipe out or fundamentally change jobs, but it's not always the case. Programmer productivity has improved enormously over time without reducing employment. Often, when supply increases, demand just goes up a lot. That's the Jevons paradox. In future, even if we optimistically assume all the problems with coding LLMs get fixed, I think there will still be a lot of demand for programmers, but the nature of the job may change somewhat, with more emphasis on understanding new tech, imagining what's possible, working out what the product should do, and covering for the AI when it can't do what's needed. And sometimes just doing it yourself is going to be faster than trying to explain what you want and checking the results, especially when doing exploratory work.
So, chin up!
[1] https://news.ycombinator.com/item?id=40010488
And yet jobs are more difficult to come by than any time in recent history (regardless of skill or experience; excepting perhaps "muh AI" related roles), a seemingly universally expressed sentiment around these parts.
People usually mean FAANG jobs with the absurdly overinflated FAANG-level pay when they talk about jobs being hard to come by, to be fair.
That's because we've been here before. Be it the ERPs of the 90s-00s, the low/no-code of the 2010s, or the SaaS and chatbots of 2015, there was always hype about automating the job away. At the end of the day, most of a programmer's job is understanding the business domain, its differences and edge cases, and translating those into code. An LLM can do the latter part, the same way a compiler can turn high-level Java into assembly.
I mean, I don't disagree!
The leading coefficient of these tools successfully getting you to/near the goal is all about clearly articulating the domain and the job to be done
Ergo, it's pretty important to craft experiences that make their core mechanic about that. And that's how Copilot Workspace was designed. The LLM generating the code is in some ways the least interesting part of CW. The effort to understand how the code works, which files must be touched, how to make coordinated changes across the codebase — that's the real challenge tackled here.
But there is so much context that the LLM has no access to: implicit assumptions in the system, undocumented workflows, hard edge cases, acceptable bugs and workarounds, Peter-principle boundaries, etc. All these trade-offs need someone who understands the entire business domain, the imperfect users, the system's implementation and invariants, the company's politics and so much more. I have never encountered a single programmer, no matter their intelligence and seniority, who could be onboarded onto a project simply by looking at the code.
After programming for 35 years since I was 12 and learning everything from the transistor level up through highly abstracted functional programming, I'm a total doomer. I put the odds of programming being solved by AI by 2030 at > 50%, and by 2040 at > 95%. It's over.
Programming (automating labor) is the hardest job there is, IMHO, kind of by definition. Just like AGI is the last problem in computer science. You noticed the 3-year pace of exponential growth, and now that will compound, so we'll see exponential-exponential growth. AIs will soon be designing their own hardware and playgrounds to evolve themselves perhaps 1 million times faster than organic evolution. Lots has been written about this by Ray Kurzweil and others.
The problem with this is that humans can't adapt that fast. We have thought leaders and billionaires who know the situation in total detail: basically, late-stage capitalism becomes inevitable once the pace of innovation creates a barrier to entry that humans can't compete with. The endgame will be one trillionaire or AI owning the world, with all of humanity forced to perform meaningless work, because the entity in charge will lack all accountability. We're already seeing that now with FAANG corporations that are effectively metastasized AIs using humans as robots. They've already taken over most governments through regulatory capture.
My personal experience in this was that I performed a lifetime of hard work over about a 25 year period, participating in multiple agencies and startups, but never getting a win. So I've spent my life living in poverty. It doesn't matter how much I know or how much experience I have when my mind has done maybe 200 years of problem solving at a 10x accelerated rate working in tech - see the 40 years of work in 4 years quote by Paul Graham. I'm burned out basically at a level of no return now. I'm not sure that I will be able to adapt to delegating my labor to younger workers like yourself and AI without compromising my integrity by basing my survival on the continued exploitation of others.
I'd highly recommend that any young people reading this NOT drink the Kool-Aid. Nobody knows what's going to happen, and if they say they do then they are lying. Including me. Think of tech as one of the tools in your arsenal that you can use to survive the coming tech dystopia. Only work with and for people that you can see yourself being someday. Don't lose your years like I did, building out someone else's dream. Because the odds of failure, which were once 90% in the first year, are perhaps 99% today. That's why nobody successful pursues meaningful work now. The successful people prostitute themselves as influencers.
I'm still hopeful that we can all survive this by coming together and organizing. Loosely that will look like capturing the means of production in a distributed way at the local level in ways that can't be taken by concentrated wealth. Not socialism or communism, but a post-scarcity distribution network where automation provides most necessities for free. So permaculture, renewable energy, basically solarpunk. I think of this as a resource economy that self-evidently provides more than an endlessly devaluing and inflating money supply.
But hey, what do I know. I'm the case story of what not to do. You can use my story to hopefully avoid my fate. Good luck! <3
Hey I was feeling particularly dour when I wrote this, but not everything is doom and gloom. AI/AGI will be able to solve any/all problems sometime between 2030 and 2040. I take the alarmist position because I've been hit with bad news basically every day since the Dot Bomb and 9/11, and don't feel that I'm living my best life. But we can manifest a brighter tomorrow if we choose to:
'ChatGPT for CRISPR' creates new gene-editing tools:
https://www.nature.com/articles/d41586-024-01243-w
https://news.ycombinator.com/item?id=40205961
The intersection of AI with biology will enable us to free ourselves of the struggles of the human condition. Some (like me) are concerned about that, but others will run with it and potentially deliver heaven on Earth.
The way I see it all going is that a vanishingly small number of people, roughly 1 in 10,000 (the number of hackers/makers in society) will work in obscurity to solve the hard problems and get real work done on a shoestring budget. But we'll only hear about the thought leaders and billionaires who do little more than decide where the resources flow.
So the most effective place to apply our motivation, passion and expertise will be in severing the hold that capital has over innovation. Loosely that looks like UBI and the resource economy I mentioned, which I just learned has the name Universal Basic Services (UBS), and intentionally avoids complications from the money side being manipulated by moneyed interests:
https://en.wikipedia.org/wiki/Universal_basic_services
The idea is that by providing room and board akin to an academic setting, people will be free to apply their talents to their calling and work at an exponentially faster rate to get us to a tech utopia like Star Trek, instead of being stuck in the path we're on now towards a neofeudalist tech dystopia.
Sorry if I discouraged anyone. I truly believe that there is still hope!
We’re not dismissing them! They’re just not that good at helping us with our actual work.
I have Copilot on and it’s…fine. A marginal productivity improvement for specific tasks. It’s also great at variable names, which is probably the main reason I leave it on.
But it’s not replacing anyone’s job (at least not yet).
Copilot is great at boilerplate and as a super autocomplete.
It's also useful for recalling some API without having to open the browser and Google it.
But honestly, writing code is nowhere near the hard part of the job, so there's zero reason to fear LLMs.
No they didn't. They're still at the step of barely stringing together a TODO app, and mostly because it's as simple as copying the gazillionth TODO app from GitHub.
I’ve used copilot recently in my work codebase and it absolutely has no idea what’s going on in the codebase. At best it’ll look at the currently open file. Half the time it can’t seem to comprehend even the current file fully. I’d be happy if it was better but it’s simply not.
I did use ChatGPT, most recently today, to build a GitHub Actions YAML file based on my spec, and it saved me days of work. Not perfect, but close enough that I can fill in some details and be done. So sometimes it's a good tool. It's also an excellent rubber duck, often better than most of my coworkers. I don't really know how to extrapolate what it'll be in the future. I would guess we'll hit some kind of limit that will be tricky to get past, because nothing scales forever.
AI can't prompt itself. The machines will always need human operators.
this is completely wrong. The entire LLM system was bootstrapped by "self-supervised learning", where the training targets are derived from the data itself (predict the next token of existing text), so no human labeling is needed. It is literally self-training.
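To make that concrete, here is a tiny, purely illustrative sketch of the next-token flavour of self-supervision: the "labels" are just the input shifted by one position, so the data supervises itself (the token IDs below are made up):

    # Toy illustration of self-supervised next-token prediction.
    # The targets come from the data itself: no human labels needed.
    tokens = [464, 3290, 318, 257, 922, 3290]  # made-up token IDs

    inputs = tokens[:-1]   # the model sees each prefix position
    targets = tokens[1:]   # and must predict the following token

    for x, y in zip(inputs, targets):
        # a real training loop would score the model's prediction at x's
        # position against y with cross-entropy and backpropagate
        print(f"given {x}, predict {y}")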
Writing code is one of the less important parts of your job as a developer. Understanding what the client actually wants and turning that into code is where the real difficulty lies for most software development. Also, I doubt we'll be able to overcome the "subtly wrong/insecure" issues with a lot of LLM generated code
This is what AI bros don’t understand since they seem to spend their days writing CRUD backends for REST APIs.
You need to understand a lot of stuff before coding anything:
- client: what do you want?
- product owner: what does the client really want?
- me: what do they fucking want and how will I do it?
- QA: how will I test this cleanly so that they don't bother me all day long?
- manager: when do you want it?
- boss: how much are you willing to spend on this?
We usually say that nerds are shy and introverted, but we are central to the development of a product, and I don’t think an AI can change this.
after the first two or three times i got asked to code-review something that another developer "didn't know how to write so just asked copilot/chatgpt/etc and it produced this, could you tell me if it's right?" i got pretty tired of it. obviously it's useless to ask questions about how the code was written because the person asking for the code review didn't actually write it and they don't have any answers about why it was written how it was.
especially on the back of the xz supply chain attack and, y'know, literally any security vulnerability that slipped through code review, i refuse to have unaccountable, unreviewed code in projects i work on.
somewhat recently, there was the case with air canada's LLM-based support bot making a false statement and then a judge forcing air canada to honour it. i think we're setting the stage for something like that happening with LLM-written code – it's going to be great for a while, everyone's going to be more productive, and then we'll all collectively find out that copilot spat out a heartbleed-level flaw in some common piece of software.
Maybe it would help me, not sure. I haven't been impressed with what I've seen when team members have "run stuff through chatgpt". (I'm senior, been doing this professionally for 25 years. Made my share of mistakes, supported my code for a decade or two.)
My main issue at the moment with junior devs is them getting stuck in the weeds: chasing what they think is a syntax error, but not seeing (or hearing) that what they actually have is a lack of understanding. Some of that is experience; some of that is probably not being able to read the code and internalize what it all means, or make good test cases to exercise it.
If you can't produce the code, and have a sketchy grasp of reasoning it out, debugging it is going to be a step too far. And the AIs are (hopefully) going to be giving you things that look right, but there will be subtle bugs. This puts it in the dangerous quadrant.
Noah Smith has an economic argument that even if AI is better than humans at literally everything, we'll still have full employment and high wages: https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-...
Staff engineer here. First started writing code roughly 24 years ago, been doing it professionally in one form or another for about 18 years. Many years ago I was asked how I planned to compete - because all IT was being offshored to other countries who worked cheaper and had more training in math than I did.
I've been asked whether no-code platforms would make us obsolete. I've wondered if quantum computing would make everything we know become obsolete. Now people are wondering whether LLM tools will make us obsolete.
All these things make us more productive. Right now I'm excited by AI tools that are integrated into my IDE and offer to finish my thoughts with a stroke of the 'Tab' key. I'm also very underwhelmed by the AI tools that try to implement the entire project. You seem to be talking about the latter. For the type of coding exercises we do for fun (test-drive an implementation of Conway's Game of Life), LLMs are good at them and are going to get better. For the type of coding exercises we do for pay (build a CRUD API), LLMs are mediocre at them. They can give you a starting point, but you're going to do a lot of fiddling to get the schema and business logic right. For the type of coding exercises we do for a lot of pay (build something to solve a new problem in a new way), LLMs are pretty terrible. Without an existing body of work to draw from, they produce code that is either very wrong or subtly flawed in ways that are difficult to detect unless you are an expert in the field.
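For a sense of scale, the "for fun" category above really is small and well-trodden; here is a minimal sketch of one Game of Life step, the kind of thing an LLM reproduces easily because countless versions of it exist in its training data:

    # One generation of Conway's Game of Life over a set of live (x, y) cells.
    from collections import Counter

    def step(live):
        # count live neighbours for every cell adjacent to a live cell
        counts = Counter(
            (x + dx, y + dy)
            for x, y in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # a cell lives if it has 3 neighbours, or 2 and is already alive
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    print(step(glider))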
Right now, they are best used as a productivity enhancer that's autocomplete on steroids. Down the road they'll continue to offer better productivity improvements. But it's very unlikely they will ever (or at least in our lifetimes) entirely replace a smart developer who is an expert in their field. Companies know that the only way to create experts is to maintain a talent pipeline and keep training junior developers in hope that they become experts.
Software development has continued to grow faster than we can find talent. There's currently no indication of LLMs closing that gap.
No engineer worth their salt (whether junior, mid or senior) should be concerned. And this is a good illustration why: https://news.ycombinator.com/item?id=40200415
Either way you're going to want to have a backup career plan. By 40 if not earlier you could be forced out of tech by ageism or something else. Unless you transition into management, but even then. I don't think middle management is going to be immune to AI-enabled employment destruction. So you basically should plan to make most of your money from software in the first decade or two. Live cheap and save/invest it.
Best to plan and train early because it's super hard to switch careers mid-life. Trust me, I'm failing at it right now.
Having written and sold machine learning software for over 15 years, I can say you are definitely overreacting. There is about a 10-year pattern. Every 10 years AI gets drummed up as the next big thing. It is all marketing hype. It always fails.
This happened in the 70s, 80s, 90s, 00s, 10s, and now the 20s. Without fail. It is a hyped up trend.
Only be concerned when someone is presenting a real breakthrough in the science (not the commercial aspect). A real breakthrough in the science will not have any immediate economic impact.
Convolutional neural networks are absolutely not revolutionary over the prior state of the art. These are incremental gains in model accuracy at the cost of massive data structures. There is no real leap up here in the ability for a machine to reason.
ChatGPT is just snake oil. Calm down. It will come and go.
FWIW, as someone oldish, so far everything that has been significantly impacted by deep learning has undergone a lot of change, but hasn't been destroyed. Chess and Go are a couple of easy examples; the introduction of powerful machine learning there has certainly changed the play, but younger players that have embraced it are doing some really amazing things.
I would guess that a lot of the same will happen in software. A lot of the scut work will evaporate, sure, but younger devs will be able to work on much more interesting stuff at a much faster pace.
That said, I would only recommend computing as a career to youth that are already super passionate about it. There are some pretty significant cultural, institutional, and systemic problems in tech right now that are making it a miserable experience for a lot of people. Getting ahead in the industry (where that means "getting more money and more impressive job titles") requires constantly jumping on to the latest trends, networking constantly for new opportunities, and jumping to new companies (and new processes / tech stacks) every 18 months or so. Companies are still aggressively culling staff, only to hire cheaper replacements, and expectations for productivity are driving some developers into really unhealthy habits.
The happiest people seem to be those that are bringing practical development skills into other industries.
If you really think it's over, pivot into business. As someone who is all in on AI assistance, there's simply too much curation for me to do to think it's replacing people any time soon, and that's just with coding, which is maybe 20% of my job these days. I will say Copilot/ChatGPT helped reduce that 30% to 20% pretty quickly when I first started using it. Now I largely take suggestions instead of actually typing it out. When I do have to type it out, it's always nice to have years of experience.
Said it before, will say it again: it's a multiplier, that's it.
These tools make people who know software engineering massively more productive.
Given the choice between an LLM-assisted non-engineer and an LLM-assisted experienced software engineer, I know who I would want to work with - even if the non-engineer was significantly cheaper.
AI _will_ take jobs. It's a matter of when, not if. The real question is whether that will occur in the next 10/50/100 years.
It might not happen in your lifetime, but as you've noted the rate of progress is stunning. It's possible that the latest boom will lead to a stall, but of course nobody knows.
IMO it's way too hard to predict what the consequences will be. Ultimately the best thing you can do are to continue with your degree, and consider what skills you have that an AI couldn't easily replicate. e.g. no matter how good AI gets, robotics still has a ways to go before an AI could replace cooks, nurses, etc.
I have yet to see an LLM build and debug something complex. Barely stringing together TODO apps, sure.
You still need a competent human to oversee it. Hallucinations are a serious problem. Without symbolic reasoning, LLMs quickly start to fall apart, between context limits and not being able to pin down what exactly is wrong and needs to be changed.
A simple analogy to this: cloud computing.
Cloud computing has boomed over the last ~15 years, and by some measures continues to do so, from AWS to Firebase to VPS providers like Linode.
The promise, in part, was that it would replace the need for certain roles, namely system administrators and, depending on which technologies you adopted, good chunks of backend engineers.
Yet what happened was that roles shifted. System administration became DevOps, and backend engineers learned to leverage the tools to move faster and provide value elsewhere, namely in designing systems that are stable and well interconnected, and in developing efficient schema representations of data models, among other things.
The reality today is that I can buy an entire backend; I can even buy a backend that will automatically stand up API endpoints in GraphQL or REST (or both!). Even so, the demand for backend engineers hasn't shrunk dramatically (if anything, it has seemingly increased).
Technologies enable things in unforeseen ways all the time. Whether LLMs will displace a lot of tech workers is up for debate, and the reality is that for some, at least, they will. But overall, if we take the closest situations from the past as a guide, they will increase the demand for software engineers over time, as LLMs paired with humans have thus far worked best that way, and I foresee that continuing to be the case, much like accountants + Excel is better than accountants - Excel.
Don't fret. What people call AI these days is just a gigantic, economically unsound bullshit generator (some underlying ideas might be valuable, though) that passed very stupid tests. It is brutally marketed, like blockchain & crypto, by some sociopaths from Silicon Valley, their mini-mes, and a middle-management hell that needs to double down on bad investments.
The bigger problem I see is the economic situation.
First, they are definitely not currently as capable as you say. Second, there is a misconception that the rise of LLMs has been exponential, but the curve is really logistic and we've hit the flat tail hard, IMO. Where is ChatGPT5? All the coding AI tools I've tried, like Copilot, either haven't gotten better since release or have seemingly gotten worse as they try to fine-tune them. Third, there is a ton more to being a software engineer than writing ReactTodoAppMVCDemo, which many responses here have been talking about.
I've been writing software professionally for 25 years and I am absolutely not dismissing them, on the contrary.
We are currently in a window where LLMs are helpful but nothing more, making them a great tool. I suspect that will last for a good while and probably turn me into more of a "conductor" in time: instructing my IDE something like "let's replace this pattern with this other one", and having it create a PR for me that changes many files in one go. But I see absolutely no reason why the evolution shouldn't continue to the point where I just need to tell it what I want from a user perspective.
Nobody really dismisses LLMs as not being useful. I've been a developer for 15 years and LLMs help a ton with coding, system design, etc. My main piece of advice for students is: make sure that your heart is in the right place. Tech isn't always an easy or secure field. You have to love it.
In my experience these tools continue to be really bad at actually translating business needs, that is, specific logic for handling and processing information and specific desired behaviors. They're good at retrieving general patterns and templates, and at transforming existing code in simple ways. My theory is that translating the idea of how some software is supposed to function into working code requires a robust world model and a capacity for deep thought and planning. Certainly we will create AI capable of these things eventually, but when that happens, "did I make a mistake in studying computer science?" will not be your greatest concern.
The opposite, we see these tools as mechsuits to help developers, and particularly newer developers, to do things that they would struggle to do.
Power tools did not result in fewer buildings built. I mean I guess some early skyscrapers did not benefit from modern power tools. But I don't think any construction company today is like "nah we'll just use regular saws thanks".
The allergy to hype is real; I don't think this or any tool is a magic wand that lets you sit back and just click "implement". But the right UX can help you move along the thought process, see solutions you might not have gotten to faster, and iterate.
It is hard to find direct comparisons as the tech is truly novel, but I have heard people say we don’t need to learn math because your calculator can do “university level math”. I don’t know how close that argument is to yours, but there is some overlap.
Your calculator can indeed do fancy math, but you will not be able to do anything with it because you do not understand it.
This is like fancying yourself an engineer because you constructed an IKEA cupboard or an automotive expert because you watched Youtube.
Anything an amateur can come up with is blown to pieces by an actual expert in a fraction of the time and will be of considerably higher quality. The market will go through a period of adjustment as indeed the easy jobs will be automated, but that makes the hard jobs even harder, not easier.
Once you automate the easy stuff, the hard stuff remains.
Basically:
Expert + AI > Amateur + AI
LLMs are autocomplete with context; for simple tasks they do OK, but for complex tasks they get lost and produce crap.
Give an example of "structuring and executing large-scale changes in entire repositories". Let's see the complexity of the repository along with what it structured and executed.
Ever been frustrated with a piece of software? Or wished for some software to exist to solve a problem? Well, just point these nifty tools at it and watch the solutions magically materialize. That will comfort you somewhat after being bummed out over obsolescence. But if you find you can't materialize the desired software in quick time, then... I guess, at least for now, humanity is still required.
The market still desperately needs engineers. We’re still at a point in supply/demand where experienced engineers are making 2-3x national median salaries. It’s tougher for juniors to land the most lucrative positions, but there are still tons of jobs out there. The more money you accumulate early in your career, the more time that money has to grow. Interest rates are high, so it’s a great time to be saving money.
Also, the skills you learn as an engineer are highly transferable, as you learn problem solving skills and executive function - many top CEOs have engineering backgrounds. So if you do need to pivot later in your career, you’ll be set up for success
Because the problem is not necessarily coding.
90% of the market is just doing CRUD, and every year there's a new magical website that will make all websites be built with a WYSIWYG drag-and-drop editor.
The problem is defining the correct requirements from the start and iterating on them.
My concern is not the death of the market, but the amount of not-good-but-workable code that's going to make juniors' learning path a lot harder.
As others have said, I do think this will help productivity by removing the "let's please update the readme, changelog, architecture diagram, etc." part of the codebase, and maybe in some cases actually remove the need to generate boilerplate code altogether (why bother when it can be generated on the fly when needed, for example).
It's a new tool that really works well in some ways and falls on its face in others. My advice: learn the tool and how/when to use it, and become an expert. You'll be in a better place than many "seniors" and your peers by having a large toolset and knowing each tool very well. Also, be careful believing the hype. Some specific cases can make for incredible videos, and those are going to be everywhere. Other use cases really show the weaknesses, but those demos will be much harder to find.
Well, I've been out of work for over a year now, and it's the third time in 10 years. People will say that advances in tooling create more work, but ultimately more work is created when there's more money flowing around, and when that money is invested in tooling to eke out productivity gains, which will continue. But will it outpace how much we're padding out the bottom of the funnel? Will so much more money start flowing around, for a good reason, that it matches how many people can do the work?
It's also worth considering that if you finished school prior to 2020 and started trying to tackle the brutal fight upstream that software development already was, why the hell would it be worth it? For... the passion? For... the interest in technical stuff? Quite frankly, in a tech career, you need to get quite lucky with timing, skill, perception of your own abilities and how they relate to what you're paid to do, and if you have the ability to be passably productive at it, it's at least worth considering other paths. It may end up comfy, or it may end up extremely volatile, where you're employed for a bit and then laid off, and then employed, and laid off, and in-between you end up wondering what you've done for anyone, because the product of your labor is usually at-best ephemeral, or at-worst destructive to both the general population and your mind and body; waking up and going straight over to your computer to crank out digital widgets for 8 hours might seem lovely, but if it's not, it's isolating and sad.
Also worth considering are the tax changes in the U.S. that have uniquely made it more difficult to amortize the cost of software development, but I don't claim to understand all that yet as a non-US person.
Programming languages are a formal way to deliver instructions to a machine. Software is a set of instructions whether you use Python or some visual language. LLMs are just another way of generating those instructions. You still need someone who knows how those instructions work together logically and how to build the result using best practices. You can't escape that no matter what you use (Python, some no-code tool, or LLMs).
So in that sense, the role is not going anywhere anytime soon. The only thing that could change is how we make software (but even that is unlikely to change much anytime soon).
Because they don't work? I've been harsh on these LLM models because every time I have interacted with them, they've been a giant waste of time. I've spent hours with them, and it just goes nowhere, and there's a huge amount of noise and misinformation.
I recently had another round where I tried to put aside my existing thoughts and perhaps biases and ran a trial of Copilot for a couple of days, using it all day on real tasks. Nearly every single piece of code it gave me was broken, and I was using Python. I was trying to use it for a popular Python library whose documentation was a bit terse. It was producing code from various versions of the library's API mixed together, and nothing it gave me ran. We ended up just going in circles, where it had no idea what to do. I was asking something as simple as "here's a YAML file, write me Python code to read it in" (of course in more detail and simple steps). It couldn't do it. I eventually gave up and just read the documentation and used StackOverflow.
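For reference, the baseline being asked for there is genuinely tiny; here is a minimal sketch, assuming the third-party PyYAML package is installed (the file name and contents below are made up purely for illustration):

    # Write a small YAML file, then read it back in.
    import yaml  # pip install pyyaml

    with open("example.yaml", "w") as f:
        f.write("name: demo\nretries: 3\nhosts:\n  - alpha\n  - beta\n")

    with open("example.yaml") as f:
        config = yaml.safe_load(f)  # YAML becomes plain dicts/lists/scalars

    print(config["name"], config["retries"], config["hosts"])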
About the only thing I have been able to use it for so far with relatively consistent success is to write boilerplate code. But even then, it feels like I'm using more time than just doing it myself.
And that happens a lot with this stuff. I initially got very excited about Copilot because I thought, shit I was wrong about all this, this is useful. But after that wore off, I saw it for what it is. It's just throwing a bunch of statistically correlated things at me. It doesn't understand anything, and because of that, it gets in the way.
First of all, yes, this is a provocative prompt that bears engagement. You're right to be concerned.
I share your frustration with the reticence of seasoned engineers to engage with these tools.
However, "structuring and executing large-scale changes in entire repositories" is not a capability that is routinely proven out, even with SOTA models in hellaciously wasteful agentic workflows. I only offer a modest moderation. They'll get there, some time between next week and 2030.
Consider: some of the most effective engineers of today cut their teeth writing assembly, fighting through strange dialects of C, or otherwise throwing themselves against what are now incontestably obsolete technologies, and in doing so honed their engineering skills to a much higher degree than their comrades who glided in on Java's wing.
Observe that months of hand-sculpted assembly have turned into a single Python call. AI is yet another tier of abstraction.
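As a hedged illustration of that abstraction claim (assuming NumPy is installed): an optimized matrix multiply, once the domain of hand-tuned assembly, is now one call that dispatches to a BLAS routine someone else wrote and tuned.

    import numpy as np

    a = np.random.rand(512, 512)
    b = np.random.rand(512, 512)
    c = a @ b  # one line standing in for months of low-level work
    print(c.shape)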
Another lens is application -- AI for X domain, for X group, for X age, for X culture. Lots to do there.
Finally, there's empowerment. If this technology is so powerful, do you concede that power to others? Or are you going to be a part of the group that ensures it benefits all?
FYI, OpenAI published a labor market study suggesting professions that are more or less exposed to AI. Take a look.
I've been working on the same product for 6 years now, on a team of people who have been working on it for 8 years. It is ridiculous how hard it is for us to design a new feature for the product that doesn't end up requiring the entirety of the deep knowledge we have about how existing features in the product interact with each other. That's not even getting into the complex interplay between the client and the backend API, and differences in how features are described and behave between the client and server. It's, in some ways, a relatively straightforward product. But the devil is in the details, and I find it hard to believe LLMs will be able to build and possess that kind of knowledge. The human brain is hard to beat deep down, and I think it betrays a pessimism about our overall capability when people think LLMs can really replace all the things we do.
My two cents: I worked in a different engineering field before transitioning to Software Engineering because "coding" was and is what we need to solve problems, and I got the hang of it. A few years in, I spend little of my day actually writing code but more time in meetings, consoles, documentation, logs, etc. Large language models (LLMs) help when writing code, but it's mostly about understanding the problem, domain, and your tools. When going back to my old area, I am excited about what a single person can do now and what will come, but I am also hitting walls fast. LLMs are great when you know what you are doing, but can be a trap if you don't and get worse and worse the more novel and niche you go.
They're asymptotic to human performance.
Not enough context to tell if YOU should worry.
Anyways, in general, I wouldn't worry because if we get to a point where software can replace human software engineers then almost everyone else will be without a job soon after (think bug-free software being produced exponentially for every market niche).
It seems to me we never make less of something when we make it more efficiently. The opposite seems true.
Sure, if one clung to writing "code" on binary punch cards instead of adopting assembly, then that person would have been redundant after a while. Today some people still write assembly, but the vast majority use higher-level languages. LLMs will probably be the next step on the abstraction ladder. If you think of yourself as a <insert programming language> programmer then, yeah, you should worry. Current programming languages will be obsolete, in my opinion. Letting LLMs write code and then reading/changing it is a very short-term (and doomed) trend. You don't read the compiler-written assembly of your <insert programming language> programs, do you? Almost no one cares how a piece of software works unless it's slow and/or needs to be modified. Software programming will get to a much higher level of abstraction (think modules, integrations). So much so that everyone could do it. The same way everyone could be a plumber, but almost no one is. A plumber is paid to suffer under the sink and get covered in dirty water and/or crap, something you don't want to deal with. Sure, a business owner could get an LLM to write their brand-new idea of the day, but they won't. They will pay someone else to do it, and you'll be that someone, because they'd rather handle other business stuff or enjoy the money they're making. On top of all that, consider that the amount of new things people could do will also increase exponentially. If we lived like our ancestors we could do nothing all day (and probably that's how we look to their eyes), but we don't; we actually get busier and busier.
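A small illustration of the "you don't read the compiler's output" point above: CPython compiles your functions to bytecode, a lower-level form almost nobody ever looks at, much like compiler-generated assembly.

    import dis

    def add(a, b):
        return a + b

    dis.dis(add)  # prints the bytecode; most developers never read this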
Dude, once you work in industry you will realize that LLMs ain't coming for your job any time soon. The job, for many software engineers, is primarily soliciting and then translating domain-expert requirements into technical requirements. The coding is just wrapping things up.
LLMs are useless for this.
Simple.
It's the 90%, 10% theory.
LLMs will do the 90% that is easy; the final 10% they'll get wrong, and they will insist their solutions are correct.
If anything this is horrible for junior level developers. A senior dev now has a restless junior developer at their whim.
As far as your own career, I'd argue to finish your degree, but be aware things are about to get really rough. Companies don't like headcount. Even if it's not true today, in the future AI + 1 senior engineer will be faster than 4 juniors + 1 senior.
I haven't seen any evidence that these systems are capable of structuring and executing large-scale changes in entire repositories; but given you're still a student, your definition of large might be different.
The middle of an S-curve looks like an asymptote, which is where we're at right now. There's no guarantee that we'll see the same kind of exponential growth we saw over the past three years again. In fact, there's a ton of reason to believe that we won't: models are becoming exponentially more expensive to train; the internet has been functionally depleted of virgin training tokens; and chinks in the armor of AI's capabilities are starting to dampen desire for investment in the space.
Everyone says "this is the worst they'll be", stated as a fact. Imagine it's 2011 and you're running Windows 7. You state: "This is the worst Windows will ever be." Software is pretty unpredictable. It does not only get better. In fact, software (which absolutely includes AI models) has this really strange behavior of fighting for its life to get worse and worse unless an extreme amount of craft, effort, and money is put into grabbing the reins and pulling it back from the brink, day in, day out. Most companies barely manage to keep quality at a constant level, let alone increase it.
And that's traditional software. We don't have any capability to truly judge the quality of AI models. We basically just give each new one the SAT and see the score go up. We can't say for certain that they're actually getting better at the full scope of everything people use them for; a feat we can barely accomplish for any traditionally observable software system. One thing we can observe about AI systems very consistently, however, is their cost: And you can bet that decision makers at Microsoft, Anthropic, Meta, whoever, obsess about that just as much if not more than capability.