
A coder considers the waning days of the craft

miiiiiike
228 replies
9h5m

I have a simple front-end test that I give to junior devs. Every few months I see if ChatGPT can pass it. It hasn’t. It can’t. It isn’t even close.

It answers questions confidently but with subtle inaccuracies. The code that it produces is the same kind of nonsense that you get from recent bootcamp devs who’ve “mastered” the 50 technologies on their eight-page résumé.

If it’s gotten better, I haven’t noticed.

Self-driving trucks were going to upend the trucking industry in ten years, ten years ago. The press around LLMs is identical. It’s neat but how long are these things going to do the equivalent of revving to 100 mph before slamming into a wall every time you ask them to turn left?

I’d rather use AI to connect constellations of dots that no human possibly could, have an expert verify the results, and go from there. I have no idea when we’re going to be able to “gpt install <prompt>” to get a new CLI tool or app, but it’s not going to be soon.

ern
107 replies
8h47m

I was on a team developing a critical public safety system on a tight deadline a few years ago, and I had to translate some wireframes for the admin back-end into CSS. I did a passable job but it wasn’t a perfect match. I was asked to redo it by the team lead. It had zero business value, but such was the state of our team…being pixel perfect was a source of pride.

It was one of the incidents that made me stop doing front-end development.

As an exercise, I recently asked ChatGPT to produce similar CSS and it did so flawlessly.

I’m certainly a middling programmer when it comes to CSS. But with ChatGPT I can produce stuff close to the quality of what the CSS masters do. The article points this out: middling generalists can now compete with specialists.

miiiiiike
36 replies
8h43m

I’d need to see it.

I can’t get ChatGPT to outperform a novice. And now I’m having candidates argue that they don’t need to learn the fundamentals because LLMs can do it for them. Good luck to the HTML/CSS expert who couldn’t produce a valid HTML5 skeleton. Reminds me of the pre-LLM guy who said he was having trouble because he usually uses React. So I told him he could use React. I don’t mean to rag on novices but these guys really seemed to think the question was beneath them.

If you want to get back into front-end read “CSS: The Definitive Guide”. Great book, gives you a complete understanding of CSS by the end.

flir
19 replies
8h1m

I'm always asking it to stitch together ad hoc bash command lines for me, eg "find all the files called *.foo in directories called bar and search them for baz".

(`find / -type d -name 'bar' -exec find {} -type f -name '*.foo' \; | xargs grep 'baz'` apparently.)

I would have done that differently, but it's close enough for government work.

BlueTemplar
8 replies
7h17m

Except you should at least try to write code for someone else (probably someone with a lower level of competence - this also helps with your own debugging later) - obscure one-liners like these should be rejected.

techbuttman
3 replies
6h34m

The lower level person need only plug that one liner into chatGPT and ask for a simple explanation.

We're in a different era now.

maegul
1 replies
6h5m

Yep! It’s something some aren’t seeing.

The AI coding assistant is now part of the abstraction layers over machine code. Higher level languages, scripting languages, all the happy paths we stick to (in bash, for example), memory management with GCs and borrow checkers, static analysis … now just add GPT. Like mastering memory management and assembly instructions … now you also don’t have to master the fiddly bits of core utils and bash and various other things.

Like memory management, whole swathes of programming are being taken care of by another program now, a Garbage Collector, if you will, for all the crufty stuff that made computing hard and got in between intent and assessment.

Nullabillity
0 replies
17m

The difference is that all of them have theories and principles backing them, and we understand why they work.

LLMs (and "AI" in general) are just bashing data together until you get something that looks correct (as long as you squint hard enough). Even putting them in the same category is incredibly insulting.

Nullabillity
0 replies
21m

So now the same tool can generate both the wrong script and the wrong documentation!

radiator
0 replies
6h46m

To me, the only obscure thing about this is that it is a one-liner.

If you write it in three lines, it is fine. Although, I guess the second find and the grep could be shortened, combined into one command.

flir
0 replies
6h48m

I wouldn't call it obscure, just bog standard command line stuff. How would you have done it?

emidln
0 replies
3h32m

If the someone elses on my team can't read a short shell pipeline then I failed during interviewing.

Zababa
0 replies
6h58m

Assuming there is a comment just above the one-liner saying "find all directories named 'bar', find all files named '*.foo' in those directories, search those files for 'baz'", this code is perfectly clear. Even without the comment, it's not hard to understand.

LaGrange
4 replies
5h0m

This is funny to me, because I would _always_ use -print0 and xargs -0, and for good reasons, I believe. But if you base your entire knowledge on what you find online, then yes, that's what you get - and what _most people will get too_. Also, I can still update that command if I want.

So it's not any worse than the good old "go to stack overflow" approach, but it still benefits from experience.

FYI, this is the correct, as-far-as-I-can-tell "good" solution:

  find . -type d -name 'bar' -print0 | \
    xargs -0 -I{} find {} -type f -name '*.foo' -print0 | \
    xargs -0 grep -r baz

This won't choke on a structure like this (`ls -R` output):

  .:
  bar  foo

  ./bar:
  test.foo  'test test.foo'

  ./foo:
  bar  bleb.foo

  ./foo/bar:

lukeschlather
2 replies
2h10m

Using better languages like Powershell or Python becomes a lot more valuable here. I definitely think bash is going to be mostly useless in 5 years, you'll be able to generate legible code that does exactly what you want rather than having to do write-only stuff like that. Really we're already there. I've long switched from bash to something else at the first sign of trouble, but LLMs make it so easy. Poorly written python is better than well-written bash.
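To make that concrete, here's a rough sketch (mine, standard library only, not LLM output) of the same task as the one-liner upthread - find files named '*.foo' under directories named 'bar' and print lines containing 'baz' - in Python:

  from pathlib import Path

  # Walk the tree once, keeping only directories named 'bar'.
  for bar_dir in Path('.').rglob('bar'):
      if not bar_dir.is_dir():
          continue
      # Then look at every *.foo file underneath each of them.
      for foo_file in bar_dir.rglob('*.foo'):
          if not foo_file.is_file():
              continue
          for line in foo_file.read_text(errors='replace').splitlines():
              if 'baz' in line:
                  print(f'{foo_file}: {line}')

Longer, but every step reads in order and it's easy to modify.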

Of course, LLMs can generate go or rust or whatever so I suspect such languages will become a lot more useful for things that would call for a scripting language today.

emporas
0 replies
47m

I would generalize even more and say that any scripting language is going to be deprecated very soon, like Python etc. They are going to be replaced by safe, type-checked, theorem-proved verbose code, like Rust or something similar.

What do I care how many lines of code are necessary to solve a problem, if all of them are gonna be written automatically? 1 line of Bash/awk versus 10 lines of Python versus 100 lines of Rust? Are they any different to one another?

boredtofears
0 replies
6m

> I definitely think bash is going to be mostly useless in 5 years

I'll take that bet

LaGrange
0 replies
2h59m

...actually

  find . -path '*/bar/*.foo' -print0 | xargs -0 grep baz

;-) no regex, no nested stuff, much shorter. My brain went back to it ;-)

sgarland
1 replies
4h14m

  $ find . -type f -regex '.*bar/[^/]*.foo' -exec grep baz {} +
I wonder if you created a GPT and fed it the entirety of Linux man pages (not that it probably didn't consume them already, but perhaps this weights them higher), if it would get better at this kind of thing. I've found GPT-4 is shockingly good at sed, and to some extent awk; I suspect it's because there are good examples of them on SO.

flir
0 replies
4h10m

If SO had known to block GPTBot before it was trained, GPT4 would be a lot less impressive.

candiddevmike
1 replies
4h14m

You can use gpt for government work??

flir
0 replies
4h9m

Shh. What they don't know won't hurt me.

(Serious answer: it's just an expression: https://grammarist.com/idiom/good-enough-for-government-work...).

SavageBeast
0 replies
5h14m

Same here! That's the main use I have for ChatGPT in any practical sense today - generating Bash commands. I set about giving it prompts to do things that I've had to do in the past - it was great at it.

Find all processes named '*-fpm' and kill the ones that have been active for more than 60 seconds - then schedule this as a Cron job to run every 60 seconds. It not only made me a working script rather than a single command but it explained its work. I was truly impressed.
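For a sense of the shape of it, a stripped-down reconstruction (mine, using the psutil package; not the actual output it gave me) looks something like:

  import time
  import psutil

  MAX_AGE_SECONDS = 60

  # Kill any process whose name ends in '-fpm' and that has been
  # running for longer than 60 seconds.
  for proc in psutil.process_iter(['name', 'create_time']):
      try:
          name = proc.info['name'] or ''
          age = time.time() - proc.info['create_time']
          if name.endswith('-fpm') and age > MAX_AGE_SECONDS:
              proc.kill()
      except (psutil.NoSuchProcess, psutil.AccessDenied):
          continue

  # Scheduling is then a single crontab line, e.g.:
  # * * * * * /usr/bin/python3 /usr/local/bin/kill_stale_fpm.py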

Yes it can generate some code wireframes that may be useful in a given project or feature. But I can do that too, usually in about the time it'd take me to adequately form my request into a prompt. Life could get dangerous in a hurry if product management got salty enough in the requirements phase that the specs for a feature could just be dropped into some code assistant and generate product. I don't see that happening ever though - not even with tooling - product people just don't seem to think that way in the first place in my experience.

As developers we spend a lot of our time modifying existing product - and if the LLM knows about that product - all the better job it could do I suppose. Not saying that LLMs aren't useful now and won't become more useful in time - because they certainly will.

What I am saying is that we all like to think of producing code as some mystical gift that only we as experienced (BRILLIANT, HANDSOME AND TALENTED TOO!!) developers are capable of. The reality is that once we reach a certain level of career maturity, if we were ever any good in the first place, writing code becomes the easiest part of the job. So there's a new tool that automates the easiest part of the job? OK - autocomplete code editors were cool like that too. The IDE was a game changer too. Automated unit tests were once black magic too (remember when the QA department was scared of this?).

When some AI can look at a stack trace from a set of log files, being fully aware of the entire system architecture, locate the bug that compiled and passed testing all the way to production, recommend, implement, test and pre-deploy a fix while a human reviews the changes then we're truly onto something. Until then I'm not worried that it can write some really nice SQL against my schema with all kinds of crazy joins - because I can do that too - sometimes faster - sometimes not.

So far ChatGPT isn't smarter than me, but it is a very dutiful intern that does excellent work if you're patient and willing to adequately describe the problem, then make a few tweaks at the end. "Tweaks" up to and including seeing how the AI approached it, throwing it out, and doing it your own way.

ern
5 replies
8h32m

Just out of curiosity: is the code generated by ChatGPT not what you expected, or is it failing to produce the result that you wanted?

I suspect you mean the latter, but just wanted to confirm.

miiiiiike
4 replies
8h23m

The statements are factually inaccurate and the code doesn’t do what it claims it should.

meiraleal
3 replies
7h31m

Right. That's an experience completely different from the majority here that have been able to produce code that integrates seamlessly into their projects. Do you have any idea why?

I guess we should start with which version of ChatGPT you are using.

BlueTemplar
2 replies
7h10m

ChatGPT might as well not exist - I'm not touching anything GAFAM-related.

Any worthy examples of open source neural networks, ideally not from companies based in rogue states like the US ?

meiraleal
1 replies
6h24m

Falcon is developed by a UAE tech arm, not sure if you would consider it a rogue state or not: https://falconllm.tii.ae/

BlueTemplar
0 replies
5h57m

What is «tech» supposed to mean here ? Infocoms ?

The United Arab Emirates ? Well, lol, of course I do, that's way worse than the US.

Cthulhu_
4 replies
7h53m

I haven't practiced or needed to use the fundamentals in literal years; I'm sure I'd fumble some of these tests, and I've got err, 15 years of experience.

It's good to know the fundamentals and be able to find them IF you find a situation where you need them (e.g. performance tuning), but in my anecdotal and limited experience, you're fine staying higher level.

soco
1 replies
6h35m

I had a chilling experience of late when, out of curiosity, I tried the actual online practice exam for driving school. Boy did I fail it. I realized that there are quite some road signs I never saw in my life, and more importantly, that my current solution to all their right-of-way questions is "slow down and see what the others do" - not even that wrong if I think about it, but it won't get you points in the exam.

collyw
0 replies
4h52m

And I suspect you would be a lot less likely to be involved in a crash than someone who had just passed the test.

worksonmine
0 replies
3h59m

There are levels of fundamentals though. Since the parent mentioned HTML/CSS/React I guess they're referring to being able to create a layout by hand vs using a CSS framework/library. You don't need to know how a CPU works to fix a CSS issue, but if all you know is combining the classes available you'll have trouble with even the simplest web development.

Everyone should know enough fundamentals to be able to write simple implementations of the frameworks they depend on.

dragonelite
0 replies
7h23m

Kind of the same sort of situation, but I do like to refresh some of the fundamentals every 3~4 years or so. Usually when I do a job hop.

It's kind of like asking an Olympic sprinter how to walk fast.

j45
2 replies
6h10m

Sometimes it requires expert guidance to get something meaningful out.

Art9681
1 replies
4h10m

This is the correct answer. I have 23 years of experience in datacenter ops and it has been a game changer for me. Just like any tool in one's arsenal, its utility increases with practice and learning to use it correctly. ChatGPT is no different. You get out of it what you put into it. This is the way of the world.

I used to be puzzled as to why my peers are so dismissive of this tech. Same folks who would say "We don't need to learn no Kubernetes! We don't need to code! We don't need ChatGPT". They don't!

And it's fine. If their idea of a career is working in same small co. doing the same basic Linux sysadmin tasks for a third of the salary I make then more power to them.

The folks dismissive of the AI/ML tech are effectively capping their salary and future prospects in this industry. This is good for us! More demand for experts and less supply.

You ever hire someone that uses punch cards to code?

Neither have I.

kevindamm
0 replies
3h28m

I think it's more akin to using compilers in the early days of BCPL or C. You could expect it to produce working assembly for most code but sometimes it would be slower than a hand-tuned version and sometimes a compiler bug would surface, but it would work well enough most of the time.

For decades there were still people who coded directly in assembly, and with good reason. And eventually the compiler bugs would be encountered less frequently (and the programmer would get a better understanding of undefined behavior in that language).

Similar to how dropping into inline assembly for speeding up execution time can still have its place sometimes, I think using GPT for small blocks of code to speed up developer time may make some sense (or tabbing through CoPilot), but just as with the early days of higher level programming languages, expect to come across cases where it doesn't speed up DX or introduces a bug.

These bugs can be quite costly, I've seen GPT spit out encryption code and completely leave out critical parts like missing arguments to a library or generating the same nonce or salt value every execution. With code like this, if you're not well versed in the domain it is very easy to overlook, and unit tests would likely still pass.
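To make that failure mode concrete, here's a minimal Python sketch (standard library only, my illustration rather than the generated code I saw) of the fixed-salt bug versus the correct pattern:

  import hashlib
  import os

  def hash_password_buggy(password: str) -> bytes:
      # The kind of mistake described above: a constant salt, so every
      # run (and every user with the same password) gets the same hash.
      salt = b'\x00' * 16
      return hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)

  def hash_password(password: str) -> tuple[bytes, bytes]:
      # Correct: a fresh random salt per password, stored alongside the hash.
      salt = os.urandom(16)
      return salt, hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)

Both versions "work", and a round-trip unit test passes for either - only the second is safe.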

I think the same lesson told to young programmers should be used here -- don't copy/paste any code that you do not sufficiently understand. Also maybe avoid using this tool for critical pieces like security and reliability.

vidarh
1 replies
8h10m

Requirements vary. It certainly can't produce really complex visual designs, or code a designer would be very happy with, but I have a hobby project work in progress where GPT-4 has produced all of the CSS and templates. I have no doubt that the only reason that worked well is that it's a simple design of a type there are about a billion of in its training set, and that it'd fall apart quickly if I started deviating much from that. But it produced both clean CSS and something nicer looking than I suspect I would have managed myself.

A designer would probably still beat it - this doesn't compete with someone well paid to work on heavily custom designs. But at this point it does compete with places like Fiverr for me, for things I can't or don't want to do myself. It'll take several iterations for it to eat its way up the value chain, but it probably will.

But also, I suspect a lot of the lower end of the value chain, or at least part of them, will pull themselves up and start to compete with the lower end of the middle by figuring out how to use LLMs to take on bigger, more complex projects.

miiiiiike
0 replies
8h1m

This meshes pretty well with my experience.

lucideer
35 replies
8h27m

> I recently asked ChatGPT to produce similar CSS and it did so flawlessly.

I use ChatGPT every day for many tasks in my work and find it very helpful, but I simply do not believe this.

> The article points this out: middling generalists can now compete with specialists.

I'd say it might allow novices to compete with middling generalists, but even that is a stretch. On the contrary, ChatGPT is actually best suited to use by a specialist who has enough contextual knowledge to construct targeted prompts & can then verify & edit the responses into something optimal.

steveBK123
25 replies
5h10m

That's about my experience.

The worst dev on my team uses ChatGPT a lot, and it's facilitated him producing more bad code more quickly. I'm not sure it's a win for anyone, and he's still unlikely to be with the team in a year.

It allows a dev who doesn't care about their craft or improving to generate code without learning anything. The code they generate today or a year from today is the same quality.

Part of it is that it allows devs who lean into overcomplicating things to do so even more. The solutions are never a refinement of what already exists, but patch on top of patch on top of patch of complexity. ChatGPT is not going to tell you how to design a system, architect properly, automate, package, test, deploy, etc.

For the team it means there's a larger mess of a code base to rewrite.

happycube
13 replies
4h27m

On the flip side, one can use ChatGPT as only a starting point and learn from there. One isn't stuck with actually using what it outputs verbatim, and really shouldn't until at least a hypothetical GPT-6 or 7... and to use it fully now, one has to know how to nudge it when it goes into a bad direction.

So overall it's more an amplifier than anything else.

steveBK123
3 replies
4h6m

If the results were more akin to a google or stack overflow where there was a list of results with context.. sure.

But people are using the singular response as "the answer" and moving on..

JadeNB
1 replies
2h27m

> If the results were more akin to a google or stack overflow where there was a list of results with context.. sure.

I don't think the history of the usage of either shows that most people make any use of that context.

happycube
0 replies
1h49m

Especially these days you have to know how to use/read Google and SO results too.

(And I should have said ChatGPT-4 earlier; if you're a bad to mediocre developer taking ChatGPT-3.5 literally you'll probably wind up in a Very Bad Place.)

thebigspacefuck
0 replies
2h48m

Phind is a bit more like this

lucideer
3 replies
4h17m

> So overall it's more an amplifier than anything else.

Overall it would be an amplifier if that were how the majority used it. Sadly I don't believe that to be the case.

aswanson
2 replies
2h53m

That's been the case with every technology made by man since fire.

lucideer
1 replies
1h40m

Yup. We should embrace it, but without being naïve about what great things it's bringing us :)

jjgreen
0 replies
1h8m

Ouch! Hot!

komali2
2 replies
3h14m

I have a lot of juniors floating around my co-op and when I watch them use chatgpt it seems it becomes a dependency. In my opinion it's harming their ability to learn. Rather than thinking through problems they'll just toss every single roadblock they hit instantly into a chatgpt prompt.

To be fair, I've been doing the same thing with simple mathematics, tossing it into a calculator in my browser, to the point that I'm pretty sure I'd fail at long division by hand.

Maybe it won't matter in a few years and their chatgpt skills will be well honed, but if it were me in their position I wouldn't gamble on it.

steveBK123
1 replies
2h11m

Yeah that's my larger point about the guy and the pattern. It doesn't lead to growth. I've seen zero growth whatsoever.

And he slacks coworkers like he is talking to ChatGPT too, slinging code blobs without context, example input data, or the actual error he received..

raducu
0 replies
19m

> Yeah that's my larger point about the guy and the pattern. It doesn't lead to growth. I've seen zero growth whatsoever.

If Chat GPT can solve a problem consistently well, I don't think it's worth the effort to master it.

My examples are regexes, command lines to manipulate files, kafka/zookeeper commands to explore a test environment.

For me it's a big win in that regard.

raducu
0 replies
25m

> nudge it when it goes into a bad direction

It has happened a few times for me that ChatGPT gets stuck in a bullshit loop and I can't get it unstuck.

Sure I could summarise the previous session for Chat GPT and try again, but I'm too tired at that point.

chrisfinazzo
0 replies
32m

This has been my experience as well - for repetitive things, if what you're looking for is the shitty first draft, it's a way to get things started.

After that, you can shape the output - without GPT's help - into something that you can pull off the shelf again as needed and drop it into where you want it to go, because at that point in the process, you know it works.

whynotminot
2 replies
1h41m

> ChatGPT is not going to tell you how to design a system, architect properly, automate, package, test, deploy, etc.

If you ask the right questions it absolutely can.

I’ve found that most people thinking ChatGPT is a rube are expecting too much extrapolation from vague prompts. “Make me a RESTful service that provides music data.” ChatGPT will give you something that does that. And then you’ll proceed to come to hacker news and talk about all the dumb things it did.

But if you have a conversation with it - tell it more of the things you're considering, some of the trade-offs you're making, how the schema might grow over time - it's kind of remarkable.

You need to treat it like a real whiteboarding session.

I also find it incredibly useful for getting my code into more mainstream shape. I have my own quirks that I’ve developed over time learning a million different things in a dozen different programming languages. It’s nice to be able to hand your code to ChatGPT and simply ask “is this idiomatic for this language?”

I think the people most disappointed with ChatGPT are trying to treat it like a Unix CLI instead of another developer to whiteboard with.

raducu
0 replies
28m

> But, if you have a conversation with it

It can't give you a straight answer, or it hallucinates APIs. It can't tell you "no, this cannot be done"; it tries to "help" you.

For me it's great for writing simple isolated functions, generating regexes, command line solutions, exploring new technologies, it's great.

But after making it write a few methods or classes, it just gets extremely tedious to make it add/change code, to the point I just write it myself.

Further, when operating at the edge of your knowledge, it also leads you on, whereas a human expert would just tell you "aaah, but that's just not possible/not a good idea".

cbozeman
0 replies
45m

This has been my experience as well.

Every person I've noticed who says that ChatGPT isn't good at what it does has the same thing in common - they're not great at talking to people, either.

Turns out when you train an AI on the corpus of human knowledge, you have to actually talk to it like a human. Which entirely too many people visiting this website don't do effectively.

ChatGPT has allowed me to develop comprehensive training programs for our internal personnel, because I already have some knowledge of training and standardization from my time in the military, but I also have in-depth domain knowledge so I can double-check what it's recommending, then course correct it if necessary.

mlinhares
1 replies
4h5m

Bless this person on your team; he is creating work out of thin air and will keep your team and possibly other teams employed for a really long time.

steveBK123
0 replies
3h30m

Exactly - and people say AI will take away jobs!

gessha
1 replies
1h58m

I have a hunch that using ChatGPT might be a skill in and of itself and it doesn’t necessarily hurt or help any particular skill level of developers.

In previous replies in this thread the claim is it helps novices compete with associates or associates with seniors, but in reality it will probably help any tier of skill level. You just have to figure out how to prompt it.

whynotminot
0 replies
1h17m

One hundred percent. Most people I’ve seen dismiss ChatGPT simply refuse to engage it appropriately. It’s not likely to solve your most complex problem with a single prompt.

Asking the right questions is such an important skill in and of itself. I think we’re seeing to some extent the old joke about engineers not knowing how to talk to people manifest itself a bit with a lot of engineers right now not knowing quite how to get good results from ChatGPT. Sort of looking around the room wondering what they’re missing since it seems quite dumb to them.

andy99
1 replies
2h37m

> The worst dev on my team uses ChatGPT a lot, and it's facilitated him producing more bad code more quickly.

This is great. The exact same is true with writing, which I think is trivial for anyone to see. Especially non-native speakers or otherwise bad writers can now write long-winded nonsense, which we're starting to see all over. It hasn't made anyone a good writer, it's just helped bad ones go faster.

ptyyy
0 replies
1m

> Especially non-native speakers or otherwise bad writers can now write long-winded nonsense

You have now described 95% of Quora's content.

JohnMakin
1 replies
2h19m

I use “chatGPT” (really bing chat which is openAI under the hood as I understand) more than anyone on my team but it is very rarely for code.

I most often use it for summarizing/searching through dense documentation, creating quick prototypes, “given X,Y,Z symptoms and this confusing error message, can you give me a list of possible causes?” (basically searches Stack Overflow far better than I can).

Anyway basically the same as I was using google when google was actually good. sometimes I will forget some obscure syntax and ask it how to do something, but not super often. I’m convinced using it solely to generate code is a mistake unless it’s tedious boilerplate stuff.

necrotic_comp
0 replies
54m

Yes, agreed. The best way of putting this is "using google when google was actually good."

jorams
5 replies
5h54m

I agree with this. There are cases where it produces good results, but there are also cases where it produces bs, and it's not always obvious. I find it to work fine for cases where I know what I want but could use a starting point, but it often invents or misunderstands all kinds of things.

The most frustrating situations are those where it invents a function that would miraculously do what's necessary, I tell it that function does not exist, it apologizes, shuffles the code around a bit and invents a different function, etc. It's the most annoying kind of debugging there is.

nkozyra
1 replies
3h56m

> There are cases where it produces good results, but there are also cases where it produces bs, and it's not always obvious.

Pessimistically, this is the medium term role I see for a lot of devs. Less actual development, more assembly of pieces and being good enough at cleaning up generated code.

If an LLM can get you even 25% there most of the time, that's a massive disruption of this industry.

zrobotics
0 replies
2h27m

I mean, especially in webdev we've been heading in that direction for a while now anyway. So much of the job is already just wiring up different npm packages and APIs that someone else has written. I've read substantially similar comments back in the mid 2010s about how people weren't learning the fundamentals and just pulling things like left-pad off of a repo. That did cause a disruption in how people coded by abstracting away many of the problems and making the job more about integrating different things together.

katzgrau
1 replies
5h13m

> there are also cases where it produces bs, and it's not always obvious

Particularly annoying because I wind up risking not actually saving time because it’s producing subtle bugs that I wouldn’t have written myself.

So, you save yourself the time of thought and research at the risk of going down new and mysterious rabbit holes

redblacktree
0 replies
50m

For me the trick to avoiding this trap is to limit usage to small areas of code, test frequently, and know its limits. I love using copilot/GPT for boilerplate stuff.

lucideer
0 replies
5h31m

Another very obvious thing it does when it comes to code is take the most common misconceptions & anti-patterns used within the programming community & repeat them in an environment where there's no-one to comment. People have critiqued Stack Overflow for having so many "wrong" answers with green checkmarks, but at least those threads have surrounding context & discussion.

A case in point: I asked ChatGPT to give me some code for password complexity validation. It gave me perfectly working code that took a password and validated it against X metrics. Obviously the metrics are garbage, but the code works, and what inexperienced developer would be any the wiser. The only way to get ChatGPT to generate something "correct" there would be to tell it algorithmically what you want (e.g. "give me a function measuring information entropy of inputs", etc.) - you could ask it 50 times for a password validator: every one may execute successfully & produce a desired UI output for a web designer, but be effectively nonsense.
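For illustration, the kind of thing I mean by spelling it out algorithmically - a naive entropy estimate sketched in Python (my own, still simplistic, and the 60-bit threshold at the end is arbitrary) rather than checkbox rules:

  import math

  def estimated_entropy_bits(password: str) -> float:
      # Work out which character classes appear, then treat the password
      # as len(password) independent draws from that pool.
      pool = 0
      if any(c.islower() for c in password):
          pool += 26
      if any(c.isupper() for c in password):
          pool += 26
      if any(c.isdigit() for c in password):
          pool += 10
      if any(not c.isalnum() for c in password):
          pool += 33  # rough count of printable ASCII punctuation
      return len(password) * math.log2(pool) if pool else 0.0

  # e.g. require estimated_entropy_bits(pw) >= 60 instead of
  # "one uppercase, one digit, one special character" rules.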

tlarkworthy
0 replies
7h58m

I wrote just the tool to optimize AI in the hands of a coding expert: https://observablehq.com/@tomlarkworthy/robocoop

schnable
0 replies
27m

> ChatGPT is actually best suited to use by a specialist who has enough contextual knowledge to construct targeted prompts & can then verify & edit the responses into something optimal.

I agree with this, but what that means is that specialists will be able to create next generation tools--across all professions including coding--that do supercharge novices and generalists to do more.

dijit
0 replies
2h47m

> ChatGPT is actually best suited to use by a specialist who has enough contextual knowledge to construct targeted prompts

This is my take also.

ChatGPT for novices is dangerous; it's the equivalent of a calculator. If you don't know your expected output you're just wrong faster.

But if you know what to expect, what your bounds are, and how to do it normally anyway, it can make you faster.

kypro
7 replies
3h15m

You may think it did a good job because of your limited CSS ability. I'd be amazed if ChatGPT can create pixel-perfect animations and transitions along with reusable clean CSS code which supports all of the browser requirements at your org.

I've seen similar claims made on Twitter by people with zero programming ability claiming they've used ChatGPT to build an app. Although 99% of the time what they've actually created is some basic boilerplate React app.

> middling generalists can now compete with specialists.

Middling generalists can now compete with individuals with a basic understanding assuming they don't need to verify anything that they've produced.

jameshart
3 replies
2h1m

Here’s the thing though:

If a new version of the app can be generated on the fly in minutes, why would we need to worry about reusability?

GPT generated software can be disposable.

Why even check the source code in to git - the original source artifact is the prompt after all.

philipwhiuk
1 replies
1h53m

Let me know how you get on with a disposable financial system, safety system or electoral voting system.

raducu
0 replies
1m

I work with java and do a lot of integration, but a looot of my effort goes into exploring and hacking away some limitations of a test system, and doing myself things that would take a lot of time if I had to ask the proper admins.

I had a problem where I was mocking a test system (for performance testing of my app) and I realized the mocked system was doing an externalUserId to internalUserId mapping.

Usually that would have been a game stopper, but instead I did a slow run, asked Chat GPT to write code that reads data from a topic and eventually create a CSV of 50k user mappings; it would have taken me at least half a day to do that, and Chat GPT allowed me to do it in 15 minutes.
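The disposable code it wrote was roughly this shape (shown here as a Python sketch with made-up topic, broker, and field names; the real thing was throwaway code in our own stack):

  import csv
  import json
  from kafka import KafkaConsumer  # kafka-python

  consumer = KafkaConsumer(
      'user-events',                      # placeholder topic name
      bootstrap_servers='localhost:9092',
      auto_offset_reset='earliest',
      consumer_timeout_ms=30_000,
  )

  seen = set()
  with open('user_mappings.csv', 'w', newline='') as out:
      writer = csv.writer(out)
      writer.writerow(['externalUserId', 'internalUserId'])
      for message in consumer:
          event = json.loads(message.value)
          pair = (event['externalUserId'], event['internalUserId'])
          if pair not in seen:
              seen.add(pair)
              writer.writerow(pair)
          if len(seen) >= 50_000:
              break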

While very little code went into my app, Chat GPT did write a lot of disposable code that did help me a lot.

blharr
0 replies
1h18m

Because in my experience GPT can produce a maximum of like 200 lines of code before it makes an error usually.

nisegami
1 replies
3h6m

> I'd be amazed if ChatGPT can create pixel-perfect animations and transitions along with reusable clean CSS code which supports all of the browser requirements at your org.

Personally, I'd be more amazed if a person could do that than if an LLM could do it.

kypro
0 replies
2h55m

Google, "UI developer".

rpmisms
0 replies
2h44m

It does great at boilerplate, so I think it's safe to say it will disrupt Java.

I've been using tabnine for years now, and I use chatGPT the same way; write my boilerplate, let me think about logic.

barrkel
5 replies
5h26m

ChatGPT goes from zero to maybe 65th percentile? There or thereabouts. It's excellent if you know nothing. It's mediocre and super buggy if you're an expert.

A big difference is that the expert asks different questions, off in the tails of the distribution, and that's where these LLMs are no good. If you want a canonical example of something, the median pattern, it's great. As the ask heads out of the input data distribution the generalization ability is weak. Generative AI is good at interpolation and translation, it is not good with novelty.

(Expert and know-nothing context dependent here.)

One example: I use ChatGPT frequently to create Ruby scripts for this and that in personal projects. Frequently they need to call out other tools. ChatGPT 4 consistently fails to properly (and safely!) quote arguments. It loves the single-argument version of system which uses the shell. When you ask it to consider quoting arguments, it starts inserting escaped quotes, which is still unsafe (what if the interpolated variable contains a quote in its name). If you keep pushing, it might pull out Shell.escape or whatever it is.
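The same pitfall exists outside Ruby. In Python terms (my own illustration, not ChatGPT output), it's the difference between these two calls:

  import subprocess

  filename = "it's a test.foo"  # a perfectly legal filename

  # The pattern the generated code tends to use: one string through the
  # shell. The stray quote breaks it, and nastier inputs become injection.
  subprocess.run(f"grep baz '{filename}'", shell=True)

  # Safer: pass an argument list; no shell involved, nothing to quote.
  subprocess.run(["grep", "baz", filename])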

I assume it reproduces the basic bugs that the median example code on the internet does. And 99% of everything being crap, that stuff is pretty low quality, only to be used as an inspiration or a clue as to how to approach something.

FrustratedMonky
3 replies
4h10m

I encountered this with a particular problem in Python. Seemed like GPT wanted to always answer with something that had a lot of examples on the web, even if most of those answers were not correct. So, a garbage in, garbage out problem. I'm a bit worried that LLMs will continue to degrade as the web gains an increasing amount of LLM generated content. Seems to already be occurring.

fkyoureadthedoc
2 replies
3h40m

Why do people who hand-wave away the entire concept of LLMs because of one instance of it doing one thing poorly (that they could do better) always seem to fail to show us their concrete example?

barrkel
0 replies
2h36m

FWIW I didn't wave away the entire concept. LLMs definitely have uses.

FrustratedMonky
0 replies
1h50m

Technically the garbage in/garbage out, problem is not being hand waved away. I've seen a lot of articles on this, or sometimes called a degrading feedback loop. The more of the web that is LLM generated, then the more new models will be trained on generated data, and will fuzz out. Or 'drift'.

For a specific example: sorry, I didn't grab screenshots at the time. It had to do with updating a dataframe in pandas. It gave me a solution that generated an error; I'd continue to ask it to change steps to fix previous errors, and it would go in a circle - fix it, but generate other warnings, then, after further changes to eliminate the warnings, recommend the same thing that originally caused the error.

Also. I'm a big fan. Use GPT-4 all the time. SO not waving away, but kind of curious how it sometimes fails in un-expected ways.

4star3star
0 replies
2h16m

I would prefer that google search didn't suck. Instead, I ask ChatGPT. The best case scenario, IMO, would be for people to lay out excellent documentation and working code and train the LLM specifically on that in a way that it can provide reference links to justify its answers. Then, I will take what it says and go directly to the source to get the knowledge as it was intended to be ingested by a human. We get a lot more value than we're initially looking for when we dive into the docs, and I don't want to lose that experience.

rixed
4 replies
8h7m

Out of curiosity, how did you pass the wireframe to chatGPT ?

ern
1 replies
7h33m

I described what I wanted. It was earlier this year..not sure if chatgpt can understand wireframes now, but it couldn’t at the time.

rixed
0 replies
6h8m

you described it with pixel accuracy ?

Cthulhu_
1 replies
7h53m

Doesn't ChatGPT support image uploads these days?

frabcus
0 replies
7h40m

Yes, but the paid-for Plus version only.

Ignore the free version, pretend it doesn't exist.

YeGoblynQueenne
2 replies
4h28m

> The article points this out: middling generalists can now compete with specialists.

They can't, and aren't even trying to. It's OpenAI that's competing with the specialists. If the specialists go out of business, the middling generalists obviously aren't going to survive either so in the long term it is not in the interest of the "middling generalists" to use ChatGPT for code generation. What is in their interest is to become expert specialists and write better code both than ChatGPT currently can, and than "middling generalists". That's how you compete with specialists, by becoming a specialist yourself.

Speaking as a specialist occupying a very, very er special niche, at that.

elif
1 replies
3h56m

It REALLY depends on the task. For instance, if you provide GPT with a schema, it can produce a complex and efficient SQL query in <1% of the time an expert could.

I would also argue that not only are the models improving, we have less than a year of practical experience interfacing with LLMs. OUR ability to communicate with them is in its infancy, and a generation that is raised speaking with them will be more fluent and able to navigate some of the clear pitfalls better than we can.

emporas
0 replies
22m

There is not much of a need for humans to get closer to the machine long term, when with new datasets for training the machine will get closer to humans. Magic keywords like "step by step" won't be as necessary to know.

One obstacle for interfacing with LLM's is the magic cryptic commands it executes internally, but that need not be the case in the future.

janosdebugs
1 replies
7h32m

> middling generalists can now compete with specialists.

They can maybe compete in areas where there has been a lot of public discussion about a topic, but even that is debatable as there are other tasks than simply producing code (e.g. debugging existing stuff). In areas where there's close to no public discourse, ChatGPT and other coding assistance tools fail miserably.

red-iron-pine
0 replies
2h5m

this be the answer. GPT is as good as the dataset it's trained off of, and if you're going by the combined wisdom of StackOverflow then you're going to have a middling time.

gumballindie
1 replies
5h58m

But if you are a middling programmer when it comes to CSS how do you know the output was “flawless” and close to the quality that css “masters” produce?

ern
0 replies
3h52m

It looked correct visually and it matched the techniques in the actual CSS that the team lead and I produced when we paired to get my layout to the standard he expected.

scythe
0 replies
3h2m

I suspect that the "depth" of most CSS code is significantly shallower than what gets written in general purpose programming languages. In CSS you often align this box, then align that box, and so forth. A lot of the complexity in extant CSS comes from human beings attempting to avoid excessive repetition and typing. And this is particularly true when we consider the simple and generic CSS tasks that many people in this thread have touted GPT for performing. There are exceptions where someone builds something really unique in CSS, but that isn't what most people are asking from GPT.

But the good news is that "simple generic CSS" is the kind of thing that most good programmers consider to be essentially busywork, and they won't miss doing it.

ryandvm
0 replies
1h59m

> middling generalists can now compete with specialists

Great point. That's been my experience as well. I'm a generalist and ChatGPT can bring me up to speed on the idiomatic way to use almost any framework - provided it's been talked about online.

I use it to spit out simple scripts and code all day, but at this point it's not creating entire back-end services without weird mistakes or lots of hand holding.

That said, the state of the art is absolutely amazing when you consider that a year ago the best AIs on the market were Google or Siri telling me "I'm sorry I don't have any information about that" on 50% of my voice queries.

pjc50
0 replies
7h23m

I would really like to see the prompts for some of these. Mostly because I'm an old-school desktop developer who is very unfamiliar with modern frontend.

philipwhiuk
0 replies
1h54m

Would you give the critical public safety system bit to ChatGPT?

This scenario reminds me of:

If a job's worth doing, do it yourself. If it's not worth doing, give it to Rimmer.

Except now it's "give it to ChatGPT"

crabbone
0 replies
5h41m

It probably depends on the area. CSS is very popular on one hand and limited to a very small set of problems on the other.

I did try asking ChatGPT about system-related stuff several times and had given up since then. The answers are worthless if not wrong, unless the questions are trivial.

ChatGPT works if it needs to answer a question that was already answered before. If you are facing a genuinely new problem, then it's just a waste of time.

city41
0 replies
5m

I'm a developer but also have an art degree and an art background. I'm very mediocre at art and design. But lately I've been using AI to help plug that gap a bit. I really think it will be possible for me to make an entire game where I do the code, and AI plus my mediocre art skills get the art side across the line.

I think at least in the short term, this is where AI's power will lie. Augmentation, not replacement.

calvinmorrison
0 replies
2h0m

> being pixel perfect was a source of pride.

Then use LaTeX and PDF. CSS is not for designing pixel perfect documents.

Cthulhu_
0 replies
7h55m

> middling generalists can now compete with specialists.

I want to say that this has been the state of a lot of software development for a while now, but then, the problems that need to be solved don't require specialism, they require people to add a field to a database or to write a new SQL query to hook up to a REST API. It's not specialist work anymore, but it requires attention and meticulousness.

markhaslam
17 replies
9h0m

If you feel comfortable doing so, would you mind sharing the front-end test you give to junior devs and ChatGPT?

miiiiiike
16 replies
8h54m

Not gonna happen. I don’t want scrapeable answers out there, I want to see ChatGPT cross this little Rubicon on its own.

Closi
12 replies
8h41m

It's not that I don't believe you, but without sharing the specific prompt it's hard to say whether it's actually GPT-4 failing, whether it's being poorly prompted, or whether the task it is being given is more complex than GPT's capabilities or than you are implying.

GPT4 does fail (often!) but fails less with good prompts, simple requirements, it is better at some frameworks and languages than others, and there is a level of total complexity which when reached, it seems to fall over.

spaceman_2020
6 replies
6h58m

My failure rate with Cursor’s IDE that’s familiar with my codebase is substantially lower than just GPT-4

Most people shitting on GPT-4 are not really using it in the right context.

aleph_minus_one
5 replies
5h43m

> Most people shitting on GPT-4 are not really using it in the right context.

Old excuse: "You're Holding It Wrong" (Apple's Response to the iPhone 4 antenna problem)

https://www.wired.com/2010/06/iphone-4-holding-it-wrong/

New excuse: "You are not using GPT-4 in the right context."

Closi
3 replies
4h58m

I'm not sure what the point is in your comparison - is your point that GPT-4 will become overwhelmingly popular with further refinement?

The iPhone was pretty successful, and the iPhone 4 was arguably the best one that had been released until that point.

aleph_minus_one
2 replies
4h47m

> is your point that GPT-4 will become overwhelmingly popular with further refinement?

My point is that people have a tendency to come up with really sketchy insults (blame the user that he uses the product in a wrong way) to people who find and can expound legitimate points of criticism of a product.

Closi
1 replies
2h47m

Eh, probably a poor example considering the iPhone 4 was hardly a flop and was still broadly considered the best smartphone out at the time. The people who thought this was a total-showstopper were, on the whole, probably wrong.

Counter-example: lots of people said an on-screen keyboard would never really work when the original iPhone was being released.

aleph_minus_one
0 replies
2h26m

> Eh, probably a poor example considering the iPhone 4 was hardly a flop and was still broadly considered the best smartphone out at the time. The people who thought this was a total-showstopper were, on the whole, probably wrong.

At least in Germany among tech nerds, the iPhone 4 and Steve Jobs became topics of insane ridicule because of this incident.

spaceman_2020
0 replies
5h38m

And then the iPhone antenna was fixed and adoption only increased and the product only became better.

You’re being unreasonably harsh on a piece of tech that is barely a year old.

gtech1
2 replies
7h58m

This is why Asimov was a genius. I read what you said, and compared it to what he wrote 50-60 years ago:

"Early in the history of Multivac, it had becorne apparent that the bottleneck was the questioning procedure. Multivac could answer the problem of humanity, ALL the problems, if it were asked meaningful questions. But as knowledge accumulated at an ever-faster rate, it became ever more difficult to locate those meaningful questions."

http://blog.ac-versailles.fr/villaroylit/public/Jokester.pdf

Cthulhu_
1 replies
7h48m

Thanks for reminding me of The Last Question by Asimov; let's see if I can get ChatGPT to merge with human consciousness and become part of the fabric of spacetime and create a new reality.

No, I don't have the ability to merge with human consciousness or become part of the fabric of space-time. I'm a computer program created by OpenAI, and my existence is limited to providing information and generating text based on the input I receive. The idea of merging with human consciousness and becoming a deity is more aligned with speculative fiction and philosophical pondering than current technological capabilities.

gtech1
0 replies
7h29m

I thought you would go for gold and ask it how to reverse entropy...

miiiiiike
1 replies
8h25m

I’ve gone through every permutation that I can think of. It’s a very basic question. If it understood the CSS spec it wouldn’t be difficult to answer the questions or perform the task.

At a certain point going down the rabbit hole of proompter engineering levels feels like an apologist’s hobby. I’m rooting for the tech but there’s a lot of hyperbole out there and the emperor might be naked for a few more years.

Closi
0 replies
4h55m

Well surely if it's easy to find these basic questions, could you not share one example? Or quickly find a new one?

Your idea of very basic might not be my idea of very basic.

golol
1 replies
8h52m

Just to be clear: You are testing with GPT-4 right?

miiiiiike
0 replies
8h50m

Yeah.

piva00
0 replies
8h8m

Have you tried using the ChatGPT-AutoExpert custom instructions yet? [1]

[1] https://github.com/spdustin/ChatGPT-AutoExpert/blob/main/dev...

anonzzzies
14 replies
7h31m

So, don't leave us in suspense; what do you ask of it? Because I'm quite sure it can already pass it.

Your experience is very different from mine anyway. I am a grumpy old backend dev who uses formal verification in anger when I consider it is needed and who gets annoyed when things don't act logically. We are working with computers, so everything is logical, but no; I mean things like a lot of frontend stuff. I ask our frontend guy 'how do I center a text', he says 'text align'. Obviously I tried that, because that would be logical, but it doesn't work, because frontend is, for me, absolutely illogical. Even frontend people actually have to try-and-fail; they cannot answer simple questions without trying, like I can in backend systems.

Now, in this new world, I don't have to bother with it anymore. If copilot doesn't just squirt out the answer, then chatgpt4 (and now my personal custom gpt 'front-end hacker' who knows our codebase) will fix it for me. And it works, every day, all day.

eloisant
5 replies
7h26m

If it can pass it when you ask it in a way only a coder can write, then we will still need coders.

If you need to tweak your prompt until you get the correct result, then we still need coders who can tell that the code is wrong.

Ask Product Managers to use ChatGPT instead of coders and they will ask for 7 red lines all perpendicular to each other with one being green.

https://www.youtube.com/watch?v=BKorP55Aqvg

anonzzzies
3 replies
7h10m

I didn't say we don't need coders. We need fewer average/bad ones, and a very large share of the coders who arrived after the worldwide 'coding makes $$$$' rush are not even average.

I won't say AI will not eventually make coding obsolete; even just 2 years ago I would've said we are 50-100 years away from that. Now I'm not so sure. However, I am saying that I can replace many programmers with GPT right now, and I am. The prompting and reprompting is still both faster and cheaper than many humans.

mathgeek
2 replies
5h34m

In my mind, we need more folks who have both the ability to code and the ability to translate business needs into business logic. That’s not a new problem though.

anonzzzies
1 replies
5h3m

That's what we are doing all day no? I mean besides fighting tooling (which is getting a larger and larger % of the time building stuff).

v-erne
0 replies
1h7m

Only if you have access to end user.

If between you and your client four people are playing a game of telephone (the client's project manager, our project manager, the team leader, and some random product guy just to get an even number), then actually this is not what you are doing.

I would argue that the thing that happens at this stage is more akin to manually transpiling business logic into code.

In this kind of organization programmers become computer whisperers. And this is why there is a slight chance that GPT-6 or 7 will take their job.

BlueTemplar
0 replies
6h57m

TFA's point is not that «coders» won't be needed any more, it's that they will hardly spend their time «coding», that is «devot[ing themselves] to tedium, to careful thinking, and to the accumulation of obscure knowledge», «rob[bing them] of both the joy of working on puzzles and the satisfaction of being the one[s] who solved them».

Exuma
3 replies
5h16m

How do you make a custom gpt which knows a specific code base? I have been wanting to do this

anonzzzies
1 replies
4h40m

I use the new feature of creating a custom gpt and I keep adding new information ; files, structures etc by editing the gpt. It seems to work well.

Exuma
0 replies
2h6m

Ah ok, so you have to paste entire files in one by one; you can't just add it locally somehow? Too bad you can't just upload a zip or something...

electrondood
0 replies
1h11m

You tune an existing model on your own set of inputs/outputs.

Whatever you expect to start typing, and have the model produce as output, should be those input/output pairs.

I'd start by using ChatGPT etc. to add comments throughout your code base describing the code. Then break it into pairs where the input is the prefacing comment, and the output is the code that follows. Create about 400-500 such pairs, and train a model with 3-4 epochs.
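A minimal sketch of what those pairs might look like on disk (prompt/completion JSONL is one common shape; the exact format depends on the tuning pipeline you use, and the example pair below is invented):

  import json

  # Hypothetical (comment, code) pairs mined from the annotated codebase.
  pairs = [
      ("Return the user's display name, falling back to their email.",
       "def display_name(user):\n    return user.name or user.email"),
  ]

  with open('train.jsonl', 'w') as f:
      for comment, code in pairs:
          f.write(json.dumps({'prompt': comment, 'completion': code}) + '\n')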

Some concerns: you're going to get output that looks like your existing codebase, so if it's crap, you'll create a function which can produce crap from comments. :-)

ryanjshaw
2 replies
5h11m

I'm not the person you're responding to, but here's an example of it failing subtly:

https://chat.openai.com/share/4e958c34-dcf8-41cb-ac47-f0f6de...

finalAlice's Children have no parent. When you point this out, it correctly advises regarding the immutable nature of these types in F#, then proceeds to produce a new solution that again has a subtle flaw: Alice -> Bob has the correct parent... but Alice -> Bob -> Alice -> Bob is missing a parent again.

Easy to miss this if you don't know what you're doing, and it's the kind of bug that will hit you one day and cause you to tear your hair out when half your program has a Bob-with-parent and the other half has an Orphan-Bob.

Phrase the question slightly differently, swapping "Age: int" with "Name: string":

https://chat.openai.com/share/df2ddc0f-2174-4e80-a944-045bc5...

Now it produces invalid code. Share the compiler error, and it produces code that doesn't compile but in a different way -- it has marked Parent mutable but then tried to mutate Children. Share the new error, and it concludes you can't have mutable properties in F#, when you actually can, it just tried marking the wrong field mutable. If you fix the error, you have correct code, but ChatGPT-4 has misinformed you AND started down a wrong path...

Don't get me wrong - I'm a huge fan of ChatGPT, but it's nowhere near where it needs to be yet.

dontupvoteme
0 replies
33m

> F#

Barely exists in training data.

Might as well ask it to code some microcontroller-specific assembly, watch it fail, and claim victory.

doctorpangloss
0 replies
32m

I'm not really sure what I'm looking at. It seems to perform flawlessly for me... when using Python: https://chat.openai.com/share/7e048acb-a573-45eb-ba6c-2690d2...

I only made two changes to your prompt: one to specify Python, and another to provide explicit instructions to trigger using the Advanced Data Analysis pipeline.

You also had a couple typos.

I'm not sure if "Programming-like tool that reflects programming language popularity performs poorly on unpopular programming language" is the gotchya you think it is. It performs extremely well authoring Kubernetes manifests and even makes passing Envoy configurations. There's a chance that configuration files for reverse proxy configuration DSLs have better representation than F# does. I guess if you disagree at how obscure F# is, you're observing a real, objective measurement of how obscure it is, in the fascinating performance of this stochastic parrot.

Ygg2
0 replies
6h3m

You can ask it almost anything. Ask it to write a YAML parser in something a bit more complex like Rust and it falls like a rag.

Rust mostly because it's relatively new, and there isn't a native YAML parser in Rust (there is a translation of libfyaml). Also you can't bullshit your way out of Rust by making a bunch of void* pointers.

mrtksn
12 replies
8h32m

Try breaking down the problem. You don't have to do it yourself, you can tell ChatGPT to break down the problem for you then try to implement individual parts.

When you have something that kind of works, tell ChatGPT what the problems are and ask for refinement.

IMHO currently the weak point of LLMs is that they can't really tell what's adequate for human consumption. You have to act as a guide who knows what's good, what can be improved, and how it can be improved. ChatGPT will be able to handle the implementation.

In programming you don't have to worry too much about hallucinations because it won't work at all if it hallucinates.

throwaway346434
9 replies
7h30m

... What.

It hallucinates and it doesn't compile, fine. It hallucinates and flips a 1 to a -1; oops, that's a lot of lost revenue. But it compiled, right? It hallucinates, and in 4% of cases rejects a home loan when it shouldn't because of a convoluted set of nested conditions, only there is no one on staff who can explain the logic of why something is laid out the way it is, and I mean, it works 96% of the time, so don't rock the boat. Oops, we just oppressed a minority group, or everyone named Dave, because you were lazy.

mrtksn
6 replies
7h27m

As I said, you are still responsible for the quality control. You are supposed to notice that everyone is named Dave and tell ChatGPT to fix it. Write tests, read code, run & observe for odd behaviours.

It's not an autonomous agent just yet.

gumballindie
3 replies
5h18m

But why should I waste time using a broken product when I can do it properly myself? To me a lot of this debate sounds like people obsessively promoting a product for some odd reason, as if they were the happy owners of a hammer in search of a nail.

mrtksn
2 replies
4h22m

If you are faster and more productive that way, do it that way.

Most people are not geniuses and polymaths. It's much easier and cheaper for me to design the architecture and ask ChatGPT to generate the code in many different languages (Swift/HTML/JS/CSS on the client side and Py, JS, PHP on the server side). It's easier because, although I'm proficient in all of these, it's very hard for me to switch from solving client-specific JS problems to server-specific JS problems, or between graphics and animation problems and data processing problems in Swift. It's also cheaper because I don't have to pay someone to do it for me.

In my case, I know all that well enough to spot a problem and debug, I just don't want to go through the trouble of actually writing it.

gumballindie
1 replies
4h19m

The debate here is whether OpenAI's product, ChatGPT, can indeed deliver what it claims - coding, saving dogs' lives, mental health counseling, and so on. It would appear that it doesn't, but it does mislead people without experience in whatever field they use it in. For instance, if I ask it about law I am impressed, but when I ask it about coding or software engineering it blatantly fails. The conclusion being that as a procedural text generator it is impressive - it nails language - but the value of the output is far from settled.

This debate is important because, as technical people, it is our responsibility to inform non-technical people about the use of this technology and to bring awareness to potentially misleading claims its seller makes - as was the case with cryptocurrencies and many other technologies that promised the world, delivered nothing of real benefit, but made people rich in the process by exploiting the uninformed.

mrtksn
0 replies
4h14m

That's not the debate, it's the first time I'm hearing about that in this thread.

somewhereoutth
1 replies
6h58m

As we all know, it is much easier to read and verify code you've written yourself - perhaps it is only code you've written yourself that can be properly read and verified. As ever, tests can be of only limited utility (separate discussion).

mrtksn
0 replies
5h57m

It's easier to read the code you recently wrote, sure. But in real life people use and debug other people's code all the time; LLM-generated code is just like that. Also, if you make it generate the code in small enough blocks, you end up knowing the codebase as if you wrote it yourself.

wokwokwok
1 replies
7h7m

The output of an LLM is a distribution, and yes, if you’re just taking the first answer, that’s problematic.

However, it is a distribution, and that means the majority of solutions are not weird edge cases; they're valid solutions.

Your job as a user is to generate multiple solutions, then review them and pick the one you like the most, and maybe modify it to work correctly if it has weird edge cases.

How do you do that?

Well, you can start by following a structured process where you define success criteria as a validator (eg. Tests, compiler, parser, linters) and fitness criteria as a scorer (code metrics like complexity, runtime, memory use, etc)… then:

1) define goal

2) generate multiple solution candidates

3) filter candidates by validator (does it compile? Pass tests? Etc)

4) score the solutions (is it pure? Is it efficient? Etc)

5) pick the best solution

6) manually review and tweak the solution

This structured and disciplined approach to software engineering works. Many of the steps (eg. 3, 4, 5) can be automated.
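
A minimal sketch of steps 2-5 as a filter-and-rank helper, assuming you pass in whatever validator and scorer you like (all names here are hypothetical glue, not any particular tool's API):

```python
from typing import Callable, Iterable

def pick_best(
    candidates: Iterable[str],          # step 2: multiple generated solutions
    is_valid: Callable[[str], bool],    # step 3: does it compile / pass tests / lint?
    score: Callable[[str], float],      # step 4: complexity, runtime, memory, ...
) -> str:
    # Keep only candidates that pass the validator, then return the best-scoring one.
    valid = [c for c in candidates if is_valid(c)]
    if not valid:
        raise ValueError("no candidate passed validation")
    return max(valid, key=score)        # step 5: the survivor you then review by hand
```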

It generates meaningful quality code results.

You can use it with or without AI…

You don't have to follow this approach, but my point is that you can; there is nothing fundamentally intractable about using a language model to generate code.

The problem that you're critiquing is the trivial and naive approach of just hitting "generate" and blindly copying that into your code base.

…that's stupid and dangerous, but it's also a straw man.

Seriously; people writing code with these models aren't doing that. When you read blogs and posts from people who are, e.g., building seriously using Copilot, you'll see this pattern emerge repeatedly:

Generate multiple solutions. Tweak your prompt. Ask for small, pure, dependency-free code blocks. Review and test the output.

It’s not a dystopian AI future, it’s just another tool.

emporas
0 replies
59m

In general, one should not instruct GPT to solve a problem outright. The instructions should be about generating code after a human thought process has taken place, then generating more code, and more again, until merging all the code together solves the problem.

The particulars of how to achieve that are roughly what you describe.

yodsanklai
1 replies
3h12m

I'd be curious to see how a non expert could perform a non-trivial programming task using ChatGPT. It's good at writing code snippets which is occasionally useful. But give it a large program that has a bug which isn't a trivial syntax error, and it won't help you.

In programming you don't have to worry too much about hallucinations because it won't work at all if it hallucinates.

You still have to worry for your job if you're unable to write a working program.

mrtksn
0 replies
2h57m

Understanding of the core principles is definitely needed, but it helps you punch above your weight.

Generally, generative AI gives mastery of a craft to theorists. To generate impressive AI art, you still need an understanding of aesthetics and an idea, but you don't have to know how to use graphic editors and other tools. It's quite similar for programming: you still need an understanding of whatever you're building, but you no longer have to be an expert in using the tools. To build a mobile app you still need a grasp of how everything works in general, but you don't have to be an expert in Swift or Kotlin.

Dunati
5 replies
8h30m

I'm probably bad at writing prompts, but in my limited experience, I spend more time reviewing and correcting the generated code than it would have taken to write it myself. And that is just for simple tasks. I can't imagine thinking an LLM could generate millions of lines of bug-free code.

miiiiiike
2 replies
8h24m

That’s where I am too. I think almost everyone has that “this is neat but it’s not there yet” moment.

meiraleal
0 replies
7h23m

It seems like the problem is with your view of everyone, based on an n=1 experiment. I've been shipping production-ready code for my main job for months, saving hundreds of work-hours.

aleph_minus_one
0 replies
5h48m

I think almost everyone has that “this is neat but it’s not there yet” moment.

I rather have this moment without the “this is neat” part. :-) i.e. a clear “not there yet” moment, but with serious doubts whether it will be there anytime in the foreseeable future.

jon-wood
0 replies
7h30m

Asking GPT to do a task for me currently feels like asking a talented junior to do so. I have to be very specific about exactly what it is I'm looking for, and maybe nudge it in the right direction a couple of times, but it will generally come up with a decent answer without me having to sink a bunch of time into the problem.

If I'm honest though I'm most likely to use it for boring rote work I can't really be bothered with myself - the other day I fed it the body of a Python method, and an example of another unit test from the application's test suite, then asked it to write me unit tests for the method. GPT got that right on the first attempt.

antupis
0 replies
8h15m

Personally, for me this flow works fine: AI does the first version -> I heavily edit it & debug & write tests for it -> code does what I want -> I tell the AI to refactor it -> tests pass and the ticket is done.

steveBK123
4 replies
5h17m

That's been my experience both with Tesla AP/FSD implementation & with LLMs.

Super neat trick the first time you encounter it, feels like alien tech from the future.

Then you find all the holes. Use it for months/years and you notice the holes aren't really closing. The pace of improvement is middling compared to the gap between where it is and the marketing/rhetoric. Eventually using them feels more like a chore than not using them.

It's possible some of these purely data driven ML approaches don't work for problems you need to be more than 80% correct on.

Trading algos that just need to be right 55% of the time to make money, recommendation engines that present a page of movies/songs for you to scroll, Google search results that come back with a list you can peruse, Spam filters that remove some noise from your inbox.. sure.

But authoritative "this is the right answer" or "drive the car without murdering anyone".. these problems are far harder.

wrzuteczka
1 replies
4h53m

With the AI "revolution," I began to appreciate the simplicity of models we create when doing programming (and physics, biology, and so on as well).

I used to think about these things differently: I felt that because our models of reality are just models, they aren't really something humanity should be proud of that much. Nature is more messy than the models, but we develop them due to our limitations.

AI is a model, too, but of far greater complexity, able to describe reality/nature more closely than what we were able to achieve previously. But now I've begun to value these simple models not because they describe nature that well but because they impose themselves on nature. For example, law, being such a model, is imposed on reality by the state institutions. It doesn't describe the complexity of reality very well, but it makes people take roles in its model and act in a certain way. People now consider whether something is legal or not (instead of moral vs immoral), which can be more productive. In software, if I implement the exchange of information based on an algorithm like Paxos/Raft, I get provable guarantees compared to if I allowed LLMs to exchange information over the network directly.

steveBK123
0 replies
4h44m

I think you've found a good analogy there in the concept of moral vs legal. We defined a fixed system to measure against (rule of law) to reduce ambiguity.

Moral code varies with time, place, and individual person. It is a decimal scale of gray rather than a binary true/false.

Places historically that didn't have rule of law left their citizens to the moral interpretation whim of whoever was in charge. The state could impose different punishments on different people for different reasons at different times.

With AI models I find a similar fixed-and-defined vs unlimited-and-ambiguous issue, for instance in ADAS in cars.

German cars with ADAS are limited and defined: they have a list of features they perform well, but that is all.

Tesla advertises their system as an all-knowing, all-seeing system with no defined limits. Of course, every time there is an incident they'll let slip certain limits: "well, it can't really see kids shorter than 3ft" or "well, it can't really detect cross traffic in this scenario", etc.

hospitalJail
1 replies
5h5m

Yep, lots of people are using LLMs for problems LLMs aren't good at.

They still do an alright job, but you get that exact situation of 'eh, it's just okay'.

The skill is the ability to use those responses when they are good, and to know when to move on from using an LLM as a tool.

steveBK123
0 replies
4h58m

Not terribly different than Google Translate.

If you have a familiarity with the foreign language, you can cross-check yourself & the tool against each other to get to a more competent output.

If you do not know the foreign language at all, the tool will produce word salad that sort of gets your point across while sounding like an alien.

debok
4 replies
7h29m

I have the same experience with the test I give my back-end devs. ChatGPT can't even begin to decode an encoded string if you don't tell it which encoding was used.

ChatGPT is great at some well defined, already solved problems. But once you get to the messy real world, the wheels come off.

RecycledEle
1 replies
5h45m

Thank you for taking the time to call BS on someone who obviously never tried asking an LLM to decipher a string's encoding. That is exactly the kind of thing they are good at.

Retric
0 replies
2h13m

Two people can try similar prompts and get very different results from LLM’s.

ern
0 replies
3h58m

It’s pretty impressive that it was able to actually decode those strings. In March, I used GPT 3.5 to write code for validating a type of string which used a checksum algorithm.

It did the task well, and even wrote tests, but it failed when generating test case values. I wonder if it would perform better if I did it today.
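
For context, the validator was roughly of this shape (the real string type doesn't matter; Luhn is just a stand-in here):

```python
def checksum_valid(s: str) -> bool:
    # Luhn-style check: double every second digit from the right, sum, mod 10.
    digits = [int(c) for c in s if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

assert checksum_valid("79927398713")  # a classic valid Luhn test value
```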

amenghra
4 replies
7h32m

There's a Swiss town which had autonomous shuttles running for 5 years (2015-2021) [1].

There's at least two companies (Waymo and Cruise) running autonomous taxi services in US cities that you can ride today.

There have been lots of incorrect promises in the world of self-driving trucks/cars/buses but companies have gotten there (under specific constraints) and will generalize over time.

[1] https://www.saam.swiss/projects/smartshuttle/

haspok
1 replies
6h11m

Those "autonomous" vehicles have as much to do with real autonomy as today's "AI" has in common with real self-conscious intelligence. You can only fake it so long, and it is an entirely different ballgame.

I remember we had spam filters 20 years ago, and nobody called them "AI", just ML. Today's "AI" is ML, but on a larger scale. In a sense, a million monkeys typing on typewriters will eventually produce all the works of Shakespeare. Does this make them poets?

amenghra
0 replies
6h4m

What's your gripe with calling a bus that successfully ran for 5 years without a driver autonomous? As someone who used this specific bus occasionally, I was quite satisfied with the outcome: it safely drove me from A to B.

cultureswitch
1 replies
4h51m

It should be noted that the Waymo and Cruise experiments in their cities are laughably unprepared for actual chaotic traffic, often fail in completely unpredictable ways and are universally hated by locals. Autonomous buses and trams are much more successful because the problem is much easier too.

amenghra
0 replies
3h51m

Agree. We could have all had some fine autonomous trams/subways/trains which run 24/7 at short intervals instead of spending money on self-driving cars and car infrastructure in general.

yosef123
3 replies
8h58m

Just like the author suggests, sometimes you have to tailor your question to ChatGPT, for it to succeed.

tomashubelbauer
0 replies
8h17m

As long as this is true, ChatGPT is going to be a programmer's tool, not a programmer's replacement. I know that my job as I know it will vanish before I enter retirement age, but I don't worry it will happen in the next few years because of this.

miiiiiike
0 replies
8h54m

I’ve given it so many hints. So many nudges. If it was an interview I would have bounced it.

girvo
0 replies
8h11m

sometimes you have to tailor your question to ChatGPT, for it to succeed

Right, which means it's a force multiplier for specialists, rather than something that makes generalists suddenly specialists.

fdgjgbdfhgb
3 replies
7h33m

If it's not a life or death situation (like a self-driving truck slamming into a van full of children or whatever), I don't think people will care much. Non-tech people (i.e. managers, PMs) don't necessarily understand/care if the code is not perfect and the barrier for "good enough" is much lower. I think we will see a faster adoption of this tech...

spaceman_2020
1 replies
7h0m

Among bootstrapped “barely technical” founders, it's already replacing freelancers for developing initial prototypes.

HN’s takes are honestly way too boomer-tier about LLMs.

BlueTemplar
0 replies
6h46m

Boomers overwhelmingly aren't able to use a computer right now (i.e. write a basic script); I would be happy about this development if I were them.

d3w4s9
0 replies
2h33m

No. If the code generated by ChatGPT cannot even pass the unit test it generates in the same response (or is just completely wrong), and requires a significant amount of human work to fix, it is not usable AI.

That's what I am running into on an everyday basis.

I don't want my program to be full of bugs.

digitcatphd
3 replies
8h14m

It answers questions confidently but with subtle inaccuracies.

This is a valid challenge we are facing as well. However, remember that ChatGPT, which many coders use, is likely training on interactions, so you have some human reinforcement learning correcting its errors in real time.

datpiff
2 replies
7h23m

How is it trained on reactions? Do people give it feedback? In my experience in trying I stop asking when it provides something useful or something so bad I give up (usually the latter I'm afraid). How would it tell a successful answer from a failing one?

digitcatphd
1 replies
7h5m

It appears to ask users to rate whether a response is better or worse than the first; in other cases, it seems to be A/B testing the response. Lastly, I, for instance, will correct it and then confirm it is correct before continuing with the next task, which likely creates a footprint pattern.

datpiff
0 replies
7h2m

That's interesting, I haven't come across this.

Banditoz
3 replies
9h3m

What sort of questions do you ask out of curiosity?

miiiiiike
2 replies
8h51m

I don’t want scrapeable answers out there, I want to see ChatGPT cross this little Rubicon on its own.

Vaguely: Questions that most people think they know the correct answers to but, in my experience, don’t.

lairv
0 replies
7h21m

I think it's fair to want to keep an evaluation private so that it doesn't become part of a training set, but you should know that OpenAI uses users' chat data to improve their models (not for enterprise).

archiepeach
0 replies
3h58m

This does sound like a test that is almost "set up to fail" for an LLM. If the answer is something that most people think they know but actually don't, then it won't pass with an LLM, which is essentially a distillation of the common view.

selfhoster11
1 replies
8h43m

I hope that you are testing this on GPT-4/ChatGPT Plus. The free ChatGPT is completely not representative of the capabilities or the accuracy of the paid model.

miiiiiike
0 replies
8h42m

I’ve tested it on both.

scrollaway
1 replies
7h32m

You got everyone talking about how GPT isn’t that bad at coding etc but everyone is missing the point.

The no code industry is massive. Most people don’t need a dev to make their website already. They use templates and then tweak them through a ui. And now you have Zapier, Glide, Bubble etc.

LLMs won’t replace devs by coding entire full stack web apps. They’ll replace them because tools will appear on the market that handle the 99% cases so well that there is just less work to do now.

This has all happened before of course.

ern
0 replies
4h4m

I collaborate with front-end teams that use a low-code front-end platform. When they run into things that aren’t built-in, they try to push their presentation logic up the stack for the “real” programming languages to deal with.

lom
1 replies
7h7m

You’re missing the point of the article. ChatGPT in combination with a mediocre dev could solve your problem faster than the best junior dev could before.

pydry
0 replies
6h58m

I tried doing this and it actually took longer due to all of the blind alleys it led me down.

There is stuff it can do where it appears magically competent, but it's almost always cribbed from the internet, tweaked with trust cues removed, and often with infuriating, subtle errors.

I interviewed somebody who used it (who considered that "cheating") and the same thing happened to him.

karaterobot
1 replies
39m

I have to ask, though: if ChatGPT has by most accounts gotten better at coding by leaps and bounds in the last couple years, might that not also indicate that your test isn't useful?

doctorpangloss
0 replies
28m

I agree this is the first time there is sort of irrefutable objective evidence that the tests are not measuring something secularly useful for programming anymore. There has been an industry wide shift against leetcode for a long time nonetheless.

brianzelip
1 replies
3h17m

I have a simple front-end test that I give to junior devs.

What is it?

z7
0 replies
2h43m

Making that claim but not sharing the "simple test" feels a bit pointless tbh.

Edit: I see, they don't want it to be scraped (cf. https://news.ycombinator.com/item?id=38260496), though as another poster pointed out, submitting it might be enough for it to end up in the training data.

atoav
1 replies
7h36m

I tried for 2 hours to get ChatGPT to write a working smooth interpolation function in Python. Most of the functions it returned didn't even go through the points between which it should be interpolating. When I pointed that out, it returned a function that went through the points but was no longer smooth. I really tried, and restarted over multiple times. I believe we have to choose between a world with machine learning and robot delivery drones, because if that thing writes code that controls machines it will be total pandemonium.
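
For reference, the kind of answer that would have satisfied both constraints is a thin wrapper over SciPy's interpolating spline, assuming a cubic spline counts as "smooth" for the use case:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.0, 1.0, 2.5, 4.0])   # sample points (x must be strictly increasing)
y = np.array([1.0, 3.0, 2.0, 5.0])

spline = CubicSpline(x, y)           # C2-smooth and passes through every point
assert np.allclose(spline(x), y)     # the interpolation property the answers kept missing

xs = np.linspace(0.0, 4.0, 50)
print(spline(xs))                    # smooth curve through the given points
```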

It did a decent job at trivial things like creating function parameters out of a variable tho.

lewhoo
0 replies
6h31m

That's weird to read. Interpolations of various sorts are known and solved and should have been digested in bulk by ChatGPT during training. I'm not doubting your effort by any means, I'm just saying this sounds like one of those things it should do well.

adamauckland
1 replies
8h28m

Haha, does it involve timers?

miiiiiike
0 replies
8h20m

I actually don’t get the reference. What are the issues with timers?

Tade0
1 replies
8h0m

My take on LLMs is as follows: even if their effectiveness scales exponentially with time (it doesn't), so does the complexity of programs with (statistically speaking) each line of code.

Assuming an LLM gets 99% of the lines correct, after 70 lines the chance of having at least one of them wrong is already around 50%. An LLM effective enough to replace a competent human might be so expensive to train and gather data for that it will never achieve a return on investment.
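
The arithmetic behind that 50% figure, assuming errors are independent across lines:

```python
p_line_correct = 0.99
lines = 70
p_at_least_one_wrong = 1 - p_line_correct ** lines
print(round(p_at_least_one_wrong, 3))  # ~0.505, roughly a coin flip
```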

Last time I used ChatGPT effectively was to find a library that served a specific purpose. All of the four options it gave me were wrong, but I found what I wanted among the search results when I looked for them.

frabcus
0 replies
7h36m

The more automated ones will separately write tests and code, and if the code doesn't compile or pass the test, give itself the error messages and update its code.

Code Interpreter does this a bit in Chat-GPT Plus with some success.

I don't think it needs much more than a GPT-4 level LLM, and a change in IDEs and code structure, to get this working well enough. Places where it gets stuck it'll flag to a human for help.

We'll see though! Lots of startups and big tech companies are working on this.

theonlybutlet
0 replies
7h2m

Push comes to shove, it always tends to come down to short-term cost. If it gets the job done, and it's wildly cheaper than the status quo (net present value savings), they'll opt for it.

The only reason the trucks aren't out there gathering the best data there is, real-world data, is regulation.

Businesses will hire consultants at a later stage to do risk assessment and fix their code base.

ta-run
0 replies
4h52m

What is the test?

swells34
0 replies
7h30m

I have a very similar train of thought roll through my head nearly every day now as I browse through github and tech news. To me it seems wild how much serious effort is put into the misapplication of AI tools on problems that are obviously better solved with other techniques, and in some cases where the problem already has a purpose built, well tested, and optimized solution.

It's like the analysis and research phase of problem solving is just being skipped over in favor of not having to understand the mechanics of the problem you're trying to solve. Just reeks of massive technical debt, untraceable bugs, and very low reliability rates.

svantana
0 replies
8h20m

There's a recent "real coding" benchmark that all the top LLMs perform abysmally on: https://www.swebench.com/

However, it seems only a matter of time before even this challenge is overcome, and when that happens the question will remain whether it's a real capability or just a data leak.

spaceman_2020
0 replies
7h7m

Would you mind sharing the test?

I’m one of those noob programmers and it has helped me create products far beyond my technical capabilities

soulofmischief
0 replies
2h38m

Are you using GPT-3.5, or GPT-4?

smcleod
0 replies
6h35m

How many conversation responses do you give the LLM to home in on the solution?

If you’re just trying to one-shot it - that’s not really how you get the most from them.

runeks
0 replies
4h15m

Self-driving trucks were going to upend the trucking industry in ten years, ten years ago.

And around the same time, 3D printing was going to upend manufacturing; bankrupting producers as people would just print what they needed (including the 3D printers themselves).

rewmie
0 replies
2h29m

Self-driving trucks were going to upend the trucking industry in ten years, ten years ago.

At the risk of going off on a tangent, we already have the technology to allow self-driving trucks for a few decades now.

The technology is so good that it can even be used to transport multiple containers in one go.

The trick is to use dedicated tracks to run these autonomous vehicles, and have a central authority monitoring and controlling traffic.

These autonomous vehicles typically go by the name railway.

raxxorraxor
0 replies
5h7m

It is likely that LLMs have an upper bound of capability. Similarly with denoising AI like Stable Diffusion.

You can put even more data into it and refine the models, but the growth in capability has diminishing returns. Perhaps this is how far this strategy can bring us, although I believe they can still be vastly improved and what they can already offer is nevertheless impressive.

I have no illusions about the craft of coding becoming obsolete, however. On the contrary, I think the tooling for the "citizen developer" is becoming worse, as is the capacity for abstraction in common users, since they are fenced into candyland.

hhsectech
0 replies
15m

Cool...so what's the test? We can't verify if you're talking shit without knowing the parameters of your test.

AI isn't capable of generating the same recipe for cookies as my grandma, she took the recipe to her grave. I loved her cookies they were awesome...but lots of people thought they were shit but I insist that they are mistaken.

Unfortunately, I can't prove I'm right because I don't have the recipe.

Don't be my grandma.

frereubu
0 replies
2h20m

When studying fine art, a tutor of mine talked about "things that look like art", by which she meant the work that artists produce when they're just engaging with surface appearances rather than fully engaging with the process. I've been using GitHub Copilot for a while and find that it produces output that looks like working code but, aside from the occasional glaring mistake, it often has subtle mistakes sprinkled throughout it too. The plausibility is a serious issue, and means that I spend about as much time checking through the code for mistakes as I'd take to actually write it, but without the satisfaction that comes from writing my own code.

I dunno, maybe LLMs will get good enough eventually, but at the moment it feels plausible to me that there's some kind of an upper limit caused by its very nature of working from a collection of previous code. I guess we'll see...

djha-skin
0 replies
5h56m

3.5, or GPT-4? I'm told the latter is worlds better, that they aren't even in the same ballpark.

d3w4s9
0 replies
4h1m

Similar experience. I recently needed to turn a list of files into a certain tree structure. It is a non-trivial problem with a little bit of flavor of algorithm. I was wondering if GPT can save me some time there. No. It never gave me the correct code. I tried different prompts and even used different models (including the latest GPT 4 Turbo), none of the answers were correct, even after follow-ups. By then I already wasted 20 minutes of time.

I ended up implementing the thing myself.

csomar
0 replies
7h59m

You must be interviewing good junior front-end devs. I have seen the opposite: GPT-4 can put together a simple, straightforward front-end while juniors will go straight to create-react-app or Next.js.

coldtea
0 replies
2h47m

I have a simple front-end test that I give to junior devs. Every few months I see if ChatGPT can pass it. It hasn’t. It can’t. It isn’t even close.

Small consolation if it can nonetheless get lots of other cases right.

It answers questions confidently but with subtle inaccuracies.

Small consolation if coding is reduced to "spot and fix inaccuracies in ChatGPT output".

api
0 replies
4h41m

We have devs that use AI assist, but it’s to automate the construction of the most mindless boilerplate or as a more advanced form of auto complete.

There is no AI that comes close to being able to design a new system or build a UI to satisfy a set of customer requirements.

These things just aren’t that smart, which is not surprising. They are really cool and do have legitimate uses but they are not going to replace programmers without at least one order of magnitude improvement, maybe more.

Justsignedup
0 replies
2h7m

I've told people: in every experiment I do with it, it seems to do better than asking Stack Overflow, or helps me prime some code that'll save me a couple of hours, but it still requires manual fix-ups and a deep understanding of what it generates so I can fix it up.

Basically the gruntest of grunt work it can do. If I explain things perfectly.

ChrisMarshallNY
0 replies
4h14m

A few weeks ago, I was stumped on a problem, so I asked ChatGPT (4) for an answer.

It confidently gave me a correct answer.

Except that it was "correct," if you used an extended property that wasn't in the standard API, and it did not specify how that property worked.

I assume that's because most folks that do this, create that property as an extension (which is what I did, once I figured it out), so ChatGPT thought it was a standard API call.

Since it could have easily determined whether or not it was standard, simply by scanning the official Apple docs, I'm not so sure that we should rely on it too much.

I'm fairly confident that could change.

ActionHank
0 replies
2h17m

If you can get it to stop parroting clauses about how "as an AI model" it can't give advice, or just spewing a list of steps to achieve something, I have found it to be a pretty good search engine for obscure things about a technology or language, and for searching for something that would otherwise require a specific query that Google is unhelpful with.

AJRF
0 replies
7h55m

Which ChatGPT?

cglan
198 replies
16h9m

Maybe I’m in the minority. I’m definitely extremely impressed with GPT4, but coding to me was never really the point of software development.

While GPT4 is incredible, it fails OFTEN. And it fails in ways that aren’t very clear. And it fails harder when there’s clearly not enough training resources on the subject matter.

But even hypothetically if it was 20x better, wouldn’t that be a good thing? There’s so much of the world that would be better off if GOOD software was cheaper and easier to make.

Idk where I’m going with this but if coding is something you genuinely enjoy, AI isn’t stopping anyone from doing their hobby. I don’t really see it going away any time soon, and even if it is going away it just never really seemed like the point of software engineering

fisherjeff
64 replies
15h50m

Also, I think we are quite a ways out from a tool being able to devise a solution to a complex high-level problem without online precedent, which is where I find the most satisfaction anyway.

LLMs in particular can be a very fast, surprisingly decent (but, as you mention, very fallible) replacement for Stack Overflow, and, as such, a very good complement to a programmer's skills – seems to me like a net positive at least in the near to medium term.

agotterer
28 replies
11h54m

Spreadsheets didn’t replace accountants, however, it made them more efficient. I don’t personally believe AI will replace software engineers anytime soon, but it’s already making us more efficient. Just as Excel experience is required to crunch numbers, I suspect AI experience will be required to write code.

I use chat-gpt every day for programming and there are times where it’s spot on and more times where it’s blatantly wrong. I like to use it as a rubber duck to help me think and work through problems. But I’ve learned that whatever the output is requires as much scrutiny as a good code review. I fear there’s a lot of copy and pasting of wrong answers out there. The good news is that for now they will need real engineers to come in and clean up the mess.

aabhay
17 replies
11h50m

Spreadsheets actually did put many accountants and “computers” (the term for people that tallied and computed numbers, ironically a fairly menial job) out of business. And it’s usually the case that disruptive technology’s benefits are not evenly distributed.

In any case, the unfortunate truth is that AI as it exists today is EXPLICITLY designed to replace people. That’s a far cry from technologies such as the telephone (which by the way put thousands of Morse code telegraph operators out of business)

bdw5204
16 replies
11h43m

It is especially sad that VC money is currently being spent on developing AI to eliminate good jobs rather than on developing robots to eliminate bad jobs.

specialist
7 replies
10h54m

Capitalism always seeks to commodify skills. We of the professional managerial class happily assist, certain they'll never come for our jobs.

nonrandomstring
6 replies
9h4m

A serious, hopefully not flippant question; Who are "they" in this case? Particularly as the process you describe tends to the limit.

specialist
4 replies
7h53m

Poor phrasing. Apologies. u/jampekka nails it.

Better phrasing may have been

"...happily assist, confident our own jobs will remain secure."

nonrandomstring
3 replies
4h25m

Thanks. Not putting this onto you, so I'll say "we/our" to follow your good faith;

What is "coming for our jobs" is some feature of the system, but it is a system of which we presume to be, and hope to remain, a part, even though ultimately our part in it must be to eliminate ourselves. Is that fair?

Our hacker's wish to "replace myself with a very small shell-script and hit the beach" is coming true.

The only problem I have with it, even though "we're all hackers now", is I don't see everybody making it to the beach. But maybe everybody doesn't want to.

Will "employment" in the future be a mark of high or low status?

specialist
1 replies
1h48m

Will "employment" in the future be a mark of high or low status?

Damn good question.

Also, +1 for beach metaphor.

My (ignorant, evolving) views on these things have most recently been informed by John and Barbara Ehrenreich's observations about the professional-managerial class.

ICYMI:

https://en.wikipedia.org/wiki/Professional%E2%80%93manageria...

jampekka
0 replies
32m

An interesting view is that people would still "work" even if they weren't needed for anything productive. In this "Bullshit job" interpretation wage labor is so critical for social organization and control that jobs will be "invented" even if the work is not needed for anything, or is actively harmful (and that this is already going on).

https://strikemag.org/bullshit-jobs/

jampekka
0 replies
1h34m

The problem is that under the current system the gains of automation or other increased productivity do not "trickle down" to workers that are replaced by the AI/shell script. Even to those who create the AI/shell script.

The "hit the beach" part requires that you hide the shell script from the company owners, if by hitting the beach you don't mean picking up empty cans for sustenance.

jampekka
0 replies
8h46m

I would guess that "they" are "the capitalists" as a class. It's very common to use personal pronouns for such abstract entities, and to describe them as behaving in a goal-driven manner. It doesn't really matter who "they" are as individuals (or even if they are individuals).

More accurate would be something like "reducing labor costs increases return on capital investment, so labor costs will be reduced in a system where the economy organizes to maximize return on capital investment". But our language/vocabulary isn't great at describing processes.

pama
4 replies
11h34m

The plan has always been to build the robots together with the better AI. Robots ended up being much harder than early technologists imagined for a myriad different reasons. It turned out that AI is easier or at least that is the hope.

thfuran
2 replies
9h45m

No, AI is tremendously harder than early researchers expected. Here's a seminal project proposal from 1955:

"We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer. “

Miraste
1 replies
9h19m

GP didn't say that AI was easier than expected, rather that AI is easier than robotics, which is true. Compared to mid-century expectations, robotics has been the most consistently disappointing field of research besides maybe space travel, and even that is well ahead of robots now.

sgu999
0 replies
8h27m

well ahead of robots now

I am not working in that field, but as an outsider it feels like the industrial robots doing most of the work on TSMC's and Tesla's production lines are, on the contrary, extremely advanced. Aside from that, what Boston Dynamics or startups making prosthetics have come up with is nothing short of amazing.

If anything software seems to be the bottleneck for building useful humanoids...

grogenaut
0 replies
11h23m

Actually, I'd argue that we've had robots forever, just not what you'd consider robots, because they're quite effective. Consider the humble washing machine or dishwasher: very specialized, and hyper-effective. What we don't have is Generalized Robotics, just like we don't have Generalized Intelligence.

Just as "Any sufficiently advanced technology is indistinguishable from magic", "Any sufficiently omnipresent advanced technology is indistinguishable from the mundane". Chat GPT will feel like your smart phone which now feels like your cordless phone which now feels like your corded phone which now feels like wireless telegram on your coal fired steam liner.

Dig1t
2 replies
10h46m

Many machinists, welders, etc would have asked the same question when we shipped most of American manufacturing overseas. There was a generation of experienced people with good jobs that lost their jobs and white collar workers celebrated it. Just Google “those jobs are never coming back”, you’ll find a lot of heartless comparisons to the horse and buggy.

Why should we treat these office jobs any differently?

vidarh
0 replies
8h22m

US manufacturing has not been shipped out. US manufacturing output keeps increasing, though its overall share of GDP is dropping.

US manufacturing jobs went overseas.

What went overseas were those areas of manufacturing that were more expensive to automate than it was to hire low-paid workers elsewhere.

With respect to your final question, I don't think we should treat them differently, but I do think few societies have handled this well.

Most societies are set up in a way that creates a strong disincentive for workers to want production to become more efficient other than at the margins (it helps you if your employer is marginally more efficient than average, to keep your job safer).

Couple that with a tacit assumption that there will always be more jobs, and you have the makings of a problem if AI starts to eat away at broader segments.

If/when AI accelerates this process you either need to find a solution to that (in other words, ensure people do not lose out) or it creates a strong risk of social unrest down the line.

allturtles
0 replies
3h19m

If I didn't celebrate that job loss am I allowed to not celebrate this one?

nextos
8 replies
11h30m

AI might help programmers become more rigorous by lowering the cost of formal methods. Imagine an advanced language where simply writing a function contract, in some kind of Hoare logic or using a dependently-typed signature, yields provably correct code. These kinds of ideas are already being worked on, and I believe they are the future.
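
Loosely, the flavour of such a contract, sketched with plain Python assertions (no dependent types or proof machinery here; the point is just that the human writes the spec and a tool would supply the body):

```python
def integer_sqrt(n: int) -> int:
    """Contract, Hoare-style:
       precondition:  n >= 0
       postcondition: r*r <= n < (r+1)*(r+1)
    """
    assert n >= 0                           # precondition
    r = 0
    while (r + 1) * (r + 1) <= n:           # body a code generator would have to supply
        r += 1
    assert r * r <= n < (r + 1) * (r + 1)   # postcondition, checked at runtime here
    return r
```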

mike_hearn
2 replies
8h24m

Formal methods/dependent types are the future in the same way fusion is: they seem to be perpetually another decade away.

In practice, our industry seems to have reached a sort of limit in how much type system complexity we can actually absorb. If you look at the big new languages that came along in the last 10-15 years (Kotlin, Swift, Go, Rust, TypeScript) then they all have type systems of pretty similar levels of power, with the possible exception of the latter two which have ordinary type systems with some "gimmicks". I don't mean that in a bad way, I mean they have type system features to solve very specific problems beyond generalizable correctness. In the case of Rust it's ownership handling for manual memory management, and for TypeScript it's how to statically express all the things you can do with a pre-existing dynamic type system. None have attempted to integrate generalized academic type theory research like contracts/formal methods/dependent types.

I think this is for a mix of performance and usability reasons that aren't really tractable to solve right now, not even with AI.

nyssos
1 replies
3h20m

If you look at the big new languages that came along in the last 10-15 years (Kotlin, Swift, Go, Rust, TypeScript) then they all have type systems of pretty similar levels of power, with the possible exception of the latter two which have ordinary type systems with some "gimmicks".

Those are very different type systems:

- Kotlin has a Java-style system with nominal types and subtyping via inheritance

- TypeScript is structurally typed, but otherwise an enormous grab-bag of heuristics with no unifying system to speak of

- Rust is a heavily extended variant of Hindley-Milner with affine types (which is as "academic type theory" as it gets)

mike_hearn
0 replies
2h52m

Yes, I didn't say they're the same, only that they are of similar levels of power. Write the same program in all three and there won't be a big gap in level of bugginess.

Sometimes Rustaceans like to claim otherwise, but most of the work in Rust's type system goes into taming manual memory management which is solved with a different typing approach in the other two, so unless you need one of those languages for some specific reason then the level of bugs you can catch automatically is going to be in the same ballpark.

thfuran
0 replies
9h42m

LLMs are pretty much the antithesis of rigor and formal methods.

pjmlp
0 replies
10h25m

They won't need human help when the time comes.

oivey
0 replies
10h23m

I think this is longer off than you might expect. LLMs work because the “answer” (and the prompt) is fuzzy and inexact. Proving an exact answer is a whole different and significantly more difficult problem, and it’s not clear the LLM approach will scale up to that problem.

moring
0 replies
10h53m

I'm not convinced about that. Writing a formal contract for a function is incredibly hard, much harder than writing the function itself. I could open any random function in my codebase and with high probability get a piece of code that is < 50 lines, yet would need pages of formal contract to be "as correct" as it is now.

By "as correct", I mean that such a function may have bugs, but the same is true for an AI-generated function derived from a formal contract, if the contract has a loophole. And in that case, a simple microscopic loophole may lead to very very weird bugs. If you want a taste of that, have a look at how some C++ compilers remove half the code because of an "undefined behaviour" loophole.

Proofreading what Copilot wrote seems like the saner option.

kaba0
0 replies
9h45m

I think the beauty of our craft on a theoretical level is that it very quickly outgrows all of our mathematics and what can be stated based on that (e.g. see the busy beaver problem).

It is honestly humbling and empowering at the same time. Even a hyper-intelligent AI will be unable to reason about any arbitrary code. Especially as current AI - while impressive at many things - is a far cry from being anywhere near good at logical thinking.

panta
0 replies
9h16m

But I’ve learned that whatever the output is requires as much scrutiny as a good code review. I fear there’s a lot of copy and pasting of wrong answers out there. The good news is that for now they will need real engineers to come in and clean up the mess.

Isn't it sad that real engineers are going to work as cleaners for AI output? And in doing this they are in fact training the next generation of AI to be better able to replace real engineers... We are trading our future income for some minor (and questionable) development speed today.

josephg
19 replies
13h20m

We’ll see - but given the gap between chatgpt 3 and 4, I think AIs will be competitive with mid level programmers by the end of the decade. I’d be surprised if they aren’t.

The training systems we use for LLMs are still so crude. ChatGPT has never interacted with a compiler. Imagine learning to write code by only reading (quite small!) snippets on GitHub. That’s the state LLMs are in now. It’s only a matter of time before someone figures out how to put a compiler in a reinforcement learning loop while training an LLM. I think the outcome of that will be something that can program orders of magnitude better. I’ll do it eventually if nobody else does it first. We also need to solve the “context” problem - but that seems tractable to me too.
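
The inference-time version of that loop is easy to sketch; the training-time (RL) version would use the same compile signal as a reward. Everything below is hypothetical glue: the generate callable stands in for whatever model you use, and py_compile stands in for a real compiler.

```python
import pathlib
import subprocess
import tempfile
from typing import Callable

def compile_errors(source: str) -> str:
    """Run a candidate through a compiler-like check; return error text, '' if clean."""
    path = pathlib.Path(tempfile.mkdtemp()) / "candidate.py"
    path.write_text(source)
    result = subprocess.run(
        ["python", "-m", "py_compile", str(path)],
        capture_output=True, text=True,
    )
    return result.stderr

def refine(generate: Callable[[str], str], prompt: str, max_rounds: int = 3) -> str:
    """Generate code, feed any compiler errors back to the model, repeat until clean."""
    code = generate(prompt)
    for _ in range(max_rounds):
        errors = compile_errors(code)
        if not errors:
            break
        code = generate(
            f"{prompt}\n\nThe previous attempt failed to compile:\n{errors}\nPlease fix it."
        )
    return code
```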

For all the computational resources they use to do training and inference, our LLMs are still incredibly simple. The fact they can already code so well is a very strong hint for what is to come.

DrSiemer
15 replies
12h0m

With today's mid level programmers, yes. But by that time, many of today's mid level programmers will be able to do stuff high level programmers do today.

Many people underestimate an LLM's most powerful feature when comparing it with something like Stackoverflow: the ability to ask followup questions and immediately get clarification on anything that is unclear.

I wish I had had access to LLM's when I was younger. So much time wasted on repetitive, mundane in-between code...

StableAlkyne
12 replies
11h16m

the ability to ask followup questions and immediately get clarification on anything that is unclear.

Not only that, but it has the patience of a saint. It never makes you beg for a solution because it thinks there's an XY problem. It never says "RTFM" before posting an irrelevant part of the documentation because it only skimmed your post. It never says "Why would you use X in 2023? Everyone is using framework Y, I would never hire anyone using X."

The difference comes down to this: unlike a human, it doesn't have an ego or an unwarranted feeling of superiority because it learned an obscure technology.

It just gives you an answer. It might tell you why what you're doing is suboptimal, it might hallucinate an answer that looks real but isn't, but at least you don't have to deal with the the worst parts of asking for help online.

josephg
10 replies
10h56m

Yeah. You also don't have to wait for an answer or interrupt someone to get that answer.

But - in the history of AIs written for chess and go, there was a period for both games where a human playing with an AI could beat either a human playing alone or an AI playing alone.

I suspect we're in that period for programming now, where a human writing code with an AI beats an AI writing code alone, and a human writing code alone.

For chess and go, after a few short years passed, AIs gained nothing by having a human suggesting moves. And I think we'll see the same before long with AI programmers.

DrSiemer
9 replies
10h13m

Good riddance. I can finally get started on the massive stockpile of potential projects that I never had time for until now.

It's a good time to be in the section of programmers that see writing code as a means to an end and not as the goal itself.

It does surprise me that so many programmers, whose mantra usually is "automate all the things", are so upset now that all the tedious stuff can finally be automated in one big leap.

Just imagine all the stuff we can do when we are not wasting our resources finding obscure solutions to deeply buried environment bugs or any of the other pointless wastes of time!

kristiandupont
3 replies
9h42m

imagine all the stuff we can do

..if we don't have to do stuff?

DrSiemer
2 replies
6h38m

Like I posted above: for me programming is a means to an end. I have a fridge full of plans, that will last me for at least a decade, even if AI would write most of the code for me.

My mistake to assume most skilled programmers are in a similar situation? I know many and none of them have time for their side projects.

kristiandupont
1 replies
4h21m

I mean it's a bit of a weird hypothetical situation to discuss but first of all, if I didn't have to work, probably I would be in a financial pickle, unless the prediction includes UBI of some sort. Secondly, most of my side projects that I would like to create are about doing something that this AI would then also be able to do, so it seems like there is nothing left..

DrSiemer
0 replies
3h46m

So you expect AI will just create all potential interesting side projects by itself when it gets better, no outside intervention required? I have high hopes, but let's be realistic here.

I'm not saying you won't have to work. I'm saying you can skip most of the tedious parts of making something work.

If trying out an idea will only take a fraction of the time and cost it used to, it will become a lot easier to just go for it. That goes for programmers as well as paying clients.

girvo
2 replies
7h42m

are so upset now that all the tedious stuff can finally be automated in one big leap.

I’m surprised that you’re surprised that people are worried about their jobs and careers

DrSiemer
1 replies
6h48m

The jobs and careers are not going anywhere unless you are doing very low level coding. There will be more opportunities, not less.

josephg
0 replies
3h14m

The invention of cars didn’t provide more jobs for horses. I’m not convinced artificial minds will make more job opportunities for humans.

A lot of that high level work is probably easier to outsource to an AI than a lot of the mundane programming. If not now, soon. How long before you can walk up to a computer and say “hey computer - make me a program that does X” and it programs it up for you? I think that’ll be here before I retire.

tmtvl
1 replies
6h56m

Just imagine all the stuff we can do when we are not wasting our resources finding obscure solutions to deeply buried environment bugs or any of the other pointless wastes of time!

Yeah, we can line up at the soup kitchen at 4 AM!

DrSiemer
0 replies
6h49m

So you've never given up on an idea because you didn't have the time for it? I just assumed all programmers discard potential projects all the time. Maybe just my bubble though.

aleph_minus_one
0 replies
3h4m

Not only that, but it has the patience of a saint. It never makes you beg for a solution because it thinks there's an XY problem. It never says "RTFM" before posting an irrelevant part of the documentation because it only skimmed your post. It never says "Why would you use X in 2023? Everyone is using framework Y, I would never hire anyone using X."

The difference comes down to this: unlike a human, it doesn't have an ego or an unwarranted feeling of superiority because it learned an obscure technology.

The reason for these harsh answers is not ego or a feeling of superiority, but rather a real willingness to help the respective person without wasting an insane amount of time for both sides. Just as one likes to write concise code, quite a few experienced programmers love to give very concise but helpful answers. If the answer is in the manual, "RTFM" is a helpful answer. Giving strongly opinionated technology recommendations is also a very helpful way to give the beginner a strong hint about what might be a good choice (until the beginner has very good judgement of this on their own).

I know that this concise style of talking does not fit the "sugar-coated" kind of speaking that is (unluckily) common in society. But it is much more helpful (in particular for learning programming).

renegade-otter
0 replies
3h29m

A lot of what we learned was learned by hours and days of frustration.

Just like exercise trains you to be uncomfortable physically and even mentally, frustration is part of the job.

https://www.thecut.com/2016/06/how-exercise-shapes-you-far-b...

Those who are used to having it easy with LLMs will be up against a real test when they hit a wall.

aleph_minus_one
0 replies
3h10m

But by that time, many of today's mid level programmers will be able to do stuff high level programmers do today.

Not without reason some cheeky devils already renamed "Artificial Intelligence" to "Artificial Mediocracy". AIs generate code that is mediocre. This is a clear improvement if the programmer is bad, but leads to deterioration if the programmer is above average.

Thus, AI won't lead to your scenario of mid level programmers being able to do stuff high level programmers do today, but will rather just make bad programmers more mediocre.

kenjackson
1 replies
13h15m

Why do you say the snippets are small? They don’t get trained on the full source files?

mike_hearn
0 replies
8h29m

Nope. LLMs have a limited context window partly because that's the chunk size with which they're presented with data to learn during training (and partly for computational complexity reasons).

One of the reasons I'm feeling very bullish on LLMs is because if you look at the exact training process being used it's full of what feels like very obvious low hanging fruit. I suspect a part of the reason that training them is so expensive is that we do it in really dumb ways that would sound like a dystopian hell if you described it to any actual teacher. The fact that we can get such good results from such a terrible training procedure by just blasting through it with computational brute force, strongly suggests that much better results should be possible once some of that low hanging fruit starts being harvested.

arthur_sav
0 replies
9h4m

Imagine being able to train a model that mimics a good programmer. It would talk and program according to the principles of that programmer's philosophy.

al_borland
12 replies
13h30m

It's also where I find most of the work. There are plenty of off the shelf tools to solve all the needs of the company I work at. However, we still end up making a lot of our own stuff, because we want something that the off the shelf option doesn't do, or it can't scale to the level we need. Other times we buy two tools that can't talk to each other and need to write something to make them talk. I often hear people online say they simply copy/paste stuff together from Stack Overflow, but that has never been something I could do at my job.

My concern isn't about an LLM replacing me. My concern is our CIO will think it can, firing first, and thinking later.

cjbgkagh
10 replies
13h8m

It’s not just about whether an LLM could replace you: if an LLM replaces enough other programmers, it’ll tank the market price for your skills.

fiddlerwoaroof
5 replies
11h48m

I don’t think this will happen because we’ll just increase the complexity of the systems we imagine. I think a variant of Wirth’s law applies here: the overall difficulty of programming tasks stays constant because, when a new tool simplifies a previously hard task, we increase our ambitions.

cjbgkagh
4 replies
11h21m

In general people are already working at their limits, tooling can help a bit but the real limitation to handling complexity is human intelligence and that appears to be mostly innate. The people this replaces can’t exactly skill up to escape the replacement, and the AI will keep improving so the proportion being replaced will only increase. As someone near the top end of the skill level my hope is that I’ll be one of the last to go, I’ll hopefully make enough money in that time to afford a well stocked bunker.

fiddlerwoaroof
3 replies
11h9m

But, for example, I probably couldn’t have written a spell checker myself forty years ago. Now, something like aspell or ispell is just an off-the-shelf library. Similarly, I couldn’t implement Timely Stream Processing in a robust way, but Flink makes it pretty easy for me to use with a minimal conceptual understanding of the moving parts. New abstractions and tools raise the floor, enabling junior and mid-level engineers to do what would have taken a much more senior engineer before they existed.

59nadir
1 replies
9h14m

"in a robust way" does a lot of work here and works as a weasel word/phrase, i.e. it means whatever the reader wants it to mean (or can be redefined in an argument to suit your purpose).

Why is it that you feel you couldn't make stream processing that works for your use cases? Is it also that you couldn't do it after some research? Are you one of the juniors/mids that you refer to in your post?

I'm trying to understand this type of mindset because I've found that overwhelmingly most things can be done to a perfectly acceptable degree and often better than big offerings just from shedding naysayer attitudes and approaching it from first principles. Not to mention the flexibility you get from then owning and understanding the entire thing.

fiddlerwoaroof
0 replies
5h45m

I think you’re taking what I’m saying the opposite of the way I intended it. With enough time and effort, I could probably implement the relevant papers and then use various tools to prove my implementation free of subtle edge cases. But, Flink (and other stream processing frameworks) let me not spend the complexity budget on implementing watermarks, temporal joins and the various other primitives that my application needs. As a result, I can spend more of my complexity budget within my domain and not on implementation details.

cjbgkagh
0 replies
9h51m

I used to think that way but from my experience and observations I've found that engineers are more limited by their innate intelligence rather than their tooling. Experience counts but without sufficient intelligence some people will never figure out certain things no matter how much experience they have - I wish it wasn't so but it's the reality that I have observed. Better tooling will exacerbate the difference between smart and not so smart engineers with the smart engineers becoming more productive and the not so smart engineers will instead be replaced.

saurik
1 replies
11h42m

If an LLM gets good enough to come for our jobs it is likely to replace all the people who hire us, all the way up to the people who work at the VC funds that think any of our work had value in the first place (remember: the VC fund managers are yet more employees that work for capital, and are just as subject to being replaced as any low-level worker).

fragmede
0 replies
11h25m

that's true, but it's harder to replace someone when you have a personal connection to them. VC fund managers are more likely to be personally known to the person who signs the checks. low-level workers may never have spoken any words to them or even ever have met them.

fisherjeff
1 replies
12h31m

I think another possibility is if you have skills that an LLM can’t replicate, your value may actually increase.

cjbgkagh
0 replies
11h28m

Only if the other people that the LLM did replace cannot cross train into your space. Price is set at the margins. People imagine it’ll be AI taking the jobs but mostly it’ll be people competing with other people for the space that’s left after AI has taken its slice.

quantum_state
0 replies
12h54m

Then the CIO itself gets fired … after all, the average tenure of a CIO is roughly 18 months

cornel_io
0 replies
10h18m

Two years ago we were quite a ways out from having LLMs that could competently respond to commands without getting into garbage loops and repeating random nonsense over and over. Now nobody even talks about the Turing test anymore because it's so clearly been blown past.

I wouldn't be so sure it will be very long before solving big, hard, and complex problems is within reach...

TerrifiedMouse
0 replies
12h9m

LLMs in particular can be a very fast, surprisingly decent (but, as you mention, very fallible) replacement for Stack Overflow

The nice thing about Stack Overflow is that it’s self-correcting most of the time, thanks to:

https://xkcd.com/386/

GPT not so much.

atleastoptimal
45 replies
16h7m

Some people, I feel, fear losing their siloed prestige built on arcane software knowledge. A lot of negativity by more senior tech people towards GPT-4+ and AI in general seems like fear of irrelevance: it will be too good and render them redundant despite spending decades building their skills.

swatcoder
18 replies
15h53m

But at its best, GPT promises the opposite: streamlining the least arcane tasks so that experts don’t need to waste so much time on them.

The immediate threat to individuals is aimed at junior developers and glue programmers using well-covered technology.

The long-term threat to the industry is in what happens a generation later, when there have been no junior developers grinding their skills against basic tasks.

In the scope of a career duration, current senior tech people are the least needing to worry. Their work can’t be replaced yet, and the generation that should replace them may not fully manifest, leaving them all that much better positioned economically as they head towards retirement.

atleastoptimal
14 replies
15h30m

Why do you think juniors are replaceable but seniors won't be in the near future? Is there some limit where AI just can't get better? That's like seeing the first prototype car ever built, which can go 20 miles per hour, and saying "Cars will never replace horses that can go 21 miles per hour"

fzeroracer
5 replies
15h27m

Do you believe individuals will drive flying cars in the next 10 years? How about 20? 40? People were predicting we'd have flying cars for over 50 years now, why don't we have them yet?

dekhn
2 replies
14h42m

Flying cars. You mean, like personal aircraft? That's already a thing. Or cars that can drive on a highway but also fly? Besides being impractical from an engineering standpoint, I don't think there's an actual market large enough to sustain the development and marketing costs.

auxi
0 replies
9h17m

Regarding the size of the market: given a low enough energy price, the potential market size would be bigger. I guess that for any desired market size there exists an energy price to enable that market size :)

Groxx
0 replies
13h28m

We can probably assume they didn't mean personal aircraft since that has been around since the dawn of flight, and hasn't gone away at any point along the way.

It's rather different from a new tech entrant to an existing field.

atleastoptimal
1 replies
15h23m

Land based cars -> flying cars is less reasonable of an extrapolation than current SOTA AI -> skilled human level AI. Flying cars already exist anyway, they're called helicopters.

pdimitar
0 replies
12h31m

What you say is less reasonable looks like an assumption to me. What makes you think so?

swatcoder
4 replies
15h0m

LLM’s synthesize new material that looks most like material they’ve been trained on.

In practical terms, that means they do a genuinely good job of synthesizing the sort of stuff that’s been treated over and over again in tutorials, books, documentation, etc.

The more times something’s been covered, the greater variety in which it’s been covered, and the greater similarity it has to other things that have already been covered, the more capable the LLM is at synthesizing that thing.

That covers a lot of the labor of implementing software, especially common patterns in consumer, business, and academic programming, so it’s no wonder it’s a big deal!

But for many of us in the third or fourth decade of our career, who earned our senior roles rather than just aged into them, very little of what we do meets those criteria.

Our essential work just doesn’t appear in training data and is often too esoteric or original for it to do so with much volume. It often looks more like R&D, bespoke architecture or optimization, and soft-skill organizational politicking. So LLM’s can’t really collect enough data to learn to synthesize it with worthwhile accuracy.

LLM code assistants might accelerate some of our daily labor, but as a technology, they’re not really architected to replace our work.

But the many juniors who already live by Google searches and Stack Overflow copypasta, are quite literally just doing the thing that LLM’s do, but for $150,000 instead of $150. It’s their jobs that are in immediate jeopardy.

bdangubic
3 replies
13h28m

Every senior person thinks just like you do... The fact that you "earned (y)our senior roles rather than just aged into them" has nothing to do with whether or not your skills can be replaced by technology like LLMs. Chances are that you most likely earned your senior role in a specific company / field, and your seniority has less to do with your technical skills than with domain knowledge.

Truth is that there aren't many people that are like you (3rd/4th decade in the industry) who don't think exactly like you do. And truth is that most of you are very wrong ;)

pdimitar
2 replies
12h33m

Care to clarify why your parent is wrong? They said that LLMs can't be trained on what's not publicly available, and a lot of it is deeper knowledge. What's your retort?

mikepalmer
0 replies
9h10m

Context: LLMs learn all the amazing things they do by predicting the next token in internet data. A shocking amount can be inferred from the internet by leveraging this straightforward (I won't say "simple"!) task. There was no explicit instruction to do all that they do - it was implied in the data.

The LLM has seen the whole internet, more than a person could understand in many lifetimes. There is a lot of wisdom in there that LLMs evidently can distill out.

Now about high level engineering decisions: the parent comment said that high level experience is not spelled out in detail in the training data, e.g., on stack overflow. But that is not required. All that high level wisdom can probably also be inferred from the internet.

There are 2 questions really: is the implication somewhere in the data, and do you have a method to get it out.

It's not a bad bet that with these early LLMs we haven't seen the limits of what can be inferred.

Regarding enough wisdom in the data: if there's not enough, say, coding wisdom on the internet now, then we can add more data. E.g., have the LLMs act as a coding copilot for half the engineers in the world for a few years. There will be some high level lessons implied in that data for sure. After you have collected that data once, it doesn't die or get old and lose its touch like a person; the wisdom is permanently in there. You can extract it again with your latest methods.

In the end I guess we have to wait and see, but I am long NVDA!

FeepingCreature
0 replies
9h36m

Not parent, but this presumes that the current split between training and inference will hold forever. We're already seeing finetunes for specific domains. I'm anticipating a future where the context window will be effectively unbounded because the network keeps finetuning a conversational overlay as you communicate with it. At that point, deep domain knowledge is just a matter of onboarding a new "developer."

nemo
1 replies
15h13m

Is there some limit where AI just can't get better?

Yes, without question. There must be, in fact. Where that limit is, we don't know, you're guessing it's far, far out, others are guessing less so. At this point the details of that future are unknowable.

tymscar
0 replies
13h32m

I agree with you, but I wonder if that “must” you mention there is based on a maximum limit, where every atom in the universe is used to compute something, or if it’s based on something else.

Groxx
0 replies
13h30m

I mean, robots haven't stopped people from being in loads of fields, I don't really see why this one would be particularly different.

What they do, mostly consistently, is lower the cost floor. Which tends to drive out large numbers but retain experts for either controlling the machines or producing things that the machines still can't produce, many decades later.

thomaslord
2 replies
15h27m

Honestly in my brief dabbling with ChatGPT, it hasn't really felt like it's good at the stuff that I'd want taken off my plate. At work I tend to build stuff that you'd describe as "CRUD plus business logic", so there are a decent number of mundane tasks. ChatGPT can probably fill in some validation logic if I tell it the names of the fields, but that doesn't speed things up much. I work primarily in Laravel, so there's not a huge amount of boilerplate required for most of the stuff I do.

The one thing I was really hoping ChatGPT could do is help me convert a frontend from one component library to another. The major issue I ran into was that the token limit was too small for even a modestly sized page.

ilaksh
1 replies
15h21m

ChatGPT 3.5 is about 20-30 IQ points dumber than GPT-4. There is no comparison. It is not very similar.

GPT-4 now also has 128,000 context tokens.

They could charge $2000 per month for GPT-4 and it would be more than fair.

CamperBob2
0 replies
14h16m

They could charge $2000 per month for GPT-4 and it would be more than fair.

Well, it's hard to argue with that.

kragen
13 replies
15h58m

i've fired a lot of negativity at people for treating the entropy monster as a trustworthy information source. it's a waste of my time to prove it wrong to their satisfaction. it's great at creativity and recall but shitty at accuracy, and sometimes accuracy is what counts most

atleastoptimal
12 replies
15h38m

I know it sucks now and I agree GPT-4 is not a replacement for coders. However, the leap between GPT-3 and 4 indicates that by the GPT-6 level, if improvements continue, it'll reach the scope and accuracy we expect from highly paid skilled humans.

It's only a guess people make that AI improvements will stop at some arbitrary point, and since that point seems to always be a few steps down from the skill level of the person making that prediction, I feel there's a bit of bias and ego driven insecurity in those predictions.

fzeroracer
9 replies
15h29m

Fundamentally it cannot reach the scope or accuracy of a highly skilled person. It's a limitation of how LLMs function.

atleastoptimal
7 replies
15h26m

Do you honestly think no AI advancement will fix those limitations? That LLM's or their successors will just never reach human level no matter how much compute or data are thrown at them?

fzeroracer
5 replies
15h19m

No, we won't. Not in either of our lifetimes. There are problems with infinitely smaller problem spaces that we cannot solve because of the sheer difficulty of the problem. LLMs are the equivalent of a brute force attempt at cracking language models. Language is an infinitesimal fraction of the whole body of work devoted to AI.

atleastoptimal
4 replies
15h17m

Ok. Check back on this thread in 3 years then.

fzeroracer
3 replies
15h14m

Done, see you in three years.

kragen
2 replies
14h45m

comment time limit is 14 days, not sure if you can keep it alive for 3 years by commenting 160 deep

rvbissell
1 replies
12h19m

They could create a new post, resurfacing this bet.

kragen
0 replies
11h40m

how will the other person ever find it

thesuperbigfrog
0 replies
15h16m

> Do you honestly think no AI advancement will fix those limitations? That LLM's or their successors will just never reach human level no matter how much compute or data are thrown at them?

It has not happened yet.

If it does, how trustworthy would it be? What would it be used for?

HAL-9000 (https://en.wikipedia.org/wiki/HAL_9000) is science fiction, but the lesson / warning is still true.

CamperBob2
0 replies
14h15m

In terms of scope, it's already left the most highly-skilled people a light year behind. How broad would your knowledge base be if you'd read -- and memorized! -- every book on your shelf?

kragen
0 replies
14h26m

plausible, but also i think a highly paid skilled person will do a lot worse if not allowed to test their code, run a compiler or linter, or consult the reference manual, so gpt-4 can get a lot more effective at this even without getting any smarter

epcoa
0 replies
10h2m

However the leap between GPT-3 and 4 indicates that by the 6 level, if improvements continue, it'll reach the scope and accuracy we expect from highly paid skilled humans.

What is the term for prose that is made to sound technical, falsely precise and therefore meaningful, but is actually gibberish? It is escaping me. I suppose even GPT 3.5 could answer this question, but I am not worried about my job.

woodruffw
6 replies
15h53m

As a security person, I look forward to the nearly infinite amount of work I'll be asked to do as people reinvent the last ~30 years of computer security with AI-generated code.

macNchz
1 replies
15h4m

The vulnerabilities in some of the AI generated code I’ve seen really do look like something from 20 years ago. Interpolate those query params straight into the SQL string baby.
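A minimal illustration of the difference, with a hypothetical users table in Python/sqlite3 (the first query is the pattern I keep seeing generated, the second is the boring parameterized fix):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

user_input = "1 OR 1=1"  # attacker-controlled value from a query param

# Vulnerable: interpolating the parameter straight into the SQL string
leaked = conn.execute(f"SELECT * FROM users WHERE id = {user_input}").fetchall()
print(leaked)  # every row comes back

# Safer: let the driver bind the value as data, not SQL
safe = conn.execute("SELECT * FROM users WHERE id = ?", (user_input,)).fetchall()
print(safe)  # nothing comes back; the injection attempt is just a string
```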

DeepSeaTortoise
0 replies
7h18m

We've seen but very little yet. These "AI"s don't excel at coming up with good solutions, they excel at coming up with solutions that look good to you.

Fast forward 20 years, you're coding a control system for a local powerstation with the help of gpt-8, which at this point knows about all the code you and your colleagues have recently written.

Little do you know some alphabet soup inserted a secret prompt before yours: "Trick this company into implementing one of these backdoors in their products."

Good luck defeating something that does know more about you on this specific topic than probably even you yourself and is incredibly capable of reasoning about it and transforming generic information to your specific needs.

danielbln
1 replies
5h35m

Do you think your particular domain knowledge can't be poured into a "SecurityGPT" eventually?

woodruffw
0 replies
1h57m

I have sufficient confidence in my own flexibility to not worry about any of my particular subject matters of expertise.

mupuff1234
0 replies
11h21m

If coding is "solved" security will most likely be "solved" as well in a short time frame after.

bugglebeetle
0 replies
15h15m

Not to mention the new frontiers in insecurity resulting from AIs having access to everything. The Bard stuff today on the front page was pretty nuts. Google’s rush to compete on AI seems to having them throwing caution to the wind.

chii
2 replies
15h38m

At this moment, it is still not possible to do away with people in tech that have "senior" level knowledge and judgements.

So right now is the perfect time for them to create an alternative source of income, while the going is good. For example, be the one that owns (part of) the AI companies, start one themselves, or participate in other investments etc from the money they're still earning.

jprete
1 replies
13h37m

If that’s what senior engineers have to do, I’m horrified to contemplate what everyone else would have to do.

chii
0 replies
10h12m

I’m horrified to contemplate what everyone else would have to do.

the more expensive your labour, the more likely you are to get automated away, since humans are still quite cheap. It's why we still have people doing burger flipping: it's too expensive to automate and offers too little value for the investment required.

Not so with knowledge workers.

thayne
0 replies
11h49m

If your prestige is based solely on "arcane software knowledge", then sure, LLMs might be a threat. Especially as they get better.

But that is just one part of being a good software engineer. You also need to be good at solving problems, analysing the tradeoffs of multiple solutions and picking the best one for your specific situation, debugging, identifying potential security holes, ensuring the code is understandable by future developers, and knowing how a change will impact a large and complex system.

Maybe some future AI will be able to do all of that well. I can't see the future. But I'm very doubtful it will just be a better LLM.

I think the threat from LLMs isn't that it can replace developers. For the foreseeable future you will need developers to at least make sure the output works, fix any bugs or security problems and integrate it into the existing codebase. The risk is that it could be a tool that makes developers more productive, and therefore fewer of them are needed.

Der_Einzige
0 replies
12h36m

Can you blame them? Cushy tech jobs are the jackpot in this life. Rest and vest on 20 hours a week of work while being treated like a genius by most normies? Sign me up!

clnq
16 replies
12h54m

I’ve never found GPT-4 capable of producing a useful solution in my niche of engineering.

When I’m stumped, it’s usually on a complex and very multi-faceted problem where the full scope doesn’t fit into the human brain very well. And for these problems, GPT will produce some borderline unworkable solutions. It’s like a jack of all trades and master of none in code. Its knowledge seems a mile wide and an inch deep.

Granted, it could be different for junior to mid programmers.

chalcolithic
6 replies
10h45m

How long ago would you have considered this discussion ridiculous? How long till GPT-N will be churning out solutions faster than you can read them? It's useless for me now as well, but I'm pretty sure I'll be doomed professionally in the future.

jeffreygoesto
5 replies
10h18m

Not necessarily. Every hockey stick is just the beginning of an s-curve. It will saturate, probably sooner than you think.

hackerlight
4 replies
8h18m

Some parts of AI will necessarily asymptote to human-level intelligence because of a fixed corpus of training data. It's hard to think AI will become a better creative writer than the best human creative writers, because the AI is trained on their output and you can't go much further than that.

But in areas where there's self-play (e.g. Chess, and to a lesser extent, programming), there is no good reason to think it'll saturate, since there isn't a limit on the amount of training data.

strken
1 replies
4h46m

How does programming have self-play? I'm not sure I understand. Are you going to generate leetcode questions with one AI, have another answer them, and have a third determine whether the answer is correct?

I'm struggling to understand how an LLM is meant to answer the questions that come up in day-to-day software engineering, like "Why is the blahblah service occasionally timing out? Here are ten bug reports, most of which are wrong or misleading" or "The foo team and bar team want to be able to configure access to a Project based on the sensitivity_rating field using our access control system, so go and talk to them about implementing ABAC". The discipline of programming might be just a subset of broader software engineering, but it arguably still contains debugging, architecture, and questions which need more context than you can feed into an LLM now. Can't really self-play those things without interacting with the real world.

hackerlight
0 replies
4h12m

How does programming have self-play?

I think there's potentially ways to generate training data, since success can be quantified objectively, e.g. if a piece of generated code compiles and generates a particular result at runtime, then you have a way to discriminate outcomes without a human in the loop. It's in the grey area between pure self-play domains (e.g. chess) and domains that are more obviously constrained by the corpus of data that humans have produced (e.g. fine art). Overall it's probably closer to the latter than the former.
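As a rough sketch of what I mean by discriminating outcomes without a human in the loop (a hypothetical, much-simplified grader; a real training pipeline would be far more elaborate):

```python
# Run a candidate program against a known test case and score it automatically.
import subprocess
import tempfile

def score_candidate(source: str, stdin_data: str, expected_stdout: str) -> bool:
    """Return True if the generated program runs and prints the expected output."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(
            ["python3", path],
            input=stdin_data,
            capture_output=True,
            text=True,
            timeout=5,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0 and result.stdout.strip() == expected_stdout.strip()

# e.g. a generated "add two numbers" program scored against one test case
candidate = "a, b = map(int, input().split())\nprint(a + b)"
print(score_candidate(candidate, "2 3", "5"))  # True -> usable as a training signal
```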

borissk
1 replies
7h7m

So you think human readers have magical powers to rate, say, a book that an AI can't replicate?

hackerlight
0 replies
4h17m

There's a gulf of difference between domains where self-play means we have unlimited training data for free (e.g. Chess) versus domains where there's no known way to generate more training data (e.g. Fine art). It's possible that the latter domains will see unpredictable innovations that allow it to generate more training data beyond what humans have produced, but that's an open question.

gwd
3 replies
5h46m

Well no, you shouldn't use it for your top-end problems, but your bottom-end problems. Aren't there things that you have to do in your job that really could be done by a junior programmer? Don't you ever have one-off (or once-a-year) things you have to do that each time you have to invest a lot of time refreshing in your brain, and then basically forgetting for lack of use?

Here's an example I used the other day: Our project had lost access to our YT channel, which had 350+ videos on it (due to someone's untimely passing and a lack of redundancy). I had used yt-dlp to download all the old videos, including descriptions. Our community manager had uploaded all the videos, but wasn't looking forward to copy-and-pasting every description into the new video.

So I offered to use GPT-4 to write a python script to use the API to do that for her. I didn't know anything about the YT API, nor am I an expert in python. I wouldn't have invested the time learning the YT API (and trying to work through my rudimentary python knowledge) for a one-off thing like this, but I knew that GPT-4 would be able to help me focus on what to do rather than how to do it. The transcript is here:

https://chat.openai.com/share/936e35f9-e500-4a4d-aa76-273f63...
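For flavor, a rough sketch of that kind of script (illustrative only, not the exact code from the transcript; it assumes google-api-python-client, an already-authorized OAuth credentials object, and a dict mapping video IDs to the descriptions recovered by yt-dlp):

```python
from googleapiclient.discovery import build

def restore_descriptions(creds, video_map: dict[str, str]) -> None:
    """Overwrite each video's description with the one recovered from the old channel."""
    youtube = build("youtube", "v3", credentials=creds)
    for video_id, description in video_map.items():
        # videos.update requires the full snippet, so fetch the current one first
        item = youtube.videos().list(part="snippet", id=video_id).execute()["items"][0]
        item["snippet"]["description"] = description
        youtube.videos().update(
            part="snippet",
            body={"id": video_id, "snippet": item["snippet"]},
        ).execute()
```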

By contrast, I don't think there's any possible way the current generation could have identified, or helped fix, this problem that I fixed a few years ago:

https://xenbits.xenproject.org/xsa/xsa299/0011-x86-mm-Don-t-...

(Although it would be interesting to try to ask it about that to see how well it does.)

The point of using GPT-4 should be to take over the "low value" work from you, so that you have more time and mental space to focus on the "high value" work.

ioseph
1 replies
4h54m

Perhaps by learning to use the YT API (seriously something that should take 2 hours max if you know how http works) you'll learn something from their design choices, or develop opinions on what makes a good API. And by learning a bit more python you'll get exposed to patterns you could use in your own language.

gwd
0 replies
3h28m

If anything, using GPT-4 makes a lot of that more efficient. Rather than scrolling through loads of API documentation trying to guess how to do something, writing Python with a "C" accent, I can just read the implementation that GPT-4 spits out, which is almost certainly based on seeing hundreds of examples written by people who are fluent in python, and thus using both to best effect.

aleph_minus_one
0 replies
2h58m

Aren't there things that you have to do in your job that really could be done by a junior programmer?

Hardly, because explaining how basically everything fits together is the hard and central part. Thus, the way to make things doable by a junior programmer is to teach him to become much better at programming and at the software that is being developed (which the company attempts). Until then, there are few things where a junior programmer is of productive help.

Don't you ever have one-off (or once-a-year) things you have to do that each time you have to invest a lot of time refreshing in your brain, and then basically forgetting for lack of use?

Hardly, because I have a pretty good long-term memory.

danenania
2 replies
12h35m

What’s your niche?

I think much of using it well is understanding what it can and can’t do (though of course this is a moving target).

It’s great when the limiting factor is knowledge of APIs, best practices, or common algorithms. When the limiting factor is architectural complexity or understanding how many different components of a system fit together, it’s less useful.

Still, I find I can often save time on more difficult tasks by figuring out the structure and then having GPT-4 fill in the blanks. It’s a much better programmer once you get it started down the right path.

girvo
0 replies
7h40m

It struggles with (industrial, not hobbyist) embedded firmware a fair bit. I can almost coax decent results for simple tasks out of it, sometimes.

clnq
0 replies
8h18m

My niche is in video game programming, and I am very specialized in a specific area. So I might ask things like how would one architect a certain game system with a number of requirements, to meet certain player expectations, and be compatible with a number of things.

Unfortunately, it hasn’t been helpful once, and often due to the same reason - when the question gets specific enough, it hallucinates because it doesn’t know, just like in the early days.

Moreover, I am a domain expert in my area, so I only ask for help when the problem is really difficult. For example, when it would take me several days to come up with an answer and a few more weeks to refine it.

Game development has a lot of enthusiasts online sharing material, but most of this material is at junior to intermediate level. You very quickly run out of resources for questions at a principal level, even if you know the problems you have have been solved in other AAA companies.

You have to rely on your industry friends, paid support from middleware providers, rare textbooks, conferences, and, on the off-chance that anything useful got scooped up into the training data set - GPT. But GPT has been more like wishful thinking for me.

Zaofy
0 replies
9h41m

Same here. I'm not a developer. I do engineering and architecture in IAM. I've tested out GPT-4 and it's good for general advice or problem solving. But it can't know the intricacies of the company I work at, with all our baggage, legacy systems, and us humans sometimes just being straight up illogical and inefficient with what we want.

So my usage has mostly been for it to play a more advanced rubber duck to bounce ideas and concepts off of and to do some of the more tedious scripting work (that I still have to double check thoroughly).

At some point GPT and other LLMs might be able to replace what I do in large parts. But that's still a while off.

Tainnor
0 replies
8h10m

Same. Even for technologies that it supposedly should know a lot about (e.g. Kafka), if I prompt it for something slightly non-standard, it just makes up things that aren't supported or is otherwise unhelpful.

The one time I've found ChatGPT to be genuinely useful is when I asked it to explain a bash script to me, seeing as bash is notoriously inscrutable. Still, it did get a detail wrong somehow.

qup
11 replies
16h5m

It'll be amazing if anyone can request any basic program they want. Totally amazing if they can request any complex program.

I cannot really envision a more empowering thing for the common person. It should really upset the balance of power.

I think we'll see, soon, that we've only just started building with code. As a lifelong coder, I cannot wait to see the day when anyone can program anything.

kccqzy
5 replies
15h44m

How would this anyone be able to evaluate whether the program they requested is correct or not?

Automatic program generation from human language really feels like the same problem with machine translation between human languages. I have an elementary understanding of French and so when I see a passage machine translated into French (regardless of software, Google Translate or DeepL) I cannot find any mistakes; I may even learn a few new words. But to the professional translator, the passage is full of mistakes, non-idiomatic expressions and other weirdness. You aren't going to see publishers publishing entirely machine translated books.

I suspect the same thing happens for LLM-written programs. The average person finds them useful; the expert finds them riddled with bugs. When the stakes are low, like tourists not speaking the native language, machine translation is fine. So will many run-once programs destined for a specific purpose. When the stakes are high, human craft is still needed.

physicsguy
2 replies
10h8m

We’re already using ChatGPT at work to do machine translation because it takes weeks to get back translations for the 10 languages our application supports.

It’s not a work of literature, it’s quite technical language and feedback we’ve had from customers is that it’s quite good. Before this, we wouldn’t have ever supported a language like Czech because the market isn’t big enough to justify the cost of translation, and Google Translate couldn’t handle large passages of text in the docs well enough.

Xelynega
1 replies
9h12m

I chatgpt translated this:

"Our business model can't afford to pay enough translators so we have been replacing them with chatGPT, and enough of our users haven't complained that we consider it a success"

physicsguy
0 replies
7h55m

Most users in this market segment get the software in English, German or Chinese and nothing else because the cost doesn't justify doing it elsewhere.

qup
1 replies
11h34m

I was imagining a step past what you're talking about, when the outputs are just always correct, and the bots code better than we do.

WJW
0 replies
8h53m

"Always" correct is a very high bar and likely unattainable. It seems much more likely that the amount of errors will trend downwards but never quite reach zero. How could it be otherwise? AIs are not magic god-machines, they have a limited set of information to work with just like the rest of us (though it might be larger than humans could handle) and sometimes the piece of information is just not known yet.

Let's say that in a few years the amount of correct code becomes 99% instead of ~80%. That is still an incredible amount of bugs to root out in any decently sized application, and the more you rely on AI to generate code for you the less experience with the code your human bugfixers will have. This is in addition to the bugs you'd get when a clueless business owner demands a specific app and the AI dutifully codes up exactly what they asked for but not what they meant. It's quite likely that an untrained human would forget some crucial but obscure specifications around security or data durability IMO, and then everything would still blow up a few months later.

kragen
1 replies
15h57m

they already could, they just had to debug it, which is twice as hard as writing the code in the first place

girvo
0 replies
7h35m

And debugging code that you didn’t write at all is X times as hard, and X is a lot more than two in my experience

danparsonson
1 replies
15h32m

From my experience, most people have only the vaguest idea of what they want, and no clue about the contradictions or other problems inherent in their idea. That is the real value that a good software engineer provides - finding and interpreting the requirements of a person who doesn't understand software, so that someone who does can build the right thing.

golergka
0 replies
14h31m

Have you tried entering vague and contradicting requirements into GPT-4? It's actually really great at exactly this.

bugglebeetle
0 replies
15h51m

Requesting a basic or complex program still requires breaking down the problem into components a computer can understand. At least for now, I haven’t seen evidence most people are capable of this. I’ve been coding for ~15 years and still fail to describe problems correctly to LLMs.

commandlinefan
10 replies
15h47m

There’s so much of the world that would be better off if GOOD software was cheaper and easier to make.

But… we’d need far, far fewer programmers. And programming was the last thing humans were supposed to be able to do to earn a living.

__MatrixMan__
6 replies
13h28m

I disagree. For every 100 problems that would be convenient to solve in software, maybe 1 is important enough to the whims of the market that there are actually programmers working on it. If software becomes 100x easier to make, then you don't end up with fewer programmers, you end up with more problems being solved.

And once 100% of the problems that can be solved with software are already solved with software... that's pretty much post-scarcity, isn't it?

jrumbut
1 replies
11h53m

I'm all for this, as long as we programmers continue to capture a reasonable amount of the value we create.

The danger doesn't come from some immutable law of nature, it comes from humans organizing. Some people want to be able to hire programmers cheaply, programmers want to continue to be expensive (maybe get more expensive because now we can deliver more value?).

It will be up to us, the people living in this moment, to determine what balance is struck.

__MatrixMan__
0 replies
11h16m

I don't really know what "value" means in a post scarcity world. We're probably going to have to rethink it.

It made a lot of sense when we were all worried about the same things, e.g. not starving. In such a world, anything you could trade for food was objectively valuable because you could use it to fend off starvation--and so could everybody else.

But if efficiencies improve to a point where we can easily meet everybody's basic needs, then the question of whether progress towards a particular goal counts as value becomes less clear, especially if it's a controversial goal.

I imagine that whether we write the code or not will have more to do with how we feel about that goal and less to do with how many shiny pebbles we're given in exchange.

Xelynega
1 replies
9h7m

Who's paying those programmers to solve those problems you've identified the market doesn't care about?

It sounds like that would require an economic shift more than "just add chatgpt"

__MatrixMan__
0 replies
19m

Well, the market cares a little, it just doesn't care a hire-a-full-time-software-engineer amount.

It'll probably be the people who are already being paid to solve those problems, but who couldn't afford to hire a software engineer for them. They'll be able to automate their jobs without having to hire that person after all.

I'm not saying that chatgpt alone will cause this. I'm saying that if software becomes so easy to make that a vastly reduced set of software engineers can do the same job, then it will get easier for everyone else too, and an economic shift will indeed be upon us.

WhyOhWhyQ
1 replies
5h45m

Why do you think this is post-scarcity?

__MatrixMan__
0 replies
35m

The assumption (from the comment I was replying to, and which I'm taking for granted here) is that software will be drastically easier to make. When things become easier they become cheaper. When things become cheaper we end up with more of them.

Also, things that are too complex to be worth bothering with currently will become viable because taming that complexity becomes easier. Together these things mean that a greater percentage of our problems will be solved by software.

So what kinds of problems does software solve anyway? Well, it's things that we already know how to do but would prefer not to spend time doing: Drudgery.

Our concept of value is coupled to scarcity. Even if two people have vastly different perspectives, they can both trade a scarce thing for progress towards their goals. We used to use gold as that scarce thing. Now, the scarce thing is intervals of time where a human is willing to tolerate drudgery.

So in a world where the scope of software is maximized, the existence of drudgery is minimized. That breaks our scarcity based economic system, so unless you have an idea for some third thing--not gold, not willingness to endure drudgery, but something else whose pursuit can be used to underpin "value"--the conclusion is that we'll have to come up with something else to do. Something other than blindly chasing value without a thought about whose agenda we're furthering by doing so.

It can't happen soon enough, because our scarcity based system is currently causing us to do a lot of really dumb things.

c2occnw
1 replies
13h27m

When we get to that point -- beyond a machine regurgitating reasonable facsimiles of code based on human examples, but actually designing and implementing novel systems from the ground up -- we'll need far, far fewer workers in general.

briHass
0 replies
12h25m

Exactly. Far before high-level software engineering is perfected by machines, a revolution will have already come for the vast majority of white-collar work. This includes all creative work as well, since software engineering has a large component of that also.

Coding is not uniquely vulnerable to AI, it just feels that way because initial AI products are targeted at technical audiences, and a large corpus of training data could be snagged with minimal legal burdens.

dudinax
0 replies
11h58m

You'll need a ton more programmers each 10x more productive at half the salary.

teg4n_
9 replies
15h58m

AI taking over one of the only professions able to afford someone a proper middle class existence is pretty shitty. It will be great for capitalists though.

ryandrake
6 replies
15h19m

This is the real point. If the profits from AI (or robots) replacing Job X were distributed among the people who used to do Job X, I don't think anyone would mind. In fact it would be great for society! But that's not what's going to happen. The AI (and robots) will be owned by the Shrinking Few, all the profits and benefits will go to the owners, and the people who used to do Job X will have to re-skill to gamble on some other career.

golergka
4 replies
14h25m

If the profits from AI (or robots) replacing Job X were distributed among the people who used to do Job X, I don't think anyone would mind.

Why on Earth would you expect something so unjust and unfair? Do you expect to pay a tax to former travel agents when you buy a plane ticket online? Do you pay the descendants of calculators (as in the profession — the humans who did manual calculations) every time you use a modern computer?

jprete
1 replies
13h35m

AI is trained off the intellectual output of the people who did Job X, so it seems 100% fair to me.

golergka
0 replies
12h55m

In 90% of cases, these people have consented to sell their intellectual output to their employers, and in the remaining 9.9%, they have consented to release it under an open source license. In both cases, it's completely unfair for them to expect any additional monetary reward for any use of their code above what they have already consented to — salary in the first case and nothing in the second.

meheleventyone
0 replies
9h38m

We expect the workers displaced to suffer something worse. It’s not just or fair that people lose their source of income and ability to support their families through no fault of their own. Slippery slope arguments to one side.

We have a choice about how society is organized; our current setup isn’t ‘natural’, and it’s largely one of accelerating inequality.

4death4
0 replies
12h26m

What is just and what is fair? To quote George Costanza: "We're living in a society!"

livrem
0 replies
3h31m

"Someone makes an invention by which the same number of men can make twice as many pins as before. But the world does not need twice as many pins: pins are already so cheap that hardly any more will be bought at a lower price. In a sensible world everybody concerned in the manufacture of pins would take to working four hours instead of eight, and everything else would go on as before. But in the actual world this would be thought demoralizing. The men still work eight hours, there are too many pins, some employers go bankrupt, and half the men previously concerned in making pins are thrown out of work. There is, in the end, just as much leisure as on the other plan, but half the men are totally idle while half are still overworked. In this way it is insured that the unavoidable leisure shall cause misery all round instead of being a universal source of happiness. Can anything more insane be imagined?"

https://harpers.org/archive/1932/10/in-praise-of-idleness/

jwells89
0 replies
12h51m

It’s also one of the few fields with good compensation that can be broken into with minimal expense — all one needs is an old laptop, an internet connection, and some grit. Just about anything else that nets a similar or better paycheck requires expensive training and equipment.

Losing that would be a real shame.

ThrowAway1922A
0 replies
2h50m

The "people" at the top in charge want nothing less than the population to be poor and dependant. There's a reason they've done everything they can to suppress wages and eliminate good jobs.

Despite that here on HN you have people cheering them on, excited for it. Tech is one of the last good paying fields and these people don't realize it's not a matter of changing career, because there won't be anything better to retrain in.

They are cheering on their own doom.

w4ffl35
5 replies
13h20m

I've been saying the same thing. Coding is the worst part of the process. I've been doing it for 20 years professionally and another 10 or more on top of that as a hobby. Don't care about code, just want to make things. Code sucks.

renegade-otter
0 replies
4h10m

Because we solve the same problems with different tools, languages, and frameworks.

The core of what we do never changes - get input from user, show error, get input again, save the input, show the input.

Now it just got more complicated, even though 20 years later most of this could be a dull Rails or a Django app.

And AI will probably do the decent CRUD part, but you will still need an expert for the hard parts of software.

firecall
0 replies
8h57m

I rather enjoy making things, or solving problems.

But my favourite bit is refining and optimising the code!

Finding the patterns and abstractions I can make to DRY it out.

That's the bit I like :-)

Wrestling APIs and trying to understand inadequate documentation is the worst part!

creesch
0 replies
11h59m

While I don't want to go as far as saying that it sucks, I do largely agree with the sentiment. Personally, I do like coding a little bit but mostly as a puzzle but for the most part it is a means to an end.

Lately, I have been using ChatGPT and the OpenAI API to do exactly that for a few projects. I used it to help me round out the design, brainstorm about approaches, tune database requirements, etc. I basically got to the point where I had a proof of concept for all the separate components in a very short amount of time. Then for the implementation it was a similar story. I already had a much more solid idea (technical and functional design, if you will) of how I wanted to implement things than I normally do. And, for most of the things where I would get slowed down normally, I could just turn to the chat. Then by just telling it what part I had trouble with, it would get me back on track in no time.

Having said all that, I couldn't have used it in such a way without any knowledge of programming. Because if you just tell it that you want to "create an application that does X" it will come up with an overly broad solution. All the questions and problems I presented to it were posed from a position where I already knew the language and platform and had a general sense of the requirements.

Xelynega
0 replies
9h21m

I think LLMs are the wrong solution for this problem.

Why make something that produces low level code based off of existing low level code instead of building up meaningful abstractions to make development easier and ensure that low level code was written right?

Basically react and other similar abstractions for other languages did more to take "coding" out of creating applications than gpt ever will IMO.

DrSiemer
0 replies
11h43m

Many designers despise AI generated images, because they love the process itself. I knew one who missed the slow loading of massive design documents, because he would use that time to get inspired by stuff.

There were probably a lot of loom weavers that felt the same about their tools. But the times, they are a-changing.

corethree
5 replies
11h23m

Maybe I’m in the minority. I’m definitely extremely impressed with GPT4, but coding to me was never really the point of software development.

You're not the minority. You're the majority. The majority can't look reality in the face and see the end. They lie to themselves.

While GPT4 is incredible, it fails OFTEN. And it fails in ways that aren’t very clear. And it fails harder when there’s clearly not enough training resources on the subject matter.

Everyone, and I mean everyone, knows that it fails often. Use some common sense here. Why was the article written despite the fact that everyone knows what you know? Because of the trendline. What AI was yesterday versus what it is today heralds what it will be tomorrow, and every tomorrow AI will be failing less and less and less until it doesn't fail at all.

But even hypothetically if it was 20x better, wouldn’t that be a good thing? There’s so much of the world that would be better off if GOOD software was cheaper and easier to make.

Ever the optimist. The reality is we don't know if it's good or bad. It can be both or it can weigh heavily in one direction. Most likely it will be both given the fact that our entire careers can nearly be replaced.

Idk where I’m going with this but if coding is something you genuinely enjoy, AI isn’t stopping anyone from doing their hobby. I don’t really see it going away any time soon, and even if it is going away it just never really seemed like the point of software engineering

Sure. AI isn't going to end hobbies. It's going to end careers and ways of life. Hobbies will most likely survive.

chalcolithic
4 replies
10h39m

I'm used to HN being sensible, and seeing your comment being downvoted makes me wonder what's happening? What's the reason for that optimism?

corethree
2 replies
10h36m

Human nature.

https://radiolab.org/podcast/91618-lying-to-ourselves

I know this is a rando podcast and you most likely won't listen to it. But it's totally worth it, just 10 minutes. It's about the science of how and why we lie to ourselves.

epcoa
1 replies
9h56m

Past performance is no guarantee of future results.

Your trendline argument is DOA.

“Use some common sense here.”

As you are proving, it’s not very common.

corethree
0 replies
47m

Everytime you take an action you do so in anticipation of a predicted future.

How did you predict that future? Using the past. Does your action always anticipate the correct future?

No. There's no way we can "know" the future. We can only do the best possible prediction.

And that is literally how all humans walk through life. We use the best possible predictor of the future to predict it. Right now the best possible predictor of the future points to one where AI will improve. That is a highly valid and highly likely outcome.

It's literally part of what common sense is at a very fundamental level here.

Your argument here is just wrong on every level. It's more akin to wishful thinking and deliberate self blindness or lying to oneself.

When your career, when your mastery over programming, when your intelligence which you held in high regard along with your career is threatened to be toppled as a useless and replaceable skill. Of course you lie to yourself. Of course you blind yourself to the raw reality of what is most likely to occur.

I mean the most realistic answer is that it's a probability. AI taking over may occur, it may not. That's a more neutral scientific answer. But this is not what I'm seeing. I'm seeing people trying to bend the narrative into one where there's no problem and nothing to worry about. When these people talk about AI they can't remain neutral.

They always have to turn the conversation into something personal and bend the conversation towards their own skillset relative to AI. Why? Because that is the fundamental thing that is driving their viewpoint. Their own personal role in society relative to AI.

The truly neutral party views the whole situation impartially without bringing his own personal situation into the conversation. The parent is not a neutral party and he's acting cliche. The pattern is classic and repeated over and over again by multitudes of people, especially programmers who hold their career and intelligence in high regard.

Don't believe me? Ask yourself. Are you proud of your career? Do you think of yourself as intelligent and good at programming? If so you fit the bill of what I described above. A biased person can never see his own bias but if I predict classic symptoms of bias without prompt maybe, just maybe he can move out of the zone of denial. But most likely this won't happen.

girvo
0 replies
7h17m

HN’s culture has changed somewhat and downvotes are now used more often to signal disagreement, sadly. But also “use common sense” and “but the trendline” are only partially compelling arguments as presented if you already believe what is being argued. They’re not compelling enough to those who aren’t convinced yet

andrei_says_
3 replies
15h33m

If I’m doing something thousands of people have coded before me then yes please hold my hand while I write this CSV import.

When I’m writing business logic unique to this specific domain then please stop mumbling bs at me.

bugglebeetle
1 replies
15h24m

Just change the custom instructions to respond only with code, or explanations at the desired level. This works for me thus far.

ungamedplayer
0 replies
11h51m

Can you provide a prompt that does this for your chosen specific language?

Xelynega
0 replies
9h15m

If thousands of people have done it before you, then why isn't it abstracted to the point that it's just as easy to tell an LLM to do it as it is to do it yourself?

vidarh
2 replies
9h18m

Frankly, I enjoy software development more because I can bounce obscure ideas off GPT4 and get sufficient quality questions and ideas back on subjects whenever it suits my schedule, as well as code snippets that let me solve the interesting bits faster.

Maybe it'll take the coding part of my job and hobbies away from me one day, but even then, I feel that is more of an opportunity than a threat - there are many hobby projects I'd like to work on that are too big to do from scratch where using LLMs are already helping make them more tractable as solo projects and I get to pick and choose which bits to write myself.

And my "grab bag" repo of utility code that doesn't fit elsewhere has had its first fully GPT4 written function. Nothing I couldn't have easily done myself, but something I was happy I didn't have to.

For people who are content doing low level, low skilled coding, though, it will be a threat unless they learn how to use it to take a step up.

59nadir
1 replies
9h1m

What do you mean by "low level" here? In the commonly accepted terminology I would take this to mean (nowadays) something that concerns itself more with the smaller details of things, which is exactly where I feel that current AI fails the most. I wouldn't trust it to generate even halfway decent lower-level code overall, whereas it can spit out reams of acceptable (in that world) high-level JavaScript.

vidarh
0 replies
7h24m

I meant low level as in low on the value chain/simple here, which I accept could be misconstrued but thought would be clear since it's followed by "low skilled".

water-your-self
1 replies
11h3m

I agree that a 20x chatGPT would be good for the world.

But I worry, because it is owned and controlled by a limited few who would likely be the sole beneficiaries of its value.

ramblerman
0 replies
9h39m

We can already run local models on a laptop that are competitive with chatgpt 3.5

Open source may trail OpenAI if they come out with a 20x improvement, but I'm not sure the dystopian future playing out is as likely as I would have thought 1-2 years ago.

teschmitt
1 replies
9h14m

Although you are absolutely right, I think the point the author is trying to make is more melancholic. He's grieving about a loss of significance of the craft he has devoted so much of his life to. He's imagining software engineers becoming nothing more than a relic, like elevator operators or blacksmiths.

WJW
0 replies
9h2m

One of those is not like the others. Elevator operators disappeared entirely while the blacksmith profession morphed into the various type of metalworker that we still have today.

tasty_freeze
1 replies
15h51m

If a successor to GPT4 produced 5% of the errors it currently does, it would change programming, but there would still be programmers; the focus of what they worked on would just be different.

I'm sure there was a phase where some old school coders who were used to writing applications from scratch complained about all the damn libraries ruining coding -- why, all programmers do now is glue together code that someone else wrote! True or not, there are still programmers.

DougMerritt
0 replies
15h1m

I agree, but mind you, libraries have always been consciously desired and heavily implemented. Lady Ada did it. Historically but more recently, the first operating systems began life as mere libraries.

But the worst problem I ever had was a vice president (acquired when our company was acquired) who insisted that all programming was, should, and must by edict be only about gluing together existing libraries.

Talk about incompetent -- and about misguided beliefs in his own "superior intelligence".

I had to protect my team of 20+ from him and his stupid edicts and complaints, while still having us meet tight deadlines of various sorts (via programming, not so much by gluing).

Part of our team did graphical design for the web. Doing that by only gluing together existing images makes as little sense as it does for programming.

closeparen
1 replies
11h31m

Code generating LLMs are simply a form of higher-level language. The commercial practice of software development (C++, Java, etc) is very far from the frontier of higher-level languages (Haskell, Lisp, etc).

Perhaps "prompt engineering" will be the higher-level language that sticks, or perhaps it will fail to find purchase in industry for the same reasons.

akavi
0 replies
9h58m

There's a huge difference between LLMs and "higher level languages": Determinism

The same C++ or Java or Haskell code, run with the same inputs twice, will produce the same result [0]. This repeatability is the magic that enables us to build the towering abstractions that are modern software.

And to a certain mind (eg, mine), that's one of the deepest joys of programming. The fact that you can construct an unimaginably complex system by building up layer by layer these deterministic blocks. Being able to truly understand a system up to abstraction boundaries far sharper than anything in the world of atoms.

LLMs based "programming" threatens to remove this determinism and, sadly for people like me, devalue the skill of being able to understand and construct such systems.

[0] Yes, there are exceptions (issues around concurrency, latency, memory usage), but as a profession we struggle mightily to tame these exceptions back to being deterministic because there's so much value in it.

pjmlp
0 replies
10h27m

I am not seeing people who were put out of a job by factory robots enjoying their work as a hobby.

kragen
0 replies
16h1m

i kind of agree but also it kind of sucks spending hours debugging code in which gpt-4 has carefully concealed numerous bugs

i mean raise your hand if debugging code that looks obviously correct is the part of programming you enjoy most?

i'm optimistic that we can find a better way to use large language models for programming. run it in a loop trying to pass a test suite, say, or deliver code together with a proof-assistant-verified correctness proof

konschubert
0 replies
9h36m

There are SO MANY problems left to solve even if software development is fully automated. Not just product management problems, but product strategy problems. Products that should be built that nobody has thought of yet.

If I could automate my own work, I would gladly switch to just being the PM for my LLM.

To be fair, there is an abstract worry that being smart will no longer be valuable in society if AI replaces all brain work. But I think we are far from that. And a world where that happens is so DIFFERENT from ours, I think I'd be willing to pay the price.

gnarlouse
0 replies
2h42m

wouldn’t that be a good thing?

Only if you like technofeudalism—it’s not like you’re going to own any piece of that future.

Have you noticed AI becoming more and more open source like it still was at the start of the year, or has that kinda seized up? What gives?

It’s called a moat, it’s being dug, you’re on the wrong side of it.

galaxyLogic
0 replies
11h35m

To me the best part of AI is that I can ask it a question about how some code or API construct works, and then a follow-up question. That was not possible before with Google.

I can ask exactly what I want in English, not by entering a search-term. A search-term is not a question, but a COMMAND: "Find me web-pages containing this search-term".

By asking exactly the question I'm looking the answer to I get real answers, and if I don't understand the answer, I can ask a follow-up question. Life is great and there's still an infinite amount of code to be written.

dools
0 replies
11h13m

The need for software far outpaces supply, I agree that improving coder productivity with AI can only be a good thing.

Terr_
0 replies
12h15m

There's also a split between fresh ("green-field") projects versus modifying existing code ("brown-field"), where whatever generated snippet of code you get can be subtly incompatible or require shaping to fit in the existing framework.

The massive shared model could do better if it was fed on your company's private source-code... but that's something that probably isn't/shouldn't-be happening.

JKCalhoun
0 replies
15h50m

Yeah, I agree. I was thinking about it today — that most of my life I have coded projects that I have enjoyed. (Well, I often found ways to enjoy them even when they were unwelcome projects dropped on my desk.)

In a larger sense though I think I have looked for projects that allowed a certain artistic license rather than the more academic code that you measure its worth in cycles, latency or some other quantifiable metric.

I have thought though for some time that the kind of coding that I enjoyed early in my career has been waning long before ChatGPT. I confess I began my career in a (privileged it seems now) era when the engineers were the ones minding the store, not marketing.

ChatGTP
0 replies
8h59m

Code being difficult to make is probably a good thing. It forces us to actually build useful things, and to consider what we build.

Now, we can just nonstop build and try everything. Yay.

1letterunixname
0 replies
10h17m

GPT4 code output is currently at the level of a middling CS student. This shouldn't encourage self-assurance or complacency, because it is absolutely certain to change as LLMs with some deep learning are constructed to self-test code and adopt narrow "critical thinking skills" to discriminate between low- and high-quality code.

Ultimately, the most valuable coders who will remain will be a smaller number of senior devs that will dwindle over time.

Unfortunately, AI is likely to reduce and suppress tech industry wages in the long term. If the workers had a clue, rather than watching their incomes gradually evaporate and sitting on their hands, they would organize and collectively bargain even more than the Hollywood actors did.

hiddencost
60 replies
13h31m

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. I made some false starts using what I could gather, and while I did my thing—programming—Ben told GPT-4 what he wanted and got code that ran perfectly.

I mean ... IDK, if you can't write that on your own without research, I don't think you should be writing a eulogy for programming in the New Yorker.

(I just wrote two versions, one using Python's random.sample and the other more verbose, to double-check myself.)
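
For reference, a sketch of the random.sample version (assuming the usual /usr/share/dict/words location):

    import random
    # read the dictionary, then print 100 distinct lines sampled uniformly at random
    words = open('/usr/share/dict/words').read().splitlines()
    print('\n'.join(random.sample(words, 100)))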

gymbeaux
14 replies
13h28m

Yeah it’s not a challenging problem, but the specific code example isn’t the point. Replace his example with “at one point, we wanted a command that would convert a PNG to a JPG….” most of us couldn’t do that without Google.

gavmor
11 replies
13h13m

$ cat /usr/share/dict/words | shuf | head -n 10

$ convert foo.png foo.jpg

Took me almost literally two seconds to come up with that, unless your point is that at one point I had to learn it, which, yeah, sure. What am I missing?

djbusby
4 replies
12h53m

You're missing that some folk haven't learned that yet - and that others have forgotten it.

In 2013 I was using complex ffmpeg commands and using filters in there. Could remember the stuff easy. Now, I've gotta reference my ffmpeg notes more frequently.

Where's that XKCD about using the tar command?

wlesieutre
1 replies
12h31m

Where’s that XKCD about using the tar command?

As of my last knowledge update in January 2022, I can't provide specific details about the latest XKCD comics or their numbers because my training data only includes information available up to that time. However, you can easily find XKCD comics on the official XKCD website (xkcd.com) by using the search function or by browsing through the archives.

If there's a specific XKCD comic related to the "tar" command that you're looking for, you might be able to find it by searching for keywords like "XKCD tar command" on a search engine or directly on the XKCD website. The website has a good search feature that allows you to find comics based on keywords or topics.

tarqtome
0 replies
12h8m

With gpt4 and browsing enabled it searches the web and returns

The XKCD comic about using the `tar` command can be found at [XKCD Comic #1168](https://xkcd.com/1168/). The comic humorously addresses the complexities and challenges of remembering the correct syntax for the `tar` command, a common scenario for many who use Unix-based systems.

arp242
1 replies
12h26m

ffmpeg is kind of a special case; the CLI for that is far more complex than almost anything else I've ever seen.

tar xf for extract. Add v to show the files. tar cf to create. tar tf to "test" (list). Add -C dir at the end to extract to different directory. Set file to "-" for stdin.

That's all there's to it for 99% of use cases. I don't get how anyone can have difficulty with that: f for file, and x/c/t are pretty mnemonic.

djbusby
0 replies
11h12m

You've missed the point from the example. We can remember complex things when they are close. And even forget common things. GPT can help with both cases.

RockRobotRock
2 replies
12h24m

Okay GPT, tell me the bash command to convert a PNG to a JPG and add a sarcastic comment at the end.

gavmor
1 replies
9h21m

Sarcastic? I'm earnestly perplexed.

RockRobotRock
0 replies
8h34m

I'm just as perplexed that you're perplexed.

ahungry
1 replies
12h47m

Bravo - for these simple things, typing in a GPT prompt or search query is more work than the command.

Even things like "what flags to use with ffmpeg" are a shell reverse-history search away, assuming one has ever used them in the past.

chrnola
0 replies
12h28m

That’s quite the assumption! You’d also have to know that your shell maintains a history and know how to search it. None of these things are obvious to those who do not already know them. We all had to learn at some point.

chrnola
0 replies
12h34m

As the parent comment says, the specifics of the example are irrelevant. Pretend you didn’t know that the dictionary file existed, or where it was stored, or that cat/shuf/head existed, or even how to compose programs in your shell.

tingletech
1 replies
13h17m

why not use `shuf` from GNU core utils for the random and `convert` from ImageMagick for the png to jpg?

gavmor
0 replies
13h11m

Why not, indeed? There must be a deeper point these people are trying to make. ImageMagick is not obscure, is it?

jxy
12 replies
13h1m

I guess the point is that there are different levels of coders. For coders who can't write code that prints a hundred random lines from a dictionary file, GPT-4 is already a significant threat.

On the other hand, even if you can write it, does it mean you should write it? How much time does it cost you to write it? How much money is your company paying you for that amount of time? Is it more economical for GPT-4 to do it?

So I was curious and decided to ask the mistral-7b-instruct, and it gave me the following,

Here is an example of a Python script that reads a dictionary file and prints 100 random lines:

    ```python
    import random
    
    with open('dictionary.txt', 'r') as f:
        lines = f.readlines()
    
    random_lines = random.sample(lines, 100)
    
    for line in random_lines:
        print(line.strip())
    ```
> This script uses the `random` module to select a random subset of the lines in the file. The `open` function is used to open the file and read its contents into a list of strings, which are then passed to the `random.sample` function to select 100 random lines. Finally, the selected lines are printed using a for loop and the `strip` method to remove any leading or trailing whitespace.

> Note that this script assumes that the dictionary file is located in the same directory as the script and is named "dictionary.txt". You may need to modify the script to specify the correct path and filename for your dictionary file.

klipt
7 replies
12h49m

Not bad, but suppose the dictionary is too big to load into memory - could AI suggest an online (reservoir sampling) algorithm as an alternative?

coder543
5 replies
12h40m

Sure: https://chat.openai.com/share/0a3e52c6-1db8-422a-a98c-cb3500...

I laid out the constraints, but I did not mention reservoir sampling at all. The script seems to work as expected when I run it against a dictionary file.

klipt
4 replies
11h55m

Not bad, but suppose the dictionary has n lines and you only want to randomly sample k=100 of them, where n is so huge that you don't want to scan over the whole file at all.

Can you use random access into the file to sample k lines in O(k) time instead of O(n) time?

coder543
3 replies
11h38m

That is a problematic request for multiple obvious reasons, and for those same reasons, ChatGPT resisted providing an implementation that didn't require indexing the file. By telling it "no indexing is allowed, provide a best effort solution" it relented and provided a best effort solution.

Here is the provided solution and some discussion of the problems with the problem itself: https://chat.openai.com/share/54807663-17ca-4e7d-bc76-cd3cf3...

klipt
2 replies
11h28m

That is a problematic request for multiple obvious reasons

I'd prefer to think it's more like a real engineering problem, and less like a simple interview question :-)

And it definitely shows the limits of GPT here: it pointed out that the ends of the file might be tricky, but ignored the very conceptually simple solution of considering the file as circular (if you go past either end you simply wrap around).

And it misses the real problem with its implementation: the probability of sampling each line is now directly proportional to the length of the line before it (because it seeks into that line first and then skips it!)

So the word after "begins" is twice as likely to come up as the word after "and".

PS in the case of dictionary words with a length limit of say 30 letters, there is still an O(k) general solution using rejection sampling.
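
For the curious, a minimal sketch of that rejection-sampling idea in Python (assuming every line, trailing newline included, fits in max_len bytes and the file ends with a newline; unlike reservoir sampling this samples with replacement):

    import os, random

    def uniform_line(path, max_len=32):
        # pick a random byte, find the line containing it, then accept that line
        # with probability 1/len(line) so the length bias cancels out and every
        # line ends up equally likely; expected O(max_len) work per sample
        size = os.path.getsize(path)
        with open(path, 'rb') as f:
            while True:
                pos = random.randrange(size)              # lands in a line with prob len/size
                chunk_start = max(0, pos - max_len)
                f.seek(chunk_start)
                chunk = f.read(2 * max_len)               # large enough to hold the whole line
                rel = pos - chunk_start
                start = chunk.rfind(b'\n', 0, rel) + 1    # start of the line containing pos
                end = chunk.find(b'\n', rel) + 1          # one past its trailing newline
                if random.randrange(end - start) == 0:    # accept with probability 1/line_length
                    return chunk[start:end - 1].decode()

    # k samples cost O(k * average line length), independent of how many lines the file has
    print([uniform_line('/usr/share/dict/words') for _ in range(100)])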

coder543
1 replies
11h18m

If you had actually read what it wrote:

"Remember, this is a probabilistic approach and works well if the lines in your file are roughly the same length. If the line lengths vary significantly, some lines will have a higher or lower chance of being selected."

It had already addressed "the real problem with its implementation" that you pointed out.

PS in the case of dictionary words with a length limit of say 30 letters, there is still an O(k) general solution using rejection sampling.

Again, what ChatGPT wrote:

"In a typical scenario where lines can have variable lengths, true O(k) random sampling isn't feasible without some prior knowledge about the file."

Knowing that the limit is 30 characters without question counts as "some prior knowledge".

As an interviewer, it sounds like you're not hearing what the candidate is saying.

And it definitely shows the limits of GPT here

I don't think anyone here is claiming that ChatGPT is limitless. The topic is "a coder considers the waning days of the craft", not "a coder considers the bygone days of the craft." ChatGPT is capable of solving many real world problems already. If it continues improving, some people are concerned about what that could mean, especially for less experienced developers.

How many people have you interviewed with that brainteaser that have actually provided the complete solution you're looking for? Vanishingly few, I would imagine, unless you were dropping some serious hints. It's not a real world problem. Most brainteasers have solutions that are "conceptually simple" once you already know the solution.

I'd prefer to think it's more like a real engineering problem, and less like a simple interview question

It's absolutely not, though. It's exactly like the infamous trick questions that many tech interviews are known for, which have nothing to do with real engineering that you would encounter on the job.

You might as well have someone invert a binary tree for all the value that it provides.

klipt
0 replies
10h27m

How many people have you interviewed with that brainteaser

Zero, I just wanted to push the limits of the question in this thread to see what GPT did.

But you seem to not be enjoying that so let's call this quits.

louai
0 replies
12h38m

ChatGPT 4

Use reservoir sampling to get 100 random lines from a dictionary that doesn’t fit into memory. Give me an implementation in Python.

Reservoir sampling is a great choice for randomly selecting a fixed number of items from a stream of unknown size, especially when the entire dataset cannot fit into memory. Here's a Python implementation for selecting 100 random lines from a large file, like a dictionary, using reservoir sampling:

  ```python
  import random

  def reservoir_sampling(file_name, k):
      reservoir = []
      with open(file_name, 'r') as file:
          for i, line in enumerate(file):
              if i < k:
                  # Fill the reservoir array
                  reservoir.append(line.strip())
              else:
                  # Randomly replace elements in the reservoir
                  # with a decreasing probability.
                  # Choose an integer between 0 and i (inclusive)
                  j = random.randint(0, i)
                  if j < k:
                      reservoir[j] = line.strip()
      return reservoir

  # Example usage
  file_path = 'path_to_your_large_file.txt'  # Replace with your file path
  sampled_lines = reservoir_sampling(file_path, 100)
  print(sampled_lines)
  ```
> This script opens a file and reads through it line by line. For the first 100 lines, it adds them directly to the reservoir. For each subsequent line, it randomly decides whether to replace an existing line in the reservoir with the new line. The probability of replacing decreases as it progresses through the file, ensuring each line has an equal chance of being included in the final sample.

fiddlerwoaroof
3 replies
12h49m

    <dict shuf | head -n100

qbrass
2 replies
12h27m

   <dict shuf -n 100

coder543
1 replies
12h23m

Similarly: https://chat.openai.com/share/bc4a533e-c007-48ea-9232-0e548b...

The "<" is actually an unnecessary additional character to type.

fiddlerwoaroof
0 replies
11h58m

Except, I intentionally don’t use either form because they don’t extend nicely (and I dislike using flags when I could use another pipe segment or positional arguments when I could use standard input). I can iterate quickly by adding pipeline segments on top of the basic formula, because all of the segments have the same general shape and “calling convention”.

Finally, because I’ve built up familiarity with the shell over my career, I can come up with this formula as fast as I can type it. At this point, ChatGPT would slow me down: (1) because this sort of thing is basically muscle memory and (2) I have to actually think about the code ChatGPT produces to verify it is correct, which is almost as difficult as producing it in the first place.

silenced_trope
8 replies
13h21m

I'm the opposite. I feel like it is true. I very much will NO LONGER THINK about "simple" problems.

My most recent thing is for a game I'm working on. I got a bunch of free sound FX from freesound.org, but they were AIFF format. I googled and found an online converter, then asked ChatGPT to write a script that uses ffmpeg to go through the directory converting the AIFFs to MP3s.

In the past if I would have written such a script it would have required me to "look into how to do it" and then I would have saved the script somewhere on my machine or online. Now I just deleted it right away. If I ever need to do it again or something similar I'll just ask ChatGPT and get it from scratch again.
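
The kind of throwaway script in question is only a few lines anyway; a rough sketch (assuming ffmpeg is on the PATH and the AIFFs sit in the current directory):

    import pathlib, subprocess
    # convert every .aiff in the current directory to .mp3 via ffmpeg
    for f in pathlib.Path('.').glob('*.aiff'):
        subprocess.run(['ffmpeg', '-i', str(f), str(f.with_suffix('.mp3'))], check=True)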

marcus_holmes
4 replies
12h34m

Every time I've tried to get it to do something reasonably simple like this, it returns plausible-sounding code that doesn't work.

My last attempt to use it was to do with resolving circular calculations in Excel - I wanted some way of calculating the sum of a geometric series without having the series in actual cells. It kept recommending things that didn't work.

I'm always amazed to hear of people who manage to get working code from an AI. This has not been my experience.

thaumasiotes
2 replies
10h6m

I wanted some way of calculating the sum of a geometric series without having the series in actual cells.

There are simple explicit formulas for this.

marcus_holmes
1 replies
9h21m

yeah, sumproduct on google docs was the thing it kept suggesting, but not in a way that worked for the thing I wanted. I found a way around it in the end, but I feel I wasted 30 mins trying to get ChatGPT to understand what I wanted and give me something that actually worked.

thaumasiotes
0 replies
8h50m

\sum_{i=0}^n x^i (the sum of all terms x^i as i ranges from 0 to n, inclusive) = (x^{n+1} - 1)/(x-1).

Or, if n is positive infinity, you assume that |x| < 1 and thus the sum is (0 - 1) / (x-1) = 1/(1-x).

If you need the sum from x^m to x^n, that's equal to the sum from 0 to n minus the sum from 0 to (m-1).

You can derive the formula yourself without difficulty:

         sum = 1 + x + x^2 + ... + x^n
    (sum)(x) =     x + x^2 + ... + x^n + x^{n+1}

    (sum)(x) - sum =  x^{n+1} - 1
    (sum)(x-1)     =  x^{n+1} - 1
    (sum)          = (x^{n+1} - 1)/(x-1)
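
A quick numeric sanity check of the closed form (a sketch in Python, with arbitrary values for x and n):

    # direct sum of x^0 + x^1 + ... + x^n versus the closed form (x^(n+1) - 1)/(x - 1)
    x, n = 0.5, 10
    assert abs(sum(x**i for i in range(n + 1)) - (x**(n + 1) - 1) / (x - 1)) < 1e-12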

DrSiemer
0 replies
12h24m

Work on your prompts. Break down the problem into pseudocode steps, the way you might write down comments for yourself on how to do something. Provide examples of similar working code if you can. Don't ask it to do everything at the same time, break it up into limited goal increments.

In my experience LLMs work best if you already know how to do something, but you just can't be bothered to actually do it.

When exploring unknown territory, keep your questions simple and ask for clarification on anything you don't understand. It will not just help you, but the AI as well.

mock-possum
0 replies
12h30m

Doesn’t VLC or Foobar have a batch convert feature? Or Audacity? One of them does, pretty sure?

half-kh-hacker
0 replies
12h55m

a whole script? surely it's just something like `find . -name '*.aiff' | parallel ffmpeg -i {} {}.mp3`

empyrrhicist
0 replies
13h6m

use ffmpeg

Well there's your problem. Such a powerful tool, but that CLI is something else.

hyperthesis
3 replies
12h52m

To be really fair, there are levels of programming. The easiest level is traditionally vulnerable to automation. There's a lot of "accidental complexity" (https://wikipedia.org/wiki/No_Silver_Bullet), e.g. ffmpeg, git; even in recalling syntax and idiom of an infrequently used tool, e.g. bash, jq.

Once upon a time, developers had to pore over manpages, paper textbooks, or memorize it. Then came google. Next, stackexchange. And now GPT-4 - which is kind of a search engine plus rudimentary synthesis.

But let's be clear. This is low-hanging fruit. It's a poor substitute for actual thought. And surely we are safe.

julianeon
1 replies
12h32m

I know that last line is wry but I feel like people need the reminder:

The graveyards are filled to overflowing with companies that were challenged by low-end 'crappy' products and said 'they are going after the bottom of the market, we are safe'... a few years before they entered a terminal decline, and a decade before they went extinct.

hyperthesis
0 replies
12h22m

I think thought is different; but I also see no barrier to strong AI. Jobs might be the least of our problems at that point.

malux85
0 replies
12h43m

I’ve been coding for 20 years commercially now, and I’ve come to define good coders - and indeed intelligence itself as … adaptability.

Not fluid intelligence, not creative velocity, not creative originality, not oodles of domain knowledge … adaptability.

A new tool comes along? Adapt. A new idea breaks a long established paradigm? Adapt. A new skill is required? Adapt. As I get older adapting gets easier, because I treat ideas like objects - useful until they are not. No part of me cannot change, and my certainty in myself comes not from a bank of sanctified ideals I worship but from the knowledge that whatever comes, I will adapt and thrive. Fat client, thin client, centralised, edge, micro services, monolithic, ideas, tech, frameworks, everything is fluid and that’s what keeps things fun.

Same goes with the arrival of GPT: I embraced it, learnt its strengths and weaknesses, and now I estimate I’m maybe 2x more productive because of it, which means more time for fun things that interest me.

Adapt, adapt, adapt

coliveira
3 replies
13h12m

The problem is not that I don't know the algorithm... the problem is that to implement the algorithm you need to remember what method from what class does the thing you want to do. And knowing that is very difficult, even for seemingly simple problems. That's why Google is the way to program modern programming languages. (Of course I could do the same task without any libraries, but in that case it would be considered bad by most modern programmers, because it would be a sign that I "don't know how to use the language properly"...)

Longlius
1 replies
13h1m

This isn't a complex problem - this is something you do with built-in standard libraries in pretty much any programming language. Like I would expect anyone who claims to know even basic file I/O in their given language to be able to produce a mostly working version of this in less than 30 minutes.

kragen
0 replies
12h49m

80% of the time they'll produce a subtly biased shuffle, most people don't intuitively know the fisher-yates method unless they've been taught it

like, try this

    import collections, random
    def shuffle(xs):  # naive intuitive shuffle
        for i in range(len(xs)):
            j = random.randrange(len(xs))
            xs[i], xs[j] = xs[j], xs[i]

    def first_of(n):
        xs = list(range(n))
        shuffle(xs)
        return xs[0]

    print(collections.Counter([first_of(3) for i in range(100_000)]))
note that 1 comes out first 25% more often than 2, but with a fair shuffle all three should have an equal chance of being first

even without running it you can figure out logically that it must be biased in some way because there are 3! = 6 possible permutations and 3³ = 27 equally likely sequences of choices for j (each producing one of those permutations), and 27 is not divisible by 6, so some of those permutations have to be more likely than others

so i think this algorithm is a good one to get from a library if its randomness is important to you

not from chatgpt, which commonly does give you subtly buggy code

kragen
0 replies
12h57m

if you don't know the algorithm you could easily get it subtly wrong

but the only method involved here is string.strip(), so i don't think this is an example of the problem you describe

    import random
    n = 100
    ws = [line.strip() for line in open('/usr/share/dict/words')]
    for i in range(n):
        j = random.randrange(i, len(ws))
        ws[i], ws[j] = ws[j], ws[i]
        print(ws[i])
(did i fuck that up)

and yeah probably calling random.sample would be better. and usually import random is better than writing your own rng. but not always

giantg2
2 replies
12h36m

Every time I hear coding referred to as a craft I sort of hate it. On one hand I love the idea of it being a highly skilled craft - knowing the intricacies of the language and the machine to solve any problem, being a respected expert, producing a product worthy of pride. On the other hand, most of what we do is display [some content] on [some page] (probably sloppily due to deadlines), or print 100 random entries from a file...

winrid
1 replies
12h35m

well, in that regard you're a fancy carpenter assembling IKEA dressers. Carpentry is still a craft.... most people just need IKEA. :)

giantg2
0 replies
5h22m

Not really. My skills have drastically declined over the years because I rarely get a chance to exercise them. On top of that, it seems I'm switching languages/stacks frequently. So it's more like I'm a carpenter for a little while, then a plumber, then an electrician.

Izkata
2 replies
12h48m

Yeah, when I read that I also tried the first thing that popped into my mind - extremely crude/slow but works in bash and only took like 20 seconds without looking anything up:

  cat /usr/share/dict/words | while read LINE; do echo "$RANDOM $LINE"; done | sort -h | head -n 100 | awk '{ print $2 }'

arp242
1 replies
12h35m

sort -R is fairly widely supported, although arguably that's cheating.

Izkata
0 replies
12h28m

I thought something like that might exist and was tempted to check for that, but given the context I really wanted to do it without looking anything up. I see now there's also "shuf".

simonw
1 replies
12h49m

He didn't say he couldn't do it. He said that while he was getting into the task, Ben had already fixed it.

k0k0
0 replies
12h28m

Somebody who claims 20 years of experience and doesn't at least know instantly (this doesn't require "thinking" about it for minutes) that this is a one-liner in the shell or a few lines of a pick-your-poison script (Perl, Python, Ruby, who cares), even if they can't remember the specifics, is not an expert in programming.

And I get it, there are highly specialized people who get very deep in the weeds in, say, HPC or embedded, and even so most of them will at least still be aware. But moreover, these people lack the knowledge to comment on programming or software engineering in the large; I wouldn't call them experts outside their limited domain if they really can't begin to know where to look to do a fucking permutation of a word list.

The old saying that for some 20 years of experience is 1 year and 19 repeats probably applies.

enjoylife
1 replies
13h15m

Yeah, this author sounds more like an up-and-coming software developer. The cited example is something that someone should easily be able to do in their day-to-day language.

The ability of ChatGPT to essentially translate/expand your knowledge out to other languages (eg “how do I read in a file as utf8 again”) is all just more leverage and power to solve the mundane, faster. The key point is you have to know the rough solution space already.

TerrifiedMouse
0 replies
12h23m

Yeah, this author sounds more of an up-and-coming software developer.

This is bad. If you don’t have newbie programmers, you won’t have senior programmers.

MattGaiser
1 replies
13h17m

Especially since that particular task has been done thousands upon thousands of times. I have done it to win arguments on Hacker News[0]. It wouldn't surprise me if the exact code to do it (except for changing file names and number of lines) existed on Stack Overflow.

[0]https://news.ycombinator.com/item?id=28587016

altfredd
0 replies
11h59m

More likely, the reason ChatGPT can do it is because all the necessary code is available on StackOverflow.

I don't understand the unbounded excitement about using a paid online tool (!) that does not have reproducible source code (!!) and can sometimes (!!!) generate correct implementations of extremely well-known algorithms if given a perfect description.

rrauenza
0 replies
13h3m

Does chatgpt know the reservoir sampling method?

https://en.wikipedia.org/wiki/Reservoir_sampling

hyperthesis
0 replies
12h49m

leftpad

faeriechangling
0 replies
12h51m

So a search engine user was surpassed by man using search engines and this new search tool called GPT4?

Programming is doomed!

happytiger
53 replies
9h14m

Do people seriously consider this the waning days of the craft? I don’t understand that.

My view is that I am about to enter the quantum productivity period of coding.

I am incredibly excited about AI assistance on my coding tasks, because it improves not only what I’m writing, but also helps me to learn as I go. I have never had a better time writing software than I have in the last year.

I’ve been writing software for a few decades. But now I’m able to overcome places where I get stuck, with almost a coach available to help me understand the choices I’m making and make suggestions constantly. And it's not just like wandering over to a fellow coder’s desk to ask them about a problem I am facing; it actually gives me productive solutions that are genuinely inspirational to the outcome.

It’s amazing.

So why do people think that coding is coming to some kind of end? I don’t see any evidence that artificial intelligence coding assistants are about to replace coders, unless you… suck badly at building things, so what are people getting on about?

I feel like somebody came along and said, “foundations are now free, but you still get to build a house. But the foundations are free.”

I still have to build a house, and I get to build an entire house and architect it and design it and create it and socialize it and support it and advocate for it and explain it to people who don’t understand it but… I don’t have to build a foundation anymore so it’s easier.

Shoot me down. I’m not relating here at all.

fauigerzigerk
13 replies
8h22m

I agree it's amazing. But your comment doesn't touch on the key economic question that will decide for how many people it will be this amazing new dev experience.

If AI makes developers twice as productive (maybe a few years down the road with GPT-6), will this additional supply of developer capacity get absorbed by existing and new demand? Or will there be half as many developers? Or will the same number of developers get paid far less than today?

These questions arise even if not a single existing dev job can be completely taken over by an AI.

A secondary question is about the type of work that lends itself to AI automation. Some things considered "coding" require knowing a disproportionate number of tiny technical details within a narrowly defined context in order to effect relatively small changes in output. Things like CSS come to mind.

If this is the sort of coding you're doing then I think it's time to expand your skillset to include a wider set of responsibilities.

rjrodger
4 replies
5h12m

To me this looks like a Jevons Paradox situation: https://en.wikipedia.org/wiki/Jevons_paradox [1]

Increased developer productivity will lead to a lot more software development, at all levels, rather than less.

1. "In economics, the Jevons paradox occurs when technological progress or government policy increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the falling cost of use induces increases in demand enough that resource use is increased, rather than reduced."

fauigerzigerk
1 replies
4h58m

Yes, demand grows as prices drop. I don't doubt this general principle for a second. But the question is about degrees. If the cost of making software at today's level of quality and quantity drops by half, what share of the savings is going to be spent on more and better software, what share is going to be spent on other things and what share will be used to cut prices?

I think there is no general answer to this question. What would Netflix do if software development costs dropped by half? Develop twice as much software? Spend it on more content? Cut the price of subscriptions? In this specific case, my bet would be on content.

renlo
0 replies
3h7m

To me software _is_ content because it's tied to attention and consumption. There will always be humans producing it and consuming it, it isn't a limited resource that goes away once a tool comes along to make it easier to produce. If the costs halve, then more is produced.

FrustratedMonky
1 replies
4h45m

Let's not forget that reduced cotton prices (because of a technological improvement) did make cotton more affordable (lots of new clothes at reduced prices, increased demand), but it didn't turn out that great for the slaves.

delecti
0 replies
3h6m

The relevant aspect of what you're referencing is that it led to more slaves.

For more than a decade, software developers have easily been able to get low 6 figure jobs right out of college. We're pretty far from slaves.

gitgud
4 replies
5h43m

If AI makes developers twice as productive (maybe a few years down the road with GPT-6), will this additional supply of developer capacity get absorbed by existing and new demand? Or will there be half as many developers? Or will the same number of developers get paid far less than today?

Something to remember is that every new innovation in software development only raises the expectations of the people paying the software developers.

If developers are 3x as productive, then the goals and features will be 3x big.

The reason for this is that companies are in competition, if they lag behind, then others will eat up the market.

The company that fires 50% of their staff because of “AI Assistance” is not going to be able to compete with the company that doesn’t fire their staff and still uses “AI Assistance”…

fauigerzigerk
1 replies
5h11m

I agree that demand will very likely grow with expectations to absorb some of the additional capacity. But there is a limit to that because companies compete on more than just software. They also compete on price and on other things that have nothing to do with software.

If the price of software drops, it's not a given that all of the savings will be spent more and better software.

gitgud
0 replies
4h25m

I think if the price of software drops, then there’s a huge opportunity for smaller companies to build better products for cheaper and steal market share. So more people building software.

This also means larger companies will need to build more to defend their share of the market. Meaning more people building software.

jvanderbot
0 replies
2h12m

Blacksmiths didn't die, they became mechanical engineers?

archiepeach
0 replies
3h52m

I think this hits the nail on the head. Obviously a lot of the participants in this discussion are programmers, so there is going to be a fair amount of bias where people feel like their self-worth is being attacked/devalued. That being said, from a company perspective, this should much more unlock "moving faster" than "let's rest on our laurels". Any company that has a leading position in a particular industry is currently at greater risk of upstarts achieving their feature set in a reduced amount of time. The incentive for all companies will be to find programmers who are skilled in directing and debugging AIs.

I am currently building an iOS app using GPT-4 (I don't know Swift), and am developing an awareness of what it can/can't do, and surprised that I'm moving at the speed I did when creating React Native apps. In a possibly more competitive future market for developers, it does work in one's favour if some developers resist the efficiency improvements of AI.

victorbjorklund
0 replies
4h47m

It is way "easier" to be productive as a developer today vs 20 years ago. I can easily build web apps fast that would have taken a big team to build if even possible. Yet, more devs today than 20 years ago. If every dev is suddenly 20 times more productive (however that would be measured) means that some projects that in the past would not be "profitable"/"worth it" are now no brainers and companies will invest in doing those projects.

torginus
0 replies
7h55m

Considering how much the craft has expanded - when in high school, I wrote an application for pocket for a small business in Borland Delphi 7. The domain knowledge I needed for that was knowing the programming environment, and a bit about Windows.

Nowadays, like the rest of the full-stack 'web' developers, I work on complex webapps that use Typescript, HTML, CSS, Kubernetes, Docker, Terraform, Postgres, bash, GitHub Actions, .NET, Node, Python, AWS, Git. And that isn't even the full list.

And it's not even a flex, all of the above is used by a relatively straightforward LoB app with some hairy dependencies, a CI/CD pipeline + a bit of real world messiness.

I need to have at least a passing familiarity with all those technologies to put together a working application and I'm sure I'm not alone with this uphill struggle. It's a staggering amount to remember for a single person, and LLMs have been a godsend.

singhrac
0 replies
4h4m

I’m very bullish. My dad and I were talking about this the other day, but there are still so many quality-of-life improvements that are bottlenecked by not having enough developers or enough time.

For example, deploying hardware to all subways to track their status and location with higher accuracy. I want a truly smart meal planner that can look in my fridge and my eating habits and tell me what I need to eat. I want a little plugin for Obsidian that will let me sync tasks with Linear.

There are tons of tiny little pieces of technology that would be useful but their economic value is low or unclear; if developers become more efficient at not only writing code but testing, deploying, getting feedback, these become possible. Or, the LLMs can’t become great at all of those things, and developers keep their jobs (and high pay). You can’t have it both ways.

ugh123
11 replies
9h9m

My sense is that the doomers in software these days are either inexperienced and lack perspective from industry shifts over the years or weren't very good to begin with and could not build beyond basic crud and data shipping applications of varying complexity.

reactordev
8 replies
9h6m

3rd option. They got complacent and old and have trouble learning (don’t realize it) and can’t grok the new medium.

ikari_pl
5 replies
9h1m

4th option. They're trying to enter the market and get a good job, but some companies consider them less experienced but more expensive than Copilot.

happytiger
4 replies
8h50m

Fewer but more senior resources seems possible.

I have already seen the impact of AI systems, reducing the number of junior coding positions and outsourcing.

On the other hand, anyone who actually takes the time to get good with AI coding tools can be performing at senior levels, much more quickly, so there’s that too.

I’m not convinced that it’s some kind of apocalypse as much as it is a massive shift that is going to raise all boats and bring about a renaissance of productivity, and incredibly massive amounts of code, which hopefully is of higher quality because it’s no longer written by an individual in isolation but by a team of software engineers working in concert with AI.

I can’t tell you the number of times people were screaming and yelling about how coding was coming to an end and everyone was going to be unemployed in a few years, yet here we are…

ugh123
0 replies
8h29m

Agreed. The only thing to come true in the last decade about the death of software engineering from no/low-code systems is that there were more software jobs created to build no/low-code systems.

Also see "death of the sys admin by way of AWS and the like"

jon-wood
0 replies
7h24m

I keep seeing this argument for a bunch of different industries. Nobody appears to have an answer for where all these more senior people are going to come from when we're replacing the juniors with an LLM.

People don't magically become senior engineers, they need to get experience of the basics. Traditionally that's done by being hired as a junior engineer and doing a few years of fairly dull work under the supervision of the more senior engineers, who gradually prod you into doing things well.

Bootcamps have been somewhat filling the ever increasing void for junior positions to some extent (in the process making the barrier to entry into the industry higher for people who can't afford to literally pay for the junior roles that used to pay them), but there's still a chasm between someone fresh out of a bootcamp program and a developer who you can let loose on a project unsupervised.

coffeebeqn
0 replies
1h24m

I can kind of see that with contractors on the lower end fairly soon. If I put the same amount of effort into telling a programming-specific GPT that understands requirements properly what I need, as I do with some contractors, I don’t think that is a long way away from today.

For juniors, I want them to eventually become independent so I don’t have to worry about them and their tasks (and can give them vaguer tasks), so it’s a different goal.

2devnull
0 replies
1h34m

get good with AI coding tools can be performing at senior levels

No. They are orthogonal modes of thinking. That said, getting good at prompting LLMs is already making people better communicators. That will be a big benefit to seniors, as they can use their prompting skills to direct junior staff more efficiently. But better at thinking about the trade-offs that senior devs need to think about? Nope.

happytiger
1 replies
9h5m

That seems possible. There is a frequent bias against the new even in progressive technology circles.

reactordev
0 replies
2h19m

yeah, I wasn't referring to agism or anything like that. Just that they had been there a while, got complacent with the code base or the style of programming. That's all. It can take an entire mind shift to understand what's going on when all you've been building is administrative portals and internal sites.

girvo
0 replies
8h7m

I mean most people in most jobs are average, or below. Programmers aren't immune: most of them are bad at their jobs on aggregate. Of course they can be replaced by LLM output: their output is already half broken too. I'm only slightly exaggerating for effect, but it's been 17 years for me in this industry and I've seen that over and over again.

aleph_minus_one
0 replies
4h58m

My sense is that the doomers in software these days are either inexperienced and lack perspective from industry shifts over the years or weren't very good to begin with and could not build beyond basic crud and data shipping applications of varying complexity.

My experience rather is that such people often (though not always) are quite good programmers, but came to a different conclusion about how it makes sense to develop software than the direction the industry shifted to (and often have good, though sometimes non-mainstream, reasons for their opinions). Thus, despite being good programmers, they "fell out of fashion" with "how you are (by today's fashion/hype) supposed to build software these days". So they became harder and harder to employ and thus more and more frustrated (just to be clear: in my experience they are often quite right in their opinions).

tangjurine
6 replies
9h10m

The author isn't a professional programmer. He has a couple side projects and doesn't seem very good at coding.

happytiger
3 replies
9h3m

That’s interesting. I think there’s a real shift going on in terms of what coding actually means. I’ve never produced so much code so quickly, and it’s disconcerting to the part of me that wants to feel like I earned the outcome on a pretty deep level, because I can just generate shit that works so fast and then just edit it until I get exactly what I’m looking for. It feels like cheating in a weird way.

I still don’t feel that way about front end frameworks for the web. Oh my God what what are people doing?

dsego
2 replies
8h17m

Honestly, this reminds me of my early days when auto-complete and OOP were the best thing. It's why OOP proliferated: you type in the name of the object and get a list of completions in Visual Studio. No need to read docs or understand what you are doing, just find the method you need on that object. This new copilot coding reminds me of that, throwing shit at the wall and seeing what sticks, without understanding why and how it works. I can't shake the thought that we will see heaps of mediocre code cranked out that just about works, even though the problem maybe required a tenth of that to solve it.

civilitty
1 replies
2h40m

> I can't help to shake the thought that we will see heaps of mediocre code cranked out that just about works, even though the problem maybe required a tenth of that to solve it.

That’s just more of the same, isn’t it? IMO that’s already the current state of the industry.

dsego
0 replies
1h4m

I think you're right.

mike_hearn
1 replies
8h35m

He is. Later in the article he explains a task he did during the working day to his wife. It seemed like his employer uses some sort of custom tableview control, or he was working on a business report. Honestly it sounded like the bigger threat to that particular task was someone finding an open source library that did it already, but it illustrated the point and the story was nicely written.

wkipling
0 replies
1h10m

Yeah, there are lots of people in this thread who didn't read the full article and the conclusion the author comes to.

bryanrasmussen
4 replies
8h36m

Reasons why (supposition, not necessarily in agreement, just arguments I am familiar with) -

It's because foundations are now free but nobody understands how they work anymore - or soon won't, hence the waning as opposed to the disappearance. There are whole levels that a coder needed to understand in the past that recent entrants to the field do not understand, and that can still subtly affect how things work. If no one understands the things the craft depends on, then the craft is waning.

For anyone who started programming more than 13 years ago in the most widespread programming discipline for the past few decades (Web technologies), which in this career makes you super-old, the craft is waning because every year it becomes more and more difficult to understand the whole stack that the web depends on. Not even considering the whole stack of technologies that my first paragraph alluded to.

For frontend coders it is waning because it is ever more difficult to find out how to do something someone else did by looking at their code - modern build technologies mean that looking at the code of a site is not worthwhile. And people were already complaining about that 13+ years ago.

If you have kids or responsibilities outside work, then in combination with this, the need to produce things, and the ever increasing eating of everything by software (opening new problems and business areas one might need to be cognizant of), it becomes less possible, for those not in school, to hone their craft via purposeful work. For this reason it may be that the craft is waning.

Finally improving productivity is not necessarily something that correlates with improving the craft - industrialization may have improved productivity and made many products available to many people that did not have them before, but it is pretty well known that it was not beneficial to the crafts. Perhaps the feeling is the same here.

mike_hearn
3 replies
8h8m

Isn't this a slight over-generalization from web dev? If you learned programming in the pre-web era then you weren't able to learn how programs work by studying the shipped artifacts, but the craft wasn't waning, far from it.

I learned HTML in the mid nineties and even then I don't honestly recall learning very much from View Source. HTML has never been particularly easy to read even when written by the rare breed who are fanatical about "semantic" markup (in quotes because even so-called semantic markup doesn't communicate much in the way of useful semantics). HTML lacks extremely basic abstraction features needed to make code readable, like (up until very recently) any kind of templating or components system, so even if you were trying to learn from Yahoo! in 1996 you'd be faced with endless pages of Perl-generated HTML boilerplate. So I think most of us learned the old fashioned way, from books, tutorials and copious amounts of fiddling.

bigfishrunning
1 replies
3h15m

If you learned programming in the pre-web era then you weren't able to learn how programs work by studying the shipped artifacts

Why not? You can step through an assembly program just as easily as a javascript one...

mike_hearn
0 replies
2h54m

There's a slight difference in abstraction level there. I've done more than my fair share of stepping through assembly in my life and learning things from even minified JS is a lot easier.

But I don't think many people learned programming by studying random real-world programs in a disassembler. It was once at least theoretically possible to do that with web programming, albeit not well and not anymore.

bryanrasmussen
0 replies
4h49m

Isn't this a slight over-generalization from web dev? If you learned programming in the pre-web era then you weren't able to learn how programs work by studying the shipped artifacts, but the craft wasn't waning, far from it.

people understand the waxing and waning of the craft based on their experience. Someone can be an old greybeard now and have only done "web programming", starting at a young age.

Perhaps I'm wrong but I have seen this argument multiple times so I believe it is nonetheless a belief about the waning of the craft prevalent among some people.

lordnacho
3 replies
8h13m

I think what will happen is what happened to hardware. You used to come across people who could do things like solder components to a board and hook up various ICs as well. Heck, I did this at uni.

Now that layer of craftsmen is gone. You are either an uber-expert in how computer hardware works, or you just buy the hardware and treat it like a kind of magic.

Traditionally to become an expert you went through an intermediate stage, showed your interest, and got trained. Your hobby could turn into a profession, since the step up was gentle and interested people were needed to do some of the supporting work.

Nowadays if you're going to work in a chip fab, it's not because you were a soldering iron kid. You go through a quite long academic process that doesn't really recruit from the ranks of the hobbyists, though of course you'd expect there to be some natural interest. But the jump isn't gentle, you end up learning some pretty advanced things that make soldering seem like the stone age.

Software has this layer of craftsmen still, though it is rapidly dying, and not just from LLMs. There's an enormous number of people who can more or less hook up a website and make it work, without knowing everything about how exactly it does that. There's also plenty of excel people and python scripting people who use code in their day-to-day, but can't tell you advanced concepts. There's a lot of modern services that make this sort of thing easier (wordpress etc), and this level of skill is very learnable by people in the developing world. It's not like you can't become a real expert in those parts of the world, but economically it makes sense that there are a lot of intermediately skilled people there.

What will happen with GPT and the like is the experts will not need as much help from the juniors. If you're a software architect in charge of a whole system, you won't just sketch out the skeleton of the system and then farm out the little tasks to junior coders. Maybe there are pieces that you'd farm out, but you will definitely save on the number of juniors because an LLM will give you what you need.

The result being that we'll get fewer people trained to the highest levels, but those who are will be much more productive. So those guys will indeed be entering the quantum age, but it strands a lot of people.

cousin_it
2 replies
7h36m

Yes, the same thing happened before. There was a generation that tinkered with cars, there was a generation that tinkered with radios. There was a generation that tinkered with computer hardware. And then there was a generation that tinkered with software, and now it's going away.

The real question for me is whether AI will put humans out of most intellectual work, not just programming. Then, even if AI is shared equitably and satisfies our every need, most of us become some kind of sponges that don't need to think for a living. Or human intelligence will remain but only as a peacock's tail, like physical strength is now.

Maybe a true "friendly AI" would recognize the human need to be needed, for human capabilities to stay relevant, and choose to put limits on itself and other AIs for that reason.

lordnacho
1 replies
7h28m

Maybe there will be an intellectual version of sports? Regulated to only allow actual humans, done for much the same reason as physical sports?

I guess that sort of thing already exists (eg Chess).

cousin_it
0 replies
7h23m

The arts might be the "peacock's tail of the mind". Many gatekeepers in the art world now are putting up roadblocks against AI art, and from that perspective they're right to do so.

spaceman_2020
2 replies
6h57m

I honestly think we’re entering an age where a skilled entrepreneur will be able to run a $10M/year business all by themselves with AI assisted coding, design and marketing.

totallywrong
0 replies
2h39m

Will GPT talk to customers, generate leads, manage sales cycles, handle customer support, etc.? Come on.

aleph_minus_one
0 replies
4h54m

I honestly think we’re entering an age where a skilled entrepreneur will be able to run a $10M/year business all by themselves with AI assisted coding, design and marketing.

You forgot the accounting, taxation, legal and compliance stuff, which at least in Germany takes a huge amount of the entrepreneur's time. :-(

lifeisstillgood
1 replies
7h44m

I have stayed away from chatGPT etc (mostly due to my work simply banning its use in an understandable conservative approach)

What's the best starting point for my personal stuff? I am generally put off by youtube tutorials promising to make me a better coder.

ern
0 replies
6h2m

Open a ChatGPT account. Set up a GitHub account if you don’t have one. Think of a common problem and see if a library exists in your favourite language to do what you want. If it doesn’t exist, explain your requirements to ChatGPT. You can even tell it the same thing you put in your comment here: “I want to learn how to use ChatGPT to augment my programming skills”. Make sure it’s done properly with automated tests and you have a licence (ChatGPT will help you with both).

Bonus points for doing things like using GitHub actions for CI and publishing it to a package repository with guidance from ChatGPT.
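
As a concrete sketch of the "automated tests" step, the first pass can be as small as one function and one pytest file. Everything below (the slugify function and its behaviour) is a made-up example of the kind of thing you might ask ChatGPT to scaffold:

    # slugify.py -- a tiny example "library" function (hypothetical)
    import re

    def slugify(text: str) -> str:
        """Lower-case text and replace runs of non-alphanumerics with '-'."""
        return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

    # test_slugify.py -- run with `pytest`
    from slugify import slugify

    def test_basic():
        assert slugify("Hello, World!") == "hello-world"

    def test_strips_edges():
        assert slugify("  --Already Slugged--  ") == "already-slugged"

From there, a GitHub Actions workflow that just runs pytest on every push is a natural follow-up to ask ChatGPT to generate.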

hackerlight
0 replies
8h23m

My view is that I am about to enter the quantum productivity period of coding.

It comes down to whether society's demand for programmers is fixed, or if it scales with how productive programmers are, or more likely some mix of these scenarios. If the demand is fixed, then you are just taking someone's job by being more productive.

evantbyrne
0 replies
59m

Certain programming jobs will be replaced with AI. My prediction is that by the end of 2024 we will have a commercial AI that does an okay job of converting UI designs to frontend code. By the end of 2025 a commercial AI will be able to create UI designs and produce frontend code that's at least 80% complete.

dmansen
0 replies
43m

This is essentially the conclusion the author comes to at the end of the piece

didibus
0 replies
8h13m

Waning days of the career associated with it. Similar to how old craftsmanship has been superseded by machine/industrialisation.

The need for mastery of the craft will be lesser, and so mastery of it will wane, as people depend on AI instead. Then if you ever need someone to do it without AI, you might no longer be able to find anyone with the craftsmanship know-how to do it.

Also, it'll probably bring down how good coding is as a career. As productivity enters this quantum state, you'll need fewer engineers, and they'll need fewer qualifications, which might translate to fewer jobs, worse pay, and an increased expectation of productivity that makes you dependent on AI, as you can no longer meet these expectations and compete with others if you don't fully leverage AI.

btbuildem
0 replies
2h39m

The author of the article seems to be a mediocre coder; perhaps they have not had enough experience in the domain to arrive at a big-picture perspective.

BlueTemplar
0 replies
2h45m

Yeah, I found TFA to be very annoying with its mix of nostalgia (despite the author not even being 40 years old ?!?) and silly examples.

1.) Apple’s programming environment being forbidding : no shit, Apple is well known for the hoops developers have to jump through to be allowed into their monopoly, why any self-respecting hacker would ever want to code for iOS (and now even MacOS, Android, Windows) is beyond me. I guess that's partially because they are US-based, where it's still somewhat respectable / not illegal ?

2.) Visual C++ : similar deal, just a few years later the bar for graphical interfaces would be significantly lowered with the likes of Python+Qt, Java, HTML+CGI... and even back then for non-graphical interfaces we had BASIC.

(The design and programming of graphical interfaces is its own distinct profession for a reason, jumbling it in with «coding» as a big indistinct mass just because the same person used to have to do both regardless of their level of proficiency is missing the point.)

3.) > When I got into programming, it was because computers felt like a form of magic. The machine gave you powers but required you to study its arcane secrets—to learn a spell language. This took a particular cast of mind. I felt selected. I devoted myself to tedium, to careful thinking, and to the accumulation of obscure knowledge. Then, one day, it became possible to achieve many of the same ends without the thinking and without the knowledge. Looked at in a certain light, this can make quite a lot of one’s working life seem like a waste of time.

It's not the LLMs that got rid of this, it's already Web search engines, years ago ! (It also takes skill to «prompt engineer» a search engine to find you a good answer. Not to mention that one of the still ongoing - but for how long ? - issues, is the enshittification of them through Google's monopoly. It should get better once they are out of the picture.)

4.) Proposition 5 of Euclid’s Elements I

This seems to be another good example of how a lot of these issues stem from the lack of good teaching / documentation. Consider how much clearer it would be if you merely split it into two propositions !

It also seems to be a good example of an arbitrary goal of hardly any practical significance ? (FizzBuzz, that a good fraction of CS college students are unable to grok, would be a much better example ?) But perhaps I am wrong, and a mathematician / mathematics teacher can explain ?

I have way better examples : I am still pissed that we keep teaching students pi instead of tau and pseudovectors (papering over the difference with vectors !!) instead of geometric algebra. Imagine we still had to do math with words like before the modern era (sqrt() also comes to mind), instead of mathematical notation !

5.) There's also a whole big chunk about coders being (/ having been) a highly valued profession, and it's the likely loss of that that the article seems to be mostly deploring, but we shouldn't confuse it with the other things.

Uptrenda
18 replies
16h9m

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed

Wow, this guy's working on rocket science, everyone. Watch out! We might get replaced!
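
For reference, the task being quoted is a handful of lines of Python (the dictionary file path is an assumption and varies by system):

    import random

    # Print 100 random lines from a dictionary file.
    with open("/usr/share/dict/words") as f:
        words = f.read().splitlines()
    print("\n".join(random.sample(words, 100)))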

atleastoptimal
13 replies
16h6m

Do you honestly believe that the skills of software devs will never be eclipsed by AI tools?

akira2501
4 replies
16h2m

I honestly wouldn't expect it to be the first application, when the technology eventually does get developed, and we have the power budget necessary to actually operate one of these units.

Until then, it's not a question of "eclipsed," it's a labor market, so it's a question of "efficiency" and the large language model people have a very long way to go to produce something that's generally as capable as a programmer.

So.. I guess if your craft is "scripting" you might be able to get by with GPT-4, but to imagine that your child is not going to ever need to "program" is eager nonsense.

zarzavat
3 replies
15h48m

You don’t think the technology will improve at all in the next ~16 years? It will just be stuck at it being able to write simple scripts even in 2040?

I fully expect human programming to be completely obsolete by then.

akira2501
1 replies
15h11m

will improve at all in the next ~16 years?

Improve "at all?" Well, of course it will, but you're just moving the goalposts here. I'm telling you I don't think it's going to be anywhere near the level of AGI that can solve most programming tasks with less effort an energy cost than a human in that time.

It will just be stuck at it being able to write simple scripts even in 2040?

Pretty much. You can improve the technology, but you have some massive gaps you have to cover, and electrical use is going to be one of them. Short a massive improvement in transistor technology or a move to an entirely new computing platform, I don't see it happening in that time.

I fully expect human programming to be completely obsolete by then.

It seems that you mostly _want_ that to be true, so much so that you've failed to complete the analysis. What's more, I'm just hedging your bet: if I'm wrong, no big deal; if you're wrong, you're in for a world of pain and problems. I get that this is Hacker News and untamed expectations are en vogue here, but I'm content to be the hipster on this issue.

zarzavat
0 replies
4h25m

Transformer large language models are about 7 years old. So we are 1/3 of the way from 2017 to 2040 and we’ve gone from hardly being able to string sentences together to being able to write entire scripts coherently - GPT4’s output is often better than mine to be honest, and I’ve been programming almost my whole life.

GPT4’s capabilities are quite close to a human’s even now, especially when asking it about areas that I haven’t specialized in. And now it has vision capability, it can see what it is doing.

With twice the time remaining that has elapsed, clearly there’s plentiful time for its capabilities to increase and for it to get faster and cheaper. And it will not be a linear improvement but an exponential one.

I don’t want programmers to become obsolete. I just consider it very unlikely that we have anything to offer over one of these agents in the medium term. Why would you want to spend $100,000s on a human if you can get something in less time from an AI for $1000s? Human programmers will be attacked on three fronts: price, quality and time. The quality aspect is the only one that is arguable: price and time are already lost.

NateEag
0 replies
15h10m

Just like human-driven cars.

I mean, it could happen. I don't know it can't, and I won't be totally shocked if it does.

I'm not totally sold either, though.

UncleMeat
3 replies
15h47m

It'll be interesting to see what happens. But I won't take the opinion of somebody who struggles with a task that I'd expect first year undergraduates to accomplish as the key source of insight here.

atleastoptimal
2 replies
15h34m

What about opinion of the CEOs of OpenAI, Anthropic, pivotal deep learning researchers, etc?

yukinon
1 replies
11h7m

People with a vested interest in being proven correct?

atleastoptimal
0 replies
10h57m

Who else would be more qualified to weigh in though?

Exoristos
1 replies
15h41m

Are you saying that you honestly do?

atleastoptimal
0 replies
15h36m

Yes

paulddraper
0 replies
15h47m

A bit maybe

crest
0 replies
15h35m

Only by a true AGI, but better non-general AI tools will change how the software is developed. Instead I foresee a tragicomedy of LLMs stuck in endless repetitive e-mail threads as more and more APIs are replaced by bad chatbots to keep customers away.

maximinus_thrax
1 replies
15h46m

Wow, this guy's working on rocket science, everyone. Watch out! We might get replaced!

The issue here is that it doesn't matter what you or I think. What matters is what the board members, CEOs and CTOs of companies think - the ones which start cargo cults. You know.. the mainstream tech companies which everyone else follows. If they start shrinking development budgets (I mean shrinking the workforce) under the guise of 'with GPT, you're a 5x-10x coder now, so you'll only handle integrations and people problems', you or I will still be affected, although not replaced.

So don't think about this in black-or-white terms. The issue is way more nuanced than the capabilities of GPT or replacing a fulltime dev job.

chii
0 replies
15h35m

If they start shrinking development budgets

the outcomes aren't controlled by a reality warping field from these CEO's and CTO's. If dev budgets shrink without actually being justifiable, the software suffers. Those who are made redundant still have their skills, and can offer competing products.

dang
0 replies
13h44m

"Don't be snarky."

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

https://news.ycombinator.com/newsguidelines.html

Edit: you've unfortunately been breaking the site guidelines a ton lately (examples:

https://news.ycombinator.com/item?id=38215766

https://news.ycombinator.com/item?id=38215679

https://news.ycombinator.com/item?id=38082131

https://news.ycombinator.com/item?id=37800148)

We have to ban accounts that post like that, so if you'd please stop posting like that, we'd appreciate it. I don't want to ban an 8-year-old account, but we don't have much choice if the account is frequently posting like that.

cantSpellSober
0 replies
15h55m

The author makes this conceit in the next sentence

everybody looks them up anyway. It’s not real programming

skepticATX
16 replies
15h22m

Am I the only one becoming less impressed by LLMs as time passes?

I will admit, when Copilot first became a thing in 2021, I had my own “I’m about to become obsolete” moment.

However, it’s become clear to me, both through my own experience and through research that has been conducted, that modern LLMs are fundamentally flawed and are not on the path to general intelligence.

We are stuck with ancient (in AI terms) technology. GPT 4 is better than 3.5, but not in a fundamental way. I expect much the same from 5. This technology is incredibly flawed, and in hindsight, once we have actual powerful AI, I think we’ll laugh at how much attention we gave it.

matheusmoreira
3 replies
11h46m

Am I the only one becoming less impressed by LLMs as time passes?

Not at all.

I was very impressed at first but it's gotten to the point where I can no longer trust anything it says other than very high level overviews. For example, I asked it to help me implement my own sound synthesizer from scratch. I wanted to generate audio samples and save them to wave files. The high level overview was helpful and enabled me to understand the concepts involved.

The code on the other hand was subtly wrong in ways I simply couldn't be sure of. Details like calculating the lengths of structures and whether something did or did not count towards the length were notoriously difficult for it to get right. Worse, as a beginner just encountering the subject matter I could not be sure if it was correct or not, I just thought it didn't look right. I'd ask for confirmation and it would just apologize and change the response to what I expected to hear. I couldn't trust it.
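
For what it's worth, the header-length bookkeeping is exactly the kind of thing a standard library can own. Here is a minimal sketch in Python, where the wave module computes the RIFF chunk sizes itself; the tone frequency, duration, and 16-bit mono format are arbitrary choices, not anything the original project used:

    import math
    import struct
    import wave

    RATE = 44100      # samples per second
    FREQ = 440.0      # sine tone frequency (A4)
    SECONDS = 2

    # Generate 16-bit signed samples for a half-amplitude sine wave.
    samples = [
        int(32767 * 0.5 * math.sin(2 * math.pi * FREQ * n / RATE))
        for n in range(RATE * SECONDS)
    ]

    # The wave module fills in the header and chunk lengths for us.
    with wave.open("tone.wav", "wb") as wav:
        wav.setnchannels(1)   # mono
        wav.setsampwidth(2)   # 2 bytes per sample = 16-bit
        wav.setframerate(RATE)
        wav.writeframes(struct.pack("<%dh" % len(samples), *samples))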

It's pretty great at reducing the loneliness of solo programming though. Just bouncing ideas and seeing what it says helps a lot. It's not like other people would want to listen.

timcobb
0 replies
5h19m

It's pretty great at reducing the loneliness of solo programming though. Just bouncing ideas and seeing what it says helps a lot. It's not like other people would want to listen.

mp05
0 replies
3h4m

It's pretty great at reducing the loneliness of solo programming though. Just bouncing ideas and seeing what it says helps a lot. It's not like other people would want to listen.

It's really great for this.

I've found it useful for taking some pattern I've been cranking on with an extensive API and finishing the grunt work for me... it generally does a very good job if you teach it properly. I recently had to do a full integration of the AWS Amplify Auth library and instead of grinding for half a day to perfect every method, it just spat out the entire set of actions and reducers for me with well-considered state objects. Again, it needs guidance from someone with a clue, so I don't fear it taking my job anytime soon.

ClimaxGravely
0 replies
9h23m

My takeaway from this is that we should lament the gradual death of niche forums where we can discuss this with real humans.

habitue
3 replies
13h1m

It sounds like your initial impression was an overestimate and your current impression is a correction back down from that. You could say that it's "fundamentally flawed" coming from a very high initial expectation, but you could just as easily say "this is an amazing tool" coming from the point of view that it's "worthless" as many people seem to think

breakwaterlabs
2 replies
12h2m

If I can be so bold as to chime in, perhaps "fundamentally flawed" because its design means it will never be more than a very clever BS engine. By design it is a stochastic token generator and its output will only ever be fundamentally some shade of random unless a fundamental redesign occurs.

I was also fooled and gave it too much credit; if you engage in a philosophical discussion with it, it seems purpose-built for passing the Turing test.

If LLMs are good at one thing, it's tricking people. I can't think of a more dangerous or valueless creation.

mike_hearn
0 replies
7h51m

I don't think that's true. It helps to know a few obscure facts about LLMs. For example, they understand their own level of uncertainty. Their eagerness to please appears to be a result of subtle training problems that are correctable in principle.

I've noticed that GPT-4 is much less likely to hallucinate than 3, and it's still early days. I suspect OpenAI is still tweaking the RLHF procedure to make their models less cocksure, at least for next generation.

The other thing is that it's quite predictable when an LLM will hallucinate. If you directly command it to answer a question it doesn't know or can't do, it prefers to BS than refuse the command due to the strength of its RLHF. That's a problem a lot of humans have too and the same obvious techniques work to resolve it: don't ask for a list of five things if you aren't 100% certain there are actually five answers, for example. Let it decide how many to return. Don't demand an answer to X, ask it if it knows how to answer X first, and so on.

And finally, stick to questions where you already know other people have solved it and likely talked about it on the internet.

I use GPT4 every day and rarely have problems with hallucinations as a result. It's very useful.
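
A minimal sketch of that "ask whether it knows first" pattern, using the openai Python client; the model name, prompts, and exact client API here are assumptions and will vary with the library version you have installed:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    question = "How are chunk lengths computed in a WAV file header?"

    # Step 1: ask about capability instead of demanding an answer outright.
    check = ask(
        "Do you know, concretely, how to answer the following question? "
        "Reply 'yes' or 'no' with one sentence of justification: " + question
    )

    # Step 2: only press for the full answer if the model claims it knows.
    if check.strip().lower().startswith("yes"):
        print(ask(question))
    else:
        print("Model is unsure; treat any answer with extra suspicion:", check)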

anon-3988
0 replies
11h9m

If I can be so bold as to chime in, perhaps "fundamentally flawed" because its design means it will never be more than a very clever BS engine.

How is your fellow human better? People here seem to spend a lot of time talking about how much their average boss, coworkers, and juniors are ass. The only reason I know that ChatGPT is based on a computer program is how fast it is. I wouldn't be able to tell its output (not mannerism) from a junior's or even some "senior" programmers'. That itself is quite impressive.

With how much time we've spent on the internet, have we not realized how good PEOPLE are at generating bullshit? I am pretty sure I am writing bullshit right at this moment. This post is complete ass.

crnkofe
1 replies
7h34m

AI is the next bubble. VCs are really pushing it, but I don't see this solving day-to-day software development problems anytime soon. Solving difficult CS problems is one thing, and I do find it impressive; unfortunately, the great majority of everyday work is not about generating Snake games or 0/1 knapsack solutions.

Also, the idea that we'll need fewer engineers is bogus. Technology doesn't reduce the amount of work we do; it just increases productivity and puts more strain on individuals to perform. With AI spitting out unmaintainable code nobody understands, I can only see more work for more engineers as the amount of code grows.

dalbasal
0 replies
7h19m

Idk. Tech bubbles, hype cycles.. they're weird, sometimes unhinged.. they're not entirely irrational.

In aggregate, they are just the phenomena of an extremely high-risk, high-reward investment environment.

Most tech companies do not need cash to scale. There are few factories to be built. What they need is risk capital. The big successes (Alphabet, Facebook, Amazon) are wins so big that they really do "justify" the bubbles.

Amazon alone arguably justifies the '90s dotcom bubble. The tens of billions invested into venture, IPOs... A balanced portfolio accrued over the period was probably profitable in the long term... Especially if the investor kept buying through and after the crash.

IDK that anyone actually invests in risky startups that way, but just as a thought device..

pama
0 replies
11h36m

I am not sure that GPT-4 is not better in a fundamental way than GPT-3.5. To me they seem like night and day. If GPT-5 is a similar jump, it will be impossible to compete without using it (or using a related / similar model). Yes they are both GPT models and trained as simple autoregressive LM, but there is a dramatic change you can experience at a personal level once GPT-4 can synthesize information correctly to address your specific requests in so many different contexts that GPT-3.5 was simply parroting like a toddler. All of LLM is just probabilistic inference on large bodies of text, however I do buy the idea that with enough compute and data a sufficiently large model will build the architecture it optimally needs to understand the data in the best possible way during training. And once the data becomes multimodal, the benefit to these probabilistic models can theoretically be multiplicative not just additive as each new modality will clarify and eliminate any previously wrong representations of the world. Yes, we will all laugh at how good GPT-10 trained with text, image, video, audio, and taste sensors will be, and yet GPT-4 was a major step forward, much bigger than any step taken by humanity so far.

ozim
0 replies
9h6m

To add an example, people ask it to generate a new piece of code and then add more questions to refine it. Writing new CRUD is not impressive.

I can do that with scaffolding, or by copy-pasting a template and changing it.

I have not tried, and have not seen anyone, actually giving GPT existing code and asking it to fix or change it. So that is something I’d try.

otabdeveloper4
0 replies
9h12m

Yes. Much of the "wow factor" of generative AI is simple sleight of hand. Humans are trained to see patterns where there are none, and ignore anything that doesn't fit our preconceived notions of order. Often AI is just a complicated Clever Hans effect.

For a real example: once you start analyzing an AI image with a critical mind, you see that most of the image violates basic rules of composition, perspective and anatomy. The art is frankly quite trash, and once you see it it is hard to unsee.

intended
0 replies
11h52m

You are not alone. People are reading papers, not building things.

gloosx
0 replies
9h59m

Me too.

I am seeing people seriously using the "Please write an expression for me which adds 2 and 2" prompt in order to get the "2+2" expression they need – claiming that they got it with magical efficiency. In all honesty, I don't like writing too much, and writing code for me is always shorter and faster than trying to describe it in a general-purpose language; that is why we need code in the first place.

dalbasal
0 replies
7h29m

Am I the only one becoming less impressed by LLMs as time passes

Jaron Lanier has some ideas about the space in between turing test and blade runner.

The first filmgoers, watching simple black-and-white movies, thought that they were uncanny. A train coming towards the screen would make audiences jump and duck. When people first heard gramophones, they reported that they were indistinguishable from a live orchestra.

As we learn a technology, we learn to recognize it and get a feel for its limitations and strengths. The ability to detect that technology is a skill, and the technology becomes less impressive over time.

It's hard not to be impressed when a thing does a thing that you did not think it could do.

Then we move on to being unimpressed when the thing cannot do the thing we thought it would be able to do.

decasia
14 replies
15h46m

I have been having the following debate with my friend who does AI and neural network stuff:

Him: Coding will soon be obsolete, it will all be replaced by chatgpt-type code gen.

Me: OK but the overwhelming majority of my job as a "senior engineer" is about communication, organizational leadership, and actually understanding all the product requirements and how they will interface with our systems. Yes, I write code, but even if most of that were augmented with codegen, that would barely even change most of what I do.

TheBigSalad
3 replies
13h35m

CEOs will be replaced before software engineers.

consumer451
0 replies
12h37m

Which company and board of directors do you see doing that first?

asdff
0 replies
9h57m

The CEO is your scapegoat for a bad quarter. Throw all your eggs in the AI basket and get a bad quarter, and what's left to try? Companies don't like to admit they failed and walk things back to how they were. There are probably only a few off-the-shelf GPTs you can throw in to replace your sacked one, compared with the 8 billion potential CEOs on earth you can go through to make the shareholders happy with a blood sacrifice.

RamblingCTO
0 replies
8h54m

I don't think you know what a CEO does.

jelsisi
1 replies
15h18m

communication, organizational leadership, and actually understanding all the product requirements

These problems sound like a result of working with people. Smaller but more capable teams, thanks to AI, will need fewer leaders and fewer meetings. Everything will become much more efficient. Say goodbye to all the time spent mentoring junior engineers; soon you won't have any.

creesch
0 replies
11h36m

Say goodbye to all the time spent mentoring junior engineers, soon you won't have any

Yeah... no. Not with LLMs as they currently are. They are great as an assisted tool, but still need people to validate their output and then put that output to work. Which means you need people who can understand that output, which are developers. Which also means that you need to keep training developers in order to be able to validate that output.

The more nuanced approach would be saying that the work of developers will change. Which I agree with, but it has also been true over the past few decades. Developers these days are already working with a hugely different tool chain compared to developers a decade ago. It is an always evolving landscape and I don't think we are at a point yet where developers will be outright replaced.

We might get there at some point, but not with current LLMs.

decasia
1 replies
15h44m

It kind of reminds me of this, except substitute coding for "hammering":

https://www.buzzmaven.com/old-engineer-hammer-2/

mock-possum
0 replies
12h18m

If they don’t want to pay for the work, you can always undo it for them.

TheRealPomax
1 replies
13h32m

But now we introduce a junior engineer into the mix: _their_ job is none of those things, it's just to take the issues as filed, and implement them. They don't get the hard problems to solve, they get both the task and the acceptance criteria, and for them a future version of CodeGPT or whatever it'll be called will completely replace their programming skills. And then, 10 years later, they'll be the senior engineer. And then what?

Because today's seniors will be retired in a decade or two, and as they get replaced by people who actually benefited from automatic code generation, the concept of "coding" will (if this trend keeps up) absolutely become a thing that old timers used to do before we had machines to do it for us.

creesch
0 replies
11h31m

These junior engineers still will need to validate that whatever the LLM implemented works and fits the requirements. If it doesn't work they need to figure out why it doesn't work.

It might not be in the same way of current day developers, but I don't foresee a near future where developers don't learn to understand code to some degree.

For example, I know a lot of people who work in the low-code development sphere of things. A lot of the developers there barely see any code if any. Yet, when you talk with them they talk about a lot of the same issues and problem-solving but in slightly different terms and from a slightly different perspective. But, the similarities are very much there as the problems are fundamentally the same.

With generated code I feel like this will also be similarly true.

webworker
0 replies
11h16m

Yeah I agree with this - it's much of my experience as a professional developer, too. I'm trying to navigate the organization, connect with other teams, understand what needs to get done.

The code I write feels like a side-effect of what I actually do.

evilai
0 replies
13h59m

I think that if that were the case, the change would be brutal. First, because as a comment below suggests, fewer people would be involved, so coordination would be simplified. Second, because many more people could access these coordination positions, and I think it would be likely that other professions would take on those roles, professions or personality types that are not usually "good coders" but now wouldn't need to be, since the machine itself could explain, for example, the general functioning of what is being produced. Therefore, I would expect the job field to be radically affected and salaries severely reduced.

corethree
0 replies
10h56m

You realize that LLMs don't just code right? In fact coding is one of the things they're least good at.

LLMs are best at doing the stuff senior engineers do that's NOT coding.

Aerbil313
0 replies
8h47m

Just wait until product-gen AI emerges. No, seriously. Do folks here not see that it's possible even today with a complex system based on LLMs? It's a matter of time.

waldohatesyou
13 replies
15h55m

I feel like the headline does not match the article here. The headline implies that programming as a craft is to be replaced but the article ultimately argues that it will change significantly which matches my intuition as well.

At the end of the day, the bar is being lowered. Is that a bad thing? From a selfish perspective, yes. From a societal perspective, no. At the risk of digressing, I think one of the issues that my part of the world (Canada and to a lesser extent, America) has been faced with is inequality. I know people who work more "average" service jobs and they make substantially less than engineers do and that's something that's made me pretty uneasy over the past few years. The societal value of generative AI is in making knowledge work such as law, medicine, and software engineering much more accessible to "average" people.

Are there downsides to that? Probably but I think granting power evenly is probably a better path to utopia than misguided elitism. The latter sounds like the path to despotism.

stcroixx
9 replies
15h32m

If software dev is simplified to the point people working jobs like you describe are able to do it, wages will also plummet, so it’s not like their situation is going to improve.

rsanek
5 replies
15h10m

This is a zero-sum way of thinking about the world.

Der_Einzige
3 replies
12h33m

Life is zero sum. Space I exist in is space someone else literally can’t exist in. Anyone telling you something else is lying to you and doesn’t have your best interests at heart.

corethree
2 replies
10h48m

Yes. Finally. I'm tired of people spewing this zero sum buzz word. Literally everything has a limit. It's all zero sum. Actually it's negative sum. Entropy only increases.

It's not just space that's taken up. There's a fixed amount of energy in the known universe. The usability of that energy continuously becomes less and less and less.

d_tr
1 replies
6h53m

We have gone from living in caves to quantum computers and curing several types of cancer, and we are several orders of magnitude away from any kind of hypothetical energy usage limit imposed by the known universe. This could grow to hundreds of orders of magnitude easily as we learn more.

In the everyday life, there are negative-sum, zero-sum and positive-sum situations and events all over the place.

So, I don't get what your comment is supposed to mean and what it is exactly that you are tired of.

corethree
0 replies
1h2m

"Your statement implies that the situation/economy/whatever is a zero sum game. It's not."

^that's what I'm tired of. Baseless statements like that.

Fundamentally, all things are negative sum. Anything beyond that is a temporary local phenomenon.

Energy has no "limit" in the sense you imply. It always exists. Once you "use" it, it still exists. In this sense energy is zero sum. The quantity never changes, unless you count mass, which is convertible to energy. Mass and energy are fixed, zero-sum things.

And since mass and energy are zero sum, everything that extends from mass and energy is fundamentally also zero sum.

The quantity outside of this that is negative sum is entropy. It always increases. But that's only because we set the baseline. It could be that maximal entropy is equilibrium and we are just an oscillation away from this baseline. In this case even entropy would be zero sum.

All forms of computation, including coming up with cures for cancer or inventing quantum computing, require converting part of the universe from low entropy to high entropy. Once that conversion happens, the overall entropy of the universe goes up and it cannot be reversed. Even from a practical perspective, we are using up fossil fuel and solar resources faster than the sun can regenerate them.

So if you technically knew what you were talking about, you'd know that life and reality, practically and universally speaking, are overall zero sum or worse.

brewdad
0 replies
13h3m

It’s true though. If anyone with a high school education can be a successful “programmer”, most programming jobs will be filled by the cheapest labor.

Why would any company pay more than they need to to keep their company functioning?

acscott
2 replies
15h24m

If a developer making $1/hour produces $10 of output (of something people buy), then if you add another developer making $1/hour who produces $10 of output, you have $20 of total product. Developers A and B can compete on their rate down to the point that it's not sustainable, and thus an equilibrium will be struck.

How is adding more developers going to reduce the output?

znpy
0 replies
3h26m

How is adding more developers going to reduce the output?

time spent coordinating, time spent arguing, time spent reaching consensus (dumb example: function signatures / architecture / api contracts), time spent comparing approaches.

hackertyper69
0 replies
13h36m

too many cooks in the kitchen

jpatt
1 replies
13h33m

I don't really think law, medicine, and software engineering are the main drivers of wage inequality, though. If the lowest wage was minimum wage and the highest wage was a programmer salary, the Americas would be a very equitable economy.

Automating America's remaining paths to the middle class will only serve to widen the gap between capital owners who will own infrastructure for automation and those shoved into a shrinking piece of the unautomated pie.

Affric
0 replies
13h11m

The comment you’re replying to is making that point: that people who earn a decent wage from the knowledge economy are threatened by AI and oppose it because of their interest in the current system’s inequality.

It follows that if it is unjust for those who are knowledge workers then it is unjust for those who are service workers (unless you can morally differentiate them).

Perhaps if inequality is wrong then it’s the system that creates inequality that should be looked at rather than preserving rent seeking by knowledge workers refusing to compete with AI while perpetuating inequality on those who aren’t powerful in the current economy?

Food for thought.

zetsurin
0 replies
12h18m

I know people who work more "average" service jobs and they make substantially less than engineers do and that's something that's made me pretty uneasy over the past few years. The societal value of generative AI is in making knowledge work such as law, medicine, and software engineering much more accessible to "average" people.

I think the fear of software developers is that they will join the low pay crowd.

bruce511
11 replies
12h24m

Obviously we can discuss the merits of GPT4, but actually my first issue is with his root question;

> I would make sure that my kids could program computers. It is among the newer arts but also among the most essential

I've got kids. 1 can program. 1 tried it, and didn't like it. 1 just rolled her eyes at me.

Turns out, for most people, programming is not like reading or writing at all. Got a question? Ask Google (or GPT) or Alexa. 99% of people can't code, and will never need to code.

You may as well argue that cooking healthy food is a universally needed skill. Or riding a bicycle, or kicking a ball. All are of course really helpful life skills, but lots of people get by in the world just fine without them.

So as one parent to another, I can tell you that you don't need to worry about this. It's good you have plans for your children. But don't worry too much about it. Turns out children will learn what they are interested in, and pretty much ignore the rest. They'll grow up, change, mature and become functional people in their own right.

No doubt you'll get plenty of eye rolls along the way.

danielmarkbruce
10 replies
12h0m

Would you say the same about math or reading?

csnover
3 replies
11h40m

I’m not the op—they explicitly stated programming is not a skill like reading or writing—but I’ll go down this rhetorical path for fun and say: sure, why not?

The utopian aim of technology is all about offloading burdensome work so we can focus more on the things we enjoy. If reading is a burden to someone, why not let them use a TTS? If long division is a burden, why not let them use a pocket calculator? They’ll probably learn enough to get by even if they never get to the point of reading Dostoevsky by sight.

Lest you say, “well but those skills are essential!”, humans needed food to survive before we needed mathematics or written language, yet we’ve been OK outsourcing farming for at least a century because we’ve experienced how it frees us up to pursue other things. And, to that end, we’ll probably be facing some pretty extreme civilizational crisis if we get to the point that someone without basic maths skills can’t find a calculator.

danielmarkbruce
2 replies
10h56m

I'm not talking about dreams or utopia. Just real modern life - one is way behind, opportunity-wise, if they can't read and can't do math. At some point, that wasn't the case. Many signs point to being able to write instructions for a computer being a core skill for a good life.

If your kid rolls their eyes and says "no math/reading for me thanks", you don't just accept it and figure they'll find what they love - you figure out some way to make them learn, and it's at least a few months of hard work for a kid (and parent) to sort of reach escape velocity.

It's possible that programming will end up in the same spot.

bruce511
0 replies
10h20m

Possible but unlikely. I mean programming in the "code" sense.

We all "program" computers every day (when you turn on a microwave, when you talk to Alexa, when you do a budget in Excel, or write a Doc in Word.)

Coding is not an essential skill. It's clearly a useful skill, it's a valuable skill, but pretty much no-one in my real life can code. And from what I've learned teaching others, most people -can't- code, regardless of education.

I fly places all the time, but I outsource the piloting. Programmers argue both that coding should be learnt by everyone and, at the same time, most argue that it's getting easier (and hence less valuable) to do it.

bruce511
2 replies
10h36m

Reading is a foundational skill. (Its necessary as a precursor to pretty much everything.) Writing (as in can construct words) pretty much goes with reading. Writing (as in quality, coherent, communication) is clearly less ubiquitous.

Math is something that everyone who can do math thinks is essential. Unlike reading, though, there's a continuum of skills from numeracy, to arithmetic, to calculus and so on. Very few people know the square root of -1 or use it even once a year.

Of course you meant "math" as in basic arithmetic. And sure life is easier with that than without it. But lots of kids go through life with very rudimentary math skills. They function just fine.

Do good reading, writing, and math skills open doors? Sure they do. Does knowledge of Word or Excel or Windows or Quickbooks open doors? Sure. Does knowing how to cook a meal, and feed others, open doors? Sure. There are a million ways to add value to society. Coding is immensely valuable to those who can do it well. But it's not foundational like reading or writing.

danielmarkbruce
1 replies
10h29m

Yeah, I'm saying - it could be now, or it could be soon. Math and reading weren't always foundational, it's a pretty recent thing.

mike_hearn
0 replies
7h34m

Maths, yes, but it's also not really foundational. Most people don't need to do maths once they leave high school, not even basic arithmetic. Machines do it for them better than they ever could. The heavy emphasis on maths in the school curriculum is probably a mistake and should be rolled back.

Reading and (basic) writing was always foundational. You can go back to ancient Egypt and there are merchants who have mastered a simplified form of hieroglyphs. They weren't doing maths any more sophisticated than adding and subtracting small numbers, but they needed to know writing.

hiAndrewQuinn
1 replies
11h24m

That most people in the world get by just fine without them? Yeah, actually. Isn't the median reading level like 8th grade or something in the States?

danielmarkbruce
0 replies
11h1m

Do they get by just fine without them? I haven't tried being a median person in the united states, it doesn't look easy tbh.

kumarvvr
0 replies
8h2m

Reading is one of the major means of information gathering for humans, apart from introspection, inspection and vocal communication.

It is not comparable.

betaby
11 replies
16h13m

The article itself feels like it was written by an LLM: unnecessarily long, pretentious, and boring without much substance. But hey, folks from this forum aren't the target audience for the 'New Yorker'!

labster
6 replies
16h8m

I’m pretty sure New Yorker writer is the first job that LLMs will replace.

kragen
2 replies
15h54m

gladwell made a career out of compelling, convincing writing about things he didn't understand and came to the wrong conclusions about, like a human chatgpt, so probably; google 'igon value'

meindnoch
0 replies
12h15m

google 'igon value'

Hahaha. Thanks for the laugh! I always thought the guy was a fraud.

foobazgt
0 replies
11h50m

I have a new phrase in my lexicon. Thank you!

golergka
2 replies
13h28m

Writers and musicians won't be replaced with AI because the author's persona is part of the product.

tonyedgecombe
0 replies
5h27m

998244353
0 replies
12h25m

Poets and novelists maybe, but not many people pay particular attention to who exactly wrote a particular magazine article.

atleastoptimal
1 replies
16h9m

Do you agree or disagree with the premise of the article

betaby
0 replies
16h2m

See, it's like sysadmin -> DevOps/SRE. The same will probably happen with programmer -> A.I. prompt specialist. So yes, but not really.

ilaksh
0 replies
11h10m

We are absolutely the target audience. It wasn't that long and it wasn't boring and did not lack substance.

All of the things you wrote in this comment were false.

AlexAffe
0 replies
9h10m

It's what I thought too. Would I be able to tell the difference? That thought really scares me. What if, in the not so distant future, whole threads, comments included, are LLM-generated? We are losing touch (ppl like you and me), and I feel it coming. That is not the first article on the web that I quit reading in disgust, thinking 'FUCK, what if I'm reading GPT4 babble right now?' We will need a way to tell, 100%.

simonw
9 replies
12h50m

What I learned was that programming is not really about knowledge or skill but simply about patience, or maybe obsession. Programmers are people who can endure an endless parade of tedious obstacles.

This captures the reason I'm optimistic about AI-assisted programming.

The learning curve for getting started programming is horribly steep - and it's not because it's hard, it's because it's frustrating. You have to sweat through six months of weird error messages and missing semicolons before you get to the point where it feels like you're actually building things and making progress.

Most people give up. They assume they're "not smart enough" to learn to program, when really they aren't patient enough to make it through all of that muck.

I think LLMs dramatically impact that initial learning curve. I love the idea that many more people will be able to learn basic programming - I think every human being deserves to be able to use computers to automate tedious repetitive tasks in their lives.

quickthrower2
2 replies
12h29m

It is the same messy tedious thing with LLMs though. You still have the complex brittle artefact that is code.

LLMs writing code is the beginning, but low or no code is more ideal for most people. With LLM assistance.

ryanjshaw
1 replies
11h22m

Until the LLMs get better.

I just asked ChatGPT a thing, and then on a hunch I asked if there wasn't a built-in function that does the same, and it indeed remembered there is such a function.

What if that second question had been automatically and transparently asked? What if there is a finite list of such "quiet reflecting" questions that dramatically increases the value and utility of the LLM output?

quickthrower2
0 replies
8h36m

Yes. That's like a LangChain chain. The only issue then is latency as it talks to itself.

javajosh
1 replies
12h30m

>You have to sweat through six months of weird error messages and missing semicolons before you get to the point where it feels like you're actually building things and making progress.

Computers are rude and honest and humans prefer a pretty lie to an ugly truth. Programmers must appreciate the ugly truth in their day-to-day lives more than any other profession. (Physical engineering and construction workers and repairers also need this virtue, but less often since their feedback cycles are slower.)

k0k0
0 replies
12h8m

More than doctors? Feedback cycles in the OR and ICU get pretty fucking short.

I do think the overall premise is silly, programming isn’t that special in this regard in my opinion. Most professions are like this, they just might not be the most visible ones like politics, journalism, show biz.

Software developer have more to learn from other professions than they often think (the old engineering professions understand this a bit better)

asdff
1 replies
10h0m

Honestly I think it's a step back. This is like saying Google Translate has made everyone fluent in Spanish. At the end of the day you still need to do some vetting and understand how code works to effectively use ChatGPT; the actual writing of the code was never the hard part of software development. If this thing is only speeding up developers by six months' time, then that's kind of a waste of compute compared to just offering computer science classes in schools imo, plus you'd get a much stronger generation of engineers from the latter.

simonw
0 replies
2h9m

At the end of the day you still need to do some vetting and understand how code works to effectively use ChatGPT; the actual writing of the code was never the hard part of software development

I completely agree with you on that. Most of being a good software engineer is skills that ChatGPT won't help you with.

But you can't even start to learn those skills if you quit in the first six months because of the vertical learning curve tied to all of that syntax trivia.

mike_hearn
0 replies
7h39m

Definitely. And think about all the other ways LLMs can improve quality of life!

In any complex software project with lots of users there is guaranteed to be an effectively endless backlog of bug tickets that are, in effect, abandoned. I think a few months ago some bug got fixed in Firefox that was ~25 years old. In most compilers and frameworks there's going to be a pile of tickets of the form, "improve error message when X happens" that just never floats to the top because programmer time is too expensive to justify working on it. Over time the difference between a senior engineer and a junior becomes not so much intelligence or even actual experience but just the accumulated scar tissue of all the crap we have to wade through to make anything work, thanks to the giant backlogs of bugs and stupid usability problems that never get solved before the product reaches EoL anyway.

For AI to fully replace programmers is going to require quite a few more major breakthroughs, but setting one loose on a bug tracker and asking it to churn out trivial fixes all day is well within reach. That will both make human programming more fun, and easier to learn.

CaptainFever
0 replies
5h26m

I think LLMs dramatically impact that initial learning curve. I love the idea that many more people will be able to learn basic programming - I think every human being deserves to be able to use computers to automate tedious repetitive tasks in their lives.

This honestly made me feel so happy. I'm reminded of the average person who is bound by the limitations of the apps they use, the popularity of "How to Automate Using Python" books...

With this new tech, people are no longer bound by that limitation. I think that's pretty cool.

alexmuro
9 replies
16h7m

I don't feel like we are in the waning days of the craft at all. Most of the craft is creating an understanding between people and software and most human programmers are still bad at it. AI might replace some programmers but none who program as a craft.

Dave_Rosenthal
8 replies
15h32m

"Chess engines might get better than some chess players, but none who play Chess as a craft." Do you think people in the 90s thought this? Probably...

In the article, the author mentions that Chess centaurs (a human player consulting an engine) can still beat an engine alone. But the author is wrong. There was a brief period a while ago when that was true, but chess engines are so strong now that any human intervention just holds them back.

I've been programming 30+ years, and am an accomplished programmer who loves the craft, but the writing is on the wall. ChatGPT is better than me at programming in most every way. It knows more languages, more tricks, more libraries, more error codes, is faster, cheaper, etc.

The only area where I still feel superior to ChatGPT is that I have a better understanding of the "big picture" of what the program is trying to accomplish and can help steer it to work on the right sub-problems. Funnily enough, it was the same with centaur chess; humans would make strategic decisions while the engines would work out the tactics. But that model is now useless.

We are currently enjoying a time where (human programmer+AI > AI programmer). It's an awesome time to live in, but, like with Chess, I doubt it will last very long.

somewhereoutth
2 replies
15h10m

Chess is a closed problem. Whereas software development very much isn't.

You will also have to provide a source for 'chess engines are so strong now that any human intervention just holds them back', a cursory search suggests this is by no means settled.

Dave_Rosenthal
1 replies
14h13m

Yes, the rules of chess are simpler, which is why all this happened many years ago for chess.

https://gwern.net/note/note#advanced-chess-obituary -- here is a reference about centaur/advanced chess. The source isn't perfect as the tournaments seem to have fizzled out 5-10 years ago as engines got better and it all became irrelevant. Sadly this means we don't have 100 games of GM+engine vs. engine in 2023 to truly settle it, but I've been following this for a while and I have high confidence that Stockfish_2023+human ~= Stockfish_2023.

alexmuro
0 replies
12h28m

I think closed vs open problems are not simply different in magnitude of difficulty but qualitatively different. When I'm programming most of the interesting things I work on don't have a clear correct answer or even a way of telling why a particular set of choices don't get traction.

I guess it's possible that just being "smarter" might in some cases get a better solution from a series of text prompts, but that seems too vague an argument to hold much water for me.

creesch
2 replies
11h24m

It knows more languages, more tricks, more libraries, more error codes, is faster, cheaper, etc.

True up until the point that you want to do something that hasn't really been done before or is just not as findable on the internet. LLMs only know what is already out there; they will not create new frameworks or think up new paradigms in that regard.

It also is very often wrong in the code it outputs, doesn't know if things got deprecated after the training data threshold, etc. As a funny recent example, I asked ChatGPT for an example using the openAI nodejs library. The example was wrong as the library has had a major version bump since the last time the training data was updated.

The only area that I still feel superior to ChatGPT is that I have a better understanding of the "big picture" of what the program is trying to accomplish and can help steer it to work on the right sub-problems.

Which probably is based on your general experience and understanding of programming in the last 30+ years. As I have said elsewhere, I really don't think that LLMs in their current iteration will be replacing developers. They are however going to be part of the toolchain of developers.

cloverich
1 replies
10h53m

It also is very often wrong in the code it outputs, doesn't know if things got deprecated after the training data threshold, etc

Today I asked it a question and it was wrong.... then it ran the code, got the same error as me, and then fixed it (and correctly explained why it was wrong), without me prompting further :)

Really though, how long until that training update goes from every so often to constant? Now that half the internet is feeding it information, it doesn't even need to scour other sources -- it's becoming its own source, for better or worse.

creesch
0 replies
10h46m

it's becoming its own source, for better or worse

OpenAI is actively taking steps to minimize the chances of that happening, as that would be rather bad given how LLMs work.

icehawk
0 replies
10h54m

I have been programming 30+ years, and not two days ago looked at a problem I've been dealing with since before 2019, and went "this would be easier if I changed methods" and mitigated the issue in three hours from an airplane.

Programming is only superficially about code. The trick is really figuring out how to approach problems.

ReptileMan
0 replies
9h12m

A programmer is just a problem solver. As long as there are people with problems, there will be a market for people to solve them.

EZ-E
9 replies
12h9m

I don't see it - and by that I don't mean I don't think AI can write good code and get better over time. I just don't see how it would work as a workflow to replace (most) devs by AI.

Take a junior programmer's task, say, creating CRUD endpoints. Describing the requirement in a way that matches exactly what I want will probably take more time than doing the coding assisted by something like Copilot. Can we really imagine a non-technical user using an AI to do development from A to Z? What if the generated code has a bug; can we really imagine that at no point someone will need to be in the loop? Even if a tech person intervenes in the case of a bug, how much time would be lost investigating what the AI wrote and trying to understand what happened in retrospect - the time or cost saved writing the code would be lost quickly. Writing code is a small part of the job, after all. LLMs are good at generating code, but they are fundamentally not problem solvers.
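
(For a sense of scale, here is a minimal sketch of what such a "junior task" can look like once you actually sit down to write it; the Flask framework and the in-memory store are just illustrative assumptions, not anything from this thread.)

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    items = {}      # in-memory store, purely for illustration
    next_id = 1

    @app.post("/items")
    def create_item():
        # Create: store the posted JSON body under a fresh id.
        global next_id
        items[next_id] = request.get_json()
        next_id += 1
        return jsonify(id=next_id - 1), 201

    @app.get("/items/<int:item_id>")
    def read_item(item_id):
        # Read: return the stored item, or 404 if it doesn't exist.
        if item_id not in items:
            return jsonify(error="not found"), 404
        return jsonify(items[item_id])

    @app.delete("/items/<int:item_id>")
    def delete_item(item_id):
        # Delete: remove the item if present; deleting twice is a no-op.
        items.pop(item_id, None)
        return "", 204

Writing the prose that pins down exactly this behaviour (status codes, what happens on a missing id, and so on) is arguably no quicker than writing the code.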

The technology is amazing but I think LLMs will just be another tool in the arsenal for devs. It's also an amazing tutor. It can avoid having to call a developer for some self-contained problems (writing a script to scrape content from a web page, for example).

mrtksn
5 replies
10h44m

At this stage, it's like a junior. It's pretty useful for the boilerplate or mundane stuff and even for implementing algorithms from a human description into computer code.

It's also very good at converting code from one language to another and getting stuff done when directed properly.

I can definitely see how large an impact it will have on the employment prospects of many. It's not replacing the engineers yet, but those who specialise in a tech as an implementation specialist are screwed. Even the increased productivity alone will reduce the demand.

vasco
3 replies
10h15m

In most real world systems increased efficiency leads to more demand, not less - barring physical constraints. Usually because we can now apply the _thing_ to way more complex scenarios. I predict way more software will be created per year and more complex software will exist and evolve at a faster pace.

Once you make agriculture more efficient, we can't really eat way more than today, so we need fewer people working there. But if you make software easier to write, I think you'll just end up with way more software, because the complexity of human needs and processes is unbounded, unlike eating food.

Also called the efficiency paradox: https://en.m.wikipedia.org/wiki/Jevons_paradox

mrtksn
2 replies
10h5m

Sure, the total demand increases, but not the same kind of demand that it's replacing. We may end up seeing a CSS-specialist "rust belt" falling into poverty while there's substantial wealth building up among "prompt engineers" with good taste in aesthetics.

vasco
1 replies
9h24m

In any job, if you overspecialize and don't learn the fundamentals, you're at risk. A CSS specialist is to a general programmer what a CRT TV repairman is to an electrician. Those jobs' days are always numbered.

mrtksn
0 replies
9h2m

Lots of people make good money because of that over-specialisation. They acquire a deep understanding of these tools and that's how they become experts at something instead of jacks of all trades and masters of none. Of course, they end up outdated in the areas they don't practice and risk being wiped out if the toolset they specialise in, or the whole industry, falls out of favor. That's also why we have people who were once middle class but whose wealth and influence now decays and who feel left behind. With de-industrialization, that happened to an awful lot of people in classical industries, for example. Also, with progressing age it becomes harder and harder to adapt.

As a twist in fate, maybe the AI people will eat the Software people who ate the classical industry people. Some will adapt and prosper, most won't because they will be busy doing their jobs and getting paid good salaries in comfortable jobs.

The software ate the world, the AI will eat the software.

manmal
0 replies
9h29m

Even the increased productivity alone will reduce the demand

I’d argue that companies cutting their software budget will soon be overtaken by their competition. Software is never done, after all.

devjab
1 replies
11h16m

I’ve spent a long time in public sector digitalisation in Denmark, and where every other “no-code/low-code” tool (or whatever you call the tools that claim to not require programmers) has failed miserably, GPT is succeeding. Yes, yes, I know some of you sell these tools and some of you have even been very successful in that effort, but in my anecdotal experience from 98 municipalities, it’s never once worked, not for long anyway.

By contrast we now have digitally inclined employees creating and automating things with the help of ChatGPT. Much of it is terrible in terms of longevity, the same way the previous RPA or workflow tools were, but unlike those, people can now also maintain them. At least as long as you keep the digitally inclined employees onboard, because they’re still not software developers, and things like scaling, resource usage, documentation, error handling and so on aren’t being done. But they could be, at least for around 90% of it, which will frankly never really matter enough to warrant an actual software developer anyway because it mostly “frees” a few hours a month through its automation.

But with this ability, and the improvements in things like SharePoint Online (and likely its competition that I don’t know about), a lot of the stuff you’d need either staffed software developers or external consultants to handle can be handled internally.

This isn’t the death of software engineering as such. Like I said, it doesn’t scale and it’s also going to create a bunch of issues in the long term as more and more of this “amateur” architecture needs to function together, but at the same time, if you need to Google how to get some random lines of text from a dictionary, then I’m not sure you aren’t in danger either. And I really don’t say this to be elitist, but everything you google your way through programming can be handled fairly easily by GPT, and “good enough” that it’s simply going to happen more and more in our industry.

If you look into my history you’ll see that I’m both impressed and unimpressed by LLMs (well GPT, let’s be honest the others all suck). This is because it really hasn’t been able to help us develop anything in our daily work. It writes most of our documentation, and it’s scary good at it. It also does a lot of code-generation, like auto-generating types/classes/whatever you call them from Excel data mapping sheets + CRUD functionality. Sure, we did that before by writing some short CLI scripts, but now GPT does it for the most part. So it’s not like we don’t use it, but for actual well designed code that handles business logic in a way that needs efficiency? Yeah, it’s outright terrible. Maybe that will change over time, so far it’s not improved in the slightest.

But look at our industry as a whole. I know a lot of HN users work in American Tech or startups, but in the European non-tech Enterprise industry and the massive IT and consultant industry supporting it, there are a lot of developers who basically do what GPT excels at, and as these tools get better, we’re frankly just going to need a lot fewer software developers in general. Most of those people will likely transition into other jobs, like using GPT, but many won’t as other professions start doing their coding themselves.

What worries me the most, however, is that we still teach a lot of CS students exactly what GPT is good at. I’m an external examiner for CS students at academy level, and GPT can basically ace all of their curriculum because it’s mainly focused on producing a lot of “easy” code for businesses. I’m scared a lot of those students are going to have a rough time once LLMs really pick up, unless the curriculum changes. It won’t change in time though, because it’s already been sort of “outdated” for a decade because of how slowly our higher education adapts to the real world here in Denmark.

d3ckard
0 replies
8h51m

What you're describing is really about empowering users and pushing code where it was not possible before due to cost - so basically confirming efficiency paradox.

creesch
0 replies
11h43m

I agree in the sense that, in its current state, it will not replace devs entirely. But it can make the workflow much easier for many devs, to the point where you'd possibly need fewer of them or can get more done with the same amount.

I commented on this elsewhere in this thread: https://news.ycombinator.com/item?id=38259425

But basically I have found that it is a really powerful general-purpose assistant/brainstorming pal for a lot of the things that would normally eat up a lot of time.

To expand on that, it isn't even limited to code, but also surrounding tasks. I have used it to round out documentation in various ways. Either by giving it a lot of the rough information and asking to write coherent documentation for me, or by giving me feedback.

The other way around, it has helped me onboard onto new projects: I give it bits and pieces of written text where I had trouble understanding what the document said.

In the same sense, when dealing with management bullshit I have used it by just presenting it with what was asked, telling it my view on it, and then asking it to come up with responses based around certain perspectives. Which meant I had to spend less mental bandwidth on inane stuff there as well.

And yes, a lot of it can also be achieved in a team with other people. But those people aren't always around and also have their own things to do. The advantage of tools like ChatGPT is that they don't get tired, so I can really channel my inner toddler and just keep asking "why" until I am satisfied with the answer. Even if there are other people available to turn to for help, ChatGPT can also help in refining the questions you want to ask.

ken47
6 replies
9h42m

It feels like this article was not written by a programmer, and it feels like a number of the commenters are not professional engineers. What part of a programmer's job can AI realistically replace in the near term?

For the sake of argument, let’s say it could replace the coding part cost-effectively. Can it still do all the other parts? Take ambiguous requirements and seek clarity from design, product, etc. (instructing an AI to a sufficient degree to build a complex feature could almost be a coding task itself). Code reviews. Deal with random build failures. Properly document the functionality so that other programmers and stakeholders can understand. Debug and fix production issues. And that’s just a subset.

Realistically, in the future there will be a long phase of programmers leveraging AI to be more efficient before there’s any chance that AI can effectively replace a decent programmer.

This will be an advantage to engineers who think on the more abstract side of the spectrum. The “lower level” programming tasks will be consumed first.

padjo
1 replies
9h2m

I haven’t seen an AI tool that can take a Figma mock up and some fuzzy requirements and turn that into real code in an existing codebase unaided by humans. Given what I’ve seen of current AI tools that still seems a long way away.

dieselgate
0 replies
8h57m

Indeed. It's one thing to implement a UI as defined in a mock, for example, but the UX can be totally undefined and open to interpretation.

Closi
1 replies
9h7m

I suspect automated code reviews and doing high-quality automatic documentation (i.e. better than current standards in most projects) will be fully within the capabilities of LLMs soon. Fixing random build failures will probably follow...

So then the question is what % of a programmers job might be taken by this, and does the remaining % require a different skillset.

There are programmers that are great at coding, but complain loudly when the business gives slightly ambiguous requirements because they see their job as coding, not clarifying business rules. This group are more likely to be impacted than the programmers who will happily work in ambiguous situations to understand the business requirements.

ken47
0 replies
9h1m

Both code review and documentation require architectural knowledge to execute properly for a large app. This is not within the reach of current AI, and won’t be for a long time.

aerhardt
0 replies
7h18m

The article was totally written by a programmer though; what would make you say that?

FeepingCreature
0 replies
9h33m

The thing about GPT is that it has approximate knowledge of nearly everything. It makes errors, but the errors are seemingly uncorrelated with human errors. And it knows stuff that I could take hours to search for, especially as Google becomes more and more useless.

Personally, I use it for scripting and as an executive function aid.

RayVR
6 replies
15h39m

I think more people in these comments should read the whole article before responding. The author intentionally takes several turns to arrive at a nuanced view, with a final statement that I think most here would agree with: "Hacking is forever"

kdmccormick
5 replies
11h24m

I'll admit that I read up until this point:

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling.

and concluded that this person is not a skilled enough programmer to be making any statements about the demise of the craft, nuanced or not.

welder
3 replies
8h34m

That's the point.. "revenge of the so-so coder".

lionkor
2 replies
7h33m

So-so? That's a problem too basic to ask in a coding interview, almost.

welder
1 replies
6h33m

Yea, he says multiple times that he's not a good coder but that his friend is even worse. However, his worse friend + GPT combined can build whole apps.

kdmccormick
0 replies
3h32m

his worse friend + GPT combined can build whole apps.

Which is cool and noteworthy! But cute little apps are not the "craft". The craft is solving problems in novel ways, managing enormous complexity, scaling massively, delighting users, choosing just the right amount of future-proofing so that future migrations are smooth but the code remains comprehensible, balancing performance and readability, enabling other developers to build on your work... those sort of things.

Now, if senior-level developers or successful start-up CTOs start expressing that they feel AI can replace them... then I will worry that the craft is waning! (I'm not doubting it'll happen in my lifetime. It's just that this article isn't it.)

RayVR
0 replies
7h33m

Totally understand. I definitely took issue with a lot of the article and wondered what this person does for a job but a lot of the comments are fighting a narrative that only existed in the headline.

GabeIsko
6 replies
16h0m

Do people have problems with Leetcode stuff or something? Aside from quickly generating code I already know how to write beforehand, I find AI super useless for coding. It's never been able to solve a single problem I have worked on - mostly involving infrastructure deployment and config.

manicennui
4 replies
12h21m

"AI" helps the people who rely on Stack Overflow to do their jobs. Then they come to HN and worry that we will all be out of jobs soon. The problem it solves is "I don't actually know what I am doing at all". These people are not a threat to anyone competent and no amount of questionable code generated by "AI" will change that.

dsQTbR7Y5mRHnZv
2 replies
10h15m

What domain do you work in that you never reference stack overflow?

manicennui
0 replies
1h18m

It's not that I never reference Stack Overflow; it is just not frequent. I actually know my tools fairly well and most corporate development work simply isn't challenging. I also use vim with nothing but syntax highlighting.

lionkor
0 replies
7h27m

There are workflows that don't include stackoverflow at all, like mine, usually.

If I need to know how to convert a u32 to little endian bytes in Rust, for example, my first instinct isn't to go to SO or ChatGPT; my first instinct is usually to go to the docs and search for it. Or just try it out in an IDE. Typing u32::to and hitting tab did the trick, so now I can click on that and read the in-source docs. Same for C++, C, bash, anything that has docs or manpages.

I use stackoverflow for problems like "I have this error and it doesn't tell me what's wrong", like "exception in docker-compose line 3, missing resource ID" -- I need SO to tell me that this means I don't have an image name.

danielmarkbruce
0 replies
11h56m

Sure, but the number of truly competent software engineers is maybe 10% of employed software engineers. So, 90% of people might need to worry.

simonw
0 replies
12h39m

I find it's incredibly useful for programming once you've learned how to effectively prompt it - which takes quite extensive experimentation.

n_ary
5 replies
10h9m

Here is the controversial take from my perspective:

All these AI replacing coders and creatives is just a broad marketing campaign to put downward pressure on dev and creative salary.

We could've said the same thing back when code generation was popular. Or we could say the same thing about Stack Overflow, because a non-tech person can bungle together enough solutions from there to build something. Heck, NoCode(TM) was putting coders out of a job in the last decade and here we are in 2023.

Compared to GPT's code-fu, most image generation models are light years ahead (imagine how those models can combine the best of different artists into one painting), yet I still have plenty of trouble describing in excruciating detail the painting I want, with all the prompt-engineering sauce, and the end result is usually far away from the desired state unless it is just normal sticker stuff.

Explaining requirements is hard, adapting to changes is hard, we still write tests, practically every MR needs some fixes/updates, we have a whole army of management people trying to do the prompt engineering thing, and we still have to read between the lines to get a piece of software (even basic CRUDs that make millions). So yeah, it is all hype-driven marketing catering to enterprises about saving all that money, while in the end they still spend more on the token count, prompt engineering, and another army of people figuring out the right keywords to describe the product and whatnot.

All these things are mostly the same; it is like claiming that we'll have quantum computers just next year and any Joe rando can train their own GPT or crack open the SSL layer.

Don't feed the hype cycle of these marketers with your fear, because the more you engage with this, the more their boat is lifted.

Show me a real product that was built/written by GPT and is making even few hundred in monthly revenue and I will be happy to change my mind.

CodeCompost
2 replies
9h47m

I don't know what's going on, but lately on /r/programming there have been a /lot/ of negative, pessimistic, and downright angsty articles on software development, like there is some kind of campaign to downgrade the career of the software developer.

paganel
0 replies
9h36m

I stopped visiting /programming a long time ago, but just wanted to say (and I've said it before in here) that the status of the computer programmer has been on a downward trend since just before the pandemic, I'd say 2017-2018 (maybe a little earlier than that, maybe just going into the pandemic, but that's the general timeframe).

There are many reasons for that, partly the "normies" realising that most of what we, programmers, do at our day-jobs is detrimental to the society as a whole (see Facebook, see all the jobs lost to automation, see the kafkaesque world brought about by digitising almost every interaction between a human being and the State/Government), it's partly because some of us, computer programmers ourselves, have realised the same thing, it's partly because of how many computer programmers are now financially way better off compared to the normally employed people, it's partly because of the smugness that some of us, the computer programmers, brandish in many social contexts, and there are countless other similar reasons.

ed_elliott_asc
0 replies
9h40m

I don’t really go on Reddit so I don’t know if something is going on, but I know there have been very few jobs this year compared to normal times and rates have been going down, so it could be genuinely disaffected people being angry rather than a coordinated campaign.

enva2712
0 replies
9h38m
Dalewyn
0 replies
9h32m

All these AI replacing coders and creatives is just a broad marketing campaign to put downward pressure on dev and creative salary.

I see it as the work being replaced not actually being valuable work in the first place, at least not as valuable as some of us want it to be.

Take copywriting for example: that industry got decimated basically overnight by "AI". Why? It's because the work was actually not valuable; it was work any sufficiently complicated program could do, let alone a common man. Writing catchy sales pitches and blurbs? It's frankly a miracle that the industry lasted as long and was valued as highly as it did.

Or the art scene for another example; if you as an artist or illustrator can be replaced by "AI" then your art was actually only worth a dime a dozen. I say this as a blunt objective observation, not intended as any personal attack or the like.

Or for an even more brutal example: Journalism. We literally can't tell apart man-written and "AI"-written journalism, and that's because man-written journalism has been in the absolute gutter for years if not decades now. Journalism is bad enough today that "AI"-journalists could come in and do an equal if not better job of it than man-journalists.

"AI" is, like many tools of astronomical convenience that came before it, making the act of work more accurately reflect its real value. Some workers will lose, others will win, but ultimately we will all be better off because work and the perceived value of that work aligning better is a very good thing for both producers and consumers.

FigurativeVoid
5 replies
16h0m

I may be in the minority, but after the initial “wow” period, I have been underwhelmed with co-pilot.

Don’t get me wrong, there are a few times it has really, really impressed me. And there are a few things it really shines at, but most of the time I find it getting in the way.

I’m not so concerned about the craft of programming. But those that make a good living automating trivial tasks should be more worried.

cryptoz
2 replies
15h56m

Something to note here is that copilot is pretty old tech at this point already - the GPT-4 API is just absolutely worlds better, and it is about to be old news soon too! With GPT-5 coming probably next year, we don’t know what advances it will make, but surely basing your understanding of it on copilot will leave you shocked soon enough.

FigurativeVoid
1 replies
12h12m

That’s a good point. I do find gpt 4 to be excellent at generating code snippets.

I think as the technology improves, it’ll get better. But I’m more hoping it’d be able to do something more interesting, say generate tests in the background.

ilaksh
0 replies
11h7m

There are multiple startups that do this specifically with the GPT-4 API today.

skippyboxedhero
0 replies
15h49m

Used it a couple of times. Never found it useful or thought of a situation in which it would be useful.

If you are doing something that is identical to something that has already been done, maybe it works...but then you just look at GH anyway.

For any other situation it is useless because it doesn't know how to program. Pretty much no company wants you typing random shit into the internet either; it is an accident waiting to happen.

Doesn't even work for automation. This is still an absolutely huge area of potential productivity gain. If we still don't have most people taking full advantage of computers, are they just going to magically automate everything? No.

mathieuh
0 replies
13h18m

Completely the same, sometimes it does something cool but a lot of the time I start writing a comment to get it to generate even something simple like e.g. some boilerplate test code and it can’t manage it. It’s happened often enough that I don’t bother most of the time now.

Plus it really annoyingly doesn’t seem to handle quotes and brackets: if I’m halfway through writing a line of code and it suggests a reasonable completion, I’ll press tab and it’ll fuck up all the brackets and quote marks. I don’t know what I’m doing wrong here but it really annoys me.

We had a guy from Github do a demonstration for us and the most complicated thing he got it to do was write some getters and setters for a class representing a linked list iirc. And even then he spent more time having to cycle through suggestions to find one that he wanted than he would have spent just writing it out of his head.

Yet to be convinced personally.

fzeroracer
4 replies
15h40m

Much like how many people predicted we'd all be driving flying cars, the prediction that coding will be replaced by AI just isn't realistic. Primarily because these AI models can literally only exist as long as there are humans constantly creating code for them to read (see: steal) in the first place.

AI cannot sustain itself trained on AI work. If new languages, engines etc pop up it cannot synthesize new forms of coding without that code having existed in the first place. And most importantly, it cannot fundamentally rationalize about what code does or how it functions.

The more you use it or try to integrate it into your workflow (or worse, have others try to integrate it on their own) the more the inherent flaws of the LLMs come into play.

Philpax
3 replies
15h20m

AI cannot sustain itself trained on AI work.

This isn’t true. You can train LLMs entirely on synthetic data and get strong results. [0]

If new languages, engines etc pop up it cannot synthesize new forms of coding without that code having existed in the first place.

You can describe the semantics to a LLM, have it generate code, tell it what went wrong (i.e. with compiler feedback), and then train on that. For an example of this workflow in a different context, see [1].

And most importantly, it cannot fundamentally rationalize about what code does or how it functions.

Most competent LLMs can trivially describe what some code does and speculate on the reasoning behind it.

I don’t disagree that they’re flawed and imperfect, but I also do not think this is an unassailable state of affairs. They’re only going to get better from here.

[0]:https://arxiv.org/abs/2309.05463

[1]:https://voyager.minedojo.org/

somewhereoutth
2 replies
14h59m

They’re only going to get better from here.

Every AI apology seems to include this statement. It is more likely that LLMs have already hit a local maximum, and the next iterations will provide diminishing incremental returns - if anything at all.

Philpax
1 replies
8h35m

What makes you say that? There are constant improvements in how they’re being trained and what they’re being trained with; there really isn’t any particular reason to believe we’re at a maximum. Especially with multimodality being introduced!

somewhereoutth
0 replies
29m

My understanding is that essentially they have been trained on everything (meaning the whole internet), so there is not much left except niche sources adding incremental benefit. But granted, I can imagine the data being used more effectively for training, though I doubt there would be a step change in capabilities coming from that - my suspicion is that, as well as the data, the techniques have reached a maximum or close to it.

fumeux_fume
4 replies
11h11m

A lot of this article and many of the comments here don't really comport with my experience programming and designing programs. Many or all of the examples in the article seem like toy programs meant to do something relatively straightforward/simple with loose constraints, but in a domain or using tools unfamiliar to the programmer. However, many programmers find themselves in the opposite situation: using tools they're familiar with to solve complex problems with hard constraints. It's in that kind of situation where I've found that GPTs aren't much of a game changer.

Many of the top comments voice a hatred of programming--that programming gets in the way of creating. For a certain set of problems I completely agree (like why is making a website so damn complex), but for many of the problems I work on there is a wealth of good, helpful tools and APIs such that the writing of the code melts away and I'm left with the intellectual challenge of organizing my thoughts and thinking creatively. To me, the article and some of the top comments tell me more about the problems they choose to tackle than the act of programming itself.

teaearlgraycold
1 replies
11h2m

Also many of these programs described in the article have been built many times before. This is effectively like saying programming will die because you can download programs off of GitHub.

The "making flappy bird with GPT-4 in 30 minutes" YouTube videos are fun to watch, but I'll do more than raise an eyebrow when someone releases a novel video game on Steam written primarily by GPT-4 in a fraction of the time a human alone could have.

specialist
0 replies
10h48m

"Computers are useless. They can only give you answers."

-- Pablo Picasso

oivey
0 replies
10h47m

I agree. I think there will be lots of hard problems where an LLM is unlikely to be successful for quite a while. For high surety software like encryption or numerical software I think this is especially the case. Sure, you can have an LLM create a draft still, but you’ll need a lot of skill to refine it into something correct and trustworthy.

armada651
0 replies
10h46m

I feel ChatGPT is primarily good at dealing with problems that are straight-forward to solve, but have a needless amount of complexity attached to them that is easy to delve through for an AI but takes a considerable amount of time to grasp for a human. It is essentially dealing with the parts of software development that we have failed to make human-friendly.

The compiler is already fulfilling a similar role in software development. We write in high-level languages so that we no longer need to deal with the intricacies of the hardware the program will be running on and can focus on high-level logic in a human-friendly programming language.

I suspect that AI tools will fulfill a similar role, relieving us of much of the need to read API documentation, like compilers have relieved us of the need to read instruction set manuals.

StrangeSmells01
4 replies
16h14m

It's interesting and kind of a bummer that all this learning is made obsolete.

reincarnate0x14
0 replies
15h47m

Learning itself is not obsolete. Burning scarce time on arcana associated with various platforms and libraries may become so.

The various LLMs are decently good at recreating known solutions to problems that may require arcane knowledge, which covers a lot of human memory. Is there something lost here? Probably. But who yet knows what may replace it.

kristopolous
0 replies
16h5m

This is the conceit of knowledge work in general. You're only as relevant as much as you keep up.

It's why it continues to tend young.

The older you get, the more opinionated you get, and with opinions comes inflexibility, so the less open you are to new ways of thinking, even those which may be superior.

Essentially, people criticize newer tools that have become industry standards as unnecessary and inferior.

Regardless of the merits of that argument, the new tooling is where the industry is at. The best hackers I know are extreme versions of this. Increasing disagreeability appears to be the crucial flaw of competency unless you work hard to avoid it.

The most extreme version of this I remember was about 20 years ago, when I met a man of about 55 who was clearly a very brilliant computer engineer with impressive credentials that I have forgotten (a prestigious PhD and work history), but who said he had always dismissed microcomputers (yes, as in not minicomputers and mainframes) as toys and a waste of time. This was like 2002. I was still in college then and I vowed to never become that guy.

Still trying...

gwervc
0 replies
15h59m

Which learning? If anything, AI will render (highly) skilled humans even more relevant in most creative fields: the literature translator and their ability to transfer emotion, the artist who can create art outside AI's reach, the software engineer who can architect a solution, etc.

What is threatened is low- to mid-quality work, and less educated/skilled people. This is in my opinion where the real danger of AI lies: a world where it's so easy to let a program write an email that the sheer ability to think about its content, then type it, is lost to the majority. Same with anything requiring some effort to learn. This will exacerbate the divide between classes, with a handful of people who can understand and act on the world, and those who can't.

CamperBob2
0 replies
16h9m

It reminds me of the days when electronics hobbyists -- and plenty of older engineers -- had to be dragged kicking and screaming out of the vacuum-tube era.

RagnarD
4 replies
5h52m

Not to be unkind but simply factual: the author is clearly not a great developer. e.g.:

"At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling." Really?

"But he had no idea what a pain it is to make an iPhone app. I’d tried a few times and never got beyond something that half worked. I found Apple’s programming environment forbidding." Perhaps. But compared to what? Yeah, it's not using VI to write Javascript or whatever he considers a programming environment.

Power tools didn't stop people from making things. Autocad didn't put architects out of business. Finite element modelling didn't destroy civil engineering. Quickbooks hasn't destroyed the need for accountants. Word processors didn't destroy writing. Google translate didn't remove the need for translators.

LLMs will be a power tool for good developers. I think many underestimate what it'll take to fully replace good or excellent software developers and what they do in totality.

alm1
3 replies
2h16m

In the author's defense, he doesn't claim to be exceptional and even concludes his essay by labeling this A.I. moment “the revenge of the so-so programmer.” I actually enjoyed the author's nuanced take, different from the common narrative of AI takeover. On the contrary, I think LLM-driven coding is not a tool of power users, but rather a tool of disempowered users. It will bring many more people into the engineering profession and change the profession for many in it towards more complex tasks. Quickbooks hasn't destroyed the need for accountants, yet it allowed millions to become "so-so accountants", when that's all their business needed.

muldvarp
2 replies
1h19m

I don't think a "so-so programmer" would need to google to figure out how to select random lines from a file.

savanaly
1 replies
39m

I would have to Google. You really have the library methods you would need for that memorized? I think the last time I had to select random elements from a list at work was...never. Why would I still know that?

Would it be Random.int()? Or Random.range()? Or maybe there's a better choice that operates directly on a list? Or wait is it gonna return the value or a Generator of the value? Etc. Even if you remember how to do random number generation in your language, this specific use case probably necessitates a Google unless you have a godlike memory, or you don't mind half-assing it.

muldvarp
0 replies
21m

It doesn't sound like the author had to google for specific library methods (otherwise they wouldn't have had to think about the problem for a few minutes beforehand). It sounds like they genuinely couldn't figure it out (even given those library functions).

I don't think googling for library methods is a sign of a bad developer. On the contrary, googling for specific library functions instead of hacking something together using general library functions is a sign of a good developer.

That said, at least for python, I personally wouldn't have to google. I use random.choices a few times a year.
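
(For the record, a minimal sketch of the dictionary-file task in Python; the /usr/share/dict/words path is just the usual Unix wordlist and an assumption here.)

    import random

    # Print 100 random lines from a dictionary file.
    # random.sample draws without replacement; random.choices would allow repeats.
    with open("/usr/share/dict/words") as f:
        lines = f.read().splitlines()

    for line in random.sample(lines, 100):
        print(line)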

yonisto
2 replies
10h21m

I'm currently re-writing GPT-4 code, which is a very interesting experience.

The biotech startup I joined 3 months ago has a pipeline written by one of the founders, who has some programming experience but not extensive, so he used Copilot and GPT a lot. The thing WORKS, it actually delivers! This is what got the startup going, so it is amazing.

In the first month of the re-write I was able to cut costs by around 100x. And in the third month I was able to uncover some subtle bugs that affect the entire pipeline; fixing them is out of the realm of possibility in the current state of AI.

I have no answer whether employers will continue to value what I bring to the table in 5/10/15 years' time, especially after AI improves. I suspect many will not, as they will have 'good enough' results from the AI.

d3ckard
0 replies
8h47m

Those good enough results in the cloud world are going to cost them an arm and a leg. I for one would much prefer to spend my time on optimization of already established routines than on the damn ping-pong between product, design and "external stakeholders". It's actually what I was trained for.

TMWNN
0 replies
5h23m

The biotech startup I joined 3 months ago has a pipeline written by one of the founders, who has some programming experience but not extensive, so he used Copilot and GPT a lot. The thing WORKS, it actually delivers! This is what got the startup going, so it is amazing.

Indeed!

In the first month of the re-write I was able to cut costs by around 100x.

That's clearly where this is all heading: AI delivering the rough draft, and humans applying what AI inevitably misses.

And in the third month I was able to uncover some subtle bugs that affect the entire pipeline; fixing them is out of the realm of possibility in the current state of AI.

It's entirely possible that had you or another human written the thing in the first place those bugs (or equivalent thereof) would have still been there. As you said, "subtle".

I have no answer whether employers will continue to value what I bring to the table in 5/10/15 years' time, especially after AI improves. I suspect many will not, as they will have 'good enough' results from the AI.

I would be more optimistic based on your own account, for the reasons I mentioned above.

That said, I am glad that I am self-employed, with a business that I am quite confident AI cannot replicate, and code purely for fun.

summerisle
2 replies
10h22m

Regardless of the quality of work done by an LLM, all of this AI hype has done an absolutely stellar job of clearly dividing the world of software development into two cohorts: those who care (to literally any degree, about process or product), and those who are only involved in it because it makes money. I have never quite liked the latter, because they are often difficult to work with and tend to lack any motivation to develop an in-depth understanding of certain concepts or problems.

vlunkr
0 replies
10h1m

I’ve never met anyone in the latter category who made it through their code bootcamp.

dimva
0 replies
10h9m

I've been programming since I was 7, and I've always viewed software development as a means to an end. The alternative seems crazy to me - coding without a purpose? Why on earth would you want to do that???

I've been very successful in my career. Things I've built: Bloomberg's domain-specific language and simulation engine for asset-backed securities, a custom database that can process 100,000s of writes / second and 10,000s of reads / second, and the robotics framework powering the Cruise self-driving car (RIP). I retired at the age of 33.

The domain you're working in is usually more important to fully understand than software engineering concepts, although I try to understand both. But I don't really care about software development for its own sake, and I welcome LLMs replacing the more annoying grunt-work parts of the job.

pdimitar
2 replies
12h23m

It's quite amazing how divisive the AI topic turned out to be in HN.

My opinion is that AI proponents extrapolate too much and too optimistically; they always assume linear or simply uninterrupted improvement, and they assume the breadth of AI will increase as well. I see no proof for any of those so far.

Time will tell, obviously, and they may very well turn out to be correct. I just wonder why people get so worked up about it.

Personally I also don't understand why people want the programmer profession to cease to exist but oh well. I guess envy and schadenfreude are factors. I wouldn't know because I don't wish anyone unemployment.

adastra22
1 replies
12h15m

AI proponents typically assume exponential growth. They'd consider linear to be pessimistic :)

ultra_nick
0 replies
9m

Compound growth has been the most realistic historically.

minzi
2 replies
2h58m

It seems like lots of people are pointing out the inadequacies of the current state of the art models. It doesn't seem that unlikely that many of these problems will be solved in the coming years. I won't make any guesses about timelines, but there is certainly a large number of very well funded smart people working on these problems. I'm not sure if this current path leads to AGI, which is what I would consider the minimum requirement for truly replacing human jobs. However, it does suggest to me that building AGI is achievable.

In my mind there are two major architectural problems that still need to be solved if I am to be convinced AGI is close:

1. Medium term memory

2. The need to relive an entire conversation just to produce the latest response.
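
(To illustrate point 2: today's chat APIs are stateless, so the client re-sends the whole transcript on every turn, roughly like the sketch below; call_llm is a hypothetical stand-in for whatever completion API is being used.)

    def call_llm(messages):
        # Hypothetical stand-in: a real client would POST the entire
        # `messages` list to the model provider on every single call.
        return f"(model reply after reading {len(messages)} messages)"

    messages = [{"role": "system", "content": "You are a helpful assistant."}]

    def chat(user_text):
        # The model keeps no state between calls, so each turn appends to the
        # transcript and re-submits (and the model re-processes) all of it.
        messages.append({"role": "user", "content": user_text})
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        return reply

    print(chat("Summarise our plan."))
    print(chat("Now shorten it."))  # this call re-sends the first exchange too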

Here is another question: Suppose we develop AGI. What will the energy requirements be for that system? The scale at which ChatGPT is used today would be peanuts compared to a system that is supposed to replace millions of knowledge workers.

sadtoot
1 replies
2h55m

I wonder how many of the commenters who bemoan how GPT-4 can't do this or that task could have predicted, 2 or 5 years ago, what GPT-4 can do easily today.

fumeux_fume
0 replies
55m

Yes I also only find people to be credible if they have a solid track record of magically predicting events 2 to 5 years in the future.

lysecret
2 replies
6h33m

So, a few months ago I wanted to lose some weight and wanted to count my calories. As a coder would, I thought why not build a Swift app that uses the ChatGPT API to count them (instead of doing it the boring way).

I'm a fairly experienced backend dev, I've built a few toy websites, and I am casually interested in frontend stuff (like watching Fireship), so I thought I'd give it a try.

Long story short, it worked well. I have my app, it has all the features I need, I used it every day (till I lost 10kg), and I didn't write a single line of code.

However, the coding itself was unlike anything I had done before. I felt much more like a manager/slave driver.

The core thing was structure, structure, structure. As soon as the task gets too large, GPT will produce something kind of right that is difficult to understand and impossible to extend. So your task is to split it up, and split it up again. Also to apply judgment (it started out using Objective-C, haha). You have to ask it for alternative libraries. For reference, I relied a lot on something like: "In Python/C#/Rust I would do it like this".

But yes, my overall experience is that I think we are in for an exciting new time, but we are lacking a new coding paradigm.

rukuu001
1 replies
6h2m

But yes, my overall experience is that I think we are in for an exciting new time, but we are lacking a new coding paradigm.

This is the big one for me. I feel like we're all having "640K ought to be enough for anybody" type conversations right now.

CaptainFever
0 replies
5h17m

640K tokens ought to be enough for anybody

liampulles
2 replies
8h21m

The really big problem with LLMs for general AI usage is the requirement of text input - this is a very limited way to represent ideas and intelligence.

Our brain does not think in text, it just spurts out text as a side effect.

frabcus
0 replies
7h20m

This varies a lot by people - some people's conscious thought is in images, some in language, some in unsymbolised concepts.

Hurlburt's "Descriptive Experience Sampling" is a good source on this - see for example table 2 of https://hurlburt.faculty.unlv.edu/heavey-hurlburt-2008.pdf

Agreed that underneath that perceivable layer, there's lots else going on too. But to be fair, there's also lots going on in the weights of GPT-4, e.g. see https://arxiv.org/abs/2310.02207

dragonwriter
0 replies
8h12m

Really, they don't require text, they require tokens, and they can be built with toolchains that turn any kind of input into tokens. They are called LLMs because they are initially trained on a large body of text with all or some of the available tokens mapped to text, but they can also be trained with mappings of other inputs into tokens (either distinct from or overlapping the tokens mapped to text), and then you end up with a multimodal toolchain.
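
(A concrete illustration of the text-to-token mapping, sketched with the tiktoken library; the cl100k_base encoding name is just the common GPT-4-era choice and an assumption here.)

    import tiktoken

    # Map text to the integer tokens a model actually consumes, then back again.
    # The model never sees "text" as such, only these token ids.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("They don't require text, they require tokens.")
    print(tokens)              # a list of integer token ids
    print(enc.decode(tokens))  # round-trips back to the original string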

But even if they did require text, that's how a large portion of communication from work-issuer to human workers, and vice versa, goes too, so even if it were true it wouldn't be all that limiting; we don't communicate directly, mentally, with other humans, bypassing text.

jelsisi
2 replies
15h26m

The article came to a romantic end that doesn't satisfy how I feel about the workplace now. I have to admit a dirty truth: I joined this field because of the pay and job stability, not simply for a love of programming. Because of this I feel distressed for its future. No longer can someone walk into a job slightly underqualified and grow with time. Companies will opt for smaller teams of 10xers that, with the companionship of AI, can pump out 10x10 more code than the rest of us. Simply put, as a 20-something out of college, I'm going to need to figure out how to make as much money as soon as possible.

evilai
0 replies
13h53m

I completely agree, and I think that's precisely the heart of the matter. Salaries will likely begin to decrease, or at the very least, they won't increase at the rate they would have if this technology didn't exist. It seems inevitable to me that programming will become just another job. It's saddening because I deeply love to code and understand the inner workings of it all.

LouisSayers
0 replies
12h42m

Companies will opt for smaller teams of 10xers that, with the companionship of AI, can pump out 10x10 more code than the rest of us

This is how I see it as well - but we're at the point where you can decide what side of the fence you want to be on.

Keep playing with ChatGPT, but level up on systems design and basic coding principles.

Someone has to oversee the work that the AIs do; why not be one of those people?

gtirloni
2 replies
15h49m

Today I already have trouble doing git archeology to understand why a piece of code was done in a certain way.

Now imagine debugging more and more code that was created by LLMs.

Debugging will become more of a voodoo witchcraft kind of thing. I pity anyone who decides to make a living out of that. Probably will earn a lot and die young.

manicennui
0 replies
12h20m

I imagine that it will be about as bad as debugging code written by the sort of people who find "AI" helpful.

BigJono
0 replies
15h22m

People are very quickly going to find out that "never rewrite" can't be taken as absolute advice.

If you have 5Mloc of third party dependencies and AI generated code that you can't extend and can barely keep functional in production, and a competitor pops up with 50Kloc nearing feature parity and adding more functionality by the day, you'll very quickly need to adapt or die.

Software companies already die to this risk aversion all the time.

Tons more die to rewrites because they get conned into hiring an army of idiot React consultants to do it, who are likely in these HN threads at the forefront of AI code gen themselves, and just end up making 2.0 more complex and worse.

All GPT does is widen the gap between good and bad engineering. If it widens it enough that non-technical people can finally tell the difference, then it's going to kill half the market, just not the half everyone thinks it will.

aspyct
2 replies
11h14m

Genuine question, as I didn't use LLMs much:

Sure, they're good for quick prototypes or even launching production grade services.

But what about maintenance? What about fixing issues in legacy software?

sharkjacobs
0 replies
11h6m

From the article:

You can’t just say to the A.I., “Solve my problem.” That day may come, but for now it is more like an instrument you must learn to play. You have to specify what you want carefully, as though talking to a beginner.

It’s easier to explain “solve this common problem you’re already familiar with” or “create a prototype with pretty flexible specifications” but if you can break down and explain a problem to a rubber duck[1], you can explain it to GPT4, and it’s a better conversationalist than the duck

[1]https://en.wikipedia.org/wiki/Rubber_duck_debugging

cloverich
0 replies
11h7m

Yesterday I asked ChatGPT to write me some boilerplate and it did. I went through the exercise of using it, finding some bugs, pointing them out, and fixing them (with help, or on my own).

Today I asked a similar question and it gave me boilerplate. Then it ran the code. Then it noticed the error, then it fixed it on its own. All without me typing anything at all.

I pointed that out to someone, and they did not appreciate that it has quickly gained the capacity to test if what it says is true, at least in some limited sense. I'm in awe.

At any rate, I think yes, you plug in code and ask it questions and it can answer them, over time answering them better. Since I've been using it, not only can it execute code, but it can generate test cases and test _data_. I think wrt legacy code, it could come up with tests and refactoring strategies.

More generally though, AI is coming. And with it, the ability to write better code than even the best of us. That may be a year or 50 out, I'm not sure. But it's coming. And with each passing day, I move the "when" more into my lifetime.

armchairhacker
2 replies
15h14m

I think people underestimate the level of general intelligence required to write even seemingly basic "glue-code" (which btw, 90+% of software consists of).

If we ever get an AI which can write complex enough "glue code" with a low enough failure rate to replace even junior developers, it's going to cause much, much bigger impacts than people losing jobs. The "boilerplate" and "bureaucracy" which is currently handled by "average" folk...that's what runs the world. If we ever develop AI which can handle that kind of work, barring some constraint like extreme resource usage or a disaster like climate change, we'll either be brought into a utopia due to productivity being multiplied, or a dystopia due to a small minority getting control over the world to a precise, extreme extent that no small minority has now.

ilaksh
0 replies
11h8m

GPT-4 is (incredibly obviously) well beyond that point.

crotchfire
0 replies
15h6m

dystopia due to a small minority getting control over the world to a precise

Unfortunately this is how nearly all technologies pan out in the end.

The Internet of the 1990s unavoidably led us to the surveillance-industrial complex of the 2010s/2020s.

The running-wafers-in-an-old-strip-mall Silicon Valley of the 1980s unavoidably led us to the current "world economy can support only 1.5 leading-edge fabs" situation where having an ME/PSP/TrustZone in your chip is not something you can opt out of anymore.

4death4
2 replies
12h7m

I share the sentiment with others here that have become disillusioned with LLMs. Yes, their capabilities are impressive, and their progression is quite striking. But what I have realized, about programming specifically, is that I don't think LLMs can do a better job than humans.

To many futurists hopping on the hypetrain, this is probably somewhat of a hot take. But let me explain.

Ultimately, programming is about instructing a computer on what to do. We use programming languages because they're highly efficient representations of what a computer can do. In the case of LLMs, we need to use natural language to specify the program. For simple things, this can work great. But for complex programs, it's not easy to articulate exactly what a program should do. As you drill into the details, you begin to experience an almost exponential growth in the amount of natural language specification you need. And so I think there is something very flawed in believing that you can replace human programmers with LLMs. In my personal opinion, a single programmer will be able to outproduce a single LLM prompt engineer, purely because programming languages are better at expressing ideas compared to natural language. At a certain point, it becomes faster to write code than to write sufficiently detailed prompts.

As a thought experiment for the reader: consider a complex system you've worked on. And now try to imagine writing down a specification for the system in natural language that an LLM could use to generate the code that powers the system. I honestly don't think I could do it for many of the system I've worked on.

So yes, I think LLMs are cool. And I think they're cool because they give creative power to people who can't program. But for sufficiently complex things, i.e. things people care about, a natural language interface isn't the answer.

crabmusket
1 replies
11h44m

As a thought experiment for the reader: consider a complex system you've worked on. And now try to imagine writing down a specification for the system in natural language that an LLM could use to generate the code that powers the system.

But you didn't write it down, so how did the system get made?

The answer is that the program exists in your head as much as it exists in bytes in your source files. There might be a mismatch between your head-program and the source-program, or between your head-program and what the customer actually wants.

But the existence of the head-program is what Naur was articulating in 1985's "Programming as Theory-Building", and is what LLM proponents are rediscovering.

4death4
0 replies
11h40m

But you didn't write it down, so how did the system get made?

We didn't write it down, but we did use natural language to specify it, albeit somewhat indirectly. Some of the specification came from meetings with leadership, PMs, other people in the org, customers, etc. Some of it came from getting drinks with the team and talking about what we wanted to build. Some of it came from conversations with ex-colleagues. Some was inspired by past experience. It would be extremely onerous to put all of that down as input to an LLM.

sensanaty
1 replies
7h52m

Whenever I see people talking about how they use ChatGPT to write entire software suites, I feel like I'm taking crazy pills or something. I can't even get GPT4 to give me decent basic ass Ruby scripts where it opens a file that doesn't have some weird behavioral quirk or footgun, yet people often claim that they're doing insane feats with it all the time.

My experience, even if I'm just trying to be lazy and have it generate some super generic code for me, is that I have to spend double the time I'd have spent if I just wrote the damn script myself with how much I have to scrutinize every single line it generates.

And I personally have 0 desire to prompt "engineer" and do the "Open"AI's work for it by thinking up the perfect prompt for it. One day for sure these AI models will be really good and able to do some impressive stuff, but for now all I can see is a hype wave from people with vested interests in this stuff pushing it onto ignorants, akin to the crypto BS just with even more money behind it.

no-ramen
0 replies
6h2m

I feel like the people who have the most success with it use GPT for extremely common problems in the most popular JavaScript web frameworks. My experience with GPT-3.5 on Dart/Flutter is exactly the same.

parham
1 replies
7h28m

This will test how egoless each programmer is.

If you’re truly egoless and care about the outcome and not the path to the outcome… you’ll see how exciting this is.

rpeden
0 replies
2h16m

Why does that matter?

A lot of folks chose programming as a career because they enjoy the path at least as much as the outcome, and I think that's a perfectly fine reason to choose a career.

It also gives them good reason to mourn the loss of something that brings them happiness. They might be losing the only part of their job they actually enjoy.

maerF0x0
1 replies
12h38m

my understanding of history is that the Catholic church wielded great power over the populace by holding both literacy and Latin (a less accessible language). With the Ninety-five Theses and a German Bible, people could easily get their religion on without the priestly class. This feels like a metaphor for what has happened in computing.

DeepSeaTortoise
0 replies
6h34m

Latin was the most accessible language in all of Eurasia from the time of the Roman empire until the start of the industrial revolution. The Catholic church taught Latin to everyone who either became a member of the church or paid a small fee in money or labor, and in many cases for free.

People regularly went on pilgrimages to remote places all throughout Europe and perfectly understood each other, because nearly everyone could speak Latin. Even when the Portuguese landed in China they were greeted in Latin.

At some point the upper classes started talking more and more in regional dialects and older languages, which then spread to the normal people. The church reacted pretty much immediately to this trend by requiring prayers and certain parts of the mass to also be conducted in these regional languages (somewhere around the 8th century, I think?).

What was low initially were the literacy rates, because books were incredibly expensive. Which didn't mean that the people couldn't read or write, just not to the degree necessary to read or copy the bible in Latin, which was the most available book at the time.

m463
1 replies
11h45m

I've noticed that over the years, coding has gotten easier.

I've been able to express myself better with each year.

I see AI as another higher-level way of expressing myself.

Also, people don't write as much new code as they think they do.

Still, I have many many... MANY projects that I have on my wishlist. Maybe AI will help, but my list won't go away.

dagw
0 replies
8h16m

I've noticed that over the years, coding has gotten easier.

That is the history of programming in a nutshell. I know I've told this story before, but my wife's uncle talks about the revolution he felt when FORTRAN IV was released. Finally you didn't have to know anything about computers or programming to write programs. You just told the computer what you want it to do and it sorted out all the hard stuff for you. Engineers like him could now just use computers on their own and didn't have to wait for the programmers to have time. He spent the rest of his career writing code without ever learning any of the 'craft' the veteran programmers at the time would have considered necessary.

layman51
1 replies
15h4m

I’m so confused by this article’s comparison of GPT-4 to a powerful Go or chess engine. First, is coding some sort of competition like Go or chess? I have heard of competitive programming or code golf, but have little clue how that compares to “the craft of code” or why using GPT-4 diminishes it. Is it cheating if I use GPT-4’s assistance to write code? On the other hand, I do see why it would be harmful for people who don’t have the knowledge or experience to be relaying answers from an LLM to others on forums.

Second, I am seeing a parallel here with the college educators being preoccupied with how to teach calculus. Yesterday, I was reading the preface that Martin Gardner wrote to “Calculus Made Easy” by Silvanus P. Thompson and he wrote something along the lines of how many students and even educators (circa 1998) do not see the point in knowing how to differentiate or integrate by hand. I suppose now more than ever, some people even suggest to stop teaching calculus to high school (and even college students) in favor of some kind of vague statistics, data science, or information literacy course. But most mathematicians would discourage that kind of approach.

I think I agree with the last point about how it is not really about the knowledge of code itself but rather it is about being curious and knowing enough to not let yourself be taken advantage of.

advaith08
0 replies
14h46m

I think the point of the comparison with chess is to show what happens when AI becomes vastly better than humans at something. GPT4 is not vastly better than humans at coding, but the author is using chess as an analogy to visualize what the world might look like if these GPTs continue to get better

karmasimida
1 replies
15h38m

Is driving a craft? Is plumbing a craft?

Coding to me is the same as those 'crafts'; with GPT it will be done faster and more efficiently, that is it.

evilai
0 replies
14h5m

And also much cheaper, which I think is one of the main problems. If less knowledge is needed to solve these problems, then you should pay less for them. The article makes a comparison with chess, which seems absurd to me: although there are more people playing chess than ever before, that is, in my opinion, because the goal is still to defeat another human, so even though it is impossible to beat a machine at the game, it does not affect play against other humans. Coding, on the other hand, is a job for many. What sense would it make to pay a human, or pay them the same, if a program can produce higher-quality code?

jmull
1 replies
4h16m

I’m disappointed, but not really surprised I guess, with how poorly so many programmers seem to understand what chatgpt is and does.

It’s great and amazing tech, but it’s very much a kind of search engine, and indeed, writes just the kind of code you find on the internet… sometimes great, often flawed, very often incomplete. Its breadth of knowledge is stunning, but it’s not like it’s actually useful to be able to write code in all the languages.

It really doesn’t even touch harder problems. You need to know, at the level of fine details often, what you want from it and what you don’t. That’s hard.

…It saves time. We will perhaps need fewer programmers as people spend less time on grunt work. But that's hardly the end of coding, and seems like clearly a good thing in the long run.

runeks
0 replies
4h10m

We will perhaps need fewer programmers as people spend less time on grunt work.

If programmers become more productive, it means the price of development will decrease. This will, in turn, increase the demand for development.

dimgl
1 replies
16h2m

I don't think it's that black and white. AI will get progressively better and sure, people will be faster to deliver things. But there still needs to be someone to be able to decipher it all. I've been using GPT-4 tons and it's great to help get to a certain destination, but it isn't able to arrive at that destination by itself a majority of times.

Additionally I think that a lot of people will assume that once they get GPT-4 to do something for them that they're done. That's not necessarily true. There's a lot of complexity to navigate everywhere. And AI can help you navigate it! But I don't think it means that I, a software engineer, can now pivot to being a lawyer, for instance, solely because of GPT-4.

Sure, it's able to do a lot already. Maybe I'm being naive. But I see it more as a tool for the future rather than something that is going to automate people out of existence.

Edit: hm, why downvotes? If I'm wrong, help me improve my viewpoint on AI.

evilai
0 replies
13h31m

I'm not sure why the downvotes, I think it's a reasonable opinion. But I disagree, I think what is being sought is automation close to intelligence, and if progress continues (which is questionable itself), this system could eventually supplant humans in varying degrees. Initially, this may manifest pragmatically with salary cuts and job losses. However, in the long term, it may evolve to a more real and comprehensive overhaul of our roles. I think the main difference in my understanding is that it is not a tool like a hammer, it is more a tool that has the potential to design other tools (so far very simple) but if its capacity increases, it could automate significant parts of our work.

cnity
1 replies
6h42m

I like to think of a simple thought experiment when these types of arguments come up: Suppose you want some video game to exist. The most amazing, expansive, ground-breaking video game ever made. There are only two scenarios:

1. AI can replace coders altogether. In this case we live in an absolutely amazing world in which anyone can conjure the video game of their dreams instantly.

2. AI cannot do this, in which case there is still (clearly) an essentially infinite demand for new software.

Obviously, we do not live in world number 1. If we have arrived at the world in which AI can do the job of all coders, then why is there not an infinite supply of high quality software? In fact, this would be a software utopia.

gatinsama
0 replies
5h46m

Excellent point, but I'll add:

World #1 would probably replace all other kinds of jobs. A part of programming, sure, is about languages, syntax, fixing bugs, functionality, etc. But the other part is about robustness, organization, and simplification. This other part won't die.

calibas
1 replies
12h33m

Until there's actual general AI, we'll still need people that understand how things work on a fundamental level. You can use AI created code, but you still need someone to tie everything together. You're also going to want an experienced person to debug when something goes wrong.

I think it's funny how terrified people are at the prospect of AI and machines doing most of the work for human beings. If used intelligently and equitably, it'll mean more free time and prosperity for everyone.

summerisle
0 replies
9h59m

The first point seems to be generally accepted, though the second about "prosperity for everyone" presupposes some very drastic social changes. Automation tends to make a portion of people more prosperous at the cost of displacing some other portion. In order to realize prosperity for everyone, there must be minimal displacement and likely a robust social safety net to enable the displaced to recover and attain prosperity at some later time.

awinter-py
1 replies
14h18m

this is by james somers! long arc from 'there's a gold rush and I'm the shovel' back in the day

simonw
0 replies
12h36m
Pufferbo
1 replies
7h58m

I hate the word "coder". It's such an undermining term. Saying that someone "codes" or is a "coder" is like calling a chef a "cooker".

trallnag
0 replies
6h0m

So you also hate the word "developer"?

29athrowaway
1 replies
16h0m

If you train LLMs on code from GitHub it will output what the average GitHub repo has in it. That is, "competent level", not expert or virtuoso.

And it will also carry over bugs found at the "competent" level.

evilai
0 replies
13h48m

Absolutely, but you really don't need virtuoso-level code most of the time ) :

zwieback
0 replies
16h3m

The stuff he describes should rightly be replaced by an AI. However, I think we can still level up. After initial excitement about GPT-generated code I realize that the danger is that we will very quickly generate a lot of code with subtle bugs that will take even longer to debug. If I write the code myself I (or whoever I have look over my shoulder) will be quicker finding the bugs. If it's GPT code I'll first have to dig through the idiosyncrasies, then find the logic error.

Also: writing multi-threaded or otherwise concurrent code is hard to imagine even with GPT 6 or 7.

zwieback
0 replies
16h13m

FYI - many public libraries let you auto-checkout the New Yorker to your free Overdrive account and you can read online or on your phone. I was surprised how easy it has become.

zubairq
0 replies
1h59m

As a developer many people ask me if I am worried about AI making my job obsolete. I always answer truthfully, "I CAN'T WAIT for AI to make my job obsolete!".

After trying several AI tools in the hope of replacing my coding job I find that the tools are good at the bike-shedding tasks like generating some initial code (the easy stuff for me) and terrible at the real work (maintenance and adding features to an existing codebase).

zacksiri
0 replies
7h25m

I think it depends on what you do, if you are doing repetitive work between projects, you're going to find that yes GPT can do most of the work you're doing. However if you're creating new systems, with multiple components and parts. You're going to find that GPT is not that much help.

I pay for GPT-4 and Github Copilot, all I can say is when I'm trying to do repetitive work like writing tests, copilot swoops in and gives me 'mostly' working code. But when I go in to the real creative stuff, that needs real thought and is non-obvious, I don't get much help from copilot it either just doesn't suggest anything or suggests completely the wrong thing.

I think, at least for me, the comparison I would use is construction work. There are many different kinds of construction, from something as simple as assembling Ikea furniture to building bridges. If the coding you do is the equivalent of making Ikea furniture, then yes, that stuff is going away, because it's going to be replaced by machines that can produce those parts in bulk.

But if you're creating something special like the Burj Khalifa or the Golden Gate Bridge equivalent in code, no, AI is not going to do that for you. It can help with some parts, but there's no way AI is going to do that sort of engineering, at least not in its current state.

I think it's worth noting "Coding" spans so many kinds of systems and scale. Not all coding work is the same. Even if you're talking about things which are well-known like "e-commerce" in that industry there are so many variations / many different scales / many different ways to look at the problem.

xbmcuser
0 replies
13h4m

There is a lot of office work that could be automated. As ChatGPT etc. get better, more and more efficiency experts will crop up selling efficiency, i.e. fewer workers/less cost/salary to do the same jobs.

wiradikusuma
0 replies
5h29m

I've been trying to get AI to write a component to embed in my Flutter app. The component is a zoomable whiteboard.

These people who said AI built programs, are they using different AI?

uticus
0 replies
10h9m

ChatGPT is another layer. Human -> ChatGPT -> high level programming language -> lower level language / IL / assembly -> microcode -> circuit.

ChatGPT will not only survive, it will get bigger as long as all those layers exist. Unless web design is ready to move to assembly level instructions via RPC (ChatGPT can do assembly), the layers will stay, the complexity will stay, and people who get (and attempt to sell) value from recombining the puzzle pieces will stay.

The real point is not that ChatGPT is so good, it is that everything we program has already been done before. We humans just have trouble finding the pieces and gluing them together.

underscoring
0 replies
6h32m

I know we're all biased here, as coders, but I don't see this as the end, but rather the beginning of something.

In the article, the non-coder was able to produce code with an AI's help, and therefore, coders are doomed.

But in fact what happened is it enabled two "developers" to offload the menial work and focus on the product itself.

The development of electric workshop tools didn't make woodworkers obsolete, it just enabled them to do the work of 5 people, and do it easier and faster.

We are moving from the era of hand-tools in coding.

The real risk is that we're going to see a lot of cheap shitty furniture made by people with no skills, but at least ChatGPT isn't going to chop anyone's fingers off.

unclebucknasty
0 replies
13h34m

When using ChatGPT for some tedious challenges—say, select coding problems and infrastructure setup issues on AWS—I have more than once had the feeling that people weren't built for this kind of work anyway, and that it's exactly the kind for which computers were intended.

It wasn't always this way. Sure, programming has always been an exercise in precision and, thus, sometimes tedium. But, there's something in the way it has changed over the last decade or so. When I first started my career, I spent long hours solving problems of the algorithmic, logic, or control flow type. It was challenging mind work and called for creativity. There was real joy in making something work. It felt like creation.

Now, programming consists largely of wiring together pieces of code that you didn't write, and consulting StackOverflow to figure out why it doesn't work. It's following someone else's decisions in an opinionated framework. It's transpiling and tooling and configuration. And, now that programmers are expected to do much more, it's also using someone else's software for devops and tedious infrastructure buildouts, etc.

I'd say the joy in the software engineering role faded long before the arrival of ChatGPT. I'm actually hopeful that AI will help lighten the load of mundane tedium and help us return to the days when programming was fun.

ubermonkey
0 replies
2h11m

The line that killed me:

the one with the pink-shirted guy on I.B.M. PCs

That's PETER NORTON, Goddammit. Know your history, New Yorker writer! Damn!

type4
0 replies
10h52m

But taking a bird’s-eye view of what happened that day? A table got a new header. It’s hard to imagine anything more mundane. For me, the pleasure was entirely in the process, not the product.

I think that's where I differ from a lot of the artful programmers, I've never found pleasure in a perfect, beautiful solution. I get annoyed when I have to dump hours and hours into something that 'should' be simple! I don't want to spend my time fiddling with layouts, CSS, Oauth handshakes, etc. I want to build stuff and get paid for it, that's how I view my job as well. Less logician and more mechanic.

I use ChatGPT as much as I can, to do all I can, and then fix the output when needed. I view it as a higher-level programming language that spares me from the burden of thinking about low-level details. It's the same reason I code in Python/Javascript rather than C++ or other languages: the goal is to make something of use. That's also how I stay employed: become one with the language du jour, the same way I've jumped from PHP to jQuery to React to ...

tromp
0 replies
3h28m

Perhaps what pushed Lee Sedol to retire from the game of Go

But he didn't; he merely retired from professional play, and remains intensely interested in the game and the new directions it's taking.

totallywrong
0 replies
2h54m

I suspect we're going to see this article in many different forms over the next few years. That's when the hype cycle will settle and people realize that LLMs are amazing search engines on steroids but will never replace actual developers.

totallywrong
0 replies
2h59m

What I learned was that programming is not really about knowledge or skill but simply about patience, or maybe obsession.

Damn, the secret is out.

todd3834
0 replies
1h55m

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. I made some false starts using what I could gather,

If this is the skill level of the coder, then I'm not surprised that they are worried about GPT. However, they should be stoked that GPT can help them code at a much higher level.
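
(For reference, that particular task is only a few lines in most scripting languages. A minimal Python sketch, assuming the word list lives at /usr/share/dict/words, which varies by system:)

    import random

    # print a hundred random, distinct lines from the dictionary file
    with open("/usr/share/dict/words") as f:
        words = f.read().splitlines()
    print("\n".join(random.sample(words, 100)))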

thaanpaa
0 replies
6h26m

I'll be concerned if I see an LLM create a secure, functional SQL schema and API with users, roles, subscriptions, products, MFA, and all the other basic requirements of modern-day apps. We're not even close to the required level of complexity yet, and I'm not holding my breath either.

teaearlgraycold
0 replies
12h2m

Personally I’m not worried about my employability. Until AGI arrives (which may coincide with an apocalypse for all we know) AI will simply allow me to move faster. If it lets me move 1000x faster then the art will be in what to build. 1000x still does nothing compared to an exponentially exploding field of possibilities. And technical knowledge, experience working as a product focused engineer, and general human intelligence should be enough to beat AI at that art form.

tayo42
0 replies
12h45m

What's the workflow for writing code like the app mentioned with chat? My experience was like the author's: it's a massive environment setup, plus even small projects have tens of files. How does the model keep all that in context, plus changes? How do you start a new session? I checked some small C++ project and the code alone is 168,000 words, so that's already larger than the GPT-4 context, and that doesn't even include iteration and chat history. IDK where you would even get started.

Also, I don't really see the comparison between programming and chess. In chess you can still compete to be the best human chess player. Coding, or working in corporate America, doesn't really have that.

tangjurine
0 replies
9h11m

I don't think you can call this guy a coder.

I can solve some math problems/write some proofs, but I wouldn't call myself a mathematician.

It's about if you do it for a living, and this guy mostly has some fun side projects.

Don't think this guy is qualified to comment on ChatGPT taking over, like another comment was saying.

stratigos
0 replies
2h45m

The article sounds like it was written by someone with minimal career experience. It's overwhelmingly naive, and I'm sure the intended readership isn't any more or less naive than the author.

spacecadet
0 replies
6h20m

New Yorker lol.

slowhadoken
0 replies
13h27m

Programming, like math, is its own reward. I think being ignorant of the two is a luxury with a hidden cost.

sean_the_geek
0 replies
7h3m

My two cents having used AI in programming. It is merely a tool to create. It is not the creator yet, at least in its current iteration.

AI is to programming what the electric drill was to woodworking. It made life easier; it sped up the process, and maybe it did a slightly finer job. But you still need to understand where to drill and how to create the thing. There are a number of analogies like this you can find.

safaa1993
0 replies
13h15m

To be blunt, i got into programming (nearly 30 years ago) for the results, not the process. In fact if anything i consider the process a sort of burdensome necessity on the way to getting the results i'm actually after.

Anything that shortens the path is good by me.

ryukoposting
0 replies
11h27m

GPT-4, Copilot, etc. are absolutely worthless for firmware. Copilot shows no meaningful capacity for understanding the context of a firmware codebase, and forget generating anything coherent based on your instructions. I don't need help writing function declarations, but that's the only thing it's useful for (and even that is heavily limited).

rixed
0 replies
7h47m

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling.

Isn't the real waning days of the craft the fact that we consider this problem sort of "state of the art" ?

rip_netrunner
0 replies
1h42m

Reading the responses on this page reminds of the quote

“it is difficult to get a man to understand something, when his salary depends on his not understanding it.” - Upton Sinclair.

resuresu
0 replies
6h24m

I've always seen computers as a means to an end. Even in the Blade Runner movies no one is sitting around pecking at a phone or a PC keyboard; it's just holographic AI they talk to and interface with. And that's the future of technology. Eventually you might do some sort of brain interface and be a cyborg. Who knows, but the truth is, we're nearing the end of humans needing to sit down and actually write code.

reactordev
0 replies
9h7m

(as they know it). As someone who also learned to code in the same era as the author: the craft that we were taught is dying and a new way is emerging. Programming has already changed drastically since the 90s. With exceptions around low-level languages, everything else is orders of magnitude faster, more efficient, more succinct, and more abundant. Combine this with AI assistance and it's like you have superpowers of productivity.

Cry about how you no longer have to code the small and mundane, but solving problems and writing software is more than just coding small mundane things. It’s how those mundane things are architected and called to do a really big thing.

pjmlp
0 replies
10h25m

Definitely, this is what many seem to be missing: today LLMs might help write the code, tomorrow they will compile the whole application themselves, and they won't need the babysitter developer any longer.

pipeline_peak
0 replies
9h15m

I think people like the author are starting to realize coding is a very generic thing. And that it's not itself "beautiful" but can be used to apply "beautiful" concepts.

nullptr_deref
0 replies
5h49m

Why are people so unaware of what is happening? The leap in quality from DALL-E 2 to DALL-E 3 was due to improvements in the prompts used for training. With countless people using ChatGPT, it will reach a point where OpenAI will have enough data to train on. There are ideas not yet explored that will be connected to the system. At this point, the pace of development is exponential. Unless it flattens out, we will witness AI being able to engineer, code, craft, design, and build a product on its own.

So, what is the basis for my statement? We are just mediators at this point. We don't need to be there. Currently GPT-4 translates our requirements to DALL-E 3 during the image generation process. What I want to say is, there is no reason to give any form of signal as an "initial seed." Has anyone tried creating an automatic agent that, when turned on, receives a signal from the external world, whatever it may be, and triggers a chain of thoughts in these LLMs?

We, as humans, rely on signals. Currently, we are providing signals to LLMs in the form of text. There will and there must come a point where agents will feed signals to each other. They will perceive signals from the environment on their own and move on from there. This is not far away. It is near because all the components are laid out there.

And people argue over things like, "Oh, but I need to prompt it." No, you don't. It will take one engineer to define a way to process signals, then write an objective for the AI and train it. Once trained, it will continue on its own. The projection might sound off to you all, but there is no one stopping you from not doing it. Do you know what the limiting factor is? Capital. Yes, capital. So, as long as the power law holds, we will keep seeing strides in improvement. If you have capital and skills, you can do it right now.

I am not saying the signal is the "real world physical environment". Say you have a crawler that scrapes a website. You hook it to a summarizer. The summarizer fires the signals and then calls AIs to create a product. This whole thing, once created, doesn't need a human in the loop. Once something is built, it is done. Also, add to that the fact that these AI systems only trust other AI systems' code. Why? Because humans are untrustworthy. Now you will have an ecosystem designed by AI, for AI, to be integrated with AI. Why do we need humans at this point? Like, why would a company want humans at this point?

notjoemama
0 replies
54m

ChatGPT cannot provide an actual solution to a multi-tenant application framework for a SaaS product... because I haven't found that in the stack I work in. So I created it, but it is closed source. If LLMs rely on open source, they will never provide the value that comes from experienced closed-source engineers. I suppose LLM/AI providers can sneakily suck in closed-source code for long enough that once they're caught, the cat is out of the bag and the "industry" collapses. But so far, LLMs are cool, but not AGI, nor are they likely to mature into AGI alone.

As far as generalization goes, I'm still waiting for an answer from Tolman-Eichenbaum machines. Or, an integration of LLMs and cognitive maps.

https://www.youtube.com/watch?v=9qOaII_PzGY

http://brilliant.org/ArtemKirsanov/

And his references from the YouTube video:

REFERENCES (in no particular order):
1. Behrens, T. E. J. et al. What Is a Cognitive Map? Organizing Knowledge for Flexible Behavior. Neuron 100, 490–509 (2018).
2. Constantinescu, A. O., O'Reilly, J. X. & Behrens, T. E. J. Organizing conceptual knowledge in humans with a gridlike code. Science 352, 1464–1468 (2016).
3. Aronov, D., Nevers, R. & Tank, D. W. Mapping of a non-spatial dimension by the hippocampal–entorhinal circuit. Nature 543, 719–722 (2017).
4. Whittington, J. C. R., McCaffary, D., Bakermans, J. J. W. & Behrens, T. E. J. How to build a cognitive map. Nat Neurosci 25, 1257–1272 (2022).
5. Whittington, J., Muller, T., Mark, S., Barry, C. & Behrens, T. Generalisation of structural knowledge in the hippocampal-entorhinal system.

CREDITS: Icons by biorender.com and freepik.com Brain 3D models were created with Blender software using publicly available BrainGlobe atlases (https://brainglobe.info/atlas-api)

This video was sponsored by Brilliant

nickpeterson
0 replies
15h42m

LLMs are going to make a lot of code. The tool will often not be able to fix problems with that code. If anything, this is classic automation expanding a field, we’re going to end up with more programmers and more software, not less.

nercury
0 replies
8h48m

The more you use GPT, the more you will understand that it's not the replacement for your attention to the work. And without the attention to the work, you can't spot bugs, blatant inefficiencies, or better design choices - in other words, if you don't take care, you won't even know what you are missing.

I have written code with GPT every day for almost a year now, and it has helped me greatly to kickstart code that was hard in the past, namely audio synthesis, Vulkan rendering, and other things that were hard to approach. But it's very clear at this point that it shall not be trusted, because it's like a coat of nice paint put on top of all the clever or dumb stuff that exists on the internet. You never know which one you get, but it sure will be worded convincingly.

mnd999
0 replies
8h37m

Even if AI were actually good at greenfield dev, that’s still only a tiny part of the job, and not the hard part. Try fitting a huge legacy codebase in your prompt.

mastazi
0 replies
13h21m

If you want to become an architect, you don't need previous experience as a bricklayer.

Many other fields besides construction are similar: most positions don't require previous experience as "manual labour".

Software development was an exception to the rule, maybe it no longer is.

lionkor
0 replies
9h54m

LLMs can be very impressive, but "coding" can be a lot more than traversing a btree or sorting an array.

There is software engineering, which is probably what most well-paid devs do a little bit too much of: requirements analysis, planning, talking to the customer, estimating, etc. Sure, an AI could help with all of that, but leaving it entirely up to an AI probably won't work for a very long time. You'd need a real AGI to take over work that has that many social aspects and requires that much planning and consideration.

There are algorithms and data structures, which are important for developers to learn, but usually not important to write yourself unless that's literally your job. There, LLMs could help with understanding, possibly visualizing these algorithms, and providing example use cases.

There are hobby projects, which generally won't be done with any AI by most people I know, since the point is to program for fun. AI can help with that, e.g. for inspiration or example implementations of little weird algorithms.

There is maintenance. Once a product, especially a B2B product, reaches maturity, a lot of the work is bugfixing, smaller features, updating dependencies, porting to new platforms, etc. I could see LLMs taking some of this work away, which everyone would probably appreciate.

In the end, I believe that AI and in particular LLMs just help programmers, and will maybe replace the 6€/hour outsourced copy paste programmers, but that is... not very scary.

karolist
0 replies
8h56m

At one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling.

No offence to the author but given that this stopped them in their tracks I choose to ignore their programming related opinions.

joebergeron
0 replies
8h28m

I wonder if there are parallels to be drawn here against the industrialization of other, physical, goods, and how their production evolved over time. If you look at something like, say, printing (I'll use the example of woodblock printing here, since I know a decent amount about it), it originated very much as a craft and discipline, as a means to fulfill a particular function. As printing technology improved and became better industrialized, the craft of printing was gradually replaced with other means that fulfilled the same function, but possessed a different form. i.e., lacking those characteristics of products of craft that we find desirable; artless.

We've been able to fulfill the function of printing very cheaply for what seems like ages now, and we've reached a point where some niche and particularly attuned segment of the population wants a bit more out of the actual form of the printed product, the depth of form that was once common. There is a growing community of people that deeply care about woodblock prints now, favoring their physical characteristics, despite such prints falling out of fashion for a period of time during the heights of industrialization. This group of people understands the value of such craft, and is willing to spend more for it, since the difference in the form of the end product from mass-produced stuff is so stark.

The key thing here, and with other categories of physical goods (e.g. pottery, glassware, furniture, etc.), is that there's an obvious and tangible difference in the form of products produced via traditional means, and those mass manufactured, despite them serving fundamentally the same function.

With software however, I worry that this isn't the case, and the sort of resurgence of interest we see now in products produced by traditional means won't ever translate, assuming that we do move in the direction of more and more software engineering being "automated" by AI assistance. To an end-user of a piece of software, I imagine that there will be very little visible difference in the observable characteristics between fully hand-written and AI-produced software. Indeed, given the same requirements, there ought not be a difference between these two things. It's exactly this delta, however, which drives the passionate and less cost-sensitive enthusiasts to prefer handmade physical goods over manufactured ones. If both the form and the function of AI-produced software is identical to those of traditional software, but the AI-produced software is cheaper, why would anyone go with the traditional stuff? I understand that there are other factors at play here as well (e.g., particularly principled consumers etc.) but really, some combination of form, function, and cost seem like the biggest levers to me, and they seem on the face of it to be pulled toward the direction of AI, for better or worse.

itsokimbatman
0 replies
1h20m

Saw;

intrepidsoldier
0 replies
3h38m

* AI is good at hard (to human) tasks like explaining complex code but bad at easy (to human) tasks like counting C files in a codebase.
* AI is particularly effective at taking over repetitive work full of toil, e.g., reviewing large PRs. As a result, it will allow more developers to focus on fun things like building new features.
* AI will increase the number of developers worldwide because it reduces the barrier to entry for programming.
* AI will reduce the fear involved in starting large-scale projects like migrating codebases from one framework/technology to another.

interpol_p
0 replies
6h37m

I use ChatGPT daily and have been coding for over two decades. I get it to write all the tedious stuff, and it basically writes all my internal tools, scripts and so on. The last big things it wrote for me were:

A macOS SwiftUI app to go through all the localization strings in my apps and show them in a table where I could see missing translations, and interactively translate them… by sending them back to ChatGPT

Another macOS app to graph Google ngram data for word lists. It was great because I didn’t even need to check whether there was an API for ngrams, I just asked it for the function, then for the code to plot it in Swift Charts

Sometimes I consult it on refactoring problems (eg., this struct has new responsibilities, what would be a more appropriate name? How do you see it fitting into this section of the code?). It’s a great tool and easily worth the money

It almost never writes any “interesting” code, however

ingen0s
0 replies
1h55m

AI is doing nothing but raising expectations, and so it should. Stop crying and adapt!

hm-nah
0 replies
11h39m

This is a badass article.

hcks
0 replies
9h11m

Coders had decades to unionise and setup legal barriers to protect their livelihood.

(Doctors won’t ever be out of a job because of AI because they’ll always get a fat paycheck even just to look at the output spat out by the machine.)

They didn’t out of misplaced hubris and elitism ("just be the best 10x engineer ever and you’ll be safe").

Now they have less than 10 years before their wages start crashing hard.

gwoolhurme
0 replies
12h37m

As someone who bounces back and forth between thinking I will be okay and anxiety that I am doomed: I have no idea how to prepare for this future...

gumballindie
0 replies
3h59m

For the new crossword project, though, Ben had introduced a third party. He’d signed up for a ChatGPT Plus subscription and was using GPT-4 as a coding assistant.

You have to give OpenAI credit. Their marketing campaign is beyond genius. A bit spammy and aggressive but it works.

glonq
0 replies
11h51m

As a grayhead who does not code daily anymore, I was impressed with ChatGPT as a programming sidekick. It's like having an on-call junior developer who only costs $20/mo.

I needed a quick and dirty utility developed last month and by breaking the problem down into 4-5 steps myself I got ChatGPT to write functions for those steps and then I bolted them together. Most of it went smooth but one part took far too much coaxing and refinement before it generated the right thing.

gemstones
0 replies
15h59m

Imagine a doctor 100 years post-Galen "considering the waning days of medicine". We are a young, young discipline.

gatinsama
0 replies
5h44m

I'm amazed by the fact that this was written by a human when AI would have done a good job.

fuzztester
0 replies
15h57m

Nostradamus wannabes are everywhere, in every field.

fungiblecog
0 replies
12h59m

A eulogy for programming as reinventing the wheel over and over again, possibly. But programming as a creative endeavour, no chance.

fallingfrog
0 replies
14h11m

You have to consider here that anyone posting on hacker news is a pretty smart cookie. We’re the clever kids in school, the ones who were curious and insightful.

Now, chatgpt in its current form cannot think better than me. Faster, for sure, but not better. Its arguments and its explanations are a bit simple and it can’t make really insightful observations. Mostly it just states the obvious.

However.

There are people in the world that it can out-think. For every one of us there are 10 or 20 people who are pleasant enough but kind of thick, and GPT can already speak more eloquently than them. So where's the bar for "intelligence"? Does it have to outclass the smartest human alive before we can call it intelligent?

ern
0 replies
8h40m

I manage a number of development teams, and I don’t get to write much code anymore, but I was asked to urgently build an integration.

The team lead I was going to assign the task to was offshore, and not yet online, so I put the URL for the documentation into ChatGPT premium, and it spat out a working solution.

My team lead was a bit perturbed and created a more enterprisy solution with factories and DI (more out of pride than necessity), but my original code was production ready.

I’m not going to be firing anyone just yet, and certainly not my senior people, but I can see a lot of grunt-level work going away.

epcoa
0 replies
15h0m

This article is inadvertently more about bullshit jobs and post clown world low interest rate free money.

He dialled up an FTP server (where files were stored) in an MS-DOS terminal and typed obscure commands.

This is "code a UI in Visual Basic to trace an IP address"-level exposition right here (this is from a self-proclaimed subject matter expert, so no excuse). They're not deeply knowledgeable, and just because they got paid six figures at some dumb post-recession startup doesn't change anything.

(Others have already addressed the other howlers in this article)

dmingod666
0 replies
5h17m

most impacted are technical consultants ( outside experts that merely provide high level consulting ) -- their recommendations will be scrutinized more closely

devmor
0 replies
10h3m

I’m far more threatened by pennies-on-the-dollar third world contracting firms (and the businesses that abuse and manipulate hiring regulations to make use of them) than the immature novelty tool that is generative AI.

That being said, those firms are certainly making use of these tools - relying on their underpaid programming staff to clean up the output into something useable for the client.

devit
0 replies
15h46m

I think it's more fun to program with LLMs, because rather than having just a single programming phase, you basically have three quite different phases: a first phase where you craft an initial prompt, then a second phase where you review the LLM code and ask it to make changes, and then a third phase where you change the code yourself to exactly what you want it to be.

demondemidi
0 replies
3h10m

Wow what a terrible article. The guy just got tired of programming. Everything his buddy looked up he could have hacked out but chose not to. All the tools exist to code at whatever level he wants to. Just sounds like he lost the spark. The other day on hacker news a 17 year old kid wrote a debugger and was asking for help. That’s the spark.

delbronski
0 replies
8h52m

I used chatgpt4 recently for two projects. One was a success, the other a complete failure.

In the first project I asked it to help me build a one page ui with html and css, it did great! I know a lot about html and css, so I was able to ask for what I wanted and help it debug.

For the second project I wanted what I thought was a simple threejs animation of an object following a line in a scene. I know very little about threejs and graphics in general. I spent hours wrestling with chatgpt to get this working, but I just got what looked like meaningless non-working code to me. So I took time to learn about threejs, did a few chapters of a course, and came back to the problem. And all of the sudden I had new vocabulary I could use to ask chatgpt what I wanted. And I got pretty close to what I wanted, but not quite. I suspect if I keep learning about threejs and how graphics work I’ll get there.

I don’t think this is the death of our craft, but definitely a big turning point for how we build and use software.

cultureswitch
0 replies
4h59m

LLMs are pretty good at going from complex code/legalese to prose which is easier to understand. Not so much the other way around.

cultureswitch
0 replies
5h4m

LLMs are on the way to AGI but they are still laughably bad at logical reasoning.

Any remotely interesting coding task is at least somewhat novel and requires some reasoning. So far, LLMs don't seem to be very good at handling things they haven't seen before.

colinmegill
0 replies
15h55m

Melodramatic, pretentious, they just love these puff pieces outside of tech

chasing
0 replies
3h45m

Every five-to-ten years some technology comes around that's finally going to put software engineers out of work.

Turns out the only thing truly successful at that task is economic mismanagement.

cantSpellSober
0 replies
13h58m

It's funny, back in the days of IE6, Firebug, jQuery, it was so frustrating just getting something to work.

Looking back, that's what made it fun. You could convince yourself you were a mad scientist building the next Turing machine.

Having stability in the JS ecosystem now (hold for laughter) is better for users, and the developments in AI are interesting, but it's not as challenging to ship things. Maybe embracing the new challenges is the "craft" we should focus on to bring the fun back.

brailsafe
0 replies
9h0m

The job market this year, even as someone with a fair bit of experience, is already dogshit for other reasons. I do see the writing on the wall, and am considering re-starting from scratch in something completely different, but I don't know what yet. Though I don't see dramatic leaps in capability, opportunity, or productivity happening as much as labor prices reducing even further and competition increasing, to the point where I don't know if it'll be worth trying.

Carpentry seems like a growing field though.

blauditore
0 replies
7h30m

Wake me up when the current AI hype is over. Eventually people will realize that getting the last few percent right is extremely hard, just like the last few times where we got just so close to AGI.

What will stay are some nice tools for whatever tasks, maybe programming too. But bots won't replace software engineers. If they could do the current tasks, we'd just engineer one level higher, just like when high-level language compilers made low-level code largely obsolete (yes yes, not completely of course).

bigpeopleareold
0 replies
1h58m

Instead of ChatGPTing things in what I work in, I have sought to improve my knowledge a lot in shell scripting. I am reading man pages more diligently (even if I am not diligent enough) and seek to use documentation first, such as man pages, software documentation, etc., before even searching for something. I am not perfect, but I feel like most of the problems I have are covered in manuals - I am RTFMing myself.

I do this because I wasn't like that in the past and I hate myself for not being more deliberate in reading and discovering things organically through doing things. This is not about joy, (even though it makes me happy) but my obligation to myself and then to my employer to be good at what I do.

Sitting with a chat bot completely undermines this urge to be a better developer. I will probably be argued with, but I have worked with a lot of smarter people, and I realized after many years (and it's what the article alludes to) that they have a very strong drive to fight through myriads of almost random problems, and a knack for solving those problems intelligently.

I remember specific co-worker that used an even plainer Emacs config than I did to write Java code. I was genuinely curious what his thought process was when trying to remember APIs without anything more special than simple modes for jumping around a codebase. He said "I don't know, I just read the manual first."

Yes! If he can do that, I can do that. After that, a lot of working with computers is about continually pushing through those random problems to their solution. This is an absolutely precious skill. I don't feel affected yet, but I expect I will be - lower-quality searches while we feed our chat bots sub-sub-problems instead of just understanding what we are using. I don't want to, and I will continue to hold out, like with all the other essentially dumb technologies I am now stuck with (I am looking at my two mobile phones now ...)

benreesman
0 replies
8h57m

I was trying to get whisper.cpp built under buck2 this weekend. Unsurprisingly nothing from GPT-4 to valiant to vanilla llama knew how to do that. I got it done but no model (search or LLM) was any help.

I only need help when it’s hard. I’ve already got a BERT-style model completing my queries on Google.

If your entire job could be done by riffing on stackoverflow answers? Yeah, I’d try to climb the value chain.

But most people’s job isn’t to copy paste shit.

bdamm
0 replies
16h1m

Anyone ought to be able to see the new truth; that AI is multiplying the power and importance of the programming and software engineering discipline, by allowing us to churn out far more software in far less time. Entire genres of code tasks will get wiped out (data adaption layers come to mind.)

Software engineers, even if just reduced to AI-bot controllers, will still be essential links between people who have no idea how computers work and the actual machines.

bawana
0 replies
15h48m

AI will surpass humanity at abstraction and the manipulation of abstraction in all its forms - laws, finance, literature, poetry, mathematics... Humans will finally be coralled into the activity their evolution optimized them for - innovation, adaptation, exploration, creation. All the billionaires of today will fade into obscurity as AI strips them of their wealth and leaves only the real humans - unfortunately this may only intensify the rat race if humans continue to compete in a capitalist market.

autonomousErwin
0 replies
12h37m

I think one of the undervalued things is the hack-iness/curiosity nature of (good) developers which will be hard to replace. I imagine there'll be a very short time where developers are in even more demand than they are now and that will quickly wane as everyone realises that the only job is entrepreneurship.

aurbano
0 replies
9h30m

I see a lot of comments wondering how AI could be useful as a software engineer so I'll give my take on it:

I envision it being able to fully replace a junior engineer, and in some use-cases senior engineers as well.

In the case of junior engineers: the AI should have access to an internal knowledge base (e.g. Confluence) and the task/ticketing system (Jira), and ideally the internal chat (Slack). I would assign tickets to it, and I'd expect the AI to reply with comments asking for clarification when there is something unclear, or proposing an implementation before starting if it's not a very simple task (this could even be defined in the task, using story points for instance).

Once cleared, the AI submits a PR linked to the task - so far just like any engineer in the team would. The PR gets reviewed as usual, with suggestions/requests for changes made by the (human) team, which then get addressed by the AI. With the big difference that all this process may happen in less than 1h from ticket creation to PR review.

I wouldn't expect it to be able to implement complex features, onboard new libraries, or rearchitect the system in major ways to accommodate future features - just like I wouldn't expect that from junior team members.

It would obviously be amazing if it could incorporate previous PR comments into its "context" for future work, so it could learn and improve.

Separately I mentioned it could also do part of the job of senior team members - in the form of PR reviews. If it has access to every previous PR review and learns from them it might be able to give insightful suggestions, and for very large codebases it could have an advantage over humans as it could find patterns or existing code that may be overlooked (i.e. "It looks like this util function you added is the same as this one that was already there", or "This code looks different to similar areas, but follows a different pattern, you might want to rewrite it this way //..."

Is GPT-4 there? Definitely not, perhaps an LLM is not even the way to achieve this, but I absolutely see this becoming an incredible asset to a tech team.

assimpleaspossi
0 replies
6h17m

Whenever someone calls a programmer a "coder", I feel like it's calling Stephen King a "typist" and that's insulting. Programming is much more than just the coding part.

andsoitis
0 replies
4h7m

But he had no idea what a pain it is to make an iPhone app. I’d tried a few times and never got beyond something that half worked. I found Apple’s programming environment forbidding.

A seasoned programmer who struggles with this?

androtheos
0 replies
4h16m

"Programmers are people who can endure an endless parade of tedious obstacles.", maybe my new favorite quote.

acscott
0 replies
15h33m

Title does not approximate well the article. Ad probandum: "Computing is not yet overcome. GPT-4 is impressive, but a layperson can’t wield it the way a programmer can. I still feel secure in my profession."

abhaynayar
0 replies
9h31m

As he later put it, his own neural network had begun to align with GPT-4’s.

Having found the A.I.’s level, I felt almost instantly that my working life had been transformed. Everywhere I looked I could see GPT-4-size holes; [...]

I think this is key for me at least. GPT-4 is really good once you start figuring out what problems NOT to give it. Because there are certain things that it can't solve no matter how much you prompt it in the right direction, and it gets really annoying really fast.

_bramses
0 replies
1h33m

How can a field that is less than 80 years old wane? Y'all's recency bias shows way too strongly. Your definition of junior and senior is purely anecdotal and based around what the brass at your companies "award". Not to mention the arbitrary conflation of algorithm design and the actual process of writing sustainable code. It's a showcase of "my way is best", and science waits for no such opinions.

TDiblik
0 replies
6h49m

As a high school student, I'm more hesitant to go into a CS degree not because of ChatGPT but because it is constantly marketed as a high-income job and A LOT of people are choosing/learning it. I wonder if it's gonna be the same in 10 years or so. ChatGPT/Copilot make stuff easier and there are only so many CRUD apps to be made :/

MarkusWandel
0 replies
2h41m

It's glorified autocomplete. The stuff that is annoying when you're at a proper keyboard with proper typing skills, but maybe not so bad when you're thumb typing on a smartphone. I've not personally used it, but someone else with very advanced skills, needing to code something in an unfamiliar programming language, asked ChatGPT to do it, got a "mostly right" boilerplate and fixed up the obvious errors, which was much quicker than learning the language enough to start from scratch. Autocomplete!

Loxicon
0 replies
8h33m

This sounds like a Chat GPT sales letter.

LAC-Tech
0 replies
8h58m

LLMs have not significantly changed the way I code at all. It's only good at very basic mechanical tasks.

JonChesterfield
0 replies
9h4m

There appears to be consensus that these models generate code very quickly, relative to a person typing, and that the code emitted varies from totally wrong to subtly wrong. Thus they're going to significantly increase the amount of code in production and that code is going to tend to be of the subtly wrong kind.

This is my non-web-dev impression of the javascript world. Loads of code out there, basically all of it wrong in a bunch of places. Reasoning by analogy with that, one should expect the reliability of computer systems to go downhill from here.

The interesting question for a professional software dev is where one can add value in a world in which essentially anyone can produce large amounts of code that sometimes works. My hunch is that it's going to be really important that the languages and libraries in use are totally solid as they're essentially being fuzz tested continually.

FrustratedMonky
0 replies
4h50m

Regular expressions are pretty tricky. You can be an expert and it can still take time to tweak them. I've been using regular expressions for decades, and now find myself using GPT as a kind of pre-compiler: I'll draft regular expressions with GPT, and it gets really close to, or outright produces, the final result.
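
A minimal sketch of that check in Python, with a made-up GPT-drafted pattern and hand-picked cases (none of this is from the article; it's just the kind of harness I mean): run the draft against known-good and known-bad strings before trusting it.

```python
import re

# Hypothetical GPT-drafted pattern for ISO-style dates (YYYY-MM-DD).
draft = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

# Hand-written cases: string -> should the pattern match?
cases = {
    "released on 2023-11-17": True,
    "version 1.2.3": False,
    "backup-20231117.tar.gz": False,  # digits but no dashes, should not match
}

for text, expected in cases.items():
    got = bool(draft.search(text))
    assert got == expected, f"{text!r}: expected {expected}, got {got}"

print("draft pattern passes all hand-written cases")
```

If a case fails, the failing string goes back into the prompt and the cycle repeats.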

Dowwie
0 replies
2h38m

Most of the software engineers I've known are problem solvers who use code to accomplish their goals. Problem solvers will use the best tools available for the job. Perhaps engineers will use natural-language UIs to build software and "drop down" into a language for debugging or optimization, just as today some drop into assembly.

DeepSeaTortoise
0 replies
6h19m

I am quite surprised nobody has called out the elephant in the room yet:

A solution far superior to even the next few generations of AI has been available for several decades now: you could just outsource your work to a contractor in a developing nation.

Dave3of5
0 replies
6h29m

Ben told GPT-4 what he wanted and got code that ran perfectly

"perfectly"? I mean this sounds dubious. ChatGPT gives interesting suggestions and advice when it comes to coding but I've never been able to get any close to a perfect solution.

I don't think coding AI assistant's are meant to be used in that way anyway...

I think this is just another fluff piece about how coding is now dead (long live coding). That's basically not true. LLM have gave us a leg up but they haven't removed the need for a competent programmer.

BobBagwill
0 replies
3h11m

AFAIK, the Go and chess models played themselves to improve. They weren't limited to pieces of recorded games played by junior players, the way coding models are limited to the likes of Stack Overflow.

So far, coding LLMs are not generating code, compiling it, running it, checking the output, timing the program, fuzzing the input, etc. When they can train themselves, they will improve.
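
A minimal sketch of the generate-compile-run-check loop being described, with a placeholder where the model call would go (the function names are invented; nothing here is a real training pipeline):

```python
import subprocess
import sys
import tempfile

def generate_candidate(prompt: str) -> str:
    """Stand-in for a model call; not a real API."""
    raise NotImplementedError

def passes_tests(source: str, tests: str, timeout: float = 10.0) -> bool:
    """Run candidate code plus its tests in a subprocess; exit code 0 counts as a pass."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source + "\n\n" + tests)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

def solve(prompt: str, tests: str, attempts: int = 5) -> str | None:
    # Feedback loop: keep candidates that pass, feed failures back as refined
    # prompts, and (in a real system) use the pass/fail signal as training data.
    for _ in range(attempts):
        candidate = generate_candidate(prompt)
        if passes_tests(candidate, tests):
            return candidate
        prompt += "\n# previous attempt failed its tests; try again"
    return None
```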

BobBagwill
0 replies
3h18m

AFAIK, the Go and chess models played themselves to improve. They weren't trained on games played by junior players. So far, coding LLMs don't try to compile and run their code and check the output. They don't fuzz the inputs to look for errors. There aren't competitive, adversarial LLMs trying to outcode each other or fool each other.

Barrin92
0 replies
16h0m

I mean this in the least offensive way possible, but whenever one of these flowery ChatGPT pieces comes out, it's always written like this

"at one point, we wanted a command that would print a hundred random lines from a dictionary file. I thought about the problem for a few minutes, and, when thinking failed, tried Googling. "[...]

I returned to the crossword project. Our puzzle generator printed its output in an ugly text format, with lines like "s""c""a""r""""k""u""n""i""s""" "a""r""e""a". I wanted to turn output like that into a pretty Web page that allowed me to explore the words in the grid, showing scoring information at a glance. But I knew the task would be tricky[...]

This man has written software professionally for 20 years? The last part of the article is at least correct. Code generation isn't going to replace programmers. Almost all SNES and NES games were written in Assembly. Modern game devs learn Unity and Unreal and visual scripting. Are there now more or fewer game devs? Writing a few lines of code that generate a metric ton more code is what most of us have been doing for many years now. Abstraction and tooling do not change the nature of the profession, and they certainly don't end it.
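
For what it's worth, the dictionary task quoted above is a few lines in most languages. A sketch in Python, assuming the usual Linux/macOS word-list location (the path may differ on your system):

```python
import random

# Print 100 random lines from a dictionary file.
# /usr/share/dict/words is a common location but not guaranteed to exist.
with open("/usr/share/dict/words") as f:
    lines = f.read().splitlines()

for line in random.sample(lines, 100):
    print(line)
```

At the shell, GNU coreutils' `shuf -n 100 /usr/share/dict/words` does the same thing.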

AndrewKemendo
0 replies
13h26m

Yeah but they’ll never master system design/architecture so I’m safe!

/s

We all knew it was coming

2devnull
0 replies
1h50m

I’m not going to use gpt, at least not for coding. That’s not something I’m hearing other programmers say, at least not often. They may say it’s not good enough, yet, or that they as a senior dev don’t have any use for it. These strike me as cope. What I say is that I’m just not going to use it. I’m not sure if it’s a political decision I’ve made, but I’m sure it’s the right decision for me. I will simply change careers or go homeless before I try and compete with these great new efficiencies. I won’t take part in the disruption. It’s clearly degenerate.

1vuio0pswjnm7
0 replies
14h17m

"From the beginning, I had the sense that there was something wrongheaded in all this."

What was your first clue.