
Effects of Gen AI on High Skilled Work: Experiments with Software Developers

ang_cire
171 replies
1d

I sometimes wonder about whether the decline in IT worker quality is down to companies trying to force more and more roles onto each worker to reduce headcount.

Developers, Operations, and Security used to be dedicated roles.

Then we made DevOps and some businesses took that to mean they only needed 2/3 of the headcount, rather than integrating those teams.

Then we made DevSecOps, and some businesses took that to mean they only needed 1/3 the original roles, and that devs could just also be their operations and appsec team.

That's not a knock on shift-left and integrated operations models; those are often good ideas. It's just the logical outcome of those models when execs think they can get a bigger bonus by cutting costs by cutting headcounts.

Now you have new devs coming into insanely complex n-microservice environments, being asked to learn the existing codebase, being asked to learn their 5-tool CI/CD pipelines (and that ain't being taught in school), being asked to learn to be DBAs, and also to keep up a steady code release cycle.

Is anyone really surprised they are using ChatGPT to keep up?

This is going to keep happening until IT companies stop cutting headcounts to make line go up (instead of good business strategy).

trashtester
35 replies
22h11m

They're realizing that 10x (+) developers exist, but think they can hire them at 1x developer salaries.

Btw, the key skill you're leaving out is understanding the business your company is in.

If you can couple even moderate developer ability with a good understanding of business objectives, you may stay relevant even while some of the pure developers are replaced by AI.

swader999
31 replies
21h44m

This 100%. It's rare to find anyone that wants to learn a complex biz domain.

extr
26 replies
21h31m

It's because it's often a suboptimal career move from an individual perspective. By branding yourself as a "FinTech Developer" or whatever instead of just a "Developer", you're narrowing your possible job options for questionable benefit in terms of TC and skills growth. This isn't always the case, and of course if you spend 20 years in one domain, maybe you can brand yourself as an expert there and pull high $/hr consulting rates for fixing people's problems. Maybe. More likely, IMO, you end up stagnating.

I went through this myself early in my career. I did ML at insurance companies and got branded as an insurance ML guy. Insurance companies don't pay that well and there are a limited number of them. After I got out of that lane and got some name-brand tech experience under my belt, job hunting was much easier and my options opened up. I make a lot more money. And I can always go back if I really want to.

trashtester
21 replies
21h19m

I did ML at insurance companies and got branded as an insurance ML guy.

If you're an "ML insurance guy" outside of the US, it may be quite lucrative compared to other developers. It's really only in the US (and maybe China) that pure developers are demanding $200k+ salaries.

In most places in Europe, even $100k is considered a high salary, and if you can provide value directly to the business, it will add a lot to your potential compensation.

And in particular, if your skills are more about the business than the tech/math of your domain, you probably want to leverage that, rather than trying to compete with 25-year-olds with a strong STEM background, but poor real-life experience.

matrix2003
19 replies
21h4m

In most places in Europe, even $100k is considered a high salary

I think it's worth adding here that US developers can have a much higher burden for things like health insurance. My child broke his arm this year, and we hit our very high deductible.

I would like to see numbers factoring in things like public transportation, health insurance, etc., because I personally feel like EU vs US developers are a lot closer in quality of life after all the deductions.

nextos
6 replies
18h36m

I would also like to see those numbers. IMHO, the biggest difference comes from housing cost. For example, from my own back-of-the-envelope calculations, a £100k Oxbridge job affords a better lifestyle than a $180k NY job mostly because of housing.

However, some US areas have competitively priced housing and jobs that would make the balance tilt in favor of America. In the EU, affordable spots with lots of desirable local jobs are becoming increasingly rare. Perhaps Vienna, Wrocław and a few other places in Central/Eastern EU.

_dain_
3 replies
10h19m

I don't know how NY is in comparison, but housing in Cambridge is almost as expensive as in London. A detached 3br starts at £700k. NIMBYs keep killing any expansion of supply.

sgt101
2 replies
9h24m

Some say NIMBYs, others say water supply.

"Water supply issues are already holding back housing development around the city. In its previous draft water resources management plan, Cambridge Water failed to demonstrate that there was enough to supply all of the new properties in the emerging local plan without risk of deterioration.

The Environment Agency recently confirmed that it had formally objected to five large housing developments in the south of the county because of fears they could not sustainably supply water. It has warned that planning permission for more than 10,000 homes in the Greater Cambridge area and 300,000 square metres of research space at Cambridge University are in jeopardy if solutions cannot be found.

A document published alongside the Case for Cambridge outlines the government’s plan for a two-pronged approach to solving the water scarcity issue, to be led by Dr Paul Leinster, former chief of the Environment Agency, who will chair the Water Scarcity Group.

In the long term, supply will be increased, initially through two new pieces of infrastructure: a new reservoir in the Fens will deliver 43.5 megalitres per day, while a new pipeline will transfer 26 megalitres per day from Grafham Water, currently used by Affinity Water.

But, according to Kelly, a new reservoir would only solve supply requirements for the existing local plan and is “not sufficient if you start to go beyond that” – a point that is conceded in the water scarcity document. "

https://www.building.co.uk/focus/a-vision-for-150000-homes-b...

_dain_
1 replies
9h7m

The NIMBYs are the ones trying to stop the reservoir from being built. They created the problem they're complaining about. A big hole in the ground with water in it is not a complicated piece of infrastructure, but the planning system is so dysfunctional and veto-friendly that the construction timeline has been pushed out into the 2030s, in the best case. Previous generations got them done in two years flat. It is an artificial problem.

Same thing with transport. "We can't build new houses because it would increase car traffic", meanwhile putting up every barrier they can think of to stop East-West Rail.

_dain_
0 replies
5h19m

Just adding onto this because I can't edit: it beggars belief that water supply would ever be a limiting factor for urban growth in England. It's preposterous that this is even an issue. Yes, Cambridgeshire is the driest region in the country, but that's only a relative thing! It still gets quite a lot of rain! Other countries have far less rainfall in absolute terms and they grow their cities just fine, because they build reservoirs and dams and water desalination plants. Nature has not forced this situation on us, we are simply choosing not to build things.

pas
1 replies
18h0m

What are typical Oxbridge jobs? Are they really in Oxford and Cambridge? If yes, then... NYC vs not-London UK seems almost incomparable.

nextos
0 replies
17h57m

Pharma and bio/tech startups in the area, mostly in Oxford and Cambridge Science Parks. So yes, local.

The pharma & biotech sector is blooming in the Golden Triangle, which also includes London.

overrun11
3 replies
15h34m

US big tech devs typically make 3x+ what the Euro equivalent does. A $4,500 deductible isn't really materially relevant. I (a US dev) have a high-deductible plan, but my employer contributes substantially to it anyway.

namaria
0 replies
10h29m

You're conflating median and average compensation.

matrix2003
0 replies
13h43m

IMO big tech is also a small part of the US sector (I’m not in big tech). The US idolizes FAANG, but there are a heck of a lot of other companies out there.

edit: Yes they do employ a ton of people, but most people I know don’t make those salaries.

ghaff
0 replies
4h58m

Typically companies put about the same amount of money into the pot whatever medical plan you choose.

lrem
3 replies
11h0m

It’s not really about the numbers. My pile of money at retirement would definitely be larger if I moved across the pond. But here my children walk themselves to school from the age of six. I don’t worry about them getting killed by a madman. And even the homeless get healthcare, including functional mental health support. Things no employer can offer in the US.

ptero
1 replies
5h54m

This is a very reasonable viewpoint. As a different personal opinion, I am glad that I live on my side of the pond.

I am in my early 50s, and having worked in tech for the last 24 years (with sane hours and never at FAANG salaries), I own my condo in a nice town, my kids' college is paid for, and my personal accounts are well into 7 digits.

This is not all roses: schools in the US stink (I have grown up on the other side of the pond and was lucky to get a great science education in school so I can see the difference), politics are polarized, supermarket produce is mediocre, etc.

The biggest issue for me, though, is that I suspect the societies on both sides of the pond are going to go through major changes in the next 10-15 years, and many social programs will become unaffordable. I see governments of every first-world country making crazy financial and societal choices, so rather than depending on the government to keep me alive and well, I much prefer US salaries allowing me to have money in my own hands. I can put some of that into gold or Bitcoin, or buy an inexpensive apartment in a quiet country with decent finances and reasonable healthcare. Not being totally beholden to the government is what helps me sleep well at night. My 2c.

immibis
0 replies
2h49m

Don't forget that your stock market retirement account is also a government welfare program.

finikytou
0 replies
5h22m

I'm in Western Europe and I would never let my children walk themselves to school at 6. Europe is far from being a safe place, minus some of Eastern Europe.

SoftTalker
1 replies
16h7m

Your high deductible should have been covered by an HSA. In fact AFAIK it's required that you have one if you are on a high deductible plan.

overrun11
0 replies
15h38m

Sure, but an HSA is often your own money that you contributed.

ghaff
0 replies
20h19m

I'm skeptical.

I live in a location that wouldn't have public transportation even in Europe. And my healthcare, while not "free," was never expensive outside of fairly small co-pays and a monthly deduction that wasn't cheap but was in the hundreds per month range. Of course, there are high deductible policies but that's a financial choice.

Wytwwww
0 replies
5h27m

I would like to see numbers factoring in things like public transportation, health insurance, etc

If that's worth more than $50k or so, anyone living in the US who's not making significantly more than the median wage would be in a pretty horrible spot financially.

hit our very high deductible.

Isn't the maximum deductible that's allowed "only" $16k?

Also, taxes are usually significantly higher in Europe, with some exceptions (e.g. Switzerland vs California, though you need to pay for your health insurance yourself in Switzerland).

mlinhares
0 replies
15h18m

rather than trying to compete with 25-year-olds with a strong STEM background, but poor real-life experience.

I have never seen this happen. All the new grads I've ever worked with (from Ivy League schools as well) are pretty much incapable of doing anything without a lot of handholding. They lack so much of everything: experience, context, business knowledge, political awareness, technical knowledge (I still can't believe "Designing Data-Intensive Applications" isn't a required read in college for people that want big tech jobs) and even the STEM stuff (if they're so good, why do I have to explain why we don't use averages?).

Which is fine, they are new to the market, so they need to be educated. Other than CRUD apps, I doubt these people can do anything complex without supervision; we're not even in the same league to compete with each other.

WalterBright
3 replies
20h42m

got branded as an insurance ML guy

I got branded as a compiler guy. My salary is $0.

PaulHoule
2 replies
17h51m

I’ve never gotten avaricious small-town business types to believe that, if the proprietary language you spent a lot to buy is EOL, the cheap thing to do is hire some interns from your local uni who took a compilers class to write an implementation or translator, for $0.00 and some good experience. Although… it’s true.

WalterBright
1 replies
17h49m

Taking a college level compiler class will get you a toy compiler.

PaulHoule
0 replies
7h20m

Not the best but something that seems impossible to the small town businessman.

sevensor
2 replies
16h38m

Funny, my world is full of the other kind of engineer. We all come with domain knowledge built in, and programming is the complex domain many people don’t want to learn.

SoftTalker
1 replies
16h5m

I've worked in higher ed. Many of the masters students I see want the credential but they have no desire to write code. Copying and plagiarism are rampant.

sevensor
0 replies
6h56m

Which is short-sighted, because if they’re on the other side of the interview table from me, it takes about 60 seconds to find them out. Competence speaks a different language than fraud.

llm_trw
0 replies
11h55m

Pay me enough and I'll suck up everything about municipal waste management that any human being has ever said. Pay me 'market rates' and you'll get the type of work the market produces.

intelVISA
1 replies
11h22m

If you can couple even moderate developer ability with a good understanding of business objectives, you may stay relevant even while some of the pure developers are replaced by AI.

By 'stay relevant' you mean run your own? Ability to build + align to business objectives = $$$, no reason to be an Agile cog at that point.

YawningAngel
0 replies
8h32m

I know plenty about IAM and card issuance, but I don't back myself to compete with Ping or Marqeta

neom
0 replies
15h39m

Biz side of the house here: for sure, it's always been the case that what you're really "weeding in" is the ICs who are skilled and aware. (And if I had to do layoffs, my stack rank would be smack dab in here also, btw.)

yifanl
33 replies
1d

They're cutting headcount because they have no conception of how to make a good product, so reducing costs is the only way of making the bottom line go up.

That's across the board, from the startups whose business plan is to be acquired at all costs, to the giant tech companies, whose business plan is to get monopoly power first, then figure out how to extract money later.

goatlover
16 replies
22h6m

Failures of capitalism that probably need regulation, if the middle class and workers are to be protected.

WalterBright
12 replies
20h38m

The city of Seattle decided that gig delivery drivers needed to be protected. They passed a minimum wage for them. The income of gig drivers promptly dropped, as the rise in prices corresponded with a drop in customers.

Who regulates the regulators?

malfist
3 replies
20h9m

You need a citation for that claim. And something more than just "the week after this went into effect it reduced sale volume"

And besides, what's the alternative here? Don't protect employees? Let them work below minimum wage?

malfist
1 replies
16h5m

Leaving aside the relevance of an Alaskan newspaper talking about local Seattle politics, their only source is a news article from Seattle written the week of the law taking effect.

As I said in my original comment, show me something that isn't from the first couple of weeks after it took effect. Preferably something scholarly, not just anecdotes from a newspaper

WalterBright
0 replies
11h45m

If you go to news.google.com and type in "seattle gig worker law", you'll find plenty of articles about it.

djbusby
3 replies
19h45m

Why did the prices for Uber/Lyft rise everywhere else too?

It wasn't Seattle laws that did that.

It's that those companies needed to go from Growth-Mode to Margin-Mode. They could no longer sell VC dollars for dimes.

gruez
1 replies
16h25m

Why did the prices for Uber/Lyft rise everywhere else too?

Presumably it rose in Seattle higher/faster than it would otherwise. The source that he provided in a sibling comment says sales dropped "immediately", which seems to corroborate this. It's lazy to argue "well prices rose elsewhere too so the minimum wage law couldn't possibly have had an impact"

djbusby
0 replies
15h3m

Well, I don't think I'm lazy. IIRC prices were already moving up and the rent-a-car apps were exiting. And Uber/Lyft were very far along on funding, needing some margin. And then there was this other trigger event of worker pay, which is a nice corporate spin opportunity. And we've seen that pattern 100s of times (blaming workers for price increases).

WalterBright
0 replies
17h56m

Yes, it was the Seattle laws that did it locally. The food delivery gig drivers were not Uber/Lyft.

bumby
2 replies
19h29m

Who regulates the regulators?

In a healthy democracy, the civic-minded voters do.

WalterBright
1 replies
17h50m

That doesn't work out very well. The voters are too far removed from it, and the instruments the voters have are far too blunt.

bumby
0 replies
16h23m

That’s why I said “healthy” and “civic-minded.” I think there’s some truth to the saying “You get the government you deserve.”

It still seems to be better than all the alternatives.

makk
0 replies
19h10m

Who regulates the regulators?

The voters and their representatives, if they put their minds to it.

numpad0
0 replies
19h43m

Yeah, this after all. Private corporations exist to make money. They minimize investment and maximize returns. They progressively suppress wages and inflate profits. Clever individual choices and self-discounts merely create temporary competitive advantages against other poor laborers in the market.

If you manage to secure the bag that way and exit that class altogether, good for you, but that solves nothing for the rest. There's no be-all-end-all threshold that everyone can just stay above and stay ahead of inflation.

gruez
0 replies
16h23m

What would this mean in practice? Forcing companies to justify layoffs? Mandating minimum amount of people in frontend/backend/devops/operations/security roles?

01HNNWZ0MV43FF
0 replies
20h56m

Would UBI and LVT do the trick? I'm okay with minimum wages on the idea that a democratic government is the biggest possible de facto labor union, but in general I don't want the government making too many rules about how you can hire and fire people

WalterBright
14 replies
20h36m

They're cutting headcount because they have no conception of how to make a good product, so reducing costs is the only of making the bottom line go up.

The field is wide open for a startup to do it right. Why not start one?

Buttons840
7 replies
20h7m

Well, either (1) our free market is not healthy and the ability to form new companies that do better is weak and feeble, or (2) the current companies are doing about as well as possible and it is hard to outcompete them.

If it's the latter (number 2), then we need to start asking why American companies "doing about as well as possible" are incapable of producing secure and reliable software. Being able to produce secure and reliable software, in general and en masse, seems like a useful ability for a nation.

Reminds me of our national ability to produce aircraft; how's the competition in that market working out? And are we getting better or worse at producing aircraft?

WalterBright
5 replies
17h35m

Companies deliver what customers are willing to pay for.

If customers are willing to pay for X, and no companies make X available, you have a great case to make to a venture capitalist.

BTW, in every company I've worked for, the employees thought management was stupid and incompetent. In every company I've run, the employees thought I was stupid and incompetent. Sometimes these people leave and start their own company, and soon discover their employees think they're stupid and incompetent.

It's just a fact of life in any organization.

It's also a fact of life that anyone starting a business learns an awful lot the hard way. Consider Senator George McGovern (D), who said it best:

George McGovern's Mea Culpa

"In retrospect, I wish I had known more about the hazards and difficulties of such a business, especially during a recession of the kind that hit New England just as I was acquiring the inn's 43-year leasehold. I also wish that during the years I was in public office, I had had this firsthand experience about the difficulties business people face every day. That knowledge would have made me a better U.S. senator and a more understanding presidential contender. Today we are much closer to a general acknowledgment that government must encourage business to expand and grow. Bill Clinton, Paul Tsongas, Bob Kerrey and others have, I believe, changed the debate of our party. We intuitively know that to create job opportunities we need entrepreneurs who will risk their capital against an expected payoff. Too often, however, public policy does not consider whether we are choking off those opportunities."

https://www.wsj.com/articles/SB10001424052970203406404578070...

Buttons840
4 replies
16h45m

I think those who run companies are often stupid and incompetent, and the workers are probably right most of the time. Just look at the quote again, there's lots of things people don't know when they're starting a company.

It would be nice if being smart and competent was the key to success in our society, but you hint at the real key to success in your own comment--getting favor and money from those who already have it.

You didn't really engage with the other half of my comment, but I'll say it again: in general, our society seems to be crumbling, and our ability to get things done efficiently and with competence is waning. Hopefully the right people can get the blessing of venture capitalists to fix this (/s).

yibg
1 replies
11h57m

The problem isn't that those who run companies are stupid and incompetent (which is probably true). It's that the people lobbing those jabs would also be stupid and incompetent if they were running the companies, but think they have the answers.

Buttons840
0 replies
9h54m

Right. I agree that we all have our share of stupidity and incompetence. Some of us are stupid and incompetent on a worker's salary, and some of us are stupid and incompetent on a CEO's salary. See, for example: https://news.ycombinator.com/item?id=38849580

WalterBright
1 replies
11h2m

I think those who run companies are often stupid and incompetent

If you've never run a business, it can sure seem that way.

the real key to success in your own comment--getting favor and money

Nobody is going to invest in your startup unless you convince them that you're capable of making money for them.

rakejake
0 replies
7h48m

Not sure why this thread got consumed by a strawman that "Most people running businesses are stupid/incompetent". That is obviously not true.

What I have realized is that most employees never do even a basic analysis of their industry vertical, the key players and drivers etc. Even low-level employees who will never come face-to-face with customers can benefit from learning about their industry.

The flip side is that a lot of business people (I exclude people who start their own companies or actively take an interest in a vertical) are also mostly the same. They care about rising from a low-level business/product role to a senior, potentially C-suite role, and couldn't care less about how they make this happen. Many times it is hard to measure a business person's impact (positive or negative); think about Boeing. All their pains today were seeded more than 20 years ago with a series of bad moves, but the then-CEO walked off into the sunset with a pile of cash and a great reputation. OTOH, there was a great article yesterday on HN from Monica Harrington, one of the founders of Valve, whose business decisions were crucial to Valve's initial success but who had to sell her stake in the company early on.

I think business, despite its outsize role in the success/failure of a company, follows the same power law of talent that most other professions carry. Most people are average, some really good, some really greedy, etc.

yibg
0 replies
11h55m

Maybe because "secure and reliable software" isn't what makes successful companies. Just like how people complain about tight airplane seats and getting nickeled and dimmed but continuously pick airlines that have cramped seats but are cheaper over airlines with more generous space but cost more.

makk
1 replies
19h27m

Why not start 10?

WalterBright
0 replies
17h31m

Feel free!

yifanl
0 replies
4h45m

I think there's a misunderstanding. They _are_ doing it right; it's just that the right way doesn't involve building a good product, because a good product doesn't make you any more valuable. There's no way a startup can make more money than the guy speedrunning acquisition; making gonzo dollars fast >>> making (probably less) gonzo dollars over the long term.

If you think there's a problem with this model (and based on your wording of "doing it right", this seems to be the case), it's largely in the incentive structure, not the actors.

numpad0
0 replies
19h15m

Because techno-meritocratically correct business principles are far from capitalist best practices. Even employees will hate it, because you won't be bringing home the bacon.

malfist
0 replies
20h10m

Because the field isn't even. Monopoly or oligopoly power works to prevent new competitors while at the same time stagnating their offerings.

Look at the explosion of browsers and their capabilities after the IE monopoly was broken.

intelVISA
0 replies
11h17m

The field is open(tm) as in under heavy regulatory capture, expecting the right school for VC funding, laden with competitors propped up by tax subsidies (if EU) and/or H1B farms (if US)... a tough time for even a moderate hustler.

solidninja
0 replies
4h58m

Probably a lot of that is to do with the short-term profit mindset. There is tons of software that is far from optimal, breaks frequently and has a massive impact on human lives (think: medical record systems, mainframes at banks, etc.). None of it is sexy, none of it is something you can knock up a PoC for in a month, and none of it is getting the funding to fix it (instead funding is going to teams of outsourced consultants who overpromise and just increase their budgets year on year). Gen AI won't make this better I think.

jajko
23 replies
1d

So it's not going to stop. The typical C-suite, who holds real power, has absolutely 0 clue about IT complexity; we are overpriced janitors to them. Their fault, their blame, but they are probably long gone when these mistakes manifest fully.

In my banking corp, in the past 13 years I've seen a massive rise in complexity, coupled with an absolutely madly done increase in bureaucracy. I could still do all the stuff that is required, but I don't have access. I can't have access. A simple task became 10 steps of negotiating with an obscure Pune team that I need to chase 10x and escalate till they actually recognize there is some work for them. Processes became beyond ridiculous: you start something and it could take 2 days or 3 months, who knows. Every single app will break pretty quickly if not constantly maintained, be it some new network stuff, an unchecked unix update, or any other of a trillion things that can and will go wrong.

This means paper pushers and folks at best average at their primary job (still IT or related) got very entrenched in processes and won, and business gets served subpar IT, projects over time and thus over budget, perpetuating the image of shitty tolerated evil IT.

I stopped caring, work to live is more than enough for me, that 'live' part is where my focus is and life achievements are.

godelski
13 replies
23h32m

- my laundry app (that I must use) takes minutes to load, doesn't cache my last used laundry room, and the list of rooms isn't even fucking sorted (literally: room 3, room 7, room 1, room 2, ...)

- my AC's app takes 45 s to load even if I just used it, because it needs to connect. Worse, I'll bring the temp down in my house and raise it in the evening, but it'll come on even when it's 5F below my target value, staying on for 15+ minutes and leaving us freezing (5F according to __its thermometer__!)

- my TV controls are slow. Enough that I buffer inputs and wait 2-3 seconds for the commands to play. When pressing the exit button in the same menu (I turn down brightness at night because auto settings don't work, so it's the exact same behavior), idk if I'm exiting to my input, exiting the menu, or just exiting the sub-menu. It's inconsistent!

There's so much more that I could go on and on about, and I'm sure you can too. I think one of the worst parts about being a programmer is that I'm pretty sure I know how to solve many of these issues, and in fact sometimes I'll spend days tearing apart the system to actually fix it. Of course, only to be undone by forced updates (app or whatever won't connect because why is everything done server side ( ┛ ◉ Д ◉ ) ┛ 彡 ┻ ━ ┻ ). Even worse, I'll make PRs on open projects (or open issues another way and submit solutions) that have been working for months, and they just go stale while I see other users reporting the same problems and devs working on other things in the same project (I'll even see them deny the behavior, or just respond "works for me" and close the issue before the opener can respond).

I don't know how to stop caring because these things directly affect me and are slowing me down. I mean how fucking hard is it to use sort? It's not even one line!
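Literally, a sketch in Python (assuming labels like "room 3"; I don't know the app's actual data model):

  rooms = ["room 3", "room 7", "room 1", "room 2"]
  # sort on the trailing number so "room 10" lands after "room 9"
  rooms.sort(key=lambda r: int(r.split()[-1]))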

What the fuck is wrong with us?

NortySpock
4 replies
18h23m

my laundry app (that I must use) takes minutes to load

Do they have a place to mail your complaints? Who forces you to use the app? Annoy them. Annoy their boss. Write a flier pointing out how long it takes to use the app and leave those fliers in every laundry room you visit (Someone checks on those machines, right?). Heck, send an organization-wide email pointing out this problem. (CC your mayor, council-member, or congressional representative.) (You don't have to do all of these things, but a bit of gentle, non-violent, public name-and-shame can get results. Escalate gently as you fail to get results.)

my AC's app takes 45 s to load even if I just used it, because it needs to connect

If I were in your shoes, assuming I had time, I might (a) do the above "email the company with a pointed comment about how their app sucks" or (b) start figuring out how to use Home Assistant as a shim / middleman to manage the AC, and thus make Home Assistant server and its phone app the preferred front-end for that system (c) write a review on your preferred review site indicating the app is a pile of garbage

Even worse, I'll make PRs on open projects (or open issues another way and submit solutions) that having been working for months and they just go stale while I see other users reporting the same problems and devs working on other things in the same project (I'll even see them deny the behavior or just respond "works for me" closes issue before opener can respond)

Admittedly, the heavy-handed solution for this is to make a software fork, or a mod-pack, or "godelski's bag of fixes" or whatever, and maintain that (ideally automating the upkeep on that) until people keep coming to you for the best version, rather than the primary devs.

---

No, I don't do this to everyone I meet or for every annoyance (it's work, and it turns people away if you complain about everything), but emails to the top of the food chain pointing out that basic stuff is broken sometimes get results (or at least a meeting where you can talk someone's ear off for 30 minutes about stuff that is wrong), especially if that mail / email also goes to other influential people.

I'm pretty chill and kind and helpful, but when something repeatedly breaks and annoys several people every day, you might hear about it in several meetings that I attend, possibly over the next year, until I convince someone to fix it (even if it's me who eventually is tasked with fixing it).

gruez
2 replies
16h20m

Do they have a place to mail your complaints? Who forces you to use the app?

Presumably the building he lives in contracted out laundry service to some third-party company, which is in charge of the machines and therefore the shitty app. In this situation there really isn't any real choice to vote with your wallet. They can tell you to pound sand and there's nothing you can do. Your only real option is to move out. Maybe if OP is lucky, owns the property, and is part of the HOA, he might be able to lobby for the vendor to be replaced.

SoftTalker
1 replies
15h53m

Most landlords are not complete assholes. They want to do what is reasonable to keep their tenants happy. It's much easier to re-sign a happy tenant than to have to turn over an apartment and spend time and money marketing it and possibly have it vacant for a while. If all the tenants hate the laundry app and let the landlord know (politely, with a list of machines that don't work, dates/times, etc), most likely it will have an effect.

davidashe
0 replies
5h56m

“Most landlords are not complete assholes.”

Ummmm…

In my anecdotal experience, many New York City landlords don’t see their tenants as human beings, just a revenue source. Tenants complain? Maybe a city inspector shows up, a day or days later, so the landlord can turn the heat/water back on, and the inspector reports “no issue found.” People get mad and move out? New tenants pay an even higher rent! Heard horror stories about both individual and management company landlords. Can’t be the only city like this.

I’m pretty sure the long histories of social unrest under feudalism, the French Revolution, the mere existence of Marxism and Renter’s Rights Law, strongly beg to differ with your contention.

godelski
0 replies
1h25m

Short of writing to my mayor, I've done most of that. It's been a big motivation to learn reverse engineering and some basic hacking.

I can't hack everything.

I can't fix everything.

I need time for my own things. I need time to be human.

But no one is having it. My impression is that while everyone else is annoyed by these things, there's a large amount of apathy and acceptance of everything being crud.

I know you're trying to help, but this is quite the burden on a person to try to fix everything. And things seem to be getting worse. So I'm appealing to the community of people who create the systems. I do not think these are things that can be fixed by laws. You can't regulate that people do their best or think things through.

I appeal here because if I can convince other developers that slowing down will speed things up, then many more will benefit (including the developers and the companies they work for). Even convincing a few is impactful. Even more when the message spreads.

Small investments compound and become mighty, but so do "shortcuts".

smokel
1 replies
22h1m

> What the fuck is wrong with us?

This is simple: we can't just trust each other. When programming started, people were mostly interested in building things, and there was little incentive to spoil other people's work. Now there is money to be made, either through advertising or through malpractice. This means that people have to protect their code from others. Program code is obfuscated (compiled and copyright-enforced) or stored in a container with a limited interface (cloud).

It's not a technical issue, it's a social issue.

Applying a social mindset to technical issues (asking your compiler to be your partner, and preparing them a nice dinner) is equally silly as applying a technical mindset to social issues.

godelski
0 replies
20h16m

  >  When programming started, people were mostly interested in building things, and there was little incentive to spoil other peoples work. Now there is money to be made, either through advertising, or through malpractice
Yeah, I lean towards this too. Signals I use now to determine good software are usually things that look auxiliary, because I'm actually looking for things that tell me the dev is passionate and "having fun." Like easter eggs, or little things where it looks like they took way too much time making something unimportant pretty (keep doing this, devs. I love it and it's appreciated. Always makes me smile ^_^). But I am also sympathetic, because yeah, I also get tons of issues opened that should have been a google search or are wildly inappropriate. Though I try to determine if these are in good faith, because we don't get wizards without noobs, and someone's got to teach them.

But it all makes me think we forgot what all of this is about, even "the economy." Money is supposed to be a proxy for increasing quality of life, and not just on a personal level. I'm happy that people can get rich doing work they're passionate about, but I feel that the way current infrastructures are, we're actively discouraging or handcuffing people who are passionate. Or we try to kill that passion. Managers, let your devs "have fun." Rein them in so that they don't go down too deep of rabbit holes and pull too far away, but coding (like any engineering or any science) is (also) an art. When that passion dies, enshittification ensues.

For a concrete example: I'm wildly impressed that 99.9% of the time I'm filling out a form that includes things like a country or timezone, my country isn't autoselected, nor is a copy located at the top (not moved! copied!). It really makes me think that, better than chasing leetcode questions for interviews, you could ask someone to build a simple thing, and what you actually look for is the details and little things that make the experience smoother (or the product better). Because it is hard to teach people about subtlety, much harder than teaching them a stack or specific language (and if they care about the little things they'll almost always be quicker to learn those things). Sure, this might take a little more work to interview people and doesn't have a precise answer, but programs don't have precise answers either. And given how long and taxing software interviews are these days, I wouldn't be surprised if slowing down actually ends up speeding things up and saving a lot of money.

pojzon
1 replies
20h40m

Those are all small symptoms of late-stage capitalism and the closing of an era.

Like Rome, corruption, laziness and barbarians will tear it all down.

godelski
0 replies
20h28m

I don't need help identifying the problem, I need help solving the problem.

mistrial9
1 replies
22h58m

... it is not "us" .. there is a collection of roles that each interact to produce products to market or releases. Add some game-theory to the thinking on that?

godelski
0 replies
21h42m

"Us" can mean different things depending on the context. The context here is "programmers" not "shitty programmers" (really we can be inclusive with engineers and any makers). Considering that it seems you appear to understand the context I'm a bit confused at the comment. You don't have to inject yourself if you're not in the referred to group (i.e. maybe I'm not talking about you. I can't know because I don't even know you)

I need to stress so much -- especially because of how people think about AI and intelligence (my field of research[0]) -- that language is not precise. Language is not thoughts. Sure, many think "out loud" and most people have inner speech, but language (even inner) is compressed. You use language to convey much more complex and abstract concepts in your head to someone who is hopefully adapting their decompression to uncover the other person's intended meaning. This isn't a Chinese room where you just look in the dictionary (which also shows multiple definitions for anything). I know it's harder with text and disconnected cultures, but we must not confuse this, and I think it's ignored far too often. Ignoring it seems to just escalate the problems.

  > Add some game-theory to the thinking on that?
And please don't be dismissive by hand-waving. I'm sure you understand game theory. Which would mean you understand what a complex problem this is to actually model (to a degree of accuracy I doubt anyone could achieve). That you understand perturbation theory and chaos theory and how they are critical to modeling such a process, meaning you only get probability distributions as results.

[0] I add this context because it decreases the amount of people that feel the need to nerdsplain things to me that they don't research.

linotype
1 replies
22h3m

I use an Apple TV. It’s more expensive than Google and Amazon’s offerings, but fast as hell and gets better with time.

godelski
0 replies
21h34m

I mean this is my TV menu. I'm adjusting brightness (the explicit example given). Does Apple TV control the whole TV settings?

Fwiw, my TV just ends up being a big monitor because all the web apps are better and even with all the issues jellyfin has, it's better than many of those. I just mostly use a mouse or kdeconnect.

Speaking of which, does anyone have a recommendation for an android keyboard that gives me things like Ctrl and super keys? Also is there a good xdotool replacement for Wayland? I didn't find ydotool working as well but maybe I should give it another try.

I can suggest this setup and think it'll work for many. My desktop sits behind my TV because it mostly does computational work, might run servers, or gaming. I'm a casual and so 60fps 4k is more than enough even with the delay. Then I just ssh from my laptop and do most of the work from there. Pretty much the same as my professional work, since I need to ssh into hpc clusters, there's little I need to do on the machine I'm physically in front of (did we go back to the 70's?)

GeoAtreides
8 replies
21h39m

10 steps negotiating with obscure Pune team that I need to chase 10x

Why are you doing the chasing? Unless you're the project manager, comment "blocked by obscure Pune team" on the ticket and move on.

malfist
3 replies
20h3m

In my experience, PMs rarely do the chasing down for you. Most of them ask why you're still blocked and if you've done anything about it. Even if they do do something to chase them down, you're still on the hook for the deadlines.

GeoAtreides
2 replies
11h51m

So you're saying PMs are not doing their job? And then you have to do the actual project managing? Not ideal for you; very cushy job for them.

intelVISA
1 replies
11h11m

It's a popular grifter role for a reason.

malfist
0 replies
6h7m

The PM I work with most of the time introduced himself as "the next Elon Musk", so your assessment fits

dragon-hn
3 replies
17h53m

This is an unfortunate attitude. The current state of things in our industry is a reflection of this “not my job” way of doing things.

GeoAtreides
2 replies
11h53m

We can call it job specialisation. I don't do project managing, I do coding. Not my business to coordinate everyone, my job is to write code.

dragon-hn
1 replies
5h35m

My job is typically to deliver features that make the company money. The writing of code is just a part of that.

In my experience the people who solely focus on the code end up being significantly less effective.

GeoAtreides
0 replies
3h25m

to deliver features

Ah, see, so your job description includes, besides being a dev, also being a project manager. That's fine, there's nothing bad about it; it's just that your job requires a bit more from you than other places.

crystal_revenge
21 replies
8h8m

whether the decline in IT worker quality

I think it entirely has to do with a generation of software people getting into the field (understandably) because it makes them a lot of money, rather than because they're passionate about software. These, by and large, are mediocre technical people, and they tend to hire other mediocre technical people.

When I look around at many of the newer startups that are popping up, they're increasingly filled with really talented people. I chalk this up largely to the fact that people who really know what they're doing are increasingly necessary to get a company started in a more cash-constrained environment, and those people are making sure they hire really talented people.

Tech right now reminds me so much more of tech around 2004-2008, when almost everyone that was interested in startups was in it because they loved hacking on technical problems.

My experience with Cursor is that it is excellent at doing things mediocre engineers can do, and awful at anything more advanced. It also requires the ability to very quickly understand someone else's code.

I could be wrong, but my suspicion is this will allow a lot of very technical engineers who don't focus on things like front-end or web app development to forgo hiring as many junior webdev people. Similar to how webmasters disappeared once we had frameworks and tools for quickly building the basic HTML/CSS required for a web page.

bluepizza
20 replies
7h53m

While you have a good point, I think the experts also branched off with some unreasonable requirements. I remember reading Yegge's blog post, years ago, saying that an engineer needed to know bitwise operators, otherwise they were not good enough.

I don't know. Curiosity, passion, focus, and creative problem solving seem to me much more important criteria for an engineer to have than bitwise operations. An engineer who has these will learn everything needed to get the job done.

So it seems like we all got off the main road, and started looking for shibboleths.

crystal_revenge
15 replies
6h49m

I have a hard time imagining someone who has "curiosity, passion, focus, creative problem solving" regarding programming and yet hasn't stumbled upon bitwise operators (and then immediately found them interesting). They're pretty fundamental to the craft.

I can see having to refresh on various bit-shifting techniques for fast multiplication etc. (though any serious programmer should at least be aware of this), but XOR is fundamental to even knowing the basics of cryptography. Bitwise AND, NOT, and OR are certainly something every serious programmer should have a very solid understanding of, no?
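To make the XOR point concrete: its self-inverse property, the reason it sits at the heart of one-time pads and stream ciphers, fits in a few lines of Python (toy values, not a real cipher):

  key = 0b10110100
  plain = 0b01101001
  cipher = plain ^ key          # encrypt: flip the bits the key selects
  assert cipher ^ key == plain  # decrypt: XOR with the same key undoes it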

bluepizza
12 replies
6h6m

It's probably a matter of domain, but most backend engineers who are working in Java/Golang/Python/Ruby systems have very little use for those, and they don't come up often. Frontend (web or mobile), it comes up even less.

Can you tell me how this knowledge makes one a serious programmer? In which moments of the development lifecycle they are crucial?

tel
9 replies
5h50m

Without judgement, this feels like a switch up. It seems to me that the prior author did not suggest they were necessary, but instead ambient, available, and interesting. Indeed for many they are not useful.

At the same time, that might be a good indication of passion: a useless but foundational thing you learn despite having zero economic pressure to do so.

In the domains you have worked on, what are examples of such things?

bluepizza
8 replies
5h37m

It is not a switch up. The author said that an engineer would find these during their journey, and I am asking when. I don't have a strongly held opinion here, I literally want to know when. I am curious about it.

about3fitty
2 replies
3h53m

I found them in 2015 when I was maintaining a legacy app for a university.

The developer that implemented them could have used a few bools but decided to cram it all into one byte using bitwise operators because they were trying to seem smart/clever.

This was a web app, not a microcontroller or some other tightly constrained environment.

One should not have to worry about solar flares! Heh.

sroussey
0 replies
48m

trying to seem smart/clever.

Maybe. Every time I write something I consider clever, I often regret it later.

But young people in particular tend to write code using things they don’t need because they want experience in those things. (Just look at people using kubernetes long before there is any need as an example). Where and when this is done can be good or bad, it depends.

Even in a web app, you might want to pack bools into bytes depending on what you are doing. For example, I’ve done stuff with deck.gl, moving massive amounts of data between webworkers, where the size of the data is material.

It did take a beat to consider an example though, so I do appreciate your point.

Coming from a double major including EE, though, all I have to say is that everyone’s code everywhere is just a bunch of NOR gates. Now, if you want to increase your salary, looking up why I say “everything is NOR” won’t be useful. But if you are curious, it is interesting to see how one Boolean operator can implement everything.
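A toy Python sketch of that claim, deriving the other gates from NOR alone:

  def nor(a, b):
      return not (a or b)

  def not_(a):
      return nor(a, a)              # NOT a == a NOR a

  def or_(a, b):
      return not_(nor(a, b))        # a OR b == NOT (a NOR b)

  def and_(a, b):
      return nor(not_(a), not_(b))  # De Morgan: (NOT a) NOR (NOT b)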

jfyi
0 replies
3h4m

trying to seem smart/clever

Nobody that uses bit flags does it because they think it makes them look clever. If anything, they believe it looks like using the most basic of tools.

One should not have to worry about solar flares!

Do you legitimately believe that a bool is immune to this? Yeah, I get this is a joke, but it's one told from a conceit.

This whole post comes off as condescending to cover up a failure to understand something basic.

I get it, someone called you out on it at some point in your life and you have decided to strike back, but come on... do you really think you benefit from this display?

valicord
1 replies
4h50m

Bitmasks, enumerating combinations (subsets), parsing binary data, ...
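For instance, the classic subsets-as-bits loop, sketched in Python:

  items = ["a", "b", "c"]
  # each mask in 0..2^n - 1 selects one subset via its set bits
  for mask in range(1 << len(items)):
      subset = [x for i, x in enumerate(items) if mask & (1 << i)]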

jfyi
0 replies
4h12m

Cryptography, parity checking, sorting, image processing, optimization...

tel
1 replies
4h9m

Personally, I think it’s possible to not encounter them. I surely avoided them for a while in my own career, finding them to be the tricks of a low-level optimization expert toward which I felt little draw.

But then I started investigating type systems and proof languages and discovered them through Boolean algebras. I didn’t work in that space; it was just interesting. I later learned more practically about bits through parsing binary message fields and wondering what the deal with endianness was.

I also recall that yarn that goes around from time to time about Carmack’s fast inverse square root algorithm. Totally useless, and yet I recall, the first time I read about it, how fun a puzzle it was to work out the details.

I’ve encountered them many times since then despite almost never actually using them. Most recently I was investigating WASM and ran across Linear Memory and once again had an opportunity to think about the specifics of memory layout and how bytes and bits are the language of that domain.

bluepizza
0 replies
2h33m

I've mostly done corporate Java stuff, so a lot of it didn't come up, as Spring was already doing it for me.

I first got to enjoy low-level programming when reading Fabien Sanglard breaking down Carmack's code. Will look into WASM, sounds like it could be a fun read too.

krisoft
0 replies
4h13m

I literally want to know when

When the curious engineer is reading a book about their chosen programming language. Most likely in the part which describes operators.

Or when they parse something stored in binary.

usrnm
1 replies
5h37m

You're talking past one another. The whole point of being interested and passionate about a topic is that you want to learn stuff regardless of whether or not it's useful

BadHumans
0 replies
5h22m

I can be interested in front-end development and learn everything there is to know about templating while never knowing about bitwise operators. Point is domains are so large that even if you go beyond what is required and learn for the sake of curiosity, you can still not touch what others may consider fundamental.

ozim
0 replies
3h34m

There is a world of difference between stumbling upon, knowing about in general, and being proficient with (because of using it on a regular basis) some piece of tech.

I do know how bitwise AND, NOT, OR, XOR work but I don't solve problems on day to day basis that use those.

If you give me two bytes to do the operations manually it is going to be a piece of cake for me.

If you give me a problem where the most optimal solution uses a combination of those operations, I will most likely use another solution that won't be optimal, because in my day-to-day work I use totally different things.

A bit of a tangent, but that is also why we have "loads of applicants willing to do tech jobs" and at the same time a "skilled workers shortage": if a company needs someone who can solve problems optimally with bitwise operations "right here, right now", I am not the candidate; given a couple of months working on those problems I would get proficient rather fast, but no one wants to wait a couple of months ;)

Viliam1234
0 replies
5h22m

Yeah, that sounds like someone claiming to be passionate about math who has never heard of prime numbers. It's not that prime numbers are important for literally everything in math (although coincidentally they also happen to be important for cryptography), it's just that it's practically impossible to have high-school knowledge of math without hearing about them.

What other elementary topics are considered too nerdy by the cool people in IT today? Binary numbers, pixels...? What are the cool kids passionate about these days? Buzzwords, networking, social networks...?

rachofsunshine
1 replies
5h3m

This is something I have some expertise in (I run a recruiting company that uses a standardized interview, and I used to run assessments for another). It's also something I've thought a lot about.

There is absolutely truth to what you're saying. But like most obvious observations that aren't playing themselves out, there's more to it than that.

-----

One: curiosity, passion, and focus matter a lot. Every company that hires through us is looking for them in one form or another. But they still have to figure out a means by which to measure them.

One company thinks "passion" takes the form of "wanting to start a company someday", and worries that someone who isn't that ambitious won't be dedicated enough to push past the problems they run into. But another thinks that "passion" is someone who wants to tinker with optimizing some FPGA all night long because the platonic ideal of making a 15% more efficient circuit for a crypto node is what Real Technical Engineers do.

These companies are not just looking for passion in practice, but for "passion for".

And so you might say okay, screen for that. But the problem is that passion-for is easily faked - and is something you can easily fail to display if your personality skews the wrong way.

When I interviewed at my previous company many years ago, they asked me why I wanted to work there. I answered honestly: it seemed like a good job and that I'd be able to go home and sleep at night. This was a TERRIBLE answer that, had I done less well on other components of the interview or had they been interviewing me for more than a low-level job, would likely have disqualified me. It certainly would not have convinced them I had the passion to make something happen. But a few years later I was in the leadership of that company, and today I run a successor to it trying to carry the torch when they could not.

If you asked me the same question today about why I started a company, an honest answer would be similar: I do this business because I know it and because I enjoy its challenges, not because it's the Most Important Thing In The World To Me. I'm literally a startup founder, and I would not pass 90% of "see if someone's passionate enough to work at a startup" interview questions if I answered them honestly.

On the flip side, a socially-astute candidate who understands the culture of the company and the person they're interviewing with can easily fake these signals. There is a reason that energetic extraverts tend to do well in business - or rather, there are hundreds of reasons, and this is one of them. Social skills let you manipulate behavioral interviews to your advantage, and if you're an interviewer, you don't want candidates doing that.

So in effect, what you're doing here is replacing one shibboleth that has something to do with technical skill, with another that is more about your ability to read an interviewer and "play the game". Which do you think correlates better with technical excellence?

-----

And two: lots of people are curious, passionate, energetic, and not skilled.

You say that a person with those traits "will learn" everything needed. That might even be true. But "will learn" can be three years, five years, down the line.

One of the things we ask on our interview is a sort of "fizzbuzz-plus" style coding problem (you can see a similar problem - the one we send candidates to prep them - at https://www.otherbranch.com/practice-coding-problem if you want to calibrate yourself on what I'm about to say). It is not a difficult problem by any normal standard. It requires nothing more than some simple conditional logic, a class or two, and the basic syntax of the language you're using.

Many apparently-passionate, energetic, and curious engineers simply cannot do it. The example problem I linked is a bit easier than our real one, but I have reliable data on the real one, which tells me that sixty-two percent of candidates who take it do not complete even the second step.

Now, is this artificial? Yeah, but it's artificially easy. It involves little of the subtlety and complexity of real systems, by design. And yet very often we get code that (translated into the example problem I linked) is the rough equivalent of:

  print("--*-")
with no underlying data structure, or

  if (row == 1 && col == 3)
where the entire board becomes an immutable n^2 case statement that would have to be wholly rewritten if the candidate were to ever get to later steps of the problem.

Would you recommend someone who wrote that kind of code, no matter how apparently curious, passionate, or energetic they were?
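
For contrast, a minimal sketch (Python, with hypothetical names - the real problem isn't reproduced here) of the kind of structure a problem like this presumably rewards: the board lives in a data structure, and rendering is derived from it, so later steps don't force a rewrite.

  # Hypothetical sketch: board state kept in a data structure,
  # with rendering derived from it rather than hard-coded.
  class Board:
      def __init__(self, rows, cols):
          self.rows, self.cols = rows, cols
          self.marks = set()  # (row, col) pairs that hold a piece

      def place(self, row, col):
          self.marks.add((row, col))

      def render(self):
          return "\n".join(
              "".join("*" if (r, c) in self.marks else "-" for c in range(self.cols))
              for r in range(self.rows)
          )

  board = Board(1, 4)
  board.place(0, 2)
  print(board.render())  # --*-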

bluepizza
0 replies
4h46m

This has been one of the most interesting and insightful replies I've ever got in a discussion. Thank you for taking the time and explaining your points. It really resonates with what I've seen in real life.

krosaen
0 replies
6h35m

Yegge no longer thinks knowing these kinds of details matters so much. I can't find the exact timestamp, but it relates to the skills shifting toward being really good at prompting and quickly learning what is relevant:

https://youtu.be/dTVXTo6Umfg?si=bBBwSutb6ned7o-X

MrDarcy
0 replies
2h35m

I remember reading Yegge's blog post, years ago, saying that an engineer needed to know bitwise operators, otherwise they were not good enough.

I think he's right. Nearly every engineer needs a basic understanding of IP routing more today than in the 2000s, given how connected the cloud is. Few engineers have one, however.

Every time you need to ask yourself, "Where is this packet going?" you're using bitwise operators.
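
A minimal Python sketch of that idea (hypothetical function, standard library only): deciding whether a destination is on the local subnet is one bitwise AND against the netmask.

  # "Is this destination on my subnet?" - one bitwise AND per address.
  import ipaddress

  def same_subnet(ip_a: str, ip_b: str, netmask: str) -> bool:
      mask = int(ipaddress.IPv4Address(netmask))
      a = int(ipaddress.IPv4Address(ip_a))
      b = int(ipaddress.IPv4Address(ip_b))
      return (a & mask) == (b & mask)  # same network bits -> same subnet

  print(same_subnet("192.168.1.10", "192.168.1.200", "255.255.255.0"))  # True
  print(same_subnet("192.168.1.10", "192.168.2.1", "255.255.255.0"))    # False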

nostrademons
18 replies
18h42m

One person can do the work of 3 and regularly does in startups.

I think that what the MBAs miss is this phenomenon of overconstraint. Once you have separated the generic role of "developer" into "developer, operations, and security", you've likely specified all sorts of details about how those roles need to be done. When you combine them back into DevSecOps, all the details remain, and you have one person doing 3x the work instead of one person doing the work 3x more efficiently. To effectively go backwards, you have to relax constraints, and let that one person exercise their judgment about how to do the job.

A corollary is that org size can never decrease, only increase. As more employees are hired, jobs become increasingly specialized. Getting rid of them means that that job function is simply not performed, because at that level of specialization, the other employees cannot simply adjust their job descriptions to take on the new responsibilities. You have to throw away the old org and start again with a new, small org, which is why the whole private equity / venture capital / startup ecosystem exists. This is also why Gall's Law exists:

https://en.wikipedia.org/wiki/John_Gall_(author)#Gall's_law

IgorPartola
8 replies
17h19m

I think there is another bit to this which is cargo cult tendencies. Basically DevOps is a great idea under certain circumstances and works well for specific people in that role. Realistically if you take a top talent engineer they can likely step into any of the three roles or even some others and be successful. Having the synergy of one person being responsible for the boundary between two roles then makes your ROI on that person and that role skyrocket.

And then you evangelize this approach and every other company wants to follow suit but they don’t really have top talent in management or engineering or both (every company claims to hire top talent which obviously cannot be true). So they make a poor copy of what the best organizations were doing and obviously it doesn’t go well. And the thing is that they’ve done it before. With Agile and with waterfall before that, etc. There is no methodology (organizational, programming, testing, etc.) that can make excellence out of mediocrity.

andreasmetsala
3 replies
7h53m

There is no methodology (organizational, programming, testing, etc.) that can make excellence out of mediocrity.

That’s a pretty toxic statement. No one was born writing perfect code, every one of us needed to learn. An organizational culture that rewards learning is the way to produce an organization that produces excellent software.

It doesn’t matter how excellent someone is if the organization puts them to work on problems that nobody needs solved. The most perfect piece of software that nobody uses is a waste of bytes no matter how well it’s written.

Aeolun
1 replies
7h48m

Sure, everyone can learn, but what leads to mediocrity is a lack of desire to learn.

You can certainly structure your organisation around that and achieve great results. There’s no need for excellent tech to solve most problems. That’s ostensibly true for almost everything, because our society wouldn’t be able to survive if everything needed to be excellent.

rachofsunshine
0 replies
4h59m

A lot of different things can lead to mediocrity, and lack of desire to learn is only one of them.

Sometimes you want to learn, but your talents fail you.

Sometimes you want to learn, and you learn from someone who isn't good at the thing.

Sometimes you want to learn, and your environment punishes effectiveness.

Sometimes you want to learn, and you learn correctly for the wrong context.

Yes, people who give a damn about doing their job well are often good employees. But the converse is not true: many people who are not good employees do in fact care a great deal. There's no need to convert a descriptive statement about results into a moral judgement of a person's work ethic, and it's often just not factually correct to do so.

osigurdson
0 replies
5h20m

> That’s a pretty toxic statement

While we might not like it to be true, no amount of process can turn the average person into a Michael Jordan, a Jimi Hendrix or a Leo Tolstoy. Assuming that software engineering talent is somehow a uniform distribution, and not a normal one like every other human skill, is likely incorrect. Don't conflate "toxic" with "true".

1dom
2 replies
9h32m

There is no methodology (organizational, programming, testing, etc.) that can make excellence out of mediocrity.

This is a thought provoking phrase, and I liked that. Thinking a bit deeper, I'm not sure if it's accurate, practical or healthy though.

I've seen mediocrity produce excellent solutions, and excellence churn out rubbish, and I suspect most people with a few years of tech jobs under their belt have too.

You could argue that if they turned out excellent things, then by definition, they're excellent, but that then makes the phrase uselessly tautological.

If it's true, then what's the advice to mediocre people and environments - just give up? Don't waste your time on excellent organisation, programming, testing, because it's a waste and you'll fail?

I think there's no 1 thing that can make excellence out of mediocrity for everyone. But I like to think that for every combination of work, people and environment, there's an optimal set of practices that will lead to them producing something better than average. Obviously a lot don't find that set of practices due to the massive search space.

highfrequency
0 replies
2h6m

Your emphasis on these two ideas resonates with me:

1. Optimal practices are highly context specific.

2. The search space is very high dimensional and more often than not we are blind to most of it.

Care to elaborate on these points with anecdotes?

IgorPartola
0 replies
5h47m

I suppose I should have phrased that a bit more kindly, and that's on me. What I mean is that you can't take your average jogger and expect them to win the 100 meter dash at the Olympics, no matter how much training and resources you provide them. Is that a real problem? No, it certainly is not.

Lots of people like rock climbing but you can’t expect an average climber to win a major competition or climb some of the toughest mountains. It doesn’t mean they shouldn’t enjoy climbing, but if they start training like they are about to free solo El Capitan they are very likely to get seriously hurt.

Average developers, operators, and security people can do plenty of good work. But put them in a position that requires them to do things way outside their area of expertise, and no matter how you structure that role and what tools you give them, you are setting them up for failure.

Another thing I was thinking about was not individual people being average but rather organizations: an average, mediocre organization cannot reliably expect excellence even out of its excellent employees. The environment just isn't set up for success. I am sure most people here are familiar with talented people working in an organization that decided to poorly adopt Agile because it's the thing that will fix everything.

SoftTalker
0 replies
16h11m

There is no methodology...that can make excellence out of mediocrity.

And yet we keep trying. We continually invent languages, architectures, and development methodologies with the thought that they will help average developers build better systems.

makeitdouble
4 replies
10h38m

One person can do the work of 3 and regularly does in startups.

Startup architecture and a 500+ engineer org's architecture are fundamentally different. The job titles are the same, but won't reflect the actual work.

Of course that's always been the case, and applies to the other jobs as well. What a "head of marketing" does at a 5 person startup has nothing to do with what the same titled person does at Amazon or Coca Cola.

I've also seen many orgs basically retitling their infra team members as "devops" and calling it a day. Communication between devs from different parts of the stack has become easier, and there will be more "bridge" roles in each team, with an experienced dev also making sure it all works well, but I don't see any company that cares about its stack and performance firing half of its engineers because of a new fancy trend in the operations community.

Aeolun
3 replies
7h51m

Startup architecture and a 500+ engineer org's architecture are fundamentally different

Certainly. The startup architecture is often better. I don't know what exactly leads to those overcomplicated enterprise things, but I suspect it's a lack of limitations.

badpun
1 replies
7h36m

Large org architecture is, most importantly, just much larger. The bank I worked in had 6000 _systems_. A lot of that was redundancy due to a lot of mergers and acquisitions that didn't get cleaned up (because it was really hard to do). Compare that to your typical SaaS startup, which typically has at most a couple systems.

ghaff
0 replies
5h4m

Right. You can't run a 100 or even 1,000 person organization the same way you run a 25,000 person organization and many of the people roles are different. The 25,000 person organization can do more but it needs a lot more process to function. For example, the person just doing what they see as needing to be done in the small organization becomes a random loose cannon in the large one.

Someone mentioned DevSecOps upthread. The breaking down of organizational barriers really was more of a smaller company sort of thing. Platform Engineering and SREs are a better model of how this has evolved at scale.

serallak
0 replies
5h28m

Somebody said: "God was able to create the world in 6 days because he didn't have an installed base."

romanows
0 replies
16h32m

This is a great observation, thanks.

renegade-otter
0 replies
2h54m

In a startup you get significantly more focus time than in a large company. Especially if there is no production yet - there are no clients, no production bugs, no on-call.

In a larger company, literally 80% of your job is meetings, Slack, and distractions.

nox101
0 replies
9h41m

I think it has less to do with judgement and more to do with the fact that one person can do the work of 3 in startups because there are several orders of magnitude less coordination and communication that need to happen.

If you have 5 people in a startup you have 10 connections between them, 20 people = 190 connections, 100 = 4950 connections, 1000 people = 499500 connections.

Sure, you split them into groups with managers and managers' managers etc. to break down the connections to less than the max, but it's still going to be orders of magnitude more communication and coordination than in a startup.
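
(Those numbers are just n(n-1)/2, the number of distinct pairs. A quick sanity check in Python:)

  # Pairwise connections between n people: n choose 2 = n*(n-1)/2
  for n in (5, 20, 100, 1000):
      print(n, n * (n - 1) // 2)  # -> 10, 190, 4950, 499500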

namaria
0 replies
11h3m

How do you explain the cyclical layoffs, if organizations aren't able to decrease in size?

ipaddr
8 replies
16h41m

Developers of the past worked towards the title of webmaster. A webmaster can manage a server, write the code and also be a graphic artist. Developers want to do it all.

What has changed is the micromanaging of the daily standup, which reshapes work into neat packages for conversation but kills non-linear flow and limits exploration, making things exactly as requested instead of what could be better.

neom
2 replies
15h34m

This comment had me laughing pretty hard, and thanks for making me feel old. Anyway, I guess I'm a "webmaster" in theory even though I've not worked on the web since the early 2000s (still handy with the LAMP stack, though). It made me laugh because at a Supabase meetup recently, some kid told me he was full stack, but he can't ssh into a server? I'm supa confused about what full stack means these days.

ehnto
1 replies
13h11m

I have stopped calling myself a full stack developer, because the meaning and the role are kind of ambiguous now. Clients don't know what they can talk to me about (everything), and PMs seem to be unsure of what I can be assigned to (anything).

In fairness as well, the frontend tooling landscape has become so complex that while I'm capable of jumping in, I am in no way an efficient frontend developer off the rip.

duggan
0 replies
10h27m

The tooling has become complex, sure, but so have capabilities and expectations.

I used to “make websites” in the 2000s, then stopped for about 15 years to focus on backend and infrastructure.

Have been getting back into front end the last few months — there was a good bit of learning and unlearning, but the modern web stack is amazingly productive. Next.js, Tailwind, and Supabase in particular.

Last time I checked in to frontend dev, CSS directives fit on an A4 sheet, valid XHTML was a nerd flex and rounded corners were painstakingly crafted from tables and gifs.

Frontend is a dream now :)

SoftTalker
2 replies
16h1m

I never thought of a webmaster as meaning that. When I think of webmaster I think of a person who updates the content on a website using a CMS of some sort. Maybe they know a bit of HTML and CSS, how to copy assets to the web server, that sort of thing. But they are not sysadmins or programmers in the conventional sense. Maybe it just varies by employer.

llIIllIIllIIl
0 replies
15h51m

That’s just a more junior mastery of a true webmaster.

ehnto
0 replies
13h19m

They were talking about ye olden days, I believe. Webmasters used to do every aspect of web development, hosting and all.

There are still people who do that at smaller companies, but you wouldn't call them a webmaster anymore.

chrismarlow9
1 replies
15h50m

What has also changed in my opinion is the vast landscape of tooling and frameworks at every level of the stack.

We now have containers that run in VMs that run on physical servers. And languages built on top of JavaScript, backed by the Shadow DOM, and blah blah. Now sure, I could easily skip all that and stick a static page on a CDN that points to a lambda and call it a day. But the layers are still there.

I'm afraid only more of this is coming with AI in full swing, and I fully expect a larger-scale internet outage to happen at some point, the result of a subtle bug in all this complexity that no single person can fix because AI wrote it.

There's too much stuff in the stack. Can we stop adding and remove some of it?

immibis
0 replies
2h42m

a lambda

A CGI script, even, which Lambda is the cloudified version of.

stuckkeys
7 replies
22h21m

Sadly, it is an ongoing trend. I have become so discouraged by this subject that I am evaluating my career choices. Farming is the safest bet for now.

malfist
3 replies
20h6m

Have you actually done farming? I grew up on a farm. It's nowhere close to a safe bet.

It's capital-intensive, high-risk, hard work with low margins. Not at all like Stardew.

djbusby
1 replies
19h43m

Personal horticulture is fun and rewarding! Grow your own lettuce, tomato, etc.

BigAg is HARD WORK

malfist
0 replies
17h20m

Absolutely. I love having my own garden and growing food for myself and friends. I give away a ton of food with what I grow.

I'll never run a farm.

Closest thing I might come to is a florist's greenhouse, but that's probably still a no go

randomdata
0 replies
14h19m

Presumably he means safe with respect to task consolidation, as per the topic of discussion. Which I would agree is more or less true, but only because farming already went through its consolidation phase. Hence the old saying that goes something like: "A farmer has to be a mechanic, an engineer, a scientist, a veterinarian, a business manager, and an optimist—all in a single day."

As a farmer, if you think programming, CI/CD pipeline management, and database administration being consolidated into one job is a line too far... Brace yourself!

stuckinhell
0 replies
22h10m

I'm worried this is the case as well.

smellybigbelly
0 replies
13h17m

I wonder how you view farming as the safest bet. Farming is quite challenging and the competition will drive any noob into the ground. Not just the knowledge but also capital.

huuhee3
0 replies
21h49m

Nursing is pretty safe too, simply due to demographics and limitations of robotics.

TeMPOraL
4 replies
10h10m

This is happening across the board, not just to IT workers, and I suspect it's a major factor in why the expected productivity improvements from technology didn't materialize.

Think of your "DevSecOps" people doing 3x the work they should. Do you know what else they are doing? Booking their own business travel. Reconciling and reporting their own expenses. Reporting their own hours, broken down by business categories. Managing their own vacation days. Managing their own meetings. Creating presentations on their own, with graphics they made on their own. Possibly even doing 80% of the work on lining up purchases from third parties. And a bunch of other stuff like that.

None of these are part of their job descriptions - in fact, all of these actively distract from and disproportionately compromise the workers' ability to do their actual jobs. All of these also used to have dedicated specialists, who could do the work 10x as efficiently, for a fraction of the price.

My hypothesis is this: those specialists - secretaries, internal graphics departments, financial staff, etc. - were all visible on the balance sheet. Eliminating those roles does not eliminate the need for their work to be done - it just distributes it to everyone in pieces (in big part thanks to self-serve office software "improving" productivity). That slows everyone down across the board disproportionately, but the beancounters only see the money saved on the salaries of the eliminated roles - the slowdown only manifests as a fuzzy, generic sense of lost productivity, a mysterious cost disease that everyone seems to suffer from.

I say it's not mysterious; I say that there is no productivity gain, but rather a productivity loss - but because it turns costs from legible and overt into diffuse and hard to count, it's easy to be fooled into thinking money is being saved.

sgt101
2 replies
9h44m

And yet we are told that competition ensures that capital and talent flow towards the most efficient organisations over time. Thus, surely organisations that eschewed this practice would emerge and dominate?

So, either capitalism doesn't work, or your thesis isn't quite right...

I have two other counters to offer. First, we have seen GDP per capita gradually increasing in major economies for the last 50 years (while the IT revolution has played out). There have been other technical innovations over this time, but I believe that GDP per capita has more than quadrupled in G8 economies. The USA and Canada have, at the same time, enjoyed a clear extra boost from fracking and shale extraction, and the USA has arguably enjoyed an extra extra boost from world dominance - but arguably.

The second one is simple anecdote. Hour for hour, I can now do far more in terms of development than I did when I was a hard-core techie in the 90's and 2000's. In addition, I can manage and administer systems that are far more complex than those it took teams of people to run at that time (try running a 10 GB database under load on Oracle 7, sitting on top of spinning rust and a 64 MB RAM store, for fun). I can also manage a team of 30's expenses, timesheets, travel requests and so on, which again would have taken a dedicated person to do. I can just do these things and my job as well, and I do it mostly in about 50 hrs a week. If I wasn't involved in my people's lives and happy to argue with customers to get things better, I could regularly do it in 40 hrs, for sure. But I put some discretion in.

My point is - we are just more productive. It is hard to measure, and anecdote / "lived experience" is a bad type of evidence, but I think it's clearly there. This is why the accountants have been able to reorganise modern business organisations to use fewer people to do more. Have they destroyed value while doing this? Totally. But they have managed to get away with it because 7 times out of 10 they have been right.

Personally, I've suffered from the 3-out-of-10 errors. I know many of us on here have, but we shouldn't shut our eyes because of that.

Retric
1 replies
6h30m

And yet we are told that competition ensures that capital and talent flow towards the most efficient organisations over time. Thus, surely organisations that eschewed this practice would emerge and dominate?

That's really not how competition works in practice. Verizon and AT&T are a mess internally, but their competitors were worse.

GDP per capita has a lot more to do with automation than individual worker productivity. Software ate the world, but it didn’t need to be great software to be better than no software.

At large banks you often find thousands of largely redundant systems from past mergers all chugging along at the same time. Meanwhile economies of scale still favor the larger bank because smaller banks have more systems per customer.

So sure you’re working with more complex systems, but how much of that complexity is actually inherently beneficial and how much is legacy of suboptimal solutions? HTML and JavaScript are unbelievably terrible in just about every way except ubiquity thus tools / familiarity. When we talk about how efficient things are, it’s not on some absolute scale it’s all about the tools built to cope with what’s going on.

AI may legitimately be the only way programmers in 2070 deal with ever more layers of crap.

Viliam1234
0 replies
5h13m

As long as you have the economies of scale and huge barriers to entry, companies can stay deeply dysfunctional without getting outcompeted. Especially when the same managers rotate between them all and introduce the "best practices" everywhere.

Viliam1234
0 replies
3h7m

Think of your "DevSecOps" people doing 3x the work they should. Do you know what else they are doing? Booking their own business travel. Reconciling and reporting their own expenses. Reporting their own hours, broken down by business categories. Managing their own vacation days. Managing their own meetings. Creating presentations on their own, with graphics they made on their own. Possibly even doing 80% of the work on lining up purchases from third parties. And a bunch of other stuff like that.

It feels like you are working at the same company as me.

Companies complain all the time about how difficult it is to find competent developers, which is their excuse for keeping most of the teams understaffed. Okay, then how about increasing the developers' productivity by letting them focus on, you know, development?

Why does the paperwork I need to do after visiting a dentist during the lunch break take more time than the visit itself? It's not enough just to bring the receipt to HR; I need to scan the paper, print the scan, get it signed by a manager, start a new process in a web application, ask the manager to also sign it in the web application, etc. I need to check my notes every time I am doing it, because the web application asks me for a lot of information that in theory it should already know, the attached scan needs to have a properly formatted file name, and I need to figure out the name of the person I should forward this process to, rather than the application figuring it out itself. Why??? Business travel was even worse; luckily my current company doesn't do it frequently.

My work is defined by Jira tickets that mostly contain a short description like "implement XY for Z", and it's my job to figure out wtf is "Z", who is the person in our company responsible for "Z", what exactly they meant by "XY", where is any specification, when is the deadline, who am I supposed to coordinate with, and who will test my work. I miss the good old days when we had some kind of task description and the definition of done, but those were the days when we had multiple developers in a team, and now it's mostly just me.

I get invitations to meetings that do not concern me or any of my projects, but it's my job to figure that out, not the job of the person who sent the invitations. Don't get me started on e-mails, because my inbox only became manageable after I wrote a dozen rules that put various junk in the spam folder. No, I don't need a notification every time someone in the company edits any Confluence page. No, I don't need notifications about people committing code to projects I am not working on. The remaining notifications often come in triplicate, because first I get a message in Teams, then an e-mail saying that I got a message in Teams, and finally a Windows notification saying that I got a new e-mail. When I return from a vacation, I spend my first day or two just sorting out the flood in my e-mails.

On some days, it is lunchtime before I have had an opportunity to write my first line of code. So it's the combination of being Agile-Full-Stack-Dev-Sec-Ops-Cloud-Whatever and the fact that everything around me seems designed to make my work harder that is killing me. This is a system that slows down 10x developers to 1x developers, and us lesser mortals to mere 0.1x developers.

ozim
1 replies
3h48m

For me, DevOps/DevSecOps is a reaction movement against toxic turf wars and functional silos built by ambitious people - not some business scheme to reduce headcount and push more responsibilities onto fewer people.

I have received e-mails like "hey, that's the DB, don't touch that stuff, you are not a DBA" or "hey, developers shouldn't do QA". While the premise might be right, lots of things could be done much quicker.

I have seen stuff thrown over the wall - "hey, I am a developer, I don't care about some server stuff", "hey, my DB is working fine, it must be your application" - or months spent fixing an issue because no one would take responsibility across the chain.

elric
0 replies
3h17m

For me, DevOps/DevSecOps is a reaction movement against toxic turf wars and functional silos built by ambitious people

Well, the DevOps grandfathers (sorry, Patrick & Kris, but you're both grey now) certainly wanted to tear down the walls that had been put up between Devs & Ops. Merging Dev & Ops practices has been a fundamentally good change. Many tasks that used to be dedicated Ops/Infra work are now so automated that a Dev can do them as part of their daily work (e.g. spinning up a test environment or deploying to production). This has been, in a sense, about empowerment.

The current "platform engineering"-buzz builds on top of that.

- not some business scheme to reduce headcount and push more responsibilities onto fewer people

I imagine that many business people don't understand tech work well enough to deliberately start such schemes. Reducing toil could probably result in lower headcount (no one likes being part of a silo that does the same manual things over and over again just to deploy to production), but by the same token, the automations don't come free. They have to be set up and maintained. Once one piece of drudgery has been automated, something else will rear its ugly head. Automating the heck out of boring shit is not only more rewarding work, it's also a force multiplier for a team. I hope business people see those benefits and aim for them, instead of the aforementioned scheming.

moi2388
1 replies
6h12m

From my experience it’s agile/scrum being poorly implemented.

So many companies no longer think about quality or design. “Just build this now and ship it, we can modify it later”, not thinking about the ramifications.

No thinking about design at all anymore, then having tech debt but not allocating any sprints to mitigate it.

Viliam1234
0 replies
4h58m

Sometimes it feels like there is no planning... and, as a consequence, also no documentation. Or maybe there are tons of outdated documents, because the project is so agile that it was redesigned a few dozen times, but instead of updating the original documents and diagrams, the architects only ever produced a diff ("change this to this"), and those diffs are randomly placed in Confluence, most of them as attached Word or PowerPoint documents.

"But developers hate to write documentation" they say. Okay genius, so why don't you hire someone who is not a developer, someone who doesn't spend their entire time sprinting from one Jira task to another, someone who could focus on understanding how the systems work and keeping the documents up to date. It would be enough to have one such person for the entire company; just don't also give them dozen extra roles that would distract them from their main purpose. "Once we hired a person like this, but they were not competent for their job and everyone complained about docs, so we gave up." Yeah, I suspect you hired the cheapest person available, and you probably kept giving them extra tasks that were always higher priority than this. But nice excuse. Okay, back to having no idea how anything works, which is not a problem because our agile developers can still somehow handle it, until they burn out and quit.

lispisok
1 replies
23h32m

Like most things the decline in quality is probably multi-faceted. There is also the component where tech became a hot attractive field so people flooded in who only cared about the paycheck and not the craft.

trashtester
0 replies
22h5m

That definitely happened in the dotcom bubble. Plenty of "developers" were crowding the field, many of whom had neither real technical ability nor interest.

The nerds who were into programming based on personal interest were really not affected.

Those who have tech as a passion will generally outperform those who have it as a job, by a large margin.

But salary structures tend to ignore this.

yieldcrv
0 replies
8h29m

Partially agree

It is so much easier to deploy now (and for the last 5-10 years) without managing an actual server and OS

It just gets easier, with new complexities added on top

In 2014 I was enamored that I didn’t need to be a DBA because platforms as a service were handling all of it in a NoSQL kind of way. And exposed intuitive API endpoints for me.

This hasn’t changed, at worst it was a gateway drug to being more hands on

I do full-stack development because it's just one language; I do devops because it's not a full-time job and CloudFormation scripts and further abstractions are easyish; I can manage the database, and I haven't gotten vendor-locked.

You don't have to wait 72 hours for DNS and domain assignments to propagate anymore; it's like 5 minutes. SSL is free and takes 30 minutes tops to be added to your domain, and CDNs are included. Over 10 years ago this was all so cumbersome.

spacecadet
0 replies
3h45m

This is happening across most industries. My friends in creative fields are experiencing the same "optimizations". I call this the great lie of productivity. Workers (particularly tech workers) have dreamed of reducing their time spent doing "shit jobs" while amplifying their productivity, so that we can spend more time doing what is meaningful to us... In reality, businesses sell out to disconnected private equity and chase "growth" just to boost the wealth of the top. Ultimately this has played out as "optimizing" the workforce by reducing headcount and spreading workers over more roles.

skinney6
0 replies
4h50m

Remember 10x? "Are you a 10x developer?!" lol

rgblambda
0 replies
8h38m

At my company, our sprint board is copy/pasted across teams, so there are columns like "QA/Testing" that just get ignored because our team has no testers.

There are also no platform engineers, but IaC has gotten so good that arguably they've become redundant. Architecture decisions get made on the fly by team members rather than by the Software Architect, who only shows up now and again to point out something trivial. There's no Product Owner either, so again the team works out the requirements and writes the tickets (ChatGPT can't help there).

badpun
0 replies
7h44m

In every team I've worked with, DevOps didn't do any of the development, despite being called DevOps. The only thing they were developing was automation for builds, tests and deployment. Other than that, they're still a separate role from development. The main difference between the pre-DevOps days and now is that operations people used to work in dedicated operations teams (that's still the case in some highly regulated places, e.g. banks), and now they work alongside developers.

agumonkey
0 replies
8h55m

devops is a great idea, but if the design/engineering part is not actually easier, it ends up as additional mental effort

ThinkBeat
0 replies
6h39m

I think this is a valid and important point.

You can get people who are average at several things and categorize them as DevOps. But you will not get someone with a deep background and understanding in all the fields at the same time.

I come from a back-end background, and I am appalled at how little the DevOps people I have worked with know about even SQL.

Having teams of people who are experts at different things will give a lot better output. It will also be more expensive.

Most DevOps people I have met, with a couple of exceptions, are front-end devs who know JavaScript and TypeScript. When it comes to the back end, their approach is to get everything possible from npm and string it together.

KronisLV
0 replies
4m

Now you have new devs coming into insanely complex n-microservice environments, being asked to learn the existing codebase, being asked to learn their 5-tool CI/CD pipelines (and that ain't being taught in school), being asked to learn to be DBAs, and also to keep up a steady code release cycle.

If fewer people need to undertake more roles, I think the simplest things you can get away with should be chosen, yet for whatever reason that's not what's happening.

Need a front-end app? Go for the modern equivalent of jQuery/Bootstrap, e.g. something like Vue, Pinia and PrimeVue (you get components out of the box, you can just use them, you don't have to build a whole design system, and if needed you can still do theming). It's also simpler than similar setups with Vuex or Redux in the React world.

Need a back end app? A simple API only project in your stack of choice, whether that's Java with Dropwizard (even simpler than Spring Boot), C# with ASP.NET (reasonably simple out of the box), PHP with Laravel, Ruby with Rails, Python with Flask/Django, Node with Express etc. And not necessarily microservices but monoliths that can still horizontally scale. A boring RESTful API that shuffles JSON over the wire, most of the time you won't need GraphQL or gRPC.
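
As a rough sketch of how small such a back end can start (a hypothetical endpoint, assuming Flask; the other stacks above would look similar):

  # Minimal sketch of a boring RESTful API that shuffles JSON over the wire.
  from flask import Flask, jsonify, request

  app = Flask(__name__)
  items = []  # in-memory store; swap for PostgreSQL in real use

  @app.get("/items")
  def list_items():
      return jsonify(items)

  @app.post("/items")
  def add_item():
      items.append(request.get_json())
      return jsonify(items[-1]), 201

  if __name__ == "__main__":
      app.run()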

Need a database? PostgreSQL is pretty foolproof, MariaDB or even SQLite can also work in select situations. Maybe something like Redis/Valkey or MinIO/SeaweedFS, or RabbitMQ for specific use cases. The kinds of systems that can both scale, as well as start out as a single container running on a VPS somewhere.

Need a web server? Nginx exists, Caddy exists, as does Apache2.

Need to orchestrate containers? Docker Compose (or even Swarm) still exist, Nomad is pretty good for multi node deployments too, maybe some relatively lightweight Kubernetes clusters like K3s with Portainer/Rancher as long as you don't go wild.

CI/CD? Feed a Dockerfile to your pipeline, put the container in Nexus/Artifactory/Harbor/Hub, call a webhook to redeploy, let your monitoring (e.g. Uptime Kuma) make sure things remain available.

Architectures that can fit in one person's head. Environments where you can take every part of the system and run it locally in Docker/Podman containers on a single dev workstation. This won't work for huge projects, but very few actually have projects that reach the scale where this no longer works.

Yet, this is clearly not what's happening, and that puzzles me. If we don't have 20 different job titles involved in a project, then the complexity covered by the "GlassFish app server configuration manager" position shouldn't be there in the first place. (I once had a project like that: there was supposed to be a person involved who'd configure the app server for the deployments, until people just went with embedded Tomcat inside deployable containers, and that complexity suddenly dissipated.)

alternatex
101 replies
1d5h

I wonder if the study includes the technical debt that more experienced developers had to tackle after the less experienced devs have contributed their AI-driven efforts. Because my personal experience has involved a lot of that in one of the companies listed in the study.

Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.

sgarland
23 replies
1d5h

Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.

Thank you, this says what I have been struggling to describe.

The day I lost part of my soul was when I asked a dev if I could give them feedback on a DB schema, they said yes, and then cut me off a few minutes in with, “yeah, I don’t really care [about X].” You don’t care? I’m telling you as the SME for this exactly what can be improved, how to do it, and why you should do so, but you don’t care. Cool.

Cloud was a mistake; it’s inculcated people with the idea that chasing efficiency and optimization doesn’t matter, because you can always scale up or out. I’m not even talking about doing micro-benchmarks (though you should…), I’m talking about dead-simple stuff like “maybe use this data structure instead of that one.”
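
To make "use this data structure instead of that one" concrete, here is a toy Python example (a hypothetical scenario): membership tests against a list scan every element, while a set does a single hash lookup.

  # "Maybe use this data structure instead of that one":
  # membership in a list is O(n); in a set it is O(1) on average.
  banned_list = list(range(1_000_000))
  banned_set = set(banned_list)

  user_id = 999_999
  print(user_id in banned_list)  # scans up to a million elements
  print(user_id in banned_set)   # single hash lookup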

dartos
13 replies
1d5h

I’ve noticed this more and more since 2020.

A lot of people who entered the field in the past 6 or so years are here for the money, obviously.

Nothing wrong with that at all, but as someone with a long time programming and technology passion, it’s sad to see that change.

choilive
7 replies
1d1h

It's been a while since my undergrad (>10 yrs), but many of my peers were majoring in CS or EE/CE because of the money; at the time I thought that was a bit depressing as well.

With a few more years under my belt, I realized there's nothing wrong with doing good work and providing yourself/your family a decent living. Not everyone needs passion for their field, or to become among the best or a "10x"er, to contribute. We all have different passions, but we all need to pay the bills.

neilv
6 replies
1d

Yeah, I think that the quality of work (skill + conscientiousness), and the motivation for doing the work, are two separate things.

Off-the-cuff, three groups:

1. There are people who are motivated by having a solid income, yet they take the professionalism seriously, and do skilled rock-solid work, 9-5. I'd be happy to work with these people.

2. There are other people who are motivated by having a solid or more-than-solid income, and (regardless of skill level), it's non-stop sprint performance art, gaming promotion metrics, resume-driven development, practicing Leetcode, and hopping at the next opportunity regardless of where that leaves the project and team.

3. Then there's those weirdos who are motivated by something about the work itself, and would be doing it even if it didn't pay well. Over the years, these people spend so much time and energy on the something, that they tend to develop more and stronger skills than the others. I'd be happy to work with these people, so long as they can also be professional (including rolling up sleeves for the non-fun parts), or amenable to learning to be professional.

Half-joke: The potential of group #3 is threatening to sharp-elbowed group #2, so group #2 neutralizes them via frat gatekeeping tactics (yeah-but-what-school-did-you-go-to snobbery, Leetcode shibboleth for nothing but whether you rehearsed Leetcode rituals, cliques, culture fit, etc.).

Startups might do well to have a mix of #3 and #1, and to stay far away from #2. But startups -- especially the last decade-plus of too many growth investment scams -- are often run by affluent people who grew up being taught #2 skills (for how you game your way into prestigious school, aggressively self-interested networking and promoting yourself, etc.).

dartos
2 replies
1d

I’m in group 3 for sure.

The hardest part of any job I’ve had is doing the not so fun parts (meetings, keeping up with emails, solidly finishing work before moving on to something new)

neilv
0 replies
1d

You're on the nominally right Web site. #1 is engineers, #2 is cancer, and #3 is hackers (possibly also engineers). :)

XenophileJKO
0 replies
11h6m

As I progressed, I've learned I am more valuable for the company working on things I am interested in. Delegate the boring stuff to people that don't care. If it is critical to get it done right, do it yourself.

zquzra
0 replies
20h53m

#3 is the old school hacker, "a high-powered mutant of some kind never even considered for mass production. Too weird to live, and too rare to die" :-)

sgarland
0 replies
21h24m

I would definitely do this job even if it paid half as well. Nothing else is nearly as interesting to me, and I’ve tried quite a range.

sdiupIGPWEfh
0 replies
1h46m

Blend of #1 & #3 here. Without the pay, I don't think I'd muster the patience for dealing with all the non-programming BS that comes with the job. So I'd have found something else with decent pay, and preferably not an office job. Sometimes I wish I'd taken that other path. I have responsibilities outside work, so I'm rarely putting in more than 40 hours. Prior to family commitments, I had side projects and did enough coding outside work. I still miss working on hobby video game projects. But I do take the professionalism seriously and will do the dirty work that has to be done, even if means cozying up to group #2 to make things happen for the sake of the project.

sgarland
3 replies
1d5h

There’s certainly nothing wrong with enjoying the high pay, no – I definitely do. But yeah, it’s upsetting to find out how few people care. Even moreso when they double down and say that you shouldn’t care either, because it’s been abstracted away, blah blah blah. Who do you think is going to continue creating these abstractions for you?

ebiester
1 replies
1d1h

...Someone else who thinks of actual web applications as an abstraction.

In the olden days, we used to throw it over to "Ops" and say, "your problem now."

And Junior developers have always been overwhelmed with the details and under pressure to deliver enough to keep their job. None of this is new! I'm a graybeard now, but I remember seniors having the same complaints back then. "Kids these days" never gets old.

conductr
0 replies
14h10m

I get what you’re saying, but at the same time I feel like I encounter the real world results of this erosion constantly. While we have more software than ever, it all just kind of feels janky these days. I encounter errors in places I never had before, doing simple things. The other day I was doing a simple copy and paste operation (it was some basic csv formatted text from vs code to excel iirc) and I encountered a Windows (not excel or vs code) error prompt that my clipboard data had been lost in the time it took me to Alt+Tab and Ctrl+v, something I’ve been doing ~daily for 3 decades without any issues.

I’m more of a solo full stack dev and don’t really have first hand experience building software at scale and the process it takes to manage a codebase the size of the Windows OS, but these are the kinds of issues I see regularly these days and wouldn’t in the past. I also use macOS daily for almost as long and the Apple software has really tanked in terms of quality, I hit bugs and unexpected errors regularly. I generally don’t use their software (Safari, Mail, etc) when I can avoid it. Also have to admit lack of features is a big issue for me on their software.

bumby
0 replies
1d

I get the pragmatism argument, but I would like to think certain professions should hold themselves to a higher standard. Doctors, lawyers, and engineers have a duty to society IMO that runs counter to a “just mail it in to cash a paycheck” mentality. I guess it comes down to whether you consider software developers to be that same kind of engineer. Certainly I don’t want safety critical software engineers to have that cavalier attitude (although I’ve seen it).

jayd16
0 replies
18h21m

Maybe a lack of learned healthy shame from people joining the industry during the covid hiring spree.

carlmr
4 replies
1d

Cloud was a mistake; it’s inculcated people with the idea that chasing efficiency and optimization doesn’t matter, because you can always scale up or out.

Similarly, Docker is an amazing technology, yet it enabled the dependency tower of Babel that we have today. It enabled developers who don't care about cleaning up their dependencies.

Kubernetes is an amazing technology, yet it enabled developers who don't care to ship applications that constantly crash - but who cares, Kubernetes will automatically restart everything.

Cloud and now AI are similar enabler technologies. They could be used for good, but there are too many people that just don't care.

shadowgovt
3 replies
1d

The fine art of industry is building more and more elaborate complicated systems atop things someone deeply cares about to be used by those who don't have to deeply care.

How many developers do we imagine even know the difference between SIMD and SISD operators, much less whether their software stack knows how to take advantage of SIMD? How many developers do we imagine even know how RAM chips store bits or how a semiconductor works?

We're just watching the bar of "Don't need to care because a reliable system exists" move through something we know and care about in our lifetimes. Progress is great to watch in action.

blibble
1 replies
1d

How many developers do we imagine even know the difference between SIMD and SISD operators, much less whether their software stack knows how to take advantage of SIMD? How many developers do we imagine even know how RAM chips store bits or how a semiconductor works?

hopefully some of those that did a computer science degree?

(all of what you've said was part of mine)

shadowgovt
0 replies
1d

Globally, only 41% of software developers even have a bachelor's degree.

mistrial9
0 replies
23h32m

you insult fine art

jcgrillo
2 replies
19h36m

In a similar vein, some days I feel like a human link generator into e.g. postgres or kafka documentation. When docs are that clear, refined, and just damn good but it seems like nobody is willing to actually read them closely enough to "get it" it's just a very depressing and demotivating experience. If I could never again have to explain what a transaction isolation level is or why calling kafka a "queue" makes no sense at all I'd probably live an extra decade.

At the root of it, there's a profound arrogance in putting someone else in a position where they are compelled to tell you you're wrong[1]. Curious, careful people don't do this very often because they are aware of the limits of their knowledge and when they don't know something they go find it out. Unfortunately this is surprisingly rare.

[1] to be clear, I'm speaking here as someone who has been guilty of this before, now regrets it, and hopes to never do it again.

sgarland
0 replies
16h57m

I've taken to asking if people have read docs (and linking them to the correct page). So far it hasn't changed anything. Unfortunate.

kragen
0 replies
7h16m

last night someone told me about objdump -S. i wish i'd known about this many years ago; it would have saved me so much work

in my defense, though, it's possible the option didn't exist when i first read the objdump manual

noisy_boy
0 replies
10h27m

Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.

They are, or will be, management's darlings, because management too is all about delivering, without any interest in the technology either.

Well-designed technology isn't seen as a foundation anymore; it is merely a tool to keep the machine running. If parts of the machine are being damaged by the lack of judgement in the process, that shouldn't come in the way of this year's bonus; it'll be something to worry about in the next financial year. Nobody knows what's going to happen in the long term anyway; make hay while the sun shines.

The age of short-term is upon us.

acedTrex
17 replies
1d1h

Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.

Bingo, this, so much this. Every dev I know who loves AI stuff was a dev that I had very little technical respect for pre AI. They got some stuff done but there was no craft or quality to it.

simonw
6 replies
1d

I’m a developer who loves using AI, and I’m confident that I was a capable developer producing high quality code long before I started using AI tools.

mistrial9
2 replies
23h48m

OK, but you are not the sort that most of these comments are talking about. Wasn't your demo at PyCon making jaws drop among hard-core techies?

williamcotton
1 replies
22h37m

Generalizing the talent and enthusiasm of your peers based on the tools they enjoy to use is going to cause friction.

mistrial9
0 replies
2h48m

this is fun but also sort of startling to see https://www.williamcotton.com/articles/writing-web-applicati...

.. the exact same content for a screensaver with Todd Rundgren in 1987 on the then-new color Apple Macintosh II in Sausalito, California. A different screensaver called "Flow Fazer" was more popular and sold many copies. The rival at the time was "After Dark" .. whose founder had a PhD in physics from UC Berkeley but also turned out to be independently wealthy, and then one of the wealthiest men in the Bay Area after the dot-com boom.

wokwokwok
1 replies
18h8m

Hm… I think it's fair to say that, as a learning tool when you are not familiar with the domain, coding assistants are extremely valuable for everyone.

I wrote a Kotlin IDEA plugin in a day; I've never used Kotlin before, and the JetBrains UI framework is a dog's breakfast of obscure edge cases.

I had no skills in this area, so I could happily lean into the assistance provided to get the job done. And it got done.

…but, I don’t use coding assistants day to day in languages I’m very familiar with: because they’re flat out bad compared to what I can do by hand, myself.

Even using Python, generated code is often subtly wrong and it takes more time to make sure it is correct than to do it by hand.

…now, I would assume that a professional Kotlin developer would look at my plugin and go: that's a heap of garbage, you won't be able to upgrade that when a new version comes out (turns out, they're right).

So, despite being a (I hope) competent programmer I have three observations:

1) the code I built worked, but was an unmaintainable mess.

2) it only took a day, so it doesn’t matter if I throw it away and build the next one from scratch.

3) There are extremely limited domains where that’s true, and I personally find myself leaning away from LLM anything where maintenance is a long term goal.

So, the point here is not that developers are good/bad:

It's that the LLM-generated code is bad.

It is bad.

It is the sort of quick, rubbish prototyping code that often ends up in production…

…and then gets an expensive rewrite later, if it does the job.

The point is that if you’re in the latter phase of working on a project that is not throw away…

You know the saying.

Betty had a bit of bitter butter, so she mixed the bitter butter with the better butter.

But it made the better butter bitter.

rednafi
0 replies
5h27m

What you’re describing is the manifestation of Gell-Mann Amnesia in the AI space.

ben_w
0 replies
21h35m

If it were not for all the things in your profile, I would aver that the only devs I know that think otherwise of their coding abilities were students at the time.

Delk
2 replies
1d

For what it's worth, a (former) team mate who was one of the more enthusiastic adopters of gen AI at the time was in fact a pretty good developer who knew his stuff and wrote good code. He was also big on delivering and productivity.

In terms of directly generating technical content, I think he mostly used gen AI for more mechanical stuff such as drafting data schemas or class structures, or for converting this or that to JSON, and perhaps not so much for generating actual program code. Maybe there's a difference to someone who likes to have lots of program logic generated.

acedTrex
1 replies
1d

I have certainly used it for different mechanical things. I have copilot, pay for gpt4o etc.

I do think there is a difference between a skilled engineer using it for the mechanical things, and an engineer that OFFLOADS thinking/designing to it.

There's nuance everywhere, but my original comment was definitely about the people who attempt to lean on it very hard for their core work.

trashtester
0 replies
21h54m

Those bad engineers who offload critical thinking to copilot will be the first to leave....

wahnfrieden
1 replies
1d

You're playing status games

acedTrex
0 replies
1d

I mean... in a way yes. The status of being someone who cares about the craft of programming.

In the scheme of things however, that status hardly matters compared to the "ability to get something shipped quickly" which is what the vast majority of people are paid to do.

So while I might judge those people for not meeting my personal standards or bar, in many cases that does not actually matter. They got something out there; that's all that matters.

chasd00
1 replies
1d

I'm certainly skeptical of GenAI, but your argument sounds very similar to assembler devs' feelings towards C++ devs back in the day.

acedTrex
0 replies
1d

I am not necessarily arguing against GenAI. I am sure it will have effects somewhat similar to those the explosion in popularity of garbage-collected languages et al. had on software back in the 90s.

More stuff will get done, the barrier of entry will be lower etc.

The craft of programming took a significant quality/care hit when it transitioned from "only people who care enough to learn the ins and outs of memory management can feasibly do this" to "now anyone with a technical brain and a business use case can do it". Which makes sense, the code was no longer the point.

The C++ devs rightly felt superior to the new Java devs in the narrow niche of "ability to craft code." But that feeling doesn't move the needle business-wise in the vast majority of circumstances. That schism shows up with every large technology leap.

Basically, the argument that "it's worse" is not WRONG. It just does not matter as much now, the same way it did not really matter in the mid 90s, compared to the ability to "just get something that kinda works."

Workaccount2
1 replies
22h24m

SWE has a huge draw because frankly it's not that hard to learn programming, and the bar to clear in order to land a $100-120k work-from-home salary is pretty low. I know more than a few people who career hopped into software engineering after a lackluster non-tech career (that they paid through the nose to get a degree in, but were still making $70k after 5 years). By and large these people seem to just not be "into it", and like you said are more about delivering than actually making good products/services.

However, it does look like LLMs are racing to make these junior devs unnecessary.

trashtester
0 replies
21h44m

However, it does look like LLMs are racing to make these junior devs unnecessary.

The main utility of "junior devs" (regardless of age) is that they can serve as an interface to non-technical business "users". Give them the right tools, and their value will be similar to good business controllers or similar in the org.

A salary of $100-$150k is really low for a genuinely competent developer. It's kept down by those "junior devs" (of all ages) who apply for the same jobs.

Both kinds of developers will be required until companies use AI in most of those roles, including the controllers, the developers and the business side.

trashtester
0 replies
21h55m

was a dev that I had very little technical respect for pre AI

There are two interpretations of this:

1) Those people are imposters

2) It's about you, not them

I've been personally interested in AI since the early 80s, neural nets since the 90s, and vigilant about "AI" since AlexNet.

I've also been in a tech lead role for the past ~25 years. If someone is talking about newer "AI" models in a nonsensical way, I cringe.

anonzzzies
11 replies
1d5h

Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.

I found this too. But I also found the opposite, including here on HN; people who are interested in technology have almost an aversion to using AI. I personally love tech and I would and do write software for fun, but even that is objectively more fun for me with AI. It makes me far more productive (much more than what the article states) and, more importantly, it removes the procrastination; whenever I am stuck or procrastinating about getting started, I start talking with Aider, and before I know it another task is done that I probably wouldn't have gotten to that day otherwise.

That way I now launch open and closed source projects biweekly, where before that would take months to years. And the cost of having this team of fast, experienced devs sitting with me is at most a few $ per day.

andybak
7 replies
1d5h

Yeah. There is a psychological benefit to using AI that I find very beneficial. A lot of tasks that I would have avoided or wasted time doing task avoidance on suddenly become tractable. I think Simon Willison said something similar.

jcgrillo
6 replies
19h44m

Are you just working on personal projects or in a shared codebase with tens or hundreds of other devs? If the latter, how do you keep your AI generated content from turning the whole thing into an incomprehensible superfund site of tech debt? I've gotten a lot of mileage in my career so far by paying attention when something felt tedious or mundane, because that's a signal you need some combination of refactoring, tooling, or automation. If instead you just lean on an LLM to brute-force your way through, sure, that accomplishes the short term goal of shipping your agile sprint deliverable or whatever, but what of the long term cost?

andybak
5 replies
19h31m

Are you just working on personal projects or in a shared codebase with tens or hundreds of other devs?

Like - I presume almost everyone - somewhere in the middle?

That was a helluva dichotomy to offer me...

how do you keep your AI generated content from turning the whole thing into an incomprehensible superfund site of tech debt?

By reading it, thinking about it and testing it?

Did I somehow give the impression I'm cutting and pasting huge globs of code straight from ChatGPT into a git commit?

There's a weird gulf of incomprehension between people that use AI to help them code and those that don't. I'm sure you're as confused by this exchange as I am.

jcgrillo
3 replies
19h23m

That was a helluva dichotomy to offer me...

I certainly didn't intend it that way, more like a continuum. O(1) --> O(10) --> O(100) --> ...

Did I somehow give the impression I'm cutting and pasting huge globs of code straight from ChatGPT into a git commit?

Yes, a little. It seemed to me like you were advocating using the LLM to generate large amounts of tedious output.

andybak
1 replies
19h0m

I'm not sure where I said that but I certainly didn't intend to give that impression.

I use AI either as an unblocker to get me started, or to write a handful of lines that are too complex to do from memory but not so complex that I can't immediately grok them.

I find both types of usage very satisfying and helpful.

jcgrillo
0 replies
18h23m

Interesting, thanks for clarifying.

anonzzzies
0 replies
16h8m

It does generate swaths of code; however, you have to review and test it. But, depending on what you are working in/with, you would have to write this yourself anyway; for instance, Go has always had so much plumbing, and AI simply removes all those keystrokes. And very rigorously: it adds all the err and defer blocks in, which can number in the 100s and make up a large % of one Go file. What is the point of writing that yourself? It does it very fast as well; if you write the main logic without any of that stuff and ask Sonnet to make it 'good code', you write a few lines and get 100s back.
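
A made-up sketch of the kind of plumbing I mean (the function and file names are hypothetical; the err/defer noise is the point):

  package main

  import (
    "fmt"
    "io"
    "os"
  )

  // copyConfig: the "main logic" is three calls; everything else is
  // the err/defer plumbing an LLM will rigorously fill in for you.
  func copyConfig(src, dst string) error {
    in, err := os.Open(src)
    if err != nil {
      return fmt.Errorf("open %s: %w", src, err)
    }
    defer in.Close()

    out, err := os.Create(dst)
    if err != nil {
      return fmt.Errorf("create %s: %w", dst, err)
    }
    defer out.Close()

    if _, err := io.Copy(out, in); err != nil {
      return fmt.Errorf("copy: %w", err)
    }
    return nil
  }

  func main() {
    if err := copyConfig("app.conf", "app.conf.bak"); err != nil {
      fmt.Println(err)
    }
  }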

But it is far more useful on verbose 'team written' corporate stuff than on more reuse-intensive tech: in CL or Haskell, the community is far more DRY than in Go or JS/TS; you tend to create and reuse many things, and much of the end result is (basically) a DSL. Current AI is not very good at that in my experience; it will recreate or hallucinate functions all over the place, even when you push it to reuse previously created things and they fit in the context window. But many people have the same issue; they don't know, cannot search, or forget, and will just redo things many times over. AI makes that far easier (often as in no work at all), so that's the new reality.

rurp
0 replies
13h57m

Working in a codebase with 10s of other developers seems... pretty normal? Not universal sure, but that has to be a decent percent of professional software work. Once you get to even a half dozen people working in a code base I think consistency and clarity take on a significant role.

In my own experience I've worked on repos with <10 other devs where I spent far more effort on consistency and maintainability than on getting the thing to work.

skydhash
1 replies
17h56m

people who are interested in technology have almost an aversion to using AI

Personally, I don't use LLMs. But I don't mind people using them as interactive search engines or for code/text manipulation, as long as they're aware of the hallucination risks and take care with what they're copying into the project. My reason is mostly that I'm a journey guy, not a destination guy. And I love reading books and manuals, as they give me an extensive knowledge map. Using LLMs feels like taking guidance from someone who has never ventured 1km outside their village, but has heard descriptions from passersby. Too much vigilance required for the occasional good stuff.

And the truth is, there are a lot of great books and manuals out there. And while they teach you how to do stuff, they also often teach you why you should not do it. I strongly doubt Copilot imparts architectural and technical reminders alongside the code.

anonzzzies
0 replies
16h1m

I'm a journey guy, not a destination guy

For my never-finishing side projects I am too; I enjoy my weekends tinkering on the 'final database' system I have been building in CL for over a decade and will probably never really 'finish'. But to make money, I launch things fast and promote them; AI makes that far easier.

Especially for parts like frontend that I despise; I find zero pleasure in working with CSS magic that even seasoned frontenders have to try/fail in a loop to create. I let Sonnet just struggle until it's good enough instead of doing that annoying chore myself; then I ask Aider to attach it to the backend and done.

trashtester
0 replies
21h37m

people who are interested in technology have almost an aversion to using AI

I wonder if this is an age thing, for many people. I'm old enough to have started reading these discussions on Slashdot in the late 90s.

But between 2000 and 2010, Slashdot changed and became much less open to new ideas.

The same may be happening to HN right now.

It feels like a lot of people are stuck in the tech of 10 years ago.

nerdjon
10 replies
1d5h

I am curious about this also. I have now had to review multiple PRs in which a method was clearly completely modified by AI for no good reason, and when we asked why something was changed we just got silence. Not exaggerating, literal silence, followed by an attempt to ignore the question and explain the thing we had first asked them to do. They clearly had no idea what was actually in the PR.

This happened because we asked for a minor change (maybe 5 lines of code) to be made and tested. So now we are not only dealing with new debt, we are dealing with code that no one can explain why it was completely changed (some of the changes were change for the sake of change), and those of us who maintain this code are now looking at completely foreign code.

I keep seeing this with people using these tools who are not higher-level engineers. We finally got to the point of denying these PRs and telling them to go back and do it again, losing any of the time that was theoretically gained in the first place.

Not saying these tools don't have a place. But people are using it without understanding what it is putting out and not understanding the long term effects it will have on a code base.

okwhateverdude
8 replies
23h39m

Not saying these tools don't have a place. But people are using it without understanding what it is putting out and not understanding the long term effects it will have on a code base.

It is worse than that. We're all maintaining in our heads the mental sand castle that is the system the code base represents. The abuse of the autocoder erodes that sand castle, because the intentions behind the changes, which are crucial for mentally updating the sand castle, are not communicated (because they are unknowable). This is the same thing as with poor commit messages, or poor documentation around requirements/business processes. With enough erosion, plus the expected turnover in staff, the sand castle is actually gone.

rongenre
7 replies
21h44m

There needs to be a pattern for including AI-generated code, recording all the context which led to the generation.
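
Even something as lightweight as git commit trailers could carry that context; a purely hypothetical convention (the trailer names are made up):

  AI-Assisted: yes
  AI-Model: <model and version used>
  AI-Prompt: docs/prompts/2024-09-auth-refactor.md

with the referenced prompt file checked into the repo next to the diff it produced.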

jcgrillo
6 replies
19h58m

Easy: exclude developers who try it. Learning--be it a new codebase, a new programming language, a new database--takes time. You're not going to be as productive until you learn how. That's fine! Cheating on your homework with an LLM should not be something we celebrate, though, because the learner will never become productive that way, and they won't understand the code they're submitting for review.

XenophileJKO
5 replies
11h15m

The truth is that the tools are actually quite good already. If you know what you are doing, they will 10-20x your productivity.

Ultimately, not adopting them will relegate you to the same fate as assembly programmers. Sure, there are still places for that, but you won't be able to get nearly as much functionality done in the same amount of time, and there won't be as much demand for it.

shinycode
1 replies
7h53m

Do you agree that the brain-memory activity of writing code and that of reading someone else's code are totally different? The sand castle analogy is still valid here, because once you have 10x productivity, or worse 20x, there is no way you can understand things as deeply as if you had written them from scratch. Without spending a considerable amount of time, and dragging productivity back down, the understanding is not the same. If no one is responsible, because it's crap software and you won't be around long enough to bear responsibility... it's OK, I guess?

andai
0 replies
7h51m

I write at 1x speed and I can't even understand my own code the next day.

kragen
0 replies
7h22m

if you are seeing 900% productivity gains, why did these controlled experiments only find 28%, mostly among programmers who don't know what they're doing? and only 8–10% among programmers who did? do you have any hypotheses?

i suspect you are seeing 900% productivity gains on certain narrow tasks (like greenfield prototype-quality code using apis you aren't familiar with) and incorrectly extrapolating to programming as a whole

jcgrillo
0 replies
4h16m

We'll see. Get back to me sometime in 2034 and we can compare notes.

dasil003
0 replies
5h28m

I think you’re probably right, though I fear what it will mean for software quality. The transition from assembly to high level languages was about making it easier to both write and to understand code. AI really just accelerates writing with no advancement in legibility.

jonnycomputer
0 replies
4h46m

When using code assist, I've occasionally found some perplexing changes to my code I didn't remember making (and wouldn't have made). Can be pretty frustrating.

anonzzzies
5 replies
1d5h

had to tackle after the less experienced devs have contributed their AI-driven efforts.

So, like before AI then? I haven't seen AI deliver illogical nonsense that I couldn't even decipher like I have seen some outsourcing companies deliver.

trashtester
2 replies
21h34m

that I couldn't even decipher

This is one of the challenges of being a tech lead. Sometimes code is hard to comprehend.

In my experience, AI delivered code is no worse than entry level developer code.

anonzzzies
1 replies
16h29m

But that is an HN bubble thing: I work and have worked with seniors with 10-15 years under their belt who have no logical bone in their body. The worst (and an AI does not do this) is when there is a somewhat 'busy' function with an if or switch statement and, over time, to add features or fix bugs, more ifs were added. Now, after 5+ years, this function is 15,000 lines and is somewhat of a trained-neural-network-adjacent thing: 100s of nested ifs, pages long, that cannot be read and cannot be followed even if you have a brain. This is made by the senior staff of, usually, outsourcing companies, and I have seen it very many times over the past 40 years. Not entry level, and not small companies either. I know a gov tax system, partly maintained by a very large and well-known outsourcing company, which has numerous of these puppies in production that no one dares to touch.

AI doesn't do stuff like that because it cannot, which, to me, is a good thing. When it gets better, it might start to; I don't know.

People here live in a bubble where they think the world is full of people who read 'beautiful code', write tests, use git or something instead of zip$date, and know how De Morgan's laws work; by far, most don't. Not juniors, not seniors.

nottorp
0 replies
9h45m

to add features or fix bugs, ifs were added.

"Back to that two page function. Yes, I know, it’s just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I’ll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. "

https://www.joelonsoftware.com/2000/04/06/things-you-should-...

Of course, you can try to split the 15000-line function into logical blocks or something, but don't assume the ifs in it are useless.

nottorp
0 replies
9h47m

I haven't seen AI deliver illogical nonsense

I have. If you're doing niche-er stuff, it doesn't have enough data and hallucinates. The worst is when it spits out two screens of code instead of 'this cannot be done at the level you want'.

that I couldn't even decipher

That's unrelated to code quality. Especially with C++, which has become as write-only as Perl.

htrp
0 replies
1d5h

AI seems to give more consistent results than outsourcing (not necessarily better, but at least more predictable failure modes)

fer
4 replies
1d5h

Disclaimer: I work at a company that sells coding AI (among many other things).

We use it internally and the technical debt is an enormous threat that IMO hasn't been properly gauged.

It's very very useful to carpet bomb code with APIs and patterns you're not familiar with, but it also leads to insane amounts of code duplication and unwieldy boilerplate if you're not careful, because:

1. One of the two big biases of the models is the training data, which is StackOverflow-type data: standalone examples that don't take context and constraints into account.

2. The other is the existing codebase, and the model tends to copy/repeat things instead of suggesting that you refactor.

The first is mitigated by, well, doing your job and reviewing/editing what the LLM spat out.

The second can only be mitigated once diffs/commit history become part of the training data, and that's a much harder dataset to handle and tag: some changes are good (refactorings) but others might not be (bugs that get corrected in subsequent commits), with no clear distinction, since commit messages are effectively lies (nobody ever writes: bug introduced).

Not only that, merges/rebases/squashes alter/remove/add spurious meanings to the history, making everything blurrier.

dboreham
1 replies
18h35m

I consider myself very fortunate to have lived long enough to be reading a thread where the subject is the quality of the code generated by software. Decades of keeping that lollipop ready to be given, and now look where we are!

samatman
0 replies
2h59m

The rate at which I quote Perlis 93 is a roughly quadratic function of the number of quarters since coding LLMs were first deployed.

trashtester
0 replies
21h42m

If you're in a company that is valued at $10M+ per developer, the technical debt is not a big concern.

Either you will go bust, OR you will be able to hire enough people to pay those debts, once you get traction in the market.

disconcision
0 replies
3h5m

(nobody ever writes: bug introduced)

it's usually written "feature: ..."

theflyinghorse
2 replies
14h13m

I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering.

If I don't deliver my startup burns in a year. In my previous role if I didn't deliver the people who were my reports did not get their bonuses. The incentives are very clear, and have always been clear - deliver.

rurp
1 replies
14h7m

Successful companies aren't built in a sprint. I doubt there has ever been a successful startup that didn't have at least some competent people thinking a number of steps ahead. Piling up tech debt to hit arbitrary short-term goals is not a good plan for real success.

theflyinghorse
0 replies
13h48m

These are platitudes.

Here's a real example of delivering something now without worrying about being the best engineer I can be. I have 2 CSRs. They are swamped with work, and we're weeks away from bringing another CSR on board. I found a couple of time-consuming tasks that are easy to automate and built those out separately as one-off jobs that work well enough. Instantly it's a solid time gain & stress reducer for the CSRs. Are my one-off automation tasks a long-term solution? No. Do I care? Not at the moment, and my ego can take the hit for the time being.

mrighele
2 replies
22h50m

Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.

For them programming is a means to an end, and I think that is fine, in a way. But you cannot just ask an AI to write you a TikTok clone and expect to get the finished product. Writing software is an iterative process, and the LLMs currently in use are not good enough for that, because they need not only to answer questions but, at the very minimum, to start asking them: "why do you want to do that?", "do you prefer this or that?", etc., so that they can actually extract all the specification details the user happily didn't even know he needed before producing an appropriate output. (It's not too different from how some independent developers have to handle their clients, is it?) Probably we will get there, but not too soon.

I also doubt that current tools can keep a project architecturally sound long-term, but that is just a hunch.

I admit though that I may be biased, because I don't much like tools like Copilot: when I write software, I have in my mind a model of the software that I am writing/want to write, the AI has another model "in mind", and I need to spend mental energy understanding what it is "thinking". Even if 99 times out of 100 it is what I wanted, the remaining 1% is enough to hold me back from trusting it. Maybe I am using it the wrong way, who knows.

The AI tool that would work for me is a "voice-controlled, AI-powered pair programmer": I write my code, and from time to time I ask it how to do something, getting either a contextual answer depending on the code I am working on, or the actual code generated if I wish. Are there already plugins working that way for vscode/idea/etc.?

liminalsunset
1 replies
21h58m

I've been playing with Cursor (albeit with a very small toy codebase) and it does seem like it could do some of what you said - it has a number of features, not all of which necessarily generate code. You can ask questions about the code, about documentation, and other things, and it can optionally suggest code that you can either accept, accept parts of, or decline. It's more of a fork of vscode than a plugin right now though.

It is very nice in that it gives you a handy diffing tool before you accept, and it very much feels like it puts me in control.

mrighele
0 replies
20h38m

Thanks, I might give it a try

kraftman
2 replies
1d4h

Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.

Is this a bad thing? Maybe I'm misunderstanding it, but even when I'm working on my own projects, I'm usually trying to solve a problem, and the technology is a means to an end to solving that problem (delivering). I care that it works, and is maintainable, I don't care that much about the technology.

never_inline
0 replies
1d1h

Read it as "closing a ticket" and it makes sense. They don't care if the code explodes after the sprint.

layer8
0 replies
22h34m

Caring that it works and is maintainable is caring about the technology.

jacobsenscott
2 replies
1d

Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.

You get what you measure. Nobody measures software quality.

wahnfrieden
1 replies
1d

Maybe not at your workplaces, but at mine, we measured bugs, change failure rate, uptime, "critical functionality" uptime, regressions, performance, CSAT, etc. in addition to qualitative research on quality in-team and with customers

jacobsenscott
0 replies
3h28m

That's 0.001% of the workplaces out there. But good for them. Do you think it actually improved software quality, or was it all busy work?

Kiro
2 replies
1d

I don't think it's that clear cut. I personally think the AI often delivers a better solution than the one I had in mind. It always contains a lot more safeguards against edge cases and other "boring" stuff that the AI has no problem adding but others find tedious.

trashtester
1 replies
21h28m

If you're building a code base where AI is delivering the details, it's generally a bad thing if the code provided by the AI adds safeguards WITHIN your code base.

Those kinds of safeguards should instead be part of the framework you're using. If you need to prevent SQL injection, you need to make sure that all access to the SQL-type database passes through a layer that prevents it. If you are worried about the security of your point of access (like an API facing the public), you need to apply safeguards as close to the point of entry as possible, and so on.
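
To make that concrete, a minimal sketch of such a layer (Go; all names are made up): one chokepoint that only takes parameterized SQL, so the safeguard lives in the framework rather than being sprinkled through the code the AI generates.

  package store

  import "database/sql"

  // Store is the only thing in the code base allowed to touch the DB.
  type Store struct {
    db *sql.DB
  }

  // Query accepts only parameterized SQL: callers pass placeholders
  // and args, never interpolated strings, so injection is prevented
  // in one place instead of at every generated call site.
  func (s *Store) Query(query string, args ...any) (*sql.Rows, error) {
    return s.db.Query(query, args...)
  }

  // usage: s.Query("SELECT id FROM users WHERE email = ?", email)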

I'm a big believer in AI generated code (over a long horizon), but I'm not sure the edge case robustness is the main selling point.

Kiro
0 replies
20h43m

Sounds like we're talking about different kinds of safeguards. I mean stuff like a character in a game ending up in a broken state due to something that is theoretically possible but very unlikely, or where the consequence is not worth the effort. An AI has no problem taking those into account and writing tedious safeguards, while I'd skip them.

prophesi
1 replies
1d

From my experience, most of it is quickly caught in code review. And after a while it occurs less and less, granted that the junior developer puts in the effort to learn why their PRs aren't getting approved.

So, pretty similar to how it was before. Except that motivated junior developers will improve incredibly fast. But that's also kind of always been the case in software development these past two decades?

YeGoblynQueenne
0 replies
1d

It's interesting to see your comment currently right next to that of nerdjon here:

https://news.ycombinator.com/item?id=41465827

(the two comments might have moved apart by the time you read this).

Edit: yep, they just did.

noobermin
0 replies
1d5h

It doesn't; it looks at PRs, commits, and builds only. They remark on this lack of context.

jonnycomputer
0 replies
4h48m

Code quality is the hardest thing to measure. Seems like they were measuring commits, pull-requests, builds, and build success rate. This sort of gets at that, but is probably inadequate.

The few attempts I've made at using genAI to make large-scale changes to code have been failures, and they left me in the dark about the changes that were made, in ways that were not helpful. I needed suggestions in much smaller, paragraph-sized chunks. Right now I limit myself to the genAI line-completion suggestions in PyCharm. It very often guesses my intentions and so is actually helpful, particularly when laboriously typing out lots of long literals, e.g. keys in a dictionary.

infecto
0 replies
1d5h

I hear this, but I don't think this is a new issue that AI brought; it simply magnified it. That's a company culture issue.

It reminds me of a talk Raymond Hettinger gave a while ago about rearranging the flowers in the garden. There is a tendency among new developers to rearrange for no good reason, and AI makes it even easier now. This comes down to a culture problem to me; AI is simply the tool, but the driver is the human (at least for now).

godelski
0 replies
23h18m

  > I wonder if the study includes the technical debt that more experienced developers had to tackle after the less experienced devs have contributed their AI-driven efforts. 
It does not

You also may find this post from the other day more illuminating[0], as I believe the actual result strongly hints at what you're guessing. The study is high schoolers doing math. While GPT only has an 8% error rate for the final answer, it gets the steps wrong half the time. And with coding (like math), the steps are the important bits.

But I think people evaluate very poorly when the metrics are ill-defined but some metric exists. They overinflate its value since it's concrete. Completing a ticket doesn't mean you made progress. Introducing technical debt would mean taking a step back: a step forward in a very specific direction, but away from the actual end goal. You're just outsourcing work to a future person, and I think we like to pretend this doesn't exist because it's hard to measure.

[0] https://news.ycombinator.com/item?id=41453300

elric
0 replies
2h36m

I don't remember who said it, but "AI generated code turns every developer into a legacy code maintainer". It's pithy and a bit of an exaggeration, but there's a grain of truth in there that resonates with me.

creativeSlumber
0 replies
19h59m

This. Even without AI, we have inexperienced developers rolling out something that "just works" without thinking about many of the scaling/availability issues. Then you have to spend 10x the time fixing those issues.

add-sub-mul-div
0 replies
1d1h

Maintaining bad AI code is the new maintaining bad offshore code.

EvkoGS
0 replies
2h25m

Also, I've personally seen more interest in AI in devs that have little interest in technology, but a big interest in delivering. PMs love them though.

You are not a fucking priest in the temple of engineering; go to the fucking CS department at the local uni and preach it there. You are a worker at a company with customers, which pays you a salary from those customers' money.

danielvaughn
53 replies
1d5h

A 26% productivity increase sounds in line with my experience. I think one dimension they should explore is whether you're working with a new technology or one that you're already familiar with. AI helps me much more with languages/frameworks that I'm trying to learn.

jstummbillig
17 replies
1d5h

That might be. It might also be a consequence of how flexible people at different stages in their career are.

I have mostly seen senior programmers argue why AI tools don't work. Juniors just use them without prejudice.

jncfhnb
6 replies
1d5h

My experience is juniors using them without prejudice and then not understanding why their code is wrong

SketchySeaBeast
5 replies
1d5h

I am seeing that a lot - juniors who can put out a lot of code but when they get stuck they can't unstick themselves, and it's hard for me to unstick them because they have a hard time walking me through what they are doing.

carlmr
2 replies
1d5h

I've now gotten responses on PRs of the form: "I don't know either, this is what Copilot told me."

If you don't even understand your own PR, I'm not sure why you expect other people can.

I have used LLMs myself, but mostly for boilerplate and one-off stuff. I think it can be quite helpful. But as soon as you stop understanding the code it generates you will create subtle bugs everywhere that will cost you dearly in the long run.

I have the strong feeling that if LLMs really outsmart us to the degree that some AI gung-ho types believe, the old Kernighan quote will get a new meaning:

"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."

We'll be left with code nobody can debug because it was handed to us by our super-smart AI that only hallucinates sometimes. We'll take the word of another AI that the code works. And then we'll hope for the best.

SketchySeaBeast
1 replies
1d5h

Copilot is best for me when it's letting me hit tab to auto-complete something I was going to write anyways.

carlmr
0 replies
10h40m

Copilot does hit that sweet spot sometimes.

ebiester
1 replies
1d1h

Is that so new? We used to complain when someone blindly copied and pasted from Stack Overflow. Before that, from Experts Exchange.

Coding is still a skill that takes years to acquire. We need to stamp out the behavior of not understanding what they take from Copilot, but the behavior itself is not new.

SketchySeaBeast
0 replies
1d

You're right, it isn't entirely new, but I think it's still different. You still had to figure out how that new code snippet applied to your code. Stack Overflow wouldn't reshape the snippet you're trying to slot in to fit your current code.

infecto
2 replies
1d5h

I wish I could upvote this comment more than once. There does appear to be a prejudice among more senior programmers: arguments for why it cannot work, how the tools just cause more trouble, and various other complaints. The tools today are not perfect, but they still amaze me with what is being accomplished; even a 10% gain is incredible for something that costs $10/month. I believe progress will be made in the space, and the tooling in 5 years will be even better.

randomdata
0 replies
1d4h

> The tools today are not perfect

The trouble is that they seem to be getting worse. Some time ago I was able to write an entire small application by simply providing some guidance around function names and data structures, with an LLM filling in all of the rest of the code. It worked fantastically and really showed how these tools can be a boon.

I want to taste that same thrill again, but these days I'm lucky if I can get something out of it that will even compile, never mind the logical correctness. Maybe I'm just getting worse at using the tools.

mattgreenrocks
0 replies
23h41m

The prejudice comes down to whether they want to talk the LLM into the right solution vs applying what they know already. If you know your way around then there’s no need to go through the LLM. I think sr devs often tend to be more task focused so the idea of outsourcing the thinking to an LLM feels like another step to take on.

I find Claude good at helping me find how to do things that I know are possible but I don’t have the right nomenclature for. This is an area where Google fails you, as you’re hoping someone else on the internet used similar terms as you when describing the problem. Once it spits out some sort of jargon I can latch onto, then I can Google and find docs to help. I prefer to use multiple sources vs just LLMs, partially because of hallucination, but also to keep amassing my own personal context. LLMs are excellent as librarians.

marginalia_nu
1 replies
1d1h

I find their value depends a lot on what I'm doing. On anything easy I'll get insane leverage; no exaggeration, I'll slap that shit together 25x faster. It's seen likely billions of lines of simple CRUD endpoints, so yeah, it'll write those flawlessly for you.
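
The sort of endpoint I mean, as a minimal made-up sketch (in-memory map standing in for a real table):

  package main

  import (
    "encoding/json"
    "net/http"
  )

  // in-memory "table" for the sketch; a real endpoint would hit a DB
  var items = map[string]string{}

  func itemsHandler(w http.ResponseWriter, r *http.Request) {
    switch r.Method {
    case http.MethodGet:
      json.NewEncoder(w).Encode(items) // list everything
    case http.MethodPost:
      var in map[string]string
      if err := json.NewDecoder(r.Body).Decode(&in); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
      }
      for k, v := range in {
        items[k] = v
      }
      w.WriteHeader(http.StatusCreated)
    default:
      http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
    }
  }

  func main() {
    http.HandleFunc("/items", itemsHandler)
    http.ListenAndServe(":8080", nil)
  }

It's seen endless variations of exactly this shape, which is why it nails them.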

Anything difficult or complex, and it's really a coin flip whether it's even an advantage; most of the time it's just distracting, giving irrelevant suggestions or bad textbook-style implementations intended to demonstrate a principle but with god-awful performance. Likely because there's simply not enough training data for these types of tasks.

With this in mind, I don't think it's strange that junior devs would be gushing over this and senior devs would be raising a skeptical eyebrow. Both may be correct, depending on what you work on.

jeremy151
0 replies
22h24m

I think for me, I'm still learning how to make these tools operate effectively. But even only a few months in, it has removed almost all the annoying work and lets me concentrate on the stuff that I like. At this point, I'll often give it some context and tell it what to make, and it spits out something relatively close. I look it over, call out like 10 things, each time it says "you're right to question...", and we do an iteration. After we're through that, I tell it to write a comprehensive set of unit tests; it does that, most of them fail, it fixes them, and then we usually have something pretty solid. Once we have that base pattern, I can have it extend variants from the first solid bit of code: "Using this pattern for style and approach, make one that does XYZ instead."

But what I really appreciate is, I don't have to do the plug and chug stuff. Those patterns are well defined, I'm more than happy to let the LLM do that and concentrate on steering whether it's making a wise conceptual or architectural choice. It really seems to act like a higher abstraction layer. But I think how the engineer uses the tool matters too.

tuyiown
0 replies
1d5h

Flexible, or maybe it's just that only juniors get real benefits.

nuancebydefault
0 replies
1d

Now some junior dev can quickly make something new and fully functional in days, without knowing in detail what they are doing, as opposed to the weeks it originally took a senior.

Personally I think that senior devs might fear a conflict within their identity. Hence they draw the "you and the AI have no clue" card.

SoftTalker
0 replies
1d

Juniors don't know enough to know what problems the AI code might be introducing. It might work, and the tests might pass, but it might be very fragile, full of duplicated code, unnecessary side-effects, etc. that will make future maintenance and debugging difficult. But I guess we'll be using AI for that too, so the hopefully the AI can clean up the messes that it made.

SketchySeaBeast
0 replies
1d5h

As a senior, I find that trying to use Copilot really only gives me gains maybe half the time; the other half it leads me in the wrong direction. Googling tends to give me a better result because I can actually move through the data quicker. My belief is this is because when I need help I'm doing something uncommon or hard, as opposed to juniors who need help doing regular stuff, which has plenty of examples in the training data. I don't need help with that.

It certainly has its uses - it's awesome at mocking and filling in the boilerplate unit tests.
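
For instance, the table-driven scaffolding below (a made-up example; Abs and the cases are hypothetical) is exactly the shape of boilerplate it autocompletes well:

  package mathutil

  import "testing"

  func Abs(x int) int {
    if x < 0 {
      return -x
    }
    return x
  }

  // Once you type the first case, the rest of the table and the
  // loop are the kind of thing Copilot fills in almost verbatim.
  func TestAbs(t *testing.T) {
    cases := []struct {
      name string
      in   int
      want int
    }{
      {"positive", 3, 3},
      {"negative", -3, 3},
      {"zero", 0, 0},
    }
    for _, c := range cases {
      if got := Abs(c.in); got != c.want {
        t.Errorf("%s: Abs(%d) = %d, want %d", c.name, c.in, got, c.want)
      }
    }
  }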

GoToRO
0 replies
1d5h

As a senior, you know the problem is actually finishing a project. That's the moment all those bad decisions made by juniors need to be fixed. This also means that an 80% done project is more like 20% done, because in the state it is in, it cannot be finished: you fix one thing and break 2 more.

pqdbr
9 replies
1d5h

Same impression here.

I've been using Cursor for around 10 days on a massive Ruby on Rails project (a stack I've been coding in for 13+ years).

I didn't enjoy any productivity boost on top of what GitHub Copilot already gave me (which I'd estimate around the 25% mark).

However, for crafting a new project from scratch (empty folder) in, say, Node.js, it's uncanny; I can get an API serving requests from an OpenAPI schema (served via Swagger) in ~5 minutes just by prompting.

Starting a project from scratch, for me at least, is rare, which probably means going back to Copilot and vanilla VSCode.

gamerDude
4 replies
1d5h

I start new projects a lot and just have a template that has everything I would need already set up for me. Do you think there is a unique value prop that AI gives you when setting up that a template would not have?

pjot
1 replies
1d5h

GH Copilot chat specifically has directives like

  @workspace /new “scaffold a ${language} project”
that automagically creates a full project structure and boilerplate. It's been great for one-off things, for me at least.

sethops1
0 replies
21h58m

You don't need AI to implement that.

lherron
0 replies
1d5h

I do both. I have a basic template that also includes my specific AI instructions and conventions that are inputs for Aider/Cursor. Best of both worlds.

carlmr
0 replies
1d5h

AI has templates for the languages you seldom use (as long as the language is not too niche for the LLM to have trained on it).

yunwal
0 replies
1d

If I struggle on a particularly hard implementation detail in a large project, often I'll use an LLM to set up a boilerplate version of the project from scratch with fewer complications so that I can figure out the problem. It gets confused if it's interacting with too many things, but the solutions it finds in a simple example can often be instructive.

dartos
0 replies
1d5h

How do you use AI in your workflow outside of copilot?

I haven't been able to get any mileage out of chat AI beyond treating it like a search engine, then verifying what it said... which isn't a speedy workflow.

SketchySeaBeast
0 replies
1d5h

I feel like this whole "starting a new project" thing might be the divide between the jaded and the excited, which often (but not always) falls along senior/junior lines. I just don't do that anymore. Coding is no longer my passion, it's my profession. I'm working in an established code base that I need to thoughtfully expand and improve. The easy and boilerplate problems are solved. I can't remember the last time I started up a new project, so I never see that side of Copilot or Cursor. Copilot might be at its best when tinkering.

CuriouslyC
0 replies
1d5h

To extract maximum value out of LLMs, you need to change how you architect and organize software to make it easy for them to work with. LLMs are great with function libraries and domain-specific languages, so the more of your code you can factor into those sorts of things, the greater the speed boost they'll give you.

hobofan
9 replies
1d5h

I'd also expand it to "languages/frameworks that I'll never properly learn".

I'm not great at remembering specific quirks/pitfalls about secondary languages like e.g. what the specific quoting incantations are to write conditionals in Bash, so I rarely wrote bash scripts for automation in the past. Basically only if that was a common enough task to be worth the effort. Same for processing JSON with jq, or parsing with AWK.

Now with LLMs, I'm creating a lot more bash scripts, and it has gotten so easy that I'll do it for process documentation more often. E.g. what previously was a static step-by-step README with instructions is now accompanied by an interactive bash script that takes user input.

danielvaughn
6 replies
1d2h

Oh god yes, LLM for bash scripting is fantastic. Bash is one of those languages that I simply don't want to learn. Reading it makes my eyes hurt.

iwontberude
3 replies
1d1h

Bash is a terrible language in so, so many ways, and I concur that I have no interest in learning its new features going forward. I remember how painfully inadequate it was for actual computation, and its data types are non-existent. Gross.

SoftTalker
2 replies
1d

You have to understand the domain in which it's intended to be used.

Bash scripts are essentially automating what you could do at the command line with utility programs, pipes, redirects, filters, and conditionals.

If you're getting very far outside of that scope, bash is probably the wrong tool (though it can be bent to do just about anything if one is determined enough).

danielvaughn
1 replies
23h49m

I don't know too much about the history, but wasn't this one of the original motivations for Python? Like it was meant to be basically "bash scripting but with better ergonomics", or something like that.

okwhateverdude
0 replies
23h24m

You might be thinking of Perl. Affectionately known as the Swiss Army chainsaw of programming languages, it incorporated elements from bash, awk, sed, etc. and smushed them together into a language that carried the early Web.

einpoklum
0 replies
1d1h

You could, but then your script won't run anywhere except your own system, as opposed to running basically everywhere...

sgarland
1 replies
1d5h

While I’ll grant you that LLMs are in fact shockingly good with shell scripts, I also highly recommend shellcheck [0]. You can get it as a treesitter plugin for nvim, and I assume others, so it lints as you write.

[0]: https://www.shellcheck.net/

insane_dreamer
5 replies
1d1h

I haven't found it that useful in my main product development (which, while Python-based, uses our own framework, so there's not much code for CoPilot to go on; it usually suggests methods and arguments that don't exist, which just makes extra work).

Where I do find it useful:

1) questions about frameworks/languages that I don't work in much and for which there is a lot of example content (e.g., Qt, CSS);

2) very specific questions I would have done a Google Search (usually StackOverflow) for ("what's the most efficient way to get CPU and RAM usage on Windows using Python") - the result points me to a library or some example rather than directly generating code that I can copy/paste;

3) boilerplate code that I already know how to write but saves me a little time and avoids typing errors. I have the CoPilot plugin for PyCharm, so I'll write the intent as a comment in the file and it'll complete the next few lines. Again, the best results are for something very short and specific. With anything longer I almost always have to iterate so much with CoPilot that it's not worth it anymore.

4) a quick way to search documentation

Some people have said it's good at writing unit tests but I have not found that to be the case (at least not the right kind of unit tests).

If I had to quantify it, I'd probably give it a 5-10% increase in productivity. Much less than I get from using a full featured IDE like PyCharm over coding in Notepad, or a really good git client over typing the git commands in the CLI. In other words, it's a productivity tool like many other tools, but I would not say it's "revolutionary".

nuancebydefault
4 replies
1d

It is revolutionary, no doubt. How many pre-AI tools would cover all four of the big use cases you mentioned at once?

insane_dreamer
1 replies
23h49m

Well, like I said, a well-designed IDE is a much bigger productivity booster than CoPilot, and I've never heard anyone describe JetBrains, NetBeans or VSCode as "revolutionary".

nuancebydefault
0 replies
23h16m

I would call IDE's "evolutionary". Jetbrains is a better evolutionary version of, say, Eclipse in the day.

skydhash
0 replies
17h9m

1) questions about frameworks/languages that I don't work in much and for which there is a lot of example content

Books and manuals, they're pretty great for introductory materials. And for advanced stuff, you have to grok these first.

2) very specific questions I would have done a Google Search (usually StackOverflow) for ("what's the most efficient way to get CPU and RAM usage on Windows using Python")

I usually go backwards for such questions, searching not for what I want to do, but for how it would look if it existed. And my search-fu has not failed me much in that regard, but it requires knowledge of how those things work, which again goes back to books and other such materials.

3) boilerplate code that I already know how to write but saves me a little time and avoids typing errors.

Snippets and templates in my editor. And example code in the documentation.

4) a quick way to search documentation

I usually have a few browser tabs open for whatever modules I'm using, plus whatever the IDE has, and PDFs and manual pages,...

For me, LLMs feel like building a rocketship to get groceries at the next village, and then hand-waving the risks of explosions and whether it would actually get you there.

insane_dreamer
0 replies
3h23m

I used Google Search for all of the above and would usually find what I needed in the first few hits -- which would lead to the appropriate doc page or an example page or a page on SO, etc.

So it's not like CoPilot is giving me information that I couldn't get fairly easily before. But it is giving it to me much __faster__ than I could access it before. I liken it to an IDE tool that allows you to look up API methods as you type. Or being able to ask an expert in that particular language/domain, except it's not as good as the expert because if the expert doesn't know something they're not going to make it up, they'll say "don't know".

So how much benefit you get from it is relative to how much you have to look up stuff that you don't know.

moolcool
2 replies
1d5h

I've found Copilot pretty good at removing some tedium, like it'll write docstrings pretty well most of the time, but it does almost nothing to alleviate the actual mental labour of software engineering

jacobsenscott
1 replies
1d

Removing the tedium frees your brain up to focus on the interesting parts of the problem. The "actual mental labor" is the fun part. So I like that.

danielvaughn
0 replies
23h53m

Yeah this has been my experience. I bucket my work largely into two categories - "creative work" and "toil". I don't want or need AI to replace my creative work, but the more it can handle toil for me, the better.

As it happens, I think co-pilot is a pretty poor user experience, because it's essentially just an autocomplete, which doesn't really help me all that much, and often gets in my way. I like using Cursor with the autocomplete turned off. It gives you the option to highlight a bit of text and either refactor it with a prompt, or ask a question about it in a side chat window. That puts me in the driver seat, so to speak, so I (the user) can reach out to AI when I want to.

jakub_g
1 replies
23h0m

I only use (free) ChatGPT sporadically, and it works best for me in areas where I'm familiar enough to call bullshit, but not familiar enough to write things myself quickly / confidently / without checking a lot of docs:

- writing robust bash and using unix/macos tools

- how to do X in github actions

- which API endpoint do I use to do Y

- synthesizing knowledge on some topic that would require dozens of browser tabs

- enumerating things to consider when investigating things. Like "I'm seeing X, what could be the cause, and how I do check if it's that". For example I told it last week "git rebase is very slow, what can it be?" and it told me to use GIT_TRACE=1 which made me find a slow post-commit hook, and suggested how to skip this hook while rebasing.

thiht
0 replies
8h30m

Same for me. I also use it for some SQL queries involving syntax I’m unfamiliar with, like JSONB operators in Postgres. ChatGPT gives me better results, faster than Google.

empath75
1 replies
1d5h

Sounds about right to me, which is why the hysteria about AI wiping out developer jobs was always absurd. _Every_ time there has been a technology that improved developer productivity, developer jobs and pay have _increased_. There is not a limited amount of automation that can be done in the world, and the cheaper and easier it gets to automate stuff, the more stuff becomes economically viable to automate that wasn't before. Did IDEs eliminate developer jobs? Compilers? It's just a tool.

randomdata
0 replies
1d3h

The automated elevator is just a tool, but it "wiped out" the elevator operator. Which is really to say not that the elevator operator was wiped out, but that everyone became the elevator operator. Thus, by the transitive properties of supply and demand, the value of operating an elevator declined to nothing.

Said hysteria was built on the same idea. After all, LLMs themselves are just compilers for a programming language that is incredibly similar to spoken language. But as the programming language is incredibly similar to the spoken language that nearly everyone already knows, the idea was that everyone would become the metaphorical elevator operator, "wiping out" programming as a job just as elevator operators were "wiped out" of a job when operating an elevator became accessible to all.

The key difference, and where the hysteria is likely to fall flat, is that when riding in an elevator there isn't much else to do but be the elevator operator. You may as well do it. Your situation would not be meaningfully improved if another person was there to press the button for you. When it comes to programming, though, there is more effort involved. Even when a new programming language makes programming accessible, there remains a significant time commitment to carry out the work. The business people are still best to leave that work to the peons so they can continue to focus on the important things.

rm_-rf_slash
0 replies
1d

AI speeds my coding up 2x-4x, depending on the breadth of requirements and complexity of new tech to implement.

But coding is just a fraction of my weekly workload, and AI has been less impactful for other aspects of project management.

So overall it’s 25%-50% increase in productivity.

itchyjunk
0 replies
1d5h

Maybe that's what the new vs experienced difference is indirectly capturing.

arexxbifs
18 replies
1d5h

My hunch - it's just a hunch - is that LLM-assisted coding is detrimental to one's growth as a developer. I'm fairly certain it can only boost productivity to a certain level - one which may be tedium for more senior developers, but formative for juniors.

My experience is that the LLM isn't just used for "boilerplate" code, but rather called into action when a junior developer is faced with a fairly common task they've still not (fully) understood. The process of experimenting, learning and understanding is then largely replaced by the LLM, and the real skill becomes applying prompt tweaks until it looks like stuff works.

JackMorgan
4 replies
12h41m

For myself it's an incredible tool for learning. I learn both broader and deeper using chat tools. If anything it gives me a great "sounding board" for exploring a subject and finding additional resources.

E.g. last night I set up my first Linux RAID. A task that isn't too hard, but following a tutorial or just "reading the docs" isn't particularly helpful, given it takes a few different tools (mount, umount, fstab, blkid, mdadm, fdisk, lsblk, mkfs) and along the way things might not follow the exact steps from a guide. I asked dozens of questions about each tool and step, where previously I would have just copy-pasted and prayed.

Two nights ago I was similarly able to fully recover all my data from a failed SSD, also using ChatGPT to guide my learning along the way. It was really cool to tackle a completely new skill with a "guide"; even if it's wrong 20% of the time, that's way better than the average on the open Internet.

For someone who loves learning, it feels like thousand-league boots compared to endlessly sifting through internet crap. Of course everything it says is suspect, just like everything else on the Internet, but boy does it cut out a lot of the hassle.

manmal
1 replies
7h5m

You've given two examples of "broad", but none of "deep". I've also used LLMs for setting up my homelab, and they were really helpful since I was basically at beginner level in most Linux admin topics (still am). But trying to e.g. set up automatic snapshot replication for my zfs pool had me go back to reading blog posts, as ChatGPT just couldn't provide a solution that worked for me.

Spivak
0 replies
4h19m

I think one of the catch-22s of LLMs, when using one as a fancy search index (which is the dev-assistant use case), is that the information it surfaces is hugely dependent on what words you use; it matches your energy. If you don't know the words, you'll get very beginner-oriented content; you can get it to surface deeper knowledge iff you know the shibboleths, but it's annoying. One dumb trick that's been unreasonably useful is just copy-pasting barely-related source code at the level you're trying to understand and then asking an unrelated question - just yank that search vector way over to the region where you think the good information lives.

iLoveOncall
0 replies
7h7m

Okay, but this is not programming; it's more of an enhanced tutorial. This isn't at all what the original commenter was talking about.

Delk
0 replies
5h32m

My approach is typically to follow some kind of a guide or tutorial and to look into the man pages or other documentation for each tool as I go, to understand what the guide is suggesting.

That's how I handled things e.g. when I needed to resize a partition and a filesystem in a LVM setup. Similarly to your RAID example, doing that required using a bunch of tools on multiple levels of storage abstraction: GPT partitions, LUKS tools, LVM physical and logical volumes, file system tools. I was familiar with some of those but didn't remember the incantations by heart, and for others I needed to learn new tools or concepts.

I think I use a similar approach in programming when I'm getting into something I'm not quite familiar with. Stack Overflow answers and tutorials help give the outline of a possible solution. But if I don't understand some of the details, such as what a particular function does, I google them, preferring to get the details either from official documentation or from otherwise credible-sounding accounts.

Workaccount2
3 replies
22h11m

With the progress LLMs have been making in the last two years, is it actually a bad bet to not want to really get into it?

How many contemporary developers have no idea how to write machine code, when 50 years ago it was basically mandatory if you wanted to be able to write anything?

Are LLMs just going to become another abstraction crutch that turns into a solid abstraction pillar?

arexxbifs
1 replies
19h36m

Abstraction is beneficial and profitable up to a certain point, after which upkeep gets too hard or expensive and knowledge dwindles into a competency crisis - for various reasons. I'm not saying we are at that point yet, but it feels like we're closing in on it (and not just in software development). 50 years ago isn't even 50 years ago anymore, if you catch my drift: in 1974, the real king of the hill was COBOL - a very straightforward abstraction.

I'm seeing a lot of confusion and frustration from beginner programmers when it comes to abstraction, because a lot of abstractions in use today just incur other kinds of complexity. At a glance, React for example can seem deceptively easy, but in truth it requires understanding of a lot of advanced concepts. And sure, a little knowledge can go a long way in e.g. web development, but to really write robust, performant code you have to know a lot about the browser it runs in, not unlike how great programmers of yesteryear had entire 8-bit machines mapped out in their heads.

Considering this, I'm not convinced the LLM crutch will ever solidify into a pillar of understanding and maintainable competence.

skydhash
0 replies
17h43m

And it really helps if you have a global view across the abstraction stack, even if you don't dive into the details of the implementation. I still think that some computer organization/OS architecture knowledge would be great for developers: at least to know that memory is not free even though we have GBs of it, and that an internet connection is not an integral part of the computer the way the power supply is.

jprete
0 replies
19h35m

LLMs aren't an abstraction, even a very leaky one, so the analogies with compilers and the like really fall flat for me.

throwaway314155
2 replies
23h11m

My experience has been that indeed, it is detrimental to juniors. But unlike your take, it is largely a boon to experienced developers. That you suggest "tedium" is involved for more senior developers suggests to me that you haven't given the tooling a fair chance or work with a relatively obscure technology/language.

layer8
1 replies
22h29m

I think you’ve misunderstood the GP. They are saying AI is useful to seniors for tasks that would otherwise be tedious, but doing those tedious tasks by hand would be formative for juniors, and it is detrimental to their growth when they do them using AI.

throwaway314155
0 replies
13h37m

Ah yeah you're right. In my defense, that sentence reads ambiguously without a couple of re-reads. There's a (not actually) implied double negative in there, or something, which threw me off.

Thanks for pointing it out with words instead of downvotes.

tensor
0 replies
21h6m

I think it really depends on the users. The same people who would just paste stackoverflow code until it seems to work and call it a day will abuse LLMs. However, those of us who like to know everything about the code we write will likely research anything an LLM spits out that we don't know about.

Well, at least that's how I use them. And to throw a counter to your hypothesis, I find that sometimes the LLM will use functions or library components that I didn't know of, which actually saves me a lot of time when learning a new language or toolkit. So for me, it actually accelerates learning rather than retarding it.

saintradon
0 replies
17h56m

Like any other tool, it depends on how you use it.

pphysch
0 replies
20h42m

It all depends on whether one is using it scientifically, i.e. having a hypothesis about the generated code before it exists, so that you can evaluate it.

When used scientifically, coding copilots boost productivity AND skills.

diob
0 replies
1d

I think it can be abused like anything else (copy-pasting from Stack Overflow until it works).

But for folks who are going to be successful with or without it, it's a godsend in terms of being able to essentially ask Stack Overflow questions and get immediate, non-judgmental answers.

Maybe not correct all the time, but that was true of Stack Overflow as well. So as always, it comes back to the individual.

arealaccount
0 replies
1d1h

This is why I recommend something like Copilot over ChatGPT to juniors wanting to use AI.

With Copilot you're at least still more or less driving, whereas with ChatGPT you're more the passenger and not growing intuition.

DanHulton
0 replies
1d1h

I was really hoping this study would be exploring that. Really, it's examining short-term productivity gains, ignoring long-term tech debt that can occur, and _completely_ ignoring effects on the growth of the software developers themselves.

I share your hunch, though I would go so far as to call it an informed, strong opinion. I think we're going to pay the price in this industry in a few years, where the pipeline of "clueful junior software developers" is gonna dry way up, replaced by a firehose of "AI-reliant junior software developers", and the distance between those two categories is a GULF. (And of course, it has a knock-on effect on the number of clueful intermediate software developers, and clueful senior software developers, etc...)

dimal
11 replies
4h8m

It matters what you measure. The studies only looked at Copilot usage.

I'm an experienced engineer. Copilot is worse than useless for me. I spend most of my time understanding the problem space, understanding the constraints and affordances of the environment I'm in, and thinking about the code I'm going to write. When I start typing code, I know what I'm going to write, and so a "helpful" Copilot autocomplete is just a distraction. It makes my workflow much, much worse.

On the other hand, AI is incredibly useful for all of those steps I do before actually coding. And sometimes getting the first draft of something is as simple as a well-crafted prompt (informed by all the thinking I've done prior to starting). After that, pairing with an LLM to get quick answers for all the little unexpected things that come up is extremely helpful.

So, contrary to this report, I think that if experienced developers use AI well, they could benefit MORE than inexperienced developers.

maxrecursion
2 replies
3h9m

For me, AI is like a documentation/Google-fu accelerant. There are so many little things where I know exactly what I want to do but can't remember the syntax or usage.

For example, writing IaC especially for AWS, I have to look up tons of stuff. Asking AI gets me answers and examples extremely fast. If I'm learning the IaC for a new service I'll look over the AWS docs, but if I just need a quick answer/refresher, AI is much faster than going and looking it up.
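To make that concrete, here's the kind of snippet I mean: a minimal AWS CDK sketch in Python (assuming aws-cdk-lib v2; the stack and bucket names are invented for the example). The parameter names like versioned and lifecycle_rules are exactly the details I never remember:

    from aws_cdk import App, Stack, Duration, aws_s3 as s3

    class DemoStack(Stack):
        def __init__(self, scope, construct_id, **kwargs):
            super().__init__(scope, construct_id, **kwargs)
            # A versioned bucket whose objects expire after 30 days.
            s3.Bucket(
                self, "LogBucket",
                versioned=True,
                lifecycle_rules=[s3.LifecycleRule(expiration=Duration.days(30))],
            )

    app = App()
    DemoStack(app, "DemoStack")
    app.synth()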

jiiam
0 replies
2h33m

My experience with IaC output is that it's so broken as to be not only unhelpful but actively harmful.

foobarian
0 replies
2h33m

I find that for AWS IaC specifically, with its high pace of releases and a ton of versions dating back more than a decade, the AI answers are a great springboard but require a bit of care to avoid mixing APIs.

heed
1 replies
1h31m

I like to use copilot when writing tests. It's not always perfect but makes things less tedious for me.

eric_h
0 replies
1h13m

I recently switched to Cursor while wrangling an inherited codebase that had effectively no tests, and it has saved me _hours_. It's generally terrible at any actual code refactoring, but it has saved me a great deal of typing while adding the missing test coverage.

scruple
0 replies
13m

I used the Copilot trial. I found myself waiting to see what it would come up with, analyzing it, and more often than not throwing it away for my own implementation. I quickly realized how much of a waste of time that was. I did find use for it in writing unit tests, and especially table-driven testing boilerplate, but that's not enough to maintain a paid subscription.

kbaker
0 replies
36m

Yeah, Copilot is meh. Aider-chat with GPT-4 earlier this year was a huge step up for some things.

But recently using Claude Sonnet + Haiku through OpenRouter also with aider, and it is like a new dimension of programming.

Working on new projects in Rust and a separate SPA frontend, it just ... implements whatever you ask like magic. Gets it about 90-95% right at the first prompt. Since I am pretty new to Rust, there are a lot of idiomatic things to learn, and lots of std convenience functions I don't yet know about, but the AI does. Figuring out the best prompt and context for it to be effective is now the biggest task.

It will be crazy to see where things go over the next few years... do all junior programmers just disappear? Do all programmers become prompt engineers first?

huijzer
0 replies
1h48m

So, contrary to this report, I think that if experienced developers use AI well, they could benefit MORE than inexperienced developers.

A psychology professor I know says this holds in general. For any new tool, who will be able to get the most benefit out of it? Someone who already has a lot of skill, or someone with less skill? With less skill, there is even a chance that the tool has a negative effect.

dfee
0 replies
28m

Contrarian take: I feel that copilot rewards me for writing patterns that it can then use to write an entire function given a method signature.

The more you lean into functional patterns (design some monads, don't do I/O except at the boundaries, use fluent programming), the more effective it is.

This is all in Java, for what it's worth. Though, I'll admit, I'm 3.5y into Java and rely heavily on Java 8+ features. Also, heavy generic usage in my library code gives the LLM a lot of leash to consistently make the right choice.

I don’t see these gains as much when using quicker/sloppier designs.

Would love to hear more from true FP users (Haskell, OCaml, F#, Scala).
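For a language-agnostic sketch of the shape I'm describing, here it is in Python (a toy example of mine, not from my Java codebase): pure, typed transformations in the core with I/O only at the edge, so a completion model can write whole functions from the signatures alone.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Order:
        subtotal: float
        tax_rate: float

    def total(order: Order) -> float:
        # Pure function: no I/O, no hidden state, so the signature plus
        # the types carry nearly the whole spec.
        return order.subtotal * (1 + order.tax_rate)

    def totals(orders: list[Order]) -> list[float]:
        return [total(o) for o in orders]

    if __name__ == "__main__":
        # I/O stays at the boundary.
        print(totals([Order(100.0, 0.08), Order(50.0, 0.05)]))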

dclowd9901
0 replies
3h38m

I think my experience mirrors your own. We have access at my job but I’ve turned it off recently as it was becoming too noisy for my focus.

I found the tool to be extremely valuable when working in unfamiliar languages, or when doing rote tasks (where it was easy for me to identify if the generated code was up to snuff or not).

Where I think it falters for me is when I have a very clear idea of what I want to do, and it's _similar_ to a bog-standard implementation, but I'm doing something a bit more novel. This tends to happen in "reduce"s or other more nebulous procedures.

As I’m a platform engineer though, I’m in a lot of different spaces: Bash, Python, browser, vanilla JS, TS, Node, GitHub actions, Jenkins Java workflows, Docker, and probably a few more. It gives my brain a break while I’m context switching and lets me warm up a bit when I move from area to area.

brandall10
0 replies
48m

Copilot isn't particularly useful. At best it comes up with small snippets that may or may not be correct, and rarely can I get larger chunks of code that work out of the gate.

But Claude Sonnet 3.5 with Cursor or Continue.dev is a dramatic improvement. When you have discrete control over the context (i.e. being able to select 6-7 files to inject), and with the superior ability of Claude, it is an absolute game changer.

Easy 2-5x speedup depending on what you're doing. In an hour you can craft a production-ready 100-LOC solution, with a full complement of tests, for something that might otherwise take half a day.

I say this as someone with 26 YOE, having worked in principal/staff/lead roles since 2012. I wouldn't expect nearly the same boost at less than senior experience, though, as you have to be quite detailed about what you actually want, and often take the initial solution - which is usually working code - and refine it a half dozen times into something that you feel is ideal and well factored.

Delk
8 replies
23h7m

It's probably worth going a bit deeper into the paper before drawing conclusions. And I think the study could really do a better job of summarizing its results.

The abstract and the conclusion only give a single percentage figure (26.08% increase in productivity, which probably has too many decimals) as the result. If you go a bit further, they give figures of 27 to 39 percent for juniors and 8 to 13 percent for seniors.

But if you go deeper, it looks like there's a lot of variation not only by seniority but also by company. Beside pull requests, results on the other outcome measures (commits, builds, build success rate) don't seem to be statistically significant at Microsoft, from what I can tell. And the PR increases only seem to be statistically significant for Microsoft, not for Accenture. And even then possibly only for juniors, though I'm not sure I've understood that correctly.

Of course the abstract and the conclusion have to summarize. But it really looks like the outcomes vary so much depending on the variables that I'm not sure it makes sense to give a single overall number even as a summary. Especially since statistical significance seems a bit hit-and-miss.

edit: better readability

baxtr
4 replies
21h26m

Re 26.08%: I immediately question any study (outside of physics etc) that provides two decimal places.

kkyr
3 replies
19h41m

Why?

jayd16
2 replies
18h14m

The assumption is that they almost certainly did not have the sample size to justify two decimal places. If they want two decimals for aesthetics over properly honoring significant figures then it calls the scientific rigor into question.

energy123
1 replies
11h49m

But they say "26.08% increase (SE: 10.3%)", so they make it clear that there's a lot of uncertainty around that number.

They could have said "26% (rounded to 0 dp)" or something, but that conveys even less information about the amount of uncertainty than just saying what the standard error is.

Delk
0 replies
9h31m

They could still have gone for one decimal. Or possibly even none, considering the magnitude of the SE, but I get that they might not want to say "SE: 10%".

The second decimal point doesn't essentially add any information because the data can't provide information at that precision. It's noise but being included in the summary result makes it implicitly look like information. Which is exactly why including it seems a bit questionable.
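To put numbers on it, using the point estimate and standard error quoted upthread, the 95% confidence interval is roughly

    26.08 \pm 1.96 \times 10.3 \approx [5.9,\ 46.3]

so the interval spans some forty percentage points, and hundredths of a percentage point in the point estimate are pure noise.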

That's not the major issue with the study, though, it's just one of the things that caught my eye originally.

svnt
1 replies
11h56m

To get a better picture of how this comes about: Microsoft has a study for their own internal product use, and wants to show its efficacy. The results aren’t as broadly successful as one would hope.

Accenture is the kind of company that cooperates and co-markets with large orgs like Microsoft. With ~300 devs in the pool they hardly move the population at all, and they cannot be assumed to be objective since they are building a marketing/consulting division around AI workflows.

The third anonymous company didn't actually have a randomized controlled trial, so it is difficult to say how one should combine their results with the RCTs. Additionally, I am sure that more than one large technology company ran similar trials and was interested in knowing their efficacy. That is to say, we can assume more data exist than just those included in the results.

Why did they select these companies, from a larger sample set? Probably because Microsoft and Accenture are incentivized by adoption, and this third company was picked through p-hacking.

In particular, this statement in the abstract is a very bad sign:

Though each separate experiment is noisy, combined across all three experiments

It is essentially an admission that individually, the companies don’t have statistically significant results, but when we combine these three (and probably only these three) populations we get significant results. This is not science.

Delk
0 replies
9h52m

Yeah, it does seem fishy.

The third company seems a bit weird to include in other ways as well. In the raw numbers in table 1, there seem to be exactly zero effects from the use of Copilot. Through the use of their regression model -- which introduces other predictors such as developer and week fixed effects -- they somehow get an estimated effect of +54%(!) from Copilot on the number of PRs. But the standard deviations are so far through the roof that the +54% is statistically insignificant even within the population of 3000 devs.

Also, they explain the introduction of the week fixed effects as a means of controlling for holidays etc., but to me it sounds like that could also introduce a lot of unwarranted flexibility into the model. This is a part where I don't understand their methodology well enough to tell whether it's a problem or not.
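For readers who haven't seen this setup: a two-way fixed-effects regression of the kind the paper appears to describe looks roughly like the following (my notation, not the paper's):

    y_{it} = \beta \cdot \mathrm{Copilot}_{it} + \alpha_i + \gamma_t + \varepsilon_{it}

where y_{it} is an outcome (say, weekly PRs) for developer i in week t, \alpha_i absorbs stable per-developer differences, \gamma_t absorbs week-level shocks such as holidays, and \beta is the estimated Copilot effect. The worry above is about how much work \gamma_t ends up doing.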

I generally err towards the benefit of the doubt when I don't fully understand or know something, which is why I focused more on the presentation of the results than on criticizing the study and its methodology in general. I'd have been okay with the summary saying "we got an increase of 27.3% for Microsoft and no statistically significant results for other participants".

But perhaps I should have been more critical.

throwthrowuknow
0 replies
6h21m

Personally, my feeling is that some of the difference is accounted for by senior devs applying their experience in code review and testing to the generated code, and therefore spending more time pushing back: asking for changes, rejecting bad generations, and taking the time to implement tests to ensure the new code or refactor works as expected. The junior devs are seeing more throughput because they are working on tasks that are easier for the LLM to get right, or making the mistake of accepting the first draft because it LGTM.

There is skill involved in using generative code models and it’s the same skill you need for delegating work to others and integrating solutions from multiple authors into a cohesive system.

einpoklum
5 replies
1d1h

Reminds me of a situation I've been in a few times already:

   Dev: Hey einpoklum, how do I do XYZ?
   Me:  Hmm, I think I remember that... you could try AB and then C.
   Dev: Ok, but isn't there a better/easier way? Let me ask ChatGPT.
   ...
   Dev: Hey einpoklum, ChatGPT said I should do AB and then C.
   Me:  Let me have a look at that for a second.
   Me:  -Right, so it's just what I read on StackOverflow about this, a couple of years ago.

Sometimes it's even the answer that _I_ wrote on StackOverflow and then I feel cheated.

slt2021
2 replies
1d

True; ChatGPT even imports libraries and configs that are 4-5 years outdated.

einpoklum
1 replies
21h12m

I didn't mean to say the answer is outdated... I try not to give people outdated answers :-\

Viliam1234
0 replies
3h0m

Neither do I, but those answers later become outdated anyway. :D

meiraleal
1 replies
19h16m

Sometimes it's even the answer that _I_ wrote on StackOverflow and then I feel cheated.

What's next? Royalties from companies that used the answer in their solutions? Do you also want copyright of this comment?

einpoklum
0 replies
9h5m

What's next?

Perhaps developing a little sense of humor? :-\

GoToRO
5 replies
1d5h

For me AI just brought back documentation. All new frameworks lack documentation big time. The last good one for me was a DOS book! I don't think newer developers even have an idea of what good documentation looks like.

Even so, AI will propose different things at different times and you still need an experienced developer to make the call. In the end it replaces documentation and typing.

jakub_g
1 replies
22h50m

Every now and then there's a HN discussion on "how do you manage internal documentation" where most commenters write something like "there's no point writing documentation because it quickly becomes outdated". (There were two such threads in just the last few days.) Might explain why nothing is documented anymore.

jcgrillo
0 replies
17h59m

Maybe that also explains why (approximately) nothing really works right.

vinibrito
0 replies
20m

Oh hey, I really love writing great docs, though of course I don't always have the opportunity to do so. Could you point me to one you consider great? It can be anything; no need for modern live docs. I want to see what we had in the past that was so great but lost; maybe I can incorporate some of it.

tensor
0 replies
21h4m

Yes, it can help you write documentation, but also, if you start with the documentation, e.g. the comment describing what the function does, AI becomes orders of magnitude better at actually helping write the function. So it actually pushes developers to better document their code!
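A tiny illustration of the comment-first flow in Python (the function and its spec are invented for the example): write the docstring first, and the model has the whole contract to work from.

    import re

    def slugify(title: str) -> str:
        """Lowercase `title`, collapse runs of non-alphanumeric characters
        into single hyphens, and strip leading/trailing hyphens."""
        # With a spec this precise, the body below is the kind of
        # completion you tend to get right on the first try.
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    print(slugify("Hello, World!"))  # hello-world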

choilive
0 replies
23h50m

I think AI just made documentation more valuable.

For public-facing projects, your documentation just became part of the LLM's training data, so it's now extra important that your documentation is thorough and accurate, because you will have a ton of developers getting answers from that system.

For private projects, your documentation can now be fed into a finetuning dataset or a RAG system, achieving the same effect.
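As a sketch of the private-project case (my minimal example: TF-IDF from scikit-learn standing in for embeddings, invented doc chunks, and the actual model call omitted):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical internal docs, already split into chunks.
    chunks = [
        "Deploys go through the staging pipeline; production needs two approvals.",
        "To rotate API keys, run the rotate-keys job and redeploy the gateway.",
        "The billing service stores invoices in the invoices_v2 table.",
    ]

    vectorizer = TfidfVectorizer().fit(chunks)
    doc_matrix = vectorizer.transform(chunks)

    def retrieve(question: str, k: int = 2) -> list[str]:
        # Rank chunks by cosine similarity to the question; return the top k.
        scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
        return [chunks[i] for i in scores.argsort()[::-1][:k]]

    context = "\n".join(retrieve("How do I deploy to production?"))
    prompt = f"Answer using only this documentation:\n{context}\n\nQ: How do I deploy to production?"
    # `prompt` would then go to whatever model you run; that call is out of scope here.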

29athrowaway
4 replies
1d1h

The study doesn't cover the long-term effect of using generative AI on a project, which is the deskilling of your workforce.

Because development will become an auction-like activity where the one that accepts more suggestions wins.

wrink
1 replies
20h56m

I think this is right. Just have to jump ship before it all explodes and people wonder what happened

Similar to gaming the stock price for a couple of quarters as a C-level, but now with more incentive for this sort of behavior at the IC level.

29athrowaway
0 replies
18h26m

You cannot jump ship now that every institution has been assimilated by the Borg, aka MBAs.

They assimilate companies and leave a bloated hollow mess behind.

https://www.youtube.com/watch?v=P4VBqTViEx4

jillesvangurp
1 replies
7h21m

I look at this very differently. There are a lot of grumpy people jealously guarding their "skills" and artisanal coding prowess, getting really annoyed at the juniors who come in and devalue what they do by just asking an AI to do the same thing and then moving on with their day. Young people are more mentally agile, and most young people are now growing up with LLMs as a toolchain that was always there.

I'm actually fairly senior (turning 50 next month) and I notice an effect that AI is having on my own productivity: I now take on mini projects that I used to delegate or avoid doing because they would take too much time or be too tedious. That's not the case anymore. The number of things I can use chat gpt for is gradually expanding. I notice that I'm skilling up a lot more rapidly as well.

This is great, because if you want to stay relevant you need to adapt to modern tools and technology. That's nothing new, of course; change is a constant in our industry. And there are always a lot of people who learn some tricks when they are young and then never learn anything new again. If you are lucky, some of that stuff stays relevant for a few decades. But mostly a lot of it gets unceremoniously dumped by younger generations.

The ability to use LLMs is becoming an important skill in itself and one that is now part of what I look for in candidates (including older ones). I don't have a lot of patience for people refusing to use tools that are available to them. Tell me how you use tools to your advantage; not how you have a tool aversion or are unwilling to learn new things.

anotherburner23
0 replies
1h58m

"Tell me how you use tools to your advantage; not how you have a tool aversion or are unwilling to learn new things" is a false binary. It leaves out the possibility that the tool in question is not suited for its purpose.

null_investor
3 replies
1d5h

They also added lots of technical debt, as I'm sure they used the AI to generate tests, and some of those tests could actually be testing bugs as the correct behavior.

I've already fixed a couple of tests like this, where people clearly used AI and didn't think about it, when in reality the test was checking something wrong.
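An invented example of the failure mode: the expected value in the generated test was derived from the buggy implementation's own output, so the test passes while locking the bug in.

    from datetime import date

    def days_between(a: date, b: date) -> int:
        return (b - a).days + 1  # off-by-one bug

    def test_days_between():
        # Expected value copied from what the buggy code returns,
        # so the test enshrines the bug as "correct behavior".
        assert days_between(date(2024, 1, 1), date(2024, 1, 2)) == 2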

Not to mention the rest of the technical debt added... measuring productivity in software development by the number of tasks completed is just wrong.

randomdata
1 replies
1d5h

> when in reality it was testing something wrong.

Must have seen AI write the implementation as well?

If you're still cognizant of what you're writing on the implementation side, it's pretty hard to see a test go from failing to passing if the test is buggy. It requires you to independently introduce the same bug the LLM did, which, while not completely impossible, is unlikely.

Of course, humans are prone to not understanding the requirements, and introducing what isn't really a bug in the strictest sense but rather a misfeature.

Jensson
0 replies
23h59m

it's pretty hard to see a test go from failing to passing

It's pretty easy to add a passing test and call it done without checking whether it actually fails in the right circumstances, and then you get a ton of buggy tests.

Most developers don't perform the fail-first-then-pass ritual, especially junior ones who copy code from somewhere instead of knowing what they wrote.
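For anyone who hasn't seen it, the ritual sketched in pytest terms (names invented): watch the test fail against a stub first, then implement.

    def parse_price(text: str) -> float:
        raise NotImplementedError  # step 1: stub, so the test below fails first

    def test_parse_price():
        assert parse_price("$1,234.50") == 1234.50

    # Step 2: only after seeing the failing run, replace the stub:
    #
    #     def parse_price(text: str) -> float:
    #         return float(text.replace("$", "").replace(",", ""))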

onlyrealcuzzo
0 replies
1d5h

They also added lots of technical debt as I'm sure they used the AI to generate tests and some of those tests could be actually testing bugs as the correct behavior.

Let's not forget that developers sometimes do this, too...

yieldcrv
2 replies
1d1h

Notably, less experienced developers showed higher adoption rates and greater productivity gains.

This is what I've seen too. I don't think less experienced developers have gotten better in their understanding of anything, just more exposed and quicker, while I do think more experienced developers have stagnated.

dpflan
1 replies
1d1h

while I do think more experience developers have stagnated

Is this because they are not using coding assistants? Are they resistant to using them? I have to say that the coding assistant is helpful; it is an ever-present rubber duck that can talk back with useful information.

yieldcrv
0 replies
1d

Across teams and organizations, what I've seen this year is that the "best" developers haven't even looked. The proverbial rockstars with the most domain knowledge have never gotten around to AI and LLMs at all.

This is compounded by adherence to misguided corporate policies that broadly prohibit the use of LLMs but were only meant to be about putting trade secrets into the cloud, not distinguishing between cloud and locally run language models. Comfortable people would never challenge this policy with critical thinking, and it requires a special interest to look at locally run models, even just to choose which one to run.

Many developers have not advocated for more RAM on their company issued laptop to run better LLMs.

And I haven't seen any internal language model that a company is running on its intranet. But it would be cool if there were a Hugging Face-style catalogue and server farm that companies could run, letting their employees choose models to prompt, with the latest models always available to load.

simpleranchero
2 replies
1d1h

Microsoft people on the research team proving Microsoft tools are good. Elsevier will be doing ads in research papers next.

Bjorkbat
1 replies
1d

This is far, far more rigorous than the experiment behind Microsoft's claim that Copilot made devs 55% faster.

The experiment in question split 95 devs into two groups and measured how long it took each group to set up a web server in JavaScript. The control group took a little under 3 hours on average; the Copilot group took 1 hour and 11 minutes on average.

https://github.blog/news-insights/research/research-quantify...

And it is thanks to this weak experiment that GitHub proudly boasts that Copilot makes devs 55% faster.

By contrast the conclusion that Copilot makes devs ~25% more productive seems reasonable, especially when you read the actual paper and find out that among senior devs the productivity gains are more marginal.

mistrial9
0 replies
22h11m

what do these three things have in common?

* control the agenda items in a formal meeting

* fill a fixed amount of time in an interview with no rebuttal

* design the benchmark experiments and the presentation of the results

zone411
1 replies
22h10m

This was likely Copilot based on GPT 3.5.

Microsoft: September 2022 to May 3rd, 2023

Accenture: July 2023 to December 2023

Anonymous Company: October 2023 to ?

Copilot _Chat_ update to GPT-4 was Nov 30, 2023: https://github.blog/changelog/label/copilot/

jjcm
0 replies
22h8m

Great callout. I'd be extremely curious what the results would look like with something like Cursor, or just using Claude out of the box. I'm amazed at just how easy it is to get simple small scripts up and going now with Claude.

sum1ren
1 replies
1d5h

When I'm using GenAI to write some code for me, I lose the internal mental state of what my code is doing.

As such, when I do have to debug problems myself, or dream up ideas for improvements, I can no longer do this properly due to the lack of internal mental state.

I wonder how people who have used GenAI coding successfully get around this?

cageface
0 replies
14h58m

This is the problem I have with it. It breaks me out of the flow state that’s so important to writing good code.

robodale
1 replies
3h2m

I've used Copilot and ChatGPT to help with algorithms where I'm unsure where to start. Actual case: "Write a performant algorithm to find the number of work days between now and a future date." It's trickier than you'd think, and makes a great interview question.

Both AI tools came back with... garbage. Loops within loops within loops, as they iterated through each day to check whether it is a weekend, whether it falls in a leap year (to account for the extra day), whether it is a holiday, etc.

However, ChatGPT provided a clever division to cut the dataset down to whole weeks, then process the remainder. I ended up using that portion in my final algorithm.
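For the curious, a rough Python sketch of that week-division idea (my reconstruction, not what ChatGPT produced; holiday handling deliberately omitted). Whole weeks contribute exactly 5 workdays each, so only the leftover days need a per-day check:

    from datetime import date

    def workdays_until(target: date, start: date) -> int:
        """Count Mon-Fri days in [start, target) without a day-by-day loop."""
        total_days = (target - start).days
        if total_days <= 0:
            return 0
        full_weeks, leftover = divmod(total_days, 7)
        count = full_weeks * 5  # every full week has exactly 5 workdays
        for i in range(leftover):
            # weekday(): Monday == 0 ... Sunday == 6. The leftover days have
            # the same weekdays as the first `leftover` days of the range,
            # since full weeks shift the weekday by multiples of 7.
            if (start.weekday() + i) % 7 < 5:
                count += 1
        return count

    print(workdays_until(date(2024, 9, 30), date(2024, 9, 16)))  # 10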

So, my take on AI coding tools are: "Buyer beware. Your results may vary".

zzleeper
0 replies
2h53m

I was never able to use them well, but cursor.ai together with Claude is starting to feel actually useful.

noobermin
1 replies
1d5h

Can someone potentially smarter than me explain how the data could even hope to be salvaged, when table I clearly shows that for the majority of the experiment metrics the mean is less than the SD? Taken blindly, the results range from simply unbelievable to outright lying, the sort of thing you see submitted to garbage open-access journals. The text describing the model they employ afterwards is not convincing enough for me and seems light on details. I mean, wouldn't any reasonable reviewer demand more?

I know preprints don't need polish but this is even below the standard of a preprint, imo.

smokel
0 replies
1d4h

At least they acknowledge this:

"However, the table also shows that for all outcomes (with the exception of the Build Success Rate), the standard deviation exceeds the pre-treatment mean, and sometimes by a lot. This high variability will limit our power in our experimental regressions below."

What I find even stranger is that the values in the "control" and "treatment" columns are so similar. That would be highly unlikely given the extreme variability, no?

michaelteter
1 replies
7h16m

ChatGPT and to a lesser degree Copilot have been very valuable to me this year.

Copilot often saves me a lot of typing on a 1-3 line scope, occasionally surprising me with exactly what I was about to write on a 5-10 line scope. It’s really good during rearrangement and early refactoring (as you are building a new thing and changing your mind as you go about code organization).

ChatGPT, or “Jimmy” - as I like to call him - has been great for answering syntax questions, idiom questions, etc. when applying my general skills based on other languages to ones I’m less familiar with.

It has also been good for “discussing” architecture approaches to a problem with respect to a particular toolset.

With proper guidance and very clear prompting, I usually get highly valuable responses.

I would roughly guess that these two tools have saved me 2-3 months of solo time this year - nay, since April.

Once I get down into the deep details, I use Jimmy much less often. But when I hit something new, or something I long since forgot, he's ready to be a relative expert / knowledge base.

iLoveOncall
0 replies
7h9m

It saved you 2-3 months over a 5 months period?

Unless one is an absolutely terrible developer working on incredibly simple problems, this is simply impossible.

agentultra
1 replies
22h4m

Empirical studies like this are hard to conduct... I'm curious, though: this study was authored by at least two folks from Microsoft, and one of the sample groups in the study was also from Microsoft. That stands out as odd to me because Microsoft also owns the AI tool being used in the study and would definitely want a favourable conclusion in this paper.

Is that a flag we should be watching out for?

anonymousDan
0 replies
22h1m

It sure is.

MBCook
1 replies
19h47m

It lets people make more PRs. Woohoo. Who cares?

Does it increase the number of things that pass QA?

Do the things done with AI assistance have fewer bugs caught after QA?

Are they easier to extend or modify later? Or do they have rigid and inflexible designs?

A tool that can help turn developers into code monkeys of unknown quality is not something I'm looking for. I'm looking for a tool that helps developers find bugs or design flaws in what they're doing. Or maybe write well-designed tests.

Just counting PRs doesn’t tell me anything useful. But it triggers my gut feeling that more code per unit time = lower average quality.

anthomtb
0 replies
2h36m

Dev - “Hey copilot, please split this commit into 5 separate commits”

Copilot - “okay I can do that for you! Here are your new commits!”

Senior Dev - “why? Your change is atomic. I’ll tell management to f-off, in a kind way, if they bring up those silly change-per-month metrics again”

zombiwoof
0 replies
20h59m

I sometimes wonder if LeetCode killed the Software Star

zitterbewegung
0 replies
1d1h

I am an average developer with more than five years of experience in Python. I was using ChatGPT to create prototypes in something I was familiar with, and I was able to debug the result to make it work, though I wasn't specific enough in my prompt to mention that the e-paper display had seven colors instead of black and white.

When I was using ChatGPT for the qualifiers of a CTF called Hack-A-Sat at DEF CON 31, I could not get anything to work, such as GNU Radio programs.

In my experience, if you have the ability to debug, it is productive; but when you don't understand the domain, you run into problems.

wouldbecouldbe
0 replies
1d5h

I'm guiding a few juniors, and sometimes they write pretty good code with the help of GPT, but then in meetings they have trouble understanding and explaining things.

I think it's a big productivity boost, but there's also a chance that the learning rate might actually be significantly slower.

throwaway_2494
0 replies
4h21m

I think that over time, AI will tend to make all programmers into average programmers.

For those who are beginners, it can bring their skills up and make them look like better developers than they are.

More insidiously, expert programmers who overuse AI might also regress to the mean as their skills deteriorate.

smokel
0 replies
1d5h

Interestingly, they use the number of pull requests as a statistic for productivity. Not that I know of a better metric, but I wonder whether it's an accurate one. It seems similarly misguided as looking at lines of code.

If an AI tool makes me more productive, I would probably either spend the time saved browsing the internet, or use it to attempt different approaches to solving the problem at hand. In the latter case, I would perhaps make more reliable or more flexible software. Which would also be almost impossible to measure in a scientific investigation.

In my experience, the differences in developer productivity are so enormous (depending on existing domain knowledge, motivation, or management approach), that it seems pretty hard to make any scientific claim based on looking at large groups of developers. For now, I prefer the individual success story.

muglug
0 replies
1d5h

This is a decently-thorough study, using PRs as a productivity metric while also tracking build failures (which remained constant at MSFT but increased at Accenture).

Would love to see it replicated by researchers at a company that does not have a clear financial interest in the outcome (the corresponding author here was working at Microsoft Research during the study period).

Before moving on, we discuss an additional experiment run at Accenture that was abandoned due to a large layoff affecting 42% of participants

Eek

makk
0 replies
19h24m

Notably, less experienced developers showed higher adoption rates and greater productivity gains.

And that is why demand for senior developers is going to go through the roof. Who is going to unfuck the giant balls of mud those inexperienced devs are slinging together? Who’s going to keep the lights on?

lolinder
0 replies
1d5h

The most interesting thing about this study for me is that when they break it down by experience levels, developers who are above the median tenure show no statistically significant increase in 'productivity' (for some bad proxies of productivity), with the 95% confidence intervals actually dipping deep into the negatives on all metrics (though leaning slightly positive).

This tracks with my own experience: Copilot is nice for resolving some tedium and freeing up my brain to focus more on deeper questions, but it's not as world-altering as junior devs describe it as. It's also frequently subtly wrong in ways that a newer dev wouldn't catch, which requires me to stop and tweak most things it generates in a way that a less experienced dev probably wouldn't know to. A few years into it I now have a pretty good sense for when to use Copilot and when not to—so I think it's probably a net positive for me now—but it certainly wasn't always that way.

I also wonder if the possibly-decreased 'productivity' for more senior devs stems in part from the increase in 'productivity' from the juniors in the company. If the junior devs are producing more PRs that have more mistakes and take longer to review, this would potentially slow down seniors, reducing their own productivity gains proportionally.

heisenberg1
0 replies
1d5h

This is obvious. Of course you get an increase in productivity, especially as a junior, when current AI is able to solve LeetCode.

BUT, as a lot of people have mentioned, you get code that the person who "wrote" it does not understand. So the next time there's a bug in it, good luck fixing it.

My take so far: AI is great, but only for non-critical, non-core code. Everything done for plotting and scripting is awesome (things that could take days to implement are done in minutes with AI), but core library functions I wouldn't outsource to the AI right now.

godelski
0 replies
23h48m

B(i)ased as fuck.

I think this post from the other day adds some important context [0]. In that study, kids with access to GPT did way more practice problems but performed worse on the test. The most important part was the finding that while GPT usually got the final answer right, the logic behind it was often wrong, meaning the solution as a whole is wrong. This is true for math and code.

There's the joke: there are two types of 10x devs, those who do 10x work and those who finish 10x Jira tickets. The problem with this study is the assumptions it makes, which are quite common and naive in our industry. It assumes that PRs and commits are measures of productivity, and it assumes passing review is a good quality metric. These are so variable between teams. Plenty are just "lgtm" reviews.

The issue here is that there's no real solid metric for things like good code. Meeting the goals of a ticket doesn't mean you haven't solved the problem so poorly that you're the reason 10 new tickets will be created. This is the real issue here, and the only real way to measure it is Justice Potter Stewart's test ("I know it when I see it"), which requires an expert evaluator. In other words, tech debt. Which is something we're seeing a growing rise in: all the fucking enshittification.

So I don't think the study here contradicts [0]; in fact, I think they're aligned. But I suspect people who are poor programmers (or non-programmers) will use this as evidence for what they want to see, believing naive things like lines of code or numbers of commits/PRs are measures of productivity rather than hints of a measure. I'm all for "move fast and break things" as long as there's time set aside to clean up the fucking mess you left behind. But there never is. It's like we have business ADHD. There's so much lost productivity because so much focus is placed on short-term measurements and thinking. I know medium- and long-term thinking are hard, but humans do hard shit every day. We can do a lot better than a shoddy study like this.

[0] https://news.ycombinator.com/item?id=41453300

gibsonf1
0 replies
21h48m

Yes, but what about all the bugs created and time to debug?

ergonaught
0 replies
1d5h

The result is "less experienced people got more stuff done". I do not see an assessment of whether the stuff that got done was well done.

The output of these tools today is unsafe to use unless you possess the ability to assess its correctness. The less able you are to perform that assessment, the more likely you are to use these tools.

Only one of many problems with this direction, but gravity sucks, doesn't it.

dheerkt
0 replies
18h58m

Study funded by Microsoft, conducted by Microsoft engineers, says Microsoft product makes engineers +X% more productive.

croes
0 replies
21h38m

Notably, less experienced developers showed higher adoption rates and greater productivity gains.

If you start low, it's easier to get greater growth rates.

The biggest step is the first one: 0% to 1% is infinite growth.

clbrmbr
0 replies
6h26m

I used to be super productive at the raw keyboard. Then RSI got to me. But with CoPilot, I’m back to my normal productivity. For me it’s a life-saver as it allows fast typing with minimal hand strain.

charlie0
0 replies
19h10m

On the surface, this appears to be great news.

However, there's a big question as to whether these are short-term productivity gains or longer-lasting ones. There's a hypothesis that AI-generated code will slowly spaghettify a codebase.

Is 1-2 years sufficiently long to take this into consideration, or to disprove the spaghettification?

andai
0 replies
7h48m

Didn't Microsoft find way higher gains in a previous study? I wonder where the differences (also between companies) come from.

PaulHoule
0 replies
17h55m

It beats StackOverflow, but that might be saying more about SO.