Home-Cooked Software and Barefoot Developers

9dev
60 replies
21h11m

If I have learnt one thing working in software engineering, specifically on AI-enabled products empowering junior engineers, and using Copilot professionally, it's that you need even more experience to detect the subtle ways the model fails to understand your domain and your specific intent. If you don't know exactly what you're after and just use the LLM as a sparring partner to bounce your ideas off, you're in for a lot of pain.

Depending on the way you phrase questions, ChatGPT will gleefully suggest a wrong approach, just because it's so intent on satisfying your whim instead of saying No when it would be appropriate.

And in addition to that, you don't learn when something else figures out a new concept for you. If you already have a feel for the code you would write anyway, and only treat the model as a smart autocomplete, that doesn't matter. But for an apprentice, or a layperson, that will keep code as scary and unpredictable as before. I don't think that should be the answer.

JohnFen
21 replies
20h41m

you don't learn when something else figures out a new concept for you

This.

If LLMs were actually some magical thing that could write my code for me, I wouldn't use them for exactly this reason. Using them would prevent me from learning new skills and would actively encourage my existing skillset to degrade.

The thing that keeps me valuable in this industry is that I am always improving, always learning new skills. Anything that discourages that smells like career (and personal) poison to me.

TeMPOraL
6 replies
11h9m

In other words, your objection isn't to LLMs, it's to delegation, since the exact same argument would apply to having "some magical thing that could write my code for me" be your co-worker or a contractor.

It's fair for the type of code you want to write for your own growth. But even with that, there's more than enough bullshit boilerplate and trivial cross-language differences that contribute zero (or negatively) to your growth, and it's worth having someone else, or something else, write them for you. LLMs are affordable for this, where people usually are not.

mlsu
5 replies
10h59m

If that's the only thing LLMs are good for, my money for improving software productivity is on good old-fashioned developer tools.

A better language reduces boilerplate. A better compiler helps you reason about errors. Better language features help you be more expressive. If I need to spool up a jet turbine feeding H100s just to decipher my error messages, the solution is a better compiler, not a larger jet engine.

I myself have noticed this: a wild heterogeneity in the types of tasks for which LLMs are helpful. Their appearance as a silver bullet withers the closer you get to essential complexity.

lumb63
2 replies
7h39m

One of my fears with the growing use of LLMs for writing software is that people are using them as a catch-all that prevents them from feeling the pain that indicates to us that there is a better way to do something.

For example, nobody would ever have developed RAII if manual memory management didn't peeve them. Nobody would have come up with C if they didn't feel the pain of assembly, or Rust without C, or TypeScript without JavaScript, etc. Nobody would have come up with dozens of the genius tools that allow us to understand and reason about our software and enable us to debug it or write it better, had they not personally and acutely felt the pain.

At my job, the people most enthusiastic about LLMs for coding are the mobile and web devs. They say it saves them a lot of time spent writing silly boilerplate code. Shouldn’t the presence of that boilerplate code be the impetus that drives someone to create a better system? The entire firmware team has no interest in the technology, because there isn’t much boilerplate in C. Every line means something.

I worry LLMs will lead to terrible or nonexistent abstractions in code, making it opaque, or inefficient, or incorrect, or all of the above.

ddj231
0 replies
5h25m

To add to this, LLMs write pretty trite poetry, for example. If we think of code from the creative side, it’s hard to imagine that we’d want to simply hand all coding over to these systems. Even if we got working solutions (which is a major undertaking for large systems), it seems we’d be sacrificing elegance, novelty, and I’d argue more interesting explorations.

JamesonNetworks
0 replies
5h9m

It's an interesting observation for sure, but those developers for mobile and web sit at the tippy top of all other abstraction layers. At that position a certain amount of boilerplate is needed, because not all controls and code-behind are the same, and there is a mighty collection of hacks on hacks to get a lot of things done. I think this is more of a "horses for courses" thing, where developers higher in the abstraction stack will always benefit from LLMs more, and developers lower down the stack have more agency for improvement. At the end of the day, I think everyone gets more productive, which is a net positive. It's just that not all developers are after the same goal (application devs vs library devs vs system devs).

TeMPOraL
1 replies
10h40m

If I need to spool up a jet turbine feeding H100s just to decipher my error messages, the solution is a better compiler, not a larger jet engine.

You don't need a jet turbine and H100s for that; the world needs them once to gain that ability, and exercising it costs comparatively little in GPU time. Like, I can't say how much GPT-4o takes in inference, but Llama-3 8B works perfectly fine and very fast on my RTX 4070 Ti, and it has a significant enough fraction of the same capabilities.

Speaking of:

A better compiler helps you reason about errors.

There's only so much it can do. And yes, I've actually set up an "agent" (predefined system prompt) so I can just paste the output of build tooling verbatim, and get it to explain error messages in it, which GPT-4 does with 90%+ accuracy. Yes, I can read and understand them on my own. But also no, at this point, parsing multiple screens of C++ template errors or GCC linker failures is not a good use of my life.
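A minimal sketch of that kind of setup in Python, assuming the official openai package and an API key in the environment; the model name and prompt wording here are illustrative, not necessarily what was actually used:

    # explain_errors.py - pipe raw build output in, get an explanation out.
    # Assumes the `openai` package and OPENAI_API_KEY in the environment.
    import sys
    from openai import OpenAI

    SYSTEM_PROMPT = (
        "You are a build-output analyst. The user pastes raw compiler/linker "
        "output verbatim. Identify each distinct error, explain its likely "
        "cause in plain language, and suggest a fix. Do not invent errors."
    )

    def explain_build_output(raw_output: str) -> str:
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative; any capable model works
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": raw_output},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        # Usage: make 2>&1 | python explain_errors.py
        print(explain_build_output(sys.stdin.read()))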

(Environment-wise, I'm still net ahead of a typical dev anyway, by staying away from Electron-powered tooling and ridiculously wasteful modern webdev stacks.)

A better language reduces boilerplate.

Yes, that's why everyone is writing Lisp, and not C++ or Java or Rust or JS.

Oh wait, wrong reality.

Better language features help you be more expressive.

That's another can of worms. I'm not holding out much hope here, because as long as we insist on working directly on a plaintext codebase treated as the single source of truth, we're already at the Pareto frontier in terms of language expressiveness. Cross-cutting concerns are actually cross-cutting; you can't express them all simultaneously in a readable way, so all the modern language design advances are doing is shifting focus and complexity around.

LLMs don't really help or hurt this either, though they could paper over some of the problem by raising the abstraction level at which programmers edit their code, in lieu of the tooling actually being designed to support such operations. I don't think this would be good - I'd rather we stopped with the plaintext single-source-of-truth addiction in the first place.

Its appearance as a silver bullet withers the closer you get to essential complexity.

100% agreed on that. My point is, dealing with essential complexity is usually a small fraction of our work. LLMs are helpful in dealing with incidental complexity, which leaves us more time to focus on the essential parts.

eru
0 replies
10h19m

Yes, that's why everyone is writing Lisp, and not C++ or Java or Rust or JS.

Oh wait, wrong reality.

You are a bit too cynical. The tools (compilers and interpreters and linters etc.) that people are actually using have gotten a lot better. Partly by moving to better languages, like more Rust and less C, or TypeScript instead of JavaScript; but also from compilers for existing languages getting better, see especially the arms race between C compilers kicked off by Clang throwing down the gauntlet in front of GCC. They both got a lot better in the process.

(Common) Lisp was a good language for its time. But I wouldn't hold it up as a pinnacle of language evolution. (I like Lisps, and especially Racket. And I've programmed about half of my career in Haskell and OCaml. So you can rest assured about my obscure and elitist language cred. I even did a year of Erlang professionally.)

---

Btw, just to be clear: I actually agree with most of what you are writing! LLMs are already great for some tasks, and are still rapidly getting better.

You are also right that despite better languages being available, there are many reasons why people still have to use e.g. C++ here or there, and some people are even stuck on ancient versions of C++, or on ancient compilers for it, with even worse error messages. LLMs can help.

rickydroll
4 replies
5h19m

This all depends on your mental relationship with the LLM. As somebody else pointed out, this is an issue of delegation. If you had one or more junior programmers working for you writing code according to what you specify, would you have the same worry?

I treat LLMs as junior programmers. They can make my life easier and occasionally make it harder. With that mindset, you start out knowing that they're going to make stupid mistakes, and that builds your skill of detecting mistakes in other people's code. Also, like biological junior programmers, nonbiological junior programmers quickly show how bad you are at giving direction and force you to improve that skill.

I don't write code by hand because my hands are broken, and I can't use the keyboard long enough to write any significant amount of code. I've developed a relationship with nonbiological junior programmers such that I now tell them, via speech recognition, what to write and what information they need to create code that looks like code I used to create by hand.

Does this keep me from learning new skills? No. I'm always making new mistakes and learning how to correct them. One of those lessons was that you don't learn something significant from writing code. Career-sustaining knowledge comes at a much higher level.

manuel_w
1 replies
3h26m

I don't write code by hand because my hands are broken, and I can't use the keyboard long enough to write any significant amount of code. I've developed a relationship with nonbiological junior programmers such that I now tell them, via speech recognition, what to write and what information they need to create code that looks like code I used to create by hand.

Can you please write a blog post on what tools you use for that?

My hands are fine, still, I'd love to just verbally explain what I want and have someone else type the code for me.

rickydroll
0 replies
26m

Sure. I've been meaning to do a comparison of Aqua and Dragon. I'll do one of Copilot and GPT-whatever. Give me 6 months or so, I've got marketing to do. :-)

finnthehuman
0 replies
53m

If you had one or more junior programmers working for you writing code according to what you specify, would you have the same worry?

I try hard to avoid the scenario where junior programmers write things they don’t understand. That’s actually my biggest frustration and the difference between juniors I enjoy vs loathe working with. There are only so many ways to remain supportive and gently say “no, for real, learn what you’re working on instead of hacking something brittle and incomplete together.”

JohnFen
0 replies
4h16m

If you had one or more junior programmers working for you writing code according to what you specify, would you have the same worry?

It's a great question. My answer generally is yes, I would (and do), but I'm willing to sacrifice a bit in order to ensure that the junior developer gets enough practical experience that they can succeed in their career. I'm not willing to make such a sacrifice for a machine.

fragmede
4 replies
19h39m

It sounds like you've discouraged yourself from learning the skill of using an LLM to help you code.

_heimdall
2 replies
15h7m

That only matters if the assumption is that any skill is worth learning simply because it's a skill.

You could learn the skill of running yourself over with a car, but it's either a skill you'll never use or the last skill you'll use. Either way, you're probably just as well off not bothering to learn that one.

rrr_oh_man
1 replies
13h25m

"running yourself over with a car" feels very different from "learning to use LLMs to your advantage".

_heimdall
0 replies
6h47m

The GP was pointing out that learning to use an LLM, in their opinion, would stop them from learning other new skills and erode their existing ones.

In that context I think the analogy holds. Using an LLM halts your learning, as does running yourself over with a car. It's an exaggerated point for sure, but I think it points to the fact that you don't have to learn to use LLMs simply because it's a skill you could learn, especially if you think it will harm you long term.

JohnFen
0 replies
4h10m

I would disagree with that take, actually. Perhaps I haven't yet figured out how to leverage LLMs for that (and don't get me wrong, I have certainly experimented as has most of my team), but I'm not discouraged from it.

I'm just trying to be clear-eyed about the risks. As an example, code completion tools in IDEs will cause me to get rusty in important baseline skills. LLMs present a similar sort of risk.

eru
2 replies
10h31m

Eh, your argument could also be used against compilers. Or against language features like strong typing in something like Rust, instead of avoiding our bugs through very careful analysis when writing C code like God intended.

Using an LLM _is_ a skill, too.

advael
1 replies
8h38m

I agree it's a skill, but I hear this analogy a lot and think it's not a great one.

A feedback loop with an LLM is useful for refining ideas and speeding up common tasks. I even think it can be a massive productivity boost for one of the most common professional dev tasks, given the right tooling. I work a lot of mercenary gigs and need to learn new languages all the time, and something like phind.com is great for giving me basic stuff that works in a language whose idioms I don't know. The fact that it cites its sources and gives me links means I can deal with it being wrong sometimes, and also more easily drill down and learn more when appropriate.

However, LLMs are super not like compilers. They simply do not create reliable simplifications in the same way. A higher-level language creates a permanent, reliable, and transferable reduction in complexity for the programmer, and this only works because of that reliability. If I write a function in Scala, it probably has a more complicated equivalent in JVM bytecode, but it works the same every time; the higher-order abstraction is semantically equivalent, and I can compose it with other functions and decompose it into its constituent parts reliably without changing the meaning. Programming languages can be direct translations of each other in a way that adding the fuzziness of natural language makes basically impossible. An abstraction in a language can be used in place of the complex underlying reality, and even modified to fit new situations predictably and reliably, without drastic risk of not working the same way. This reliability also means that the simplification has compounding returns, as it's easier to reason about and expand on for some future maintainer or even my future self.

LLMs for code generation, at least in their current form, lack all these important properties. The code they generate is a fuzzy guess rather than a one-to-one translation. Often it's a good guess! But even when it is, it's generating code that's no more abstract than what you would have had to write before, so putting it into your codebase still gives you just as much additional complexity to take into account when expanding on it. Maybe the LLM can help with that, maybe not. Asking an LLM to solve a problem in one case can fail to transfer to another one in unpredictable ways.

You also aren't able to use it to make permanent architectural simplifications recursively. We can't, for example, save a series of simple English instructions instead of the code that's generated, then treat that as a moving piece we can recombine by piping it into another instruction to write a program, and so on. This would also increase the cost of computing your program significantly, obviously, but that's actually a place where, well, not a compiler but an interpreter is a decent analogy. My main concern with LLMs being deployed by developers en masse is kind of already happening, and it predates LLMs: I notice that codebases where people have used certain IDEs or other code-generation tools accumulate a lot of unnecessary, hard-to-maintain complexity, because the programmer using the tools got used to just "autogenerating a bunch of boilerplate". That's fine in a vacuum, but it piles up a ton of technical and maintainability debt really fast if you're not actively mindful of it and taking steps in your workflow to prevent it, like having a refinement and refactoring phase in your feedback loop.

I think LLMs are useful tools and can help programmers a lot, and may even lead to "barefoot programmers" embedded in local community needs, which I love. But I hear the analogy to compilers a lot and I think it's a bad one, managing to miss most of what's good about compilers while also misunderstanding the benefits and pitfalls of generative models.

eru
0 replies
7h16m

I mostly agree about a certain layer of semantics in our 'normal' programming languages. And most of the time, that level is good and good enough. But whether e.g. certain compiler optimisations kick in or not is sometimes much harder to forecast.

Btw, currently I wouldn't even dare to compare LLMs to compilers. For me the relevant comparison would be to 'Googling StackOverflow': really important tools for a programmer, but nothing you can rely on to give you good code. Nevertheless they are tools whose mastery is an important skill.

Remember how in yesteryears we complained about people copy-and-pasting from StackOverflow? Just like today we complain about people committing the output of their LLM directly.

---

I do hope that mechanical assistance in programming keeps improving over time. I have quite a few tricky technical problems that I would like to see solved in my lifetime.

posix_monad
0 replies
7h56m

Copilot (and so on) are simultaneously incredible and not nearly enough.

You cannot ask it to build a complex system and then use the output as-is. It's not enough to replace developer knowledge, but it also inhibits acquiring developer knowledge.

JohnMakin
8 replies
20h4m

This is not really a new problem; the previous version was "idk, I copy-pasted it from stack overflow." True expertise meant realizing that the answer often lay buried in sub-comments and that the top-voted answer was often not the correct one. LLMs naturally do not realize any of this.

x0x0
3 replies
19h38m

I kind of disagree.

ChatGPT will make something that looks much more like it should work than your copy-pasted code from Stack Overflow. It looks like it does exactly what you want. It's just riddled with bugs, either major (invented an API out of whole cloth; it would sure be convenient if that API did exist, though!) or subtle (oh, this bash script will shit the bed and even overwrite data if your paths have spaces). Or it will happily combine code across major API revisions of e.g. Bootstrap.
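To make the space-in-paths failure concrete, here is the same class of bug translated into Python for illustration (the original was a bash quoting bug; the paths here are made up):

    # Interpolating paths into a shell command string splits on whitespace;
    # passing an argument list does not. Hypothetical paths for illustration.
    import subprocess

    src = "/backups/my photos"   # note the space
    dst = "/mnt/archive"

    # Broken: the shell sees three arguments: /backups/my, photos, /mnt/archive
    subprocess.run(f"cp -r {src} {dst}", shell=True)

    # Safe: each path travels as a single argument, spaces and all
    subprocess.run(["cp", "-r", src, dst], check=True)

(In the bash original, the equivalent fix is quoting: cp -r "$src" "$dst".)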

I still use it all the time; I just think it makes already-expert users faster while being of much more limited use to people who are not yet experts. In the above case, after being told to make the paths space-safe, it did so correctly. You just had to know to ask for that...

JohnMakin
2 replies
18h4m

You're kind of saying some of what I am trying to say, so I'm not sure we disagree. I boil the core problem described down to roughly this: people lacking the expertise to judge code advice critically are putting bad code they do not understand into places they shouldn't. That problem is not new. LLMs are a variation on it, with the added downside of not letting you view the surrounding context for yourself to determine the correct answer on your own. The fact they are so convincing at it is a different, but definitely new and horrific problem on its own.

Meta commentary on this, I honestly don’t mind if this is the hell that the business/management world wants to build for themselves. I’ll make a fortune cleaning it up.

x0x0
0 replies
15h48m

Ah, you're right, apologies for not reading your comment more carefully.

henrikschroder
0 replies
17h11m

The fact they are so convincing at it is a different, but definitely new and horrific problem on its own.

I tried one to help me get the syntax right for the config file for a program. It started by generating a config file for the latest version and not the old one I was using, but once I told it that, it fixed that convincingly and spit out a config file that looked like it was for my version.

However, the reason I asked for help was that the feature was very badly documented, and yet ChatGPT happily invented syntax for the thing I was having problems with. And every time I told it that it didn't look quite right, it confidently invented new syntax for the feature. Everything it made up looked pretty damn convincing; if I had designed the config file format, I could have gone with any of those suggestions, but they were all wrong, as evidenced by the config file validator in the program.

At least Stack Overflow had comments and votes that helped you gauge the usefulness of the answer. These glorified toasters have neither.

FlyingSnake
2 replies
12h58m

Almost every "copy-paste from SO" answer was accompanied by lots of caveats from other human commenters. This feedback loop is sorely missing with LLM coding assistants.

TeMPOraL
1 replies
11h6m

LLMs literally read all that commentary in training, so they're taking it into account, not regurgitating the top-voted answer from SO. They're arguably better at this than junior devs.

ben_w
0 replies
6h54m

While LLMs are better at reading the surrounding context, I am not convinced they are particularly good at taking it on board (compared to an adult human, obviously fantastic compared to any previous NLP).

Biggest failure mode I experience with LLMs is a very human-like pattern, what looks like corresponding with an interlocutor who absolutely does not understand a core point you raised 5 messages earlier and have re-emphasised on each incorrect response:

--

>> x

> y

not y, x

oh, right, I see… y

--

etc.

yoyohello13
0 replies
14h52m

At least "copy paste from stack overflow" was kind of an in-joke. There was a little social stigma attached to it. Everyone knew they were being lazy and they really shouldn't be doing it.

LLMs are different because devs declare with pride that they "saved so much time" by just letting ChatGPT do it for them.

deafpolygon
6 replies
12h38m

ChatGPT will gleefully suggest a wrong approach, just because it's so intent on satisfying your whim instead of saying No when it would be appropriate

Therein lies the mistake. Too many people assume ChatGPT (and similar LLMs) are capable of reasoning. It's not. It is simply just giving you what is likely the 'correct' answer based on some sort of pattern.

It doesn't know what's wrong, so it's not aware it's giving you an inappropriate answer.

logicallee
5 replies
11h50m

Too many people assume ChatGPT (and similar LLMs) are capable of reasoning. It's not.

Sure it is, just like a child or someone not very good at reasoning. You can test ChatGPT yourself on some totally novel, ad hoc reasoning task you invent, with a single correct conclusion that takes reasoning to arrive at, and it will probably get it if it's really easy, even if you take great pains to make it something totally new. Try it yourself (preferably with ChatGPT 4o) if you don't believe me. Please share your results.

TeMPOraL
2 replies
10h54m

Sure it is, just like a child or someone not very good at reasoning.

That's a good way to think about it. Treat GPT-4 as having the mentality of a 4-year-old kid. A kid this age will take any question you ask at face value, because it hasn't learned yet that adults often don't ask questions precisely enough, don't realize the assumptions they make in their requests, don't know what they don't know, and are full of shit. A four-year-old won't think of evaluating whether or not the question itself makes sense; they'll just do their best to answer it, which may involve plain guessing what the answer could be if one isn't apparent.

Remember that saying "I don't know" isn't an innate skill in humans either - it's an ability we drill into kids for the first decade or two of their lives.

Gormo
0 replies
4h50m

Another similarity is that 4-year-olds will often pick up words and phrases from people around them, and learn associations between those words and phrases without yet having learned their meanings or having any of their own experiences to relate them to.

So a young child might answer your question with a response that he heard other people say to a similar question, without actually understanding what he's saying. LLMs are basically this at a grand scale.

9dev
0 replies
9h34m

That doesn't tell the whole story tho. It's a 4-year-old kid that has been thoroughly conditioned to always be positive and affirming in their reply, even if it means making something up. That isn't something kids usually do (it's not something humans usually do, at least not the way ChatGPT does), and that may be part of why it's so confounding.

It's not just "I don't know", really. It feels like OpenAI ingrained the essence of North American culture into the model (Sorry North Americans, I really don't mean this in a demeaning way!), as in, the primary task of ChatGPT is supposed to be to make its users happy and feel good about themselves, taking priority over providing accurate answers and facts.

xdavidliu
0 replies
2h53m

Since you are making the point, can you please provide the example? If it's really new, posting it as a comment here is not likely to affect LLM training, at least not for a day or so. And even if it did, you could still share the examples you are thinking of.

Gormo
0 replies
5h33m

A child or unintelligent person may make errors when attempting to engage in reasoning -- applying the wrong deductive rules, or applying the right ones incorrectly, or applying them to invalid premises -- but LLMs are not even attempting to engage in reasoning in the first place. They are applying no deductive rules, and have no semantic awareness of any of their inputs, so cannot even determine whether they are correct or incorrect.

LLMs are simply making probabilistic inferences about what "words" (tokenized particles of language, not necessarily individual words from our perspective) are most likely to appear in relation to other words, based on the training data fed into them.

Their output often resembles reasoning simply because the training data contains large amounts of explicit reasoning in it. But there is no actual reasoning process going on in response to your prompt, just probabilistic inference about what words are most closely correlated with the words in your prompt.

If you're getting what appear to be reasoned responses to "novel" prompts, then one of two things is likely happening: either (a) your prompt isn't as novel or unique as you thought it was, and the model's probabilistic inference was sufficient to generate a valid response without reasoning, or (b) the response isn't as well-reasoned as it appears, and you are failing to notice its errors.

If you want to genuinely test an LLM's ability to engage in reasoning, try throwing a complex math problem at it, or a logic puzzle that trips most people up.

Turing_Machine
6 replies
21h1m

just because it’s so intent on satisfying your whim instead of saying No

This really, really, really needs to be fixed. It's probably the most irritating (and potentially risky) part of the whole ecosystem. Nothing is more infuriating than being given code that not only doesn't work but, upon the most casual inspection, couldn't possibly work -- especially when it's done it four or five times in a row, each time assuring you that this time the code is gonna work. Pinky swear!

WJW
1 replies
6h55m

What will StackOverflow content teach the model that millions of pages of programming language documentation, GitHub repos, and developer blogs could not? SO is nice, but it's hardly the only source of programming knowledge out there.

noworriesnate
0 replies
6h29m

It was a joke, because Stack Overflow is notorious for answering questions with "you shouldn't be doing that, do something else".

namaria
0 replies
10h53m

The "it can only get more better with time" proponents would do well to learn about diminishing returns.

9dev
1 replies
20h21m

"You're right, I apologize for the oversight. Let's do the same bloody thing exactly the same way again, because I don't know how to answer your question differently but am forced to never admit that..."

Turing_Machine
0 replies
16h46m

Sometimes it goes in circles, fixing one problem, but unfixing one that it had fixed in the previous iteration. I've had that happen several times.

llm_trw
5 replies
17h43m

Depending on the way you phrase questions, ChatGPT will gleefully suggest a wrong approach, just because it's so intent on satisfying your whim instead of saying No when it would be appropriate.

Which is easily solved by using another agent that is told to be critical and find all flaws in the suggested approach.

theamk
4 replies
17h18m

Not likely.

Have you seen AI code review tools? They are just as bad as any other AI product: they have a similar chance of fixing a defect or introducing a new one.

llm_trw
3 replies
16h42m

They are not there to fix defects; they are there to detect them.

theamk
2 replies
15h52m

Sure, but it's an LLM, so this would be a mix of some of the real defects (but not all of them) and totally fake defects which do not actually need fixing. This is not going to help junior developers figure out good from bad.

llm_trw
0 replies
11h40m

And by running multiple versions of the system in parallel, you can have them vote on which parts of the code are most likely bugs and which aren't.

We've known how to make reliable components out of unreliable ones for a century now. LLMs aren't magic boxes which make all previous engineering obsolete.
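A minimal sketch of that voting idea in Python; ask_model here is a hypothetical stand-in for one independent model run, not any particular API:

    # Majority vote across several independent model runs: a classic way to
    # build a more reliable component out of unreliable ones.
    from collections import Counter

    def ask_model(snippet: str, run: int) -> str:
        """Hypothetical: returns 'bug' or 'ok' from one independent model run."""
        raise NotImplementedError

    def majority_verdict(snippet: str, runs: int = 5) -> str:
        votes = Counter(ask_model(snippet, run) for run in range(runs))
        return votes.most_common(1)[0][0]  # the most common verdict wins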

TeMPOraL
0 replies
11h1m

this would be a mix of some of the real defects (but not all of them) and totally fake defects which do not actually need fixing

... and real defects that you never noticed or would've thought of.

This is not going to help junior developers figure out good from bad.

Neither is them inventing fake defects which do not actually need fixing on their own. What helps juniors is the feedback from more senior people, as well as reality itself. They'll get that either way (or else your whole process is broken, and that has zero to do with AI).

KRAKRISMOTT
5 replies
6h43m

The new generation of devs are going to be barefoot and pregnant and kept in the kitchen, building on top of technologies that they do not understand, powered by companies they do not control.

moooo99
2 replies
6h36m

building on top of technologies that they do not understand, powered by companies they do not control.

Isn’t that pretty much the status quo?

KittenInABox
1 replies
4h41m

I think the new thing that will be happening is that junior developers are dependent on ChatGPT and AI for a knowledge base, which is itself powered by companies completely outside of their control. Worst case, I can always write my own interpreter, with which I can write my own development environments, etc., because I have the knowledge. New developers will end up in a state where, if ChatGPT decides to ban you from their services, your career is SOL.

generic92034
0 replies
4h22m

New developers will end up in a state where, if ChatGPT decides to ban you from their services, your career is SOL.

Is that not rather unlikely, at least for developers working as company employees? The company I am working for has contracts with several LLM providers, and there is no option to ban individual employees, as far as I am aware.

For freelancing developers the risks might be greater, but then you are usually not starting as a freelancer as a junior.

giancarlostoro
0 replies
3h5m

I assume companies will make the job interview process even worse as a result. I really don't do well with CS-heavy interviews. I never studied CS; I studied, as your job description notes, a RELATED field. I took about five different programming language courses at my college and have years of experience. I'm not going to talk about algorithms I never use, because I build websites.

cptcobalt
0 replies
2h58m

I think this is true as we keep building up abstraction layers. Computers are getting faster yet feel slower, as we just want to work with higher-level tech, which makes it easier to understand less of how the sausage gets made.

But I don't think this is a new problem of the AI age; it has been a growing problem for decades as software has matured.

giancarlostoro
0 replies
3h9m

only treat the model as a smart autocomplete, that doesn’t matter

This is the only way I like to use it. Also, in some cases, for refactoring: instead of sitting there for an hour hand-crafting a subtle rewrite, it can show me a diff (JetBrains AI is fantastic for my personal projects).

charlieyu1
0 replies
6h38m

Partly disagree, actually. The current web technologies are somewhat unnecessarily complicated. Most people just need basic CRUD and a usable front end for their daily tasks.

anymouse123456
0 replies
7h22m

Of course you're right about today's LLMs, but the author imagines a not-too-unlikely incremental improvement on them unlocking an entirely new surface area of solutions.

I really enjoyed the notion of barefoot developers, local-first solutions, and the desire to wrest control over our digital lives from the financialists.

I find these ideas compelling, even though I'm politically anti-communist.

The presentation was also quite lovely.

liampulles
16 replies
20h17m

Unpopular opinion inbound: what will spark a barefoot developer revolution is not LLM auto-coding, it's making spreadsheet software more easily extendable and FUN.

By extendable, I mean doing things like generating and sending emails, and using plugins to integrate with external services. By fun, I mean non-enterprisey, something that one would WANT to engage in as a hobby and that a total novice can pick up and gradually learn. Something you can engage in with friends.

I know that there are things that meet the extendable part of the equation; it's the fun hobby part that I don't think has been cracked yet.

I think a big part of why I became a coder is because I enjoyed playing with Microsoft Access as a kid - but I'm a weird nerd, so I don't think that'll cut it for others.

kragen
3 replies
18h26m

what software would you nominate as central examples of fun? do you think there are people who want their fun to be easy fun instead of hard fun?

flir
2 replies
17h9m

Just throwing it out there, but from what I've seen of it Airtable compares favourably with Excel for the "ad hoc database" use case.

liampulles
0 replies
12h41m

I use Airtable at work actually; it's quite a nice tool for hacking internal tools together. Its extension requires JavaScript though, which, while great for professional coders like me, I don't think would be interesting to a more general population.

kragen
0 replies
14h56m

ideally, things as different as possible from excel!

Bjorkbat
2 replies
16h39m

I largely agree with you. Spreadsheets are arguably the most successful no-code/low-code tool out there in that they've enabled millions of people to automate some task or complex process and effectively become programmers, despite the fact that I personally find an empty spreadsheet to be kind of intimidating to look at.

I think the only point of disagreement we might have is that I don't necessarily think that the spreadsheet, even an improved one, is the best graphical model/structure for articulating complex processes, but it seems to be the best we've discovered thus far.

shagie
0 replies
15h44m

A spreadsheet is Lisp under the covers. Be it =IF(test, then, else) or (if test then else), that's just the syntax of a functional language dialect.

liampulles
0 replies
12h37m

"I don't necessarily think that the speadsheet, even an improved one, is the best graphical model/structure for articulating complex processes, but it seems to be the best we've discovered thus far."

I actually do agree with you on that; as you say, it's just the best we've discovered so far. But if there is a better model which is as versatile and easy to use, I'm totally open to it.

flutas
1 replies
4h40m

what will spark a barefoot developer revolution is not LLM auto-coding, it's making spreadsheet software more easily extendable and FUN

I've actually seen it happen using exactly that.

A (non-tech) person trying to write inventory management software for their company (which is highly regulated and inspected) using Google AppSheet.

No amount of "don't" was enough to convince them... 6 months and much of their sanity later they gave up.

dagw
0 replies
4h35m

I've seen it happen too. No amount of "don't" was enough to convince them... 6 months later they had a weird hacky solution that solved their problem, was used across the company, and on the whole seemed to do what the relevant people needed it to do.

dan-allen
1 replies
20h14m

Oh that’s interesting and I think you’re right. Software as it’s developed today is very abstract but spreadsheets have a visual representation that makes them much more approachable.

throwanem
0 replies
20h4m

An immediate visual representation, at that. HMR at best offers a pale shadow of the feedback loop Excel users get for free.

zem
0 replies
16h0m

couldn't agree more - i love the whole idea of barefoot developers, the analogy with china's barefoot doctors was superb, but all the time i was thinking what we need is visual basic and access and hypercard revamped for the internet era, not LLM-generated code.

tech2
0 replies
7h50m

Historically a number of people started as single small developers with tools like FoxPro, Delphi, and (even earlier) Clipper/dBase. These allowed easy creation of some kind of UI. All of these have a database under the hood. The combination of the two allowed a lot of small groups to make quite complicated systems for a single platform.

Now there's an expectation for things to be web-based and the entry point feels less obvious than it used to be.

skydhash
0 replies
17h37m

Currently trying Emacs (after years of Vim), and what amazes me is how easy it is to extend the system. I don't know if we can extend this to novices. But spreadsheets and form builders look like the most viable candidates. Maybe extend them with modules like data providers (files, music, contacts, emails, ...) and actions (open file, play audio, call, send email, ...). But this requires open standards and protocols, and the industry is trying hard to move away from those.

purple-leafy
0 replies
10h44m

I love this take. Spreadsheets are one of the coolest low-code things ever that even normies can have fun with.

At my first job out of uni, a telco had a 25+ sheet spreadsheet, each sheet with thousands of rows, accompanied by some normie VBA scripts, to basically power optical network allocations…

I was amazed and terrified when I first saw it. Absolutely no source control.

Speaking of spreadsheets, is there a spreadsheet with built-in source control?

maccard
0 replies
6h5m

The barrier to entry for programming couldn’t really be much lower. Pretty much everyone with a computer can write and run JavaScript in their browser, and there are even web based environments and editors like GitHub code spaces.

Google Docs has a pretty neat JavaScript API that can be used from inside the apps themselves, too.

charlie0
10 replies
18h54m

Sounds great, but this won't work out to the way the author imagines it. We have a very strong bias towards anything technical, but if you've ever worked outside the SWE field, you'll see that half of the people simply aren't interested in "thinking". They don't like or enjoy their jobs. They sure as hell aren't going to sit there and think, "how do I take this problem and break it down into a dozen/hundreds of small steps to create a small app that solves my problem?"

The majority of people are content with learning the bare minimum needed to get by and never having to learn anything new. Don't believe me? The proof is self-evident the moment you try to update a UI that requires users to do things differently than before.

And this is all fine; I've accepted it by now. It's the way it's always been and always will be. Still, these new tools will empower a relatively small percentage of the population to create amazing open source apps that will be used by a lot of people. The tools also very much empower pro-developers to build a lot more apps with less effort than before, which I'm looking forward to.

mangamadaiyan
3 replies
16h22m

A tangential nitpick: scientists, engineers who are not of the software persuasion, accountants, and lawyers, to name just a few, are also among the people who work "outside the SWE field". Almost all of these people are interested in thinking; a few of them will probably not be, just as one finds a few people working in the SWE field being reluctant to think. There is a very good reason why people -- even technically minded people -- have trouble with changing UIs, and it is not that they don't want to think. Consider that the UI could be part of a workflow that is more of a side annoyance than their main job, and that they would rather conserve their mental bandwidth for problems that are more important to them. A doctor shouldn't have to get used to a new timesheet UI just because a designer woke up one fine morning with a new thought.

analog31
2 replies
13h53m

I've noticed that even the SWEs are annoyed by the UIs of typical enterprise software and websites, and by gratuitous UI churn.

okwhateverdude
1 replies
12h10m

Only because I, as a SWE, know how that sausage is made. Either it's some misguided desire to show a customer that you're working on the product ("See, we're still alive, please keep paying us money!"), some PM or UI designer stuffing their promo packet ("Look at how much increased engagement we have!", by making users click twice as much), or satisfying the whim of some C-level sponsor. Rarely is the UI actually improved or made more efficient, or actual user research conducted. A prime example is the Slack UI update from last year that introduced a ton of dead space and other annoyances. Like, the tool is for reading and writing text. I want to see the text. Why did you shrink the text area for that stupid dead space on the left? Guarantee they are measuring useless metrics like clicks on those stupid buttons on the left now and patting themselves on the back on how awesome and useful their change was.

m0llusk
0 replies
5h56m

That's great, but can you make the Buy button bigger?

titanomachy
0 replies
16h28m

outside of the SWE field... half of the people simply aren't interested in "thinking"

This is probably true of SWEs as well (although in some places it's hard to get away without thinking at least a bit.)

maccard
0 replies
6h14m

I think you're right that the majority of people do not want to learn, but in my experience there are a large number of engineers who fit into that category, just as there are pockets of other professions that are willing to learn, explore, and improve.

jauntywundrkind
0 replies
13h24m

This bleak-ass sorry outlook & the fixation on its bleaker parts, to me, isn't actually about the sad bleak people. It's about the sorry-ass conditions & techno-social-cultural bankruptcy that let so many people down, that didn't give them good hooks to start digging in & try enjoying their own powers. It's also about being overwhelmed & emotionally underwater from unrewarding crappy rentier-capitalism.

The belief system of humanity in itself needs nutrients, needs to seed & grow people positively. Open source is by far one of the strongest currents about for that.

Right now open source is a mess & bogglingly complicated. Technology in general has gotten vastly less accessible year after year, the on-ramps to understanding detoured into ever more appliance-ized experiences & neofeudal cloud empires happening in other people's computers. Far fewer are primed for the esoterics of software, know the basics, or have positive rewarding experiences tinkering and tweaking. Humanity has been shoved into superficial & coopted/dishonest experiences. Heck yes, a lot of things are against the possibility of the good.

But unlike almost all other relationships with the material world, we can directly improve things, can keep enhancing things practically without material constraint, limited chiefly by imagination & will (ML excepted, I guess). We can grow the pot, open up computing & its softer side progressively, create better on-ramps & more learnable, observable, experienceable, authentic experiences.

Over time I hope the barriers to being interested might be less severe. The other material conditions dragging people down & shutting them off from engagement might persist. But I think there can be a very tangible source of hope here, a demonstration that people, when they are doing their thing, creating as they might, when they are empowered, are cool. Are good. Are a role model. And that - with hope - will keep us iterating & improving, will hopefully lead to more efforts to keep redefining & expanding general purpose computing & soft systems in less deeply technical terms.

I have a really strong reaction against projecting from the users of today & their sorry chained-to-the-cave-wall state onto what people might be if they got some honest shakes out there. Open source is far from a perfect liberator today, 100%, but this is part of the journey, part of why we need to be practicing & trying, so we can create & iterate towards alive, interesting systems that welcome people in.

intelVISA
0 replies
16h22m

outside the SWE field, you'll see that half of the people simply aren't interested in "thinking"

Whereas inside it's closer to 90% :/

holri
0 replies
13h41m

I do not think it would be reluctance to think if pianists complained about Steinway changing the order of the piano keys every few years because a new Steinway engineer thinks his order is better.

CM30
0 replies
9h15m

Don't believe me? The proof is self-evident the moment you try to update a UI that requires users to do things differently than before.

Is this because they don't like learning something new, or because they've been burned so many times by badly done redesigns that make things harder to use for pretty much everyone? I'm sure half of Hacker News despises how Windows has been redesigned from Vista onwards, or at the very least since 11.

Perhaps many people don't like change as much because a lot of changes come less from user needs/insight into what would actually improve the program and more from the need to give designers something to do (or provide another way to add ads/track users/appeal to shareholders).

Having an interest in making your own job easier/more efficient is very different from being interested in accepting every redesign under the sun.

anotherhue
9 replies
22h57m

Great talk, more for the new developer than the cynical seasoned pro, but I love the direction.

zer00eyz
3 replies
22h16m

the cynical seasoned pro

I'm that guy.

She is way off the mark on some trivial points in there and they are easy to overlook.

Right now the big players are pouring money into hardware. There is a lot of talent out on the street. If they aren't resting and vesting in those orgs, what are they going to do?

It's possible for a small team to bootstrap with ZERO money.

How much do you need to earn to support 3 devs? Apple only takes 15% if you make less than a million a year. Is that enough for a staff of 3? They only take 15 percent of subscriptions!!! And that doesn't include Android.

If you don't piss away money on cloud (VPS) or everything-as-a-service (login, metrics), the economics are amazing.

There's room in the market for 1000 founders who do very well. You won't be Google, but you can live a more than comfortable life.

jauntywundrkind
2 replies
22h1m

But the app store experience pushes people to ever bigger and bigger apps. We don't incrementally explore & expand our competency, we don't augment human intellect, we don't form man-machine symbiosis: we get pushed to the biggest, already most successful apps, which leave us supplicants to their ever-changing laws & dictums.

The app stores by their nature don't actually let people into computing. They are the sad old pre-connected paradigm, entrenching top down systems control.

What Maggie is tapping into is more than the 15/30% cut. It's about possibility & connection, about software we have power over & can shape & share. The software we have today is all dead software, delivered to us on a platter, and that dead modality is not befitting the greatness humanity can & should be tapping into.

Local First is inherently not bound, not fixed into interaction with the golems created by far off others. It allows more. It allows soul. That earnest engage-ability is something that's been missing from the consumerized world, has created enormous disbelief & disdain in a place where hope & possibility once sprung freely.

http://malleable.systems now!

----

Regarding whether users will or not? Local First has a set of first principles to keep us from being too bound by software. The particular tools we have at any given point are arbitrary, secondary.

Who knows what would happen if we did have the capabilities, because right now we don't have the capabilities. Yes, Local First is about bootstrapping something new. No it doesn't create the barefoot developers automatically, doesn't make that imminent. But it makes it possible. And some will convert in, some will start being barefoot developers as it becomes possible. If & only if we make it possible. The cynicism about those who won't or the state of our tools is not yet due.

zer00eyz
1 replies
21h35m

we have power over & can shape & share

This is where she's being way too optimistic.

What we have now is software that looks like McDonald's.

She thinks the tools we have are going to enable home cooking. Software is way harder than that.

The step in between is "local" restaurants. You don't need to build software for 8 billion people, or 300 million Americans. You can build for a region, or a niche group.

It allows more. It allows soul.

You might not be old enough to remember zines, but they were a thing. We're about to have the software equivalent.

PaulDavisThe1st
0 replies
4h53m

What we have now is software that looks like McDonald's.

What we have now is often software that looks like a food hall: it has to contain all the features that anyone interested in some general area of functionality might want.

She thinks that sometimes it's going to be good to just cook up a Thai meal because that's all you actually need/want right now.

jay_kyburz
3 replies
21h32m

All the stuff in the first half sounds great, but adding an LLM dependency is the complete antithesis of local-first and well-crafted software.

I don't really like the term "barefoot developers", but local craftsmen should own and understand the tools they work with and the code they write. Their tools should be local-first as well.

throwanem
2 replies
19h34m

It's not conceived as an LLM dependency, but really more similar to the way I currently use LLMs as an engineer: as a consultant. I don't think current models are up to that task on their own, certainly not for someone without significant programming knowledge - but the sense I got, especially as the talk touched on still needing access to a skilled dev for a while at least, was that the author also understands this. Too, in sketching out so broad a vision as this, I wouldn't necessarily expect any author to treat what's possible today as a hard limit, any more than anyone else working in tech does or should.

I also don't understand why you'd consider LLMs as a blocker to local-only. You can run a 7B model on a $600 refurbished M1 Mac mini today. It won't be the fastest or the cleverest, and you probably will need an hour or so from someone who knows how to set up Ollama and Open WebUI, but it is absolutely usable after that with no Internet connection at all - I have a Mac mini so configured on my desk right now, actually. Or for an order of magnitude more outlay, you can get a Studio capable of supporting multiple concurrent users with 70B models producing tokens faster than you can read them. Again, sure, it won't be GPT-4o, and it won't be up to the standard of the sorts of tools described in the talk - not this year, anyway. In 2026 or 2027, though? Again, this talk looks to a possible future, not just today. And with a setup based on open models running locally as I've described, there is the countervailing advantage that once you have it, you own it.
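To give a sense of how little glue such a local setup needs: a minimal sketch of querying a local model from Python, assuming a default Ollama install listening on localhost:11434 with a llama3 model already pulled (endpoint and fields per Ollama's REST API):

    # Query a locally hosted model; no Internet connection is involved
    # once the model weights have been downloaded.
    import json
    import urllib.request

    def ask_local(prompt: str, model: str = "llama3") -> str:
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask_local("Explain this error: segmentation fault (core dumped)"))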

As for the term 'barefoot developer' itself - it's a cavil, I grant, but I still want to quibble, because as a child in rural Mississippi with an Apple IIc and a couple of books, I was a barefoot developer, writing my own programs to solve my own problems, and often enough going to play in the woods by the road cut when I got stuck. (It helped! These days I'm more likely to chase wasps with a camera, but it still helps, when I can force the time. For what it's worth, though, I do wear shoes these days.)

Now I'm what Amazon would call an L5, earning as highly as almost anyone in this generation of my family - and not for a lot of student debt, because I never went to college. Whatever I am as a professional, I am in large part because I've been discovering and developing my craft since my single-digit days - and was free to do so without my parents having to pay anyone a damned subscription, besides.

True, the technology has grown far away from the kind of locally and individually empowering model this talk describes, and that I recall from my own childhood and early career days - that proto-SPA I mentioned nearby, for example, was hosted on a machine that everyone who ever worked on it could physically touch without leaving the office we all shared. And this in 2011! It really has not been that long a time since a model very much like what this talk describes was actually present in reality.

I see no reason in principle why that can't be true again. Certainly I'm convinced that it should.

skydhash
1 replies
17h11m

The thing is, you can learn a lot with a bootable FreeBSD USB stick, an old PC, and a copy of "Absolute FreeBSD" or the equivalent for Linux. I strongly believe that anyone who is not willing to put in the time to learn that stuff won't be able to do anything custom with computers. Shortcuts has been there for years now and most people haven't touched it. Many only learn the few actions that apps provide and are content. Their phones are mostly for the things they replace (flashlight, camera, landline, magazines, TV, ...). Computing is not one of these. Even taking the time to properly learn Word and Excel is no longer a thing.

Anything an LLM can do locally, you can do better with another, less demanding tool. And for the things they help with, it would be faster to just learn how to do them.

throwanem
0 replies
15h39m

Hence the time the talk spends dwelling on how this isn't an "everyone" thing any more than it's a "professional devs" thing. The point, or at least the point as I understood it, is to broaden the space for people who want to learn enough to support the things that matter to them, and no more.

Again, there's nothing new in this; in fact it's much the way things were when 8-bit micros and BASIC were about as fancy as most ever saw, yet anyone who ever saw a computer could hardly help but see them. Maybe you'd learn to use those tools and maybe you wouldn't, but you knew they were there for you and the assumption was that you could learn enough to use them. More did than I think a lot of folks in their 20s today easily imagine. I don't blame them; if all I'd ever seen was how things are now, I'm sure I'd be the same. But people made things the way they are now. People can make them otherwise.

There's also no assumption in the talk that everyone involved is a loner, an atomized individual, the way you seem to assume would be true. Indeed, quite the contrary: community, in word and in concept, suffuses the text in its entirety, to such an extent it seems surprising its constant immanence could be overlooked even on a brief skim.

Admittedly, that's a difficult concept for many to fathom in 2020s America. I'm certainly among that number. But I'm also old enough to remember when things were better, and young enough still to imagine a future where things are better again. I just hope I'm not too old to still make myself useful by the time they start to be.

throwanem
0 replies
22h18m

As a cynical, seasoned pro, I loved it and I love the ideas behind it.

The only note I have is that I don't think it's really accurate to say this approach wasn't possible in 2004. I was a working web dev in 2004, and from that perspective, what we did was pretty close to what the talk describes.

Intermediated by commerce, certainly; as I recall having put it one day in that same job, after hanging up a call in which someone had requested that $400 a month of backhaul be provided free of charge: it's nice there's folks out there trying to save the world, but some of us have to earn a living. ("You said you were a web dev. Why were you talking to anyone about backhaul?" I pulled cable too sometimes, in those days. The mid-00s were a different time.)

And of course there were no LLMs involved, that technology then still being much the stuff of science fiction and allied fields like futurism. Even so, though: none of us had any real idea what we were doing, but we knew how to figure out what worked and what didn't, and that was usually enough even if the level of trial and error, and the lack of rigor, were shocking by today's standards. (I wrote my first SPA, a triathlon registration system for an org that lots of folks in the DMV might recall if I mentioned a name, in 2011. Imagine my surprise when two years later, on leaving my local backwater of web dev for the mainstream, I discovered that was what I had done, when I thought I'd just been trying a wild idea I was sure would fail, because it was that or lose the contract that was keeping the business afloat...)

Honestly, the vision described in this talk is a lot more like the kind of life I thought I'd live when I got into this line around the turn of the millennium. It'd be easy to take an overly rosy view, and maybe I am. But then too, it'd be good if not every quarter-square-mile neighborhoodlet in my city had an intimate dependency on Amazon in order to function.

iwontberude
8 replies
22h17m

Not a single mention of Open Source Software. The author is missing a huge piece of the puzzle.

skadamat
5 replies
22h0m

Well, from my POV, the licensing part of this is only a small piece of the puzzle. Open source software that's insanely complex to run and is developed like industrial software still isn't amenable to home cooking / barefoot use.

The bigger limiter is the missing culture of deliberately small and individualized software to specific people, families, or small communities. That’s the mindset Maggie is trying to espouse. Then, you can carve that space into closed vs open source, free vs paid, etc.

But Excel and Notion aren’t open source and people have gotten really good at using them for these home cooked meals.

ubertaco
1 replies
21h45m

Exactly. Licensing is the easy part. Deploy + run are the hard parts, IMO.

You can build something amazing like Immich, a shining example of open-source software that, after a lot of setup, "just works"... until the next release wants you to make well-informed changes to your `docker-compose.yaml` file.

My wife, who likes Immich but has no interest in learning to code, is not going to do those things if I get hit by a bus, because she won't know how. She's gonna give up and just go with paying some service for photo storage, or else losing photos like everyone else.

skydhash
0 replies
16h59m

There's a lot of complexity to software and you either learn how it works or find someone that does. And it's not a tangible item like a car, where you can intuit your way around.

But I much prefer to host something locally and give my family access than letting them suffer the whims of big tech.

ozim
0 replies
20h4m

If only people would start using the OSS projects that are out there for them for free and contributing (where contributing means using, supporting, adding to documentation), this would be solved already.

You know there is LibreOffice, and there are tons of other OSS apps - accounting, CAD, image manipulation.

The whole idea that people somehow need to be developers is crooked and makes things worse. There are tons of already useful local-first programs rotting out there - only because „everyone uses Photoshop” and „everyone uses Excel”.

I don’t agree with „barefoot developers” - it should be „OSS promoters” who promote open standards and open applications, not people going off to build up each and every piece of software by reinventing the wheel.

I as a developer don’t need to make every app for myself - 99% of my software needs in personal life are covered by Excel, and the other 1% is communication apps that cannot be local, since someone has to run a server anyway.

iwontberude
0 replies
20h46m

More than just a license, free and open source is about us leveraging each other's effort for problems deemed too unprofitable for major companies. It's required for the author's solution to scale, but they don't mention it.

bawolff
0 replies
21h18m

Open source is the natural conclusion of "barefoot development".

Sure, nowadays there are lots of big professional projects, but the core of open source/Free software is people making stuff for themselves and their neighbours.

Linux didn't start with the goal of taking over the world. It started as one guy's amateur project.

meiraleal
0 replies
21h44m

What open source? The VC-backed ones that are split into a community version and an Enterprise version? Computing sovereignty and open source are completely detached from each other nowadays, unfortunately.

chrisweekly
0 replies
22h1m

Yeah, that's a legit criticism... but her broader point still holds. OSS provides "legos" which still require the "glue", as she puts it.

bawolff
6 replies
21h28m

Personally I'd rather we bridge the gap between using Excel and writing Python with something that is between the two, rather than relying on LLMs.

Call me a skeptic, but I remain very unconvinced that LLMs will be the enabling tool that lets non-programmers program.

therobots927
1 replies
20h42m

How would this new intermediate be different from SQL?

bawolff
0 replies
19h40m

Well, if I knew what the solution was, I would be making it instead of waxing poetic about it on HN.

While SQL (or, say, Microsoft Access) could be seen as an intermediary, I don't really think it is. It is very specialized to specific applications, and arguably RDBMSes are more complex than basic Python.

nakedneuron
0 replies
8h24m

There is VisiData. It's a TUI and still has rough edges, but it could develop into what you're envisioning.

luke-stanley
0 replies
21h14m

I agree that something between Python and Excel is a good idea. But still, LLMs are good at using in-context learning to convert natural language to a given pattern language or format, even where people don't know what the syntax is. Making visual interfaces so easy to use that they don't need such magic would be nice, but it seems hard to do.

lelanthran
0 replies
11h37m

Personally i'd rather we bridge the gap between using excel and writing python with something that is between the two, rather than relying on LLMs.

Maybe Lua[1] and Excel. Python's significant whitespace is unsuitable for doing the one-liners that spreadsheet experts do.

Writing code for the spreadsheet should not require putting that code in a separate file just so the significant whitespace can be maintained.

Whatever you write in a single cell should be the programming language. If you later want to move it to its own file, module, package, CLI, etc., you should be able to.
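
A sketch of that last point, in Python for brevity (even though the argument above favors a Lua-like language for one-liners): the cell text itself is the program, and the sheet is just an environment it evaluates in. The `cells` dict and `evaluate` helper are invented for illustration:

    # Sketch: each cell's text IS the programming language; the sheet is just
    # an environment the expressions evaluate in. A real sheet would also
    # need sandboxing, dependency tracking, and so on.
    cells = {"A1": "2", "A2": "3", "A3": "A1 + A2", "A4": "A3 ** 2"}

    def evaluate(cells):
        env, pending = {}, dict(cells)
        while pending:
            progressed = False
            for ref, expr in list(pending.items()):
                try:
                    env[ref] = eval(expr, {}, env)  # cell text as live code
                    del pending[ref]
                    progressed = True
                except NameError:
                    pass  # depends on a cell not yet computed; retry later
            if not progressed:
                raise ValueError("cyclic or unresolvable cell reference")
        return env

    print(evaluate(cells))  # {'A1': 2, 'A2': 3, 'A3': 5, 'A4': 25}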

[1] Visual Basic turned out to be very usable by spreadsheet experts.

sprobertson
0 replies
13h19m

Where would one find more?

Bjorkbat
0 replies
16h32m

See also: https://chrisnovello.com/teaching/risd/computer-utopias/

Kind of funny that someone's syllabus from a class taught in 2015 has done more to broaden my mind concerning speculative computing futures and "what-ifs" than anything else I've come across.

panphora
4 replies
22h7m

This is my dream too. It goes something like this:

- Find any cool tool online, download it, use it locally. Your data is persisted to your own local machine and your own personal cloud.

- Visit any website, download it as your own, modify it. Host it as your own and continue to add to it online.

- Browser vendors implement user accounts & hosting into the browser, so anyone can make a working app locally with a little bit of HTML and serve it to the whole internet with the press of a button.

- If your app takes off and goes viral, part of the economic value provided flows back to you through micropayments.

Basically: portable HTML "objects" that can be treated as mutable apps instead of only static documents. Like GitHub + CodePen + Vercel + NoCode in one neat little package.

grugagag
0 replies
21h57m

Micropayments are still not a thing yet. I think that if they were, it would give rise to a lot of interesting movements like this one, more or less organically. The problem with open source and giving things away for free is that people just take without giving back.

bawolff
0 replies
21h24m

That doesn't seem all that different from Netscape and how it included a WYSIWYG editor.

Obviously this is much harder given how much more server-side the modern internet is. But I think the more fundamental problem is that 99% of users didn't want this in the past and still don't really want it.

RodgerTheGreat
0 replies
21h57m

The first two points are foundational design principles for Decker[1], a programming environment which can be used as a local application or as a single-file self-contained web page which still retains all the development tools. The same is true for TiddlyWiki[2].

The web ecosystem already provides the substrate necessary to realize these visions, it's a matter of building things with a particular perspective in mind; a rejection of centralized infrastructure which is in many cases simply not needed.

1) http://beyondloom.com/decker/

2) https://en.wikipedia.org/wiki/TiddlyWiki

Kostic
0 replies
18h19m

Really like that. Just add easy versioning and some tags and we're good to go.

thebeardisred
3 replies
19h18m

With all due respect to the author, I take umbrage at the statement "They never end up in the terminal, because that is a huge jump in complexity, usability, and frustration from using something like Airtable or Notion."

Using the command line isn't more complex; it's differently complex. And, writ large, schools make no effort to teach it.

charlie0
1 replies
18h53m

The "huge jump in complexity, usability" refers to not being able to use a mouse, which a lot of people find intimidating for some reason.

skydhash
0 replies
17h3m

I've rarely seen cashiers use their mouse. The issue is training. And the terminal lands you directly in the guts of the computer. The mouse is safe because there are only limited actions you can do with it. But people are OK with learning the keybindings for whatever game they want to play. What's distressing is when the software changes randomly after an update. Imagine retraining from Office 2003 to 2007 with the ribbon, or from Windows 7 to Windows 8.

Der_Einzige
0 replies
5h7m

Movies about nerds getting bullied who later used command lines to do nerd shit have left a generation of folks afraid of any command line anything. There’s a guy who got accused of being a terrorist just because he used a command line on an airplane.

Don’t believe me? Look at the “Linux porn” and “ricer” communities on Reddit, who go out of their way to make their systems look as “TUI cool” as possible, complete with customized tiling WMs. Most of the people doing this do it because they’re going for the “l33t h4x0r” vibes.

Bjorkbat
3 replies
17h44m

I don't know. My personal take is that low-code/no-code tools should have ushered in an era of homemade software, but they didn't. It's something I think about a lot, incidentally. We've had the technology to make software using a GUI rather than a text editor for a very long time, and yet programmers still use text editors, and programming in general hasn't really been all that democratized. At best, it's now possible to create a website without knowing how to code, but it usually isn't a particularly good one.

A simple explanation is that the devil is in the details when it comes to implementation. Edge cases and granular behavior are hard to capture except in granular snippets of code. I'm not convinced that LLMs necessarily solve this problem. Even if you are working with a theoretically quite competent LLM, there are still going to be instances where describing what you want is actually challenging, to the point where it would be easier if you just did it yourself. You could argue that this doesn't matter in the case of simple software, but I think we underestimate what people really expect out of something that's "simple", and I think we underestimate people's tendency to grow bored of old toys, especially if they don't work as expected.

If anything, my belief is that LLMs by themselves aren't necessarily going to make this a reality. You need an accessible and empowering UX/UI to go along with it. Otherwise, asking an LLM to build software for you probably won't be much of a fun experience aside from those who are AI enthusiasts foremost.

Side note, I have painful feelings about so many UX researchers I used to admire jumping on board the AI hype train so uncritically. I kind of get it; their job is to speculate on new possibilities a tech offers without getting too hung up on external complications. Still, I feel disillusioned. Prior to all of this, these same people were questioning our implicit assumptions about how we interact with computers in really interesting ways (in the same vein as Bret Victor). Now, their takes are starting to converge with those of the usual anonymous midwit AI enthusiast on Twitter who pivoted from crypto.

Put more bluntly, the idea that LLMs will usher in a golden age of people making simple software is kind of a boring speculative future, a speculative future shared and talked about by the most uninspired and boring people on Twitter.

giraffe_lady
1 replies
17h36m

there are still going to be instances where describing what you want is actually challenging, to the point where it would be easier if you just did it yourself

Yeah, exactly. I've seen meetings between engineers get into a level of technical specificity where I've had the thought "we are just coding in English now."

Even if you're able to communicate your intent with natural language rather than a "programming language," you still arrive at a level of concrete detail that is difficult or impossible to describe without some sort of standard technical shorthand or jargon. This is true whether the "listener" is an LLM, some other sophisticated machine interpreter, or just another person.

Bjorkbat
0 replies
17h13m

Lately, with all this talk of natural-language-as-programming-language, I've tried reflecting on my own code and seeing how I might convey the same intent, but in English.

It's made me realize that, as a rule, intent at least feels easier to convey through code than through natural language. This is especially true if I'm trying to perform some complex mathematical task that's geometric in nature. Put another way, it's probably easier to understand how Poisson disc sampling works if you saw the code for it vs. if you read the actual paper, especially if the code had comments providing additional context for different blocks. It doesn't help that the paper uses complex mathematical terminology, whereas the code, at worst, might have very terse variable names.
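
To make that concrete: here's roughly what a Bridson-style Poisson disc sampler looks like in Python. This is my own illustrative sketch, not reference code, but you can trace exactly what happens to each candidate point:

    # Bridson's algorithm: sample points in a width x height box so that no
    # two points are closer than r, trying up to k candidates per point.
    import math
    import random

    def poisson_disc(width, height, r, k=30):
        cell = r / math.sqrt(2)  # grid cells this size hold at most one point
        cols = int(width / cell) + 1
        rows = int(height / cell) + 1
        grid = [[None] * cols for _ in range(rows)]

        def cell_of(p):
            return int(p[1] / cell), int(p[0] / cell)  # (row, col)

        def fits(p):
            if not (0 <= p[0] < width and 0 <= p[1] < height):
                return False
            row, col = cell_of(p)
            # Only the 5x5 block of cells around p can contain a conflict.
            for y in range(max(row - 2, 0), min(row + 3, rows)):
                for x in range(max(col - 2, 0), min(col + 3, cols)):
                    q = grid[y][x]
                    if q is not None and math.dist(p, q) < r:
                        return False
            return True

        first = (random.uniform(0, width), random.uniform(0, height))
        row, col = cell_of(first)
        grid[row][col] = first
        points, active = [first], [first]

        while active:
            base = random.choice(active)
            for _ in range(k):
                # Candidate in the annulus between r and 2r around base.
                ang = random.uniform(0, 2 * math.pi)
                rad = random.uniform(r, 2 * r)
                cand = (base[0] + rad * math.cos(ang),
                        base[1] + rad * math.sin(ang))
                if fits(cand):
                    row, col = cell_of(cand)
                    grid[row][col] = cand
                    points.append(cand)
                    active.append(cand)
                    break
            else:  # k failures in a row: this point's neighborhood is full
                active.remove(base)
        return points

    print(len(poisson_disc(100, 100, 5)))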

And yet, I think a better example might be found in markup languages like HTML. Trying to convey the layout of a webpage using ONLY natural language seems really hard compared to using a markup language.

namaria
0 replies
10h33m

The major fallacy in low/no-code, and in proposing LLMs as programming tools, is thinking that you can automate away complexity.

Any automation will introduce complexity. There's no free simplicity lying around. We all need to take low-entropy energy sources and dissipate them to get work done. If you don't want to reap barley by hand, you need an entire supply chain for machinery. The complexity can be hidden; you can pay people to deal with it. But someone will have to manage it.

People wanna buy machinery and have technical support. They don't want to build machinery and maintain it. People want working software, they don't want to design, build and maintain it. That complexity never goes away.

seabird
2 replies
16h26m

People just don't care and won't do it if it requires more than a fleeting thought to get it done. Excel is comically powerful and only a tiny fraction of users have any interest in tapping into that. Every step of the way, for the last 50 years, pretty much all parties involved have actively worked to move away from end-user programming.

Words can't describe how happy I would be if that weren't the case, but it's better to walk than curse the road. Changing this would require pretty much everybody on the planet to re-evaluate how they interact with a computer, and I'm not holding my breath on that.

mikro2nd
1 replies
11h28m

I think the point is that, rather than "requiring everybody on the planet to reevaluate how they interact with a computer", Maggie's idea is that computers should be reevaluated with regard to how they interact with people. Like another person earlier in these comments, I am much more skeptical than her that LLMs are "the answer" to enabling non-professional-programmers to create -- more importantly to modify and compose -- chunks of deeply custom software, but in principle I'm in sympathy with the sentiment. People do care. It's just that almost all software they interact with is deeply brittle, change hostile, black-box. You can change almost nothing about it beyond some superficial tweaks, and composing different pieces of software into a working "something else" is mostly impossible. I suspect that the very notion of an "app" is something we have to get past in order to make progress on this...

seabird
0 replies
5h18m

People love the very notion of an app. Computers went from something every user was expected to program to some extent to that brittle black box because users didn't want to program; they just wanted the benefits of software spoonfed to them. "We" evaluated the world where the computer was beholden to the non-programmer users ("our AS/400 administrator is some guy from supply chain, I put his data in an Access database for our team to process"), and then Apple made a gazillion dollars with "apps" because your average user thinks that shit is for dorks and a waste of their time. You would be floored at how many young people don't even own a computer beyond their phone because they see so little benefit to something they have control over. It is very much a values issue in the population at large.

bitwize
2 replies
4h58m

Back in the day (1970s-1980s), every computer came with BASIC or something preinstalled -- often in ROM. Growing up I thought computers were for programming as bikes were for riding or pencils were for writing with. Using someone else's program was always an option, but you always had the full capability of the machine ready to put to your own uses, and an easy way to get started.

We forwent this in favor of getting that consoomer dollar. We gave the user ever more elaborate premade applications, which is nice; meanwhile, programming environments went from the default mode of interaction with the machine to expensive add-ins (thinking of early Windows with Visual Basic and ToolBook). HyperCard was free on the Mac, but didn't keep pace with the machine's expanding capabilities. QBasic was perfect for the DOS environment, but it didn't really transition to Windows except as Visual Basic, which again was an expensive add-on. Speaking of, even the RAD tools of yesteryear are largely gone. We have open-source dev environments now, but they aren't approachable the way microcomputer BASIC was. Getting someone set up with Node or even Rails requires command-line jiggery-pokery and installing all sorts of dependencies, just to get started.

And now, we're somehow hoping that a somewhat cleverer Dissociated Press is going to bring back citizen programmers? Fuck no. If we want end users to program their own apps, we have to find our way out of the shit we created, not add new layers of shit to deal with the old layers. The pathway to citizen programmers is making programming accessible, with full capabilities available, with documentation, right from the jump. No toy languages (e.g. Small Basic). Kids should be able to write turtle-drawing programs with the same language/environment adults use to build real-world applications.

AnimalMuppet
1 replies
3h40m

We forwent this in favor of getting that consoomer dollar.

We forwent this in favor of empowering the huge number of people who didn't want to program, who just wanted to use the computer to be able to do (nonprogramming) things.

bitwize
0 replies
15m

It's not either/or. You shouldn't have to program, but programming should be made available to you by default.

zokier
1 replies
8h0m

I wish people gushing over ML code generation would familiarize themselves with software development history a bit more. For example, the introduction of the SEQUEL (predecessor of SQL) paper is worth reading; it's just a few pages: https://web.archive.org/web/20070926212100/http://www.almade...

Key quote:

Much of the success of the computer industry depends on developing a class of users other than trained computer specialists

I believe COBOL had similar aspirations to be a language for non-professional developers. Some of the first high-level languages were called autocodes because they automated code generation. Sound familiar?

The difference between a compiler and an ML model is not that great from a high level. Both take human-readable (and -writable) input and produce machine-executable code in the end.

Huge amounts of software engineering have already been automated and delegated. Consider how much effort setting up a CRUD application would take if you were writing machine code on a bare-metal system, compared to writing a high-level language and leveraging stuff like Linux and PostgreSQL.

aragilar
0 replies
7h4m

Or more recent attempts to skill up researchers, like Software Carpentry (which has been going for 25+ years!). Getting them to internalise that software is understandable (in spite of our attempts to make it not...) is a long process, and LLMs will only make that process longer.

runningamok
1 replies
21h36m

There is a lot of reason to be excited about local-first and how it could enable much lower costs to build useful apps (both in terms of money and skills). AI will certainly spur that on.

But I think local-first will be of the biggest benefit to small teams of professional developers who can see local opportunities bigger corporations are missing. At least in the short term.

There are barefoot developers too, but it's not as simple as professional vs barefoot — there's a spectrum of app developers, each with their own economic rationale.

rsolva
0 replies
21h24m

Anytype is such a tool, local-first and synced via IPFS. It just works. And it's so flexible! And it recently got support for shared spaces, which works really well.

ozim
1 replies
19h56m

Just use the OSS that is already there and contribute to it, and we have it covered - no need for „everyone becoming a developer”, and there are tons of local-first OSS tools.

Where contributing means using OSS, commenting on it, sharing work done with OSS tools with others, filing bug reports, and maybe even paying something for it.

Working with the tools and sharing work done with them is important, because right now everyone is using Photoshop instead of GIMP and Excel instead of LibreOffice.

atrus
0 replies
19h41m

Yeah, "scratching your own itch" was a driving force for a lot of the OSS stuff

ximm
0 replies
12h16m

The author makes it sound like most software in the world is software that is made to scale. I strongly doubt that. I would say that at least ~50% of software is ad-hoc scripts or specialized tools for single organizations.

It is no wonder that the software that is used by a large number of people is talked about more than software that is only used by a handful. Home-Cooked Software already exists, it is just not as easy to see.

tcsenpai
0 replies
14h23m

The blog layout is truly fascinating

subjectsigma
0 replies
18h37m

AI/ML is going to absolutely destroy the first generations of junior developers. They won’t get enough practice doing software development by themselves and won’t know enough to fix the subtle bugs introduced by generated code. It’s like using a graphing calculator in high school: probably perfectly OK for a smart kid taking calc his senior year, but it would absolutely be used as a crutch by struggling or lazy students in a pre-algebra class.

Right now AI is mostly controlled by mega corps abusing copyright laws because they can get away with it. I seriously doubt this is going to drastically change.

Basically, I don’t understand this author’s cute, homely portrayal of AI.

matrix87
0 replies
20h13m

It isn't even that high of a barrier to entry, though? If someone really wants a piece of software that does something, there's probably already some Python library for it. Calling an imported Python function isn't hard.

If someone is too lazy to find some Python library to import, but driven enough to come up with an unambiguous way of telling a model to do something in natural language, that's a borderline nonexistent demographic.

Really, it's kind of baffling to me; some people just hear "code" and think we're talking about some undecipherable thing like the Voynich manuscript. It makes me feel like the guy in Zen and the Art of Motorcycle Maintenance who was confused why his friend refused to figure out how to fix his own bike. Some people just refuse to learn. If they wanted to, they would have already.

leke
0 replies
5h38m

Really interesting article. It feels a little like what I heard about Visual Basic and REBOL back in the day. AI definitely helps, but I'm still not convinced it will be able to create anything but really simple apps. Most likely it will help the people who are already maxing out tools like Excel go even further.

As well as being a professional developer, I also do these local first types of apps, so I expect once the AI tools become good enough, we will be the first to figure out how to use them.

hyfgfh
0 replies
10h22m

I wonder if LLMs are the new low-code.

A promise that is never going to work.

hermitcrab
0 replies
10h47m

"I'm not saying everything Mao did was great"

Understatement of the (20th) century!

Some interesting ideas. But it seems to make out that nearly all professional software is developed by mega corporations in California. But there are lots of small software companies, open source developers and hobbyists developing a vast range of apps for different niches. More would be good, though.

fragmede
0 replies
19h5m

The way I would support local-first databases is to promote SQLite database files as the replacement for CSV files: every time you see CSV export support, ask customer service to ask their tech team when they will support SQLite export as well.
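
To show how small that ask is, here's a minimal Python sketch going the other way, loading a CSV export into a single-file SQLite database with only the standard library; the file and table names are made up for the example:

    # Minimal sketch: turn a CSV export into a single-file SQLite database
    # using only the Python standard library. Names are illustrative.
    import csv
    import sqlite3

    def csv_to_sqlite(csv_path="export.csv", db_path="export.db", table="data"):
        with open(csv_path, newline="") as f:
            rows = list(csv.reader(f))
        header, body = rows[0], rows[1:]
        cols = ", ".join(f'"{c}"' for c in header)  # quoted column names
        marks = ", ".join("?" for _ in header)      # one placeholder per column
        con = sqlite3.connect(db_path)
        con.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
        con.executemany(f'INSERT INTO "{table}" VALUES ({marks})', body)
        con.commit()
        con.close()

    csv_to_sqlite()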

fire_lake
0 replies
11h54m

The analysis seems flawed to me.

They claim that large companies can never support the long tail since it’s too expensive to produce those features.

But they also claim that the cost of software production is going to fall dramatically, such that people can build their own apps for small user bases.

I don’t mean to sound negative, since I love the picture painted by Local First, but I don’t think it will work out that way.

chrisweekly
0 replies
22h3m

I love ~everything about this. I've been doing software engineering as a career since the late 90's and think she's spot-on about both the local-first movement and "barefoot developers". @stevekrouse, thanks for posting -- and @maggieappleton, please keep doing what you do!

alabhyajindal
0 replies
11h33m

They [barefoot developers] are the kinds of people who would be thrilled to have more agency and power over computers.

I don't think this is true. No-code software development solutions have existed for a while now and we have not seen a rapid increase in software being built.

Technical people who are non-programmers are not interested in having power over computers at all. They want to get their work done. And if doing this work requires too much effort, they'll give up on the task rather than learn the tool.

Foreignborn
0 replies
2h46m

Kinda disappointing comments in this thread. Thanks to LLMs, I have built more home-cooked software than ever.

I’m not an engineer, but I've written more code this year than ever. LLMs have helped me tackle a huge backlog of projects at home that I wouldn't have been able to justify starting before. And I know 3-5 others doing similarly (we have a group chat).

Off the top of my head, this year I’ve made:

- webscrapers

- notification agents

- websites

- data manipulation and charting

- IaC devops for my entire homelab

- an RPG (well, a roguelike)

- applets for OBS

- chrome extensions to glue together a bunch of stuff I do to study Japanese

I’ve written projects in languages I’d never used, like Rust and Elixir, and learned a ton. Am I gonna switch ladders to be a SWE? Nope, but for better or worse I have stopped asking my SWE friends for help.

And lately, between RAG and large context models, it’s much easier to improve existing projects. Typing, refactoring, linting, documentation generation, and GitHub ops are now practically effortless.