I'm sure this, and other LLM/IDE integration, has its uses, but I'm failing to see how it's really any kind of major productivity boost for normal coding.
I believe the average stats for programmer productivity on production-quality, debugged, and maybe reusable code are pretty low - around 100 LOC/day - although it's easy to hit 1000 LOC/day or more when building throwaway prototypes and the like.
The productivity gap between production-quality code and hacking/prototyping comes down to that quality aspect, and for most competent/decent programmers, code they write themselves is going to be better quality - and better understood - than something copied from Stack Overflow or an LLM. The time it takes to analyze copied code for correctness, lack of vulnerabilities, or even just decent design for future maintainability (much more of a factor in total lifetime software cost than writing the code in the first place) would seem to swamp any time gained by not writing the code yourself - which is basically the easiest and least time-consuming part of any non-trivial software project.
I can see the use of LLMs in some learning scenarios, or when writing throwaway code where quality is unimportant, but for production code I think we're still a long way from the point where an LLM's output is developer-level and doesn't need to be scrutinized/corrected to such a degree that the speed benefit of using it is completely lost!
Here's where completion in Emacs would help me the most: shell sessions.
In my line of work (infra / automation) I may not write any new code that's going to be added to some project for later use for days, sometimes weeks.
Most of the stuff I do is root-cause analysis of various system failures, which requires navigating multiple machines, jump hosts, setting up tunnels, and reading logs.
So, the places where the lack of completion is the most annoying are, for example, when I have to compare values in some /sys/class/pci_bus/... between two different machines: once I've figured out which file I need in sysfs on one machine, I don't have that command available on the other machine, and I need to retype it entirely (or copy and paste it between terminal windows).
I don't know what this autocompletion backend is capable of. I'd probably have to do some stitching to even get Emacs to autocomplete things in the terminal instead of or in addition to the shell running in it, but, in principle, it's not impossible and could have some merit.
I wonder what you mean. The `dabbrev-expand` command (bound to `M-/` by default) will complete the characters before point based on similar strings nearby, starting with strings in the current buffer before the word to complete, and extending its search to other buffers. If you have the sysfs file path in one buffer, it will use that for completion. You may need to use line mode for non-`M-x shell` terminals to use `dabbrev-expand`.
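For reference, a minimal configuration sketch (the key choice and variable settings are my assumptions, not anything the commenter prescribed) that makes `dabbrev-expand` look at all buffers and gives it a binding that isn't swallowed by terminal char mode:

```elisp
;; Let dabbrev search every other buffer for candidates, and add a
;; binding that still works in term-like buffers where M-/ may be
;; sent to the underlying shell instead of Emacs.
(setq dabbrev-check-all-buffers t)
(global-set-key (kbd "C-c /") #'dabbrev-expand)
```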
This sounds like an ideal use case for literate programming. Are you using org-mode? Having an org file with source blocks would store the path string for later completion by the methods described above (as well as document the steps leading to the root cause). You could also make an explicit abbrev for the path (local or global). The document could make a unique reference, or, depending on how many and how common the paths are, you could define a set of sequences to use: for example, "asdf" always expands to /sys/class/pci_bus/X and "fdsa" expands to something else.
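For the abbrev route, a minimal sketch - the trigger string is a made-up placeholder, and the expansion is just the sysfs prefix from the thread:

```elisp
;; Hypothetical trigger: typing "pcib" followed by a word-ending
;; character expands to the long sysfs prefix.
(define-abbrev global-abbrev-table "pcib" "/sys/class/pci_bus/")
(setq-default abbrev-mode t)
```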
Hope that helps or inspires you to come up with a solution that works for you!
No... not at all... Most of the "code" I write in this way is shell commands mixed with all kinds of utilities present on the target systems. It's so "unique" (in a bad way) that there's no point trying to automate it. The patterns that emerge usually don't repeat nearly often enough to merit automation.
Literate programming is the other extreme; it's like carving your code in stone. Too labor-intensive to be useful in an environment where you don't even remember the code you wrote the day after and in all likelihood will never need it again.
They aren't nearby. They are in a different tmux pane. Also, that specific keybinding doesn't even work in terminal buffers; I'd have to remap it to something else to access it.
The larger problem here is that in my scenario Emacs isn't the one driving the completion process (it's the shell running in the terminal); for Emacs to even know those options are available as candidates for autocompletion, it would need to read the shell history of multiple open terminal buffers (and when that's inside a tmux session, that's even more hops to get to it).
And the problem here, again, is that setting up all these particular interactions between different completion backends would be very tedious for me, but if some automatic intelligence could do it, that'd be nice.
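Not that commenter's setup, but one rough way such stitching could look: dump the visible text of every tmux pane into a buffer so dabbrev-style completion can treat it as "nearby" text. `tmux list-panes` and `tmux capture-pane` are real tmux commands; everything else (function name, buffer name, and the assumption that tmux is reachable from the machine running Emacs) is mine:

```elisp
;; Rough sketch: copy the visible contents of all tmux panes into a
;; single buffer that dabbrev and friends can scan for candidates.
(defun my/tmux-panes-into-buffer ()
  "Copy the visible text of every tmux pane into the *tmux-panes* buffer."
  (interactive)
  (with-current-buffer (get-buffer-create "*tmux-panes*")
    (erase-buffer)
    (insert (shell-command-to-string
             "for p in $(tmux list-panes -a -F '#{pane_id}'); do tmux capture-pane -p -t \"$p\"; done"))))
```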
Tramp?
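Presumably the idea is to skip the second shell entirely and visit the remote file from Emacs over Tramp, multi-hopping through the jump host; the hostnames below are made up for illustration:

```elisp
;; Hypothetical hosts: hop through "jump" to "machine-b" and open the
;; same sysfs path there, reusing the path string Emacs already has.
(find-file "/ssh:jump|ssh:machine-b:/sys/class/pci_bus/...")
```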
How would Tramp know that I need an item from the history of one session in another? Or maybe I'm not understanding how you want to use it?
The only thing I've used GPT for is generating commit messages based on my diff, because it's better than me writing 'wip: xyz' and gives me a better idea about what I did before I start tidying up the branch.
Even if I wanted to use it for code, I just can't. And it actually makes code review more difficult when I look at PRs and the only answer I get from the authors is "well, it's what GPT said." Can't even prove that it works right by putting a test on it.
In that sense it feels like shirking responsibility - just because you used an LLM to write your code doesn't mean you don't own it. The LLM won't be able to maintain it for you, after all.
"it's what GPT said" should be a fireable offense
That may be a bit much, but I'd think it's grounds for sitting down with the person in question to discuss the need to understand the code they turn in.
The Duomo in Florence took multiple generations to build. Took them forever to figure out how to build a roof for the thing. Would you want to be a builder who focuses your whole life on building a house you can't live in because it has no roof? Or would you simply be proud to be taking part in getting to help lay the foundation for something that'll one day be great?
That's my dream.
Well, I'm just commenting on the utility of LLMs, as they exist today, for my (and other related) use cases.
No doubt there will be much better AI-based tools in the future, but I'd argue that if you want to accelerate that future then rather than applying what's available today, it'd make more sense to help develop more capable AI that can be applied tomorrow.
We need the full pipeline of tools. What jart did helps future users of AI gain familiarity early.
In general I almost never break even when trying to use an LLM for coding. I guess there's a selection bias because I hate leaving my flow to go interact with some website, so maybe I only end up asking it the hard questions.
But since I wired Mixtral up to Emacs a few weeks ago I discovered that LLMs are crazy good at Lisp (and protobuf and prisma and other lispy stuff). GPT-4 exhibits the same property (though I think they’ve overdone it on the self-CoT prompting and it’s getting really snippy about burning compute).
My dots are now like recursively self improving.
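For anyone wanting to try something similar, a minimal sketch of the wiring, assuming a local OpenAI-compatible server (e.g. a llamafile or llama.cpp server hosting Mixtral on localhost:8080); the port, model name, and function are my assumptions, not the commenter's actual config:

```elisp
;; Minimal, synchronous call to a local /v1/chat/completions endpoint.
(require 'url)
(require 'url-http)
(require 'json)

(defun my/llm-complete (prompt)
  "Send PROMPT to the local model and return its reply as a string."
  (let* ((url-request-method "POST")
         (url-request-extra-headers '(("Content-Type" . "application/json")))
         (url-request-data
          (encode-coding-string
           (json-encode
            `((model . "mixtral")
              (messages . [((role . "user") (content . ,prompt))])))
           'utf-8))
         (buf (url-retrieve-synchronously
               "http://localhost:8080/v1/chat/completions")))
    (with-current-buffer buf
      (goto-char url-http-end-of-headers)
      (let ((reply (json-parse-buffer :object-type 'alist)))
        (alist-get 'content
                   (alist-get 'message
                              (aref (alist-get 'choices reply) 0)))))))

;; Example use: drop a suggestion into the current buffer.
;; (insert (my/llm-complete "Write a docstring for: (defun square (x) (* x x))"))
```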
Man, I really want to get this working. Any recommendations for how to prompt or where this functionality helps?
Hear, hear! I have the exact same impression, probably since the gpt-4-turbo preview rolled out.
Have you seen modern React frontend dev in JS? They copy-paste about 500-1000 LOC per day and also make occasional modifications. LLMs are very well suited for this kind of work.
That does seem like a pretty much ideal use case!
They are really good at writing your print/console.log statements...