There's a running theme in here of programming problems LLMs solve where it's actually not that important that the LLM is perfectly correct. I've been using GPT4 for the past couple months to comprehend Linux kernel code; it's spooky good at it.
I'm a C programmer, so I can with some effort gradually work my way through random Linux kernel things. But what I can do now instead is take a random function, ask GPT4 what it does and what subsystem it belongs to, and then ask GPT4 to write me a dummy C program that exercises that subsystem (I've taken to asking it to rewrite kernel code in Python, just because it's more concise and easy to read).
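To give a concrete picture of the kind of "dummy program" I mean, here's a sketch along those lines (my own illustration, with epoll as an arbitrary pick of subsystem, so treat the details as approximate rather than something GPT4 actually produced):

```c
/* Minimal userspace program that exercises the epoll subsystem:
 * create an eventfd, register it with epoll, signal it, and watch
 * epoll_wait() report it as readable. */
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>

int main(void)
{
    int efd = eventfd(0, 0);        /* a file descriptor we can signal ourselves */
    int epfd = epoll_create1(0);    /* an epoll instance to watch it with */
    if (efd < 0 || epfd < 0) {
        perror("setup");
        return 1;
    }

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = efd };
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &ev) < 0) {
        perror("epoll_ctl");
        return 1;
    }

    uint64_t one = 1;
    if (write(efd, &one, sizeof(one)) < 0) {    /* make efd readable */
        perror("write");
        return 1;
    }

    struct epoll_event out;
    int n = epoll_wait(epfd, &out, 1, 1000);    /* should report 1 ready fd */
    printf("epoll_wait returned %d ready fd(s)\n", n);

    close(epfd);
    close(efd);
    return 0;
}
```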
I don't worry at all about GPT4 hallucinating stuff (I'm sure it's doing that all the time!), because I'm just using its output as Cliff's Notes for the actual kernel code; GPT4 isn't the "source of truth" in this situation.
This is close to how I've been using them too. As a device for speeding up learning, they're incredible. Best of all, they're strongest where I'm weakest: finding all the arbitrary details that are needed for the question. That's the labor-intensive part of learning technical things.
I don't need the answer to be correct because I'm going to do that part myself. What they do is make it an order of magnitude faster to get anything on the board. They're the ultimate prep cook.
There are things to dislike and yes there is over-hype but "making learning less tedious" is huge!
I'm finding myself using them extensively in the learning way, but I'm also an extreme generalist. I've learned so many languages over 23 years, but remembering the ones I don't use frequently is hard. The LLMs become the ultimate memory aid. I know that I can do something in a given language, and I'll recognise that it's correct when I see it.
Together with increasingly powerful speech to text I find myself talking to the computer more and more.
There are flaws, there are weaknesses, and a bubble, but any dev that can't find any benefit in LLMs is just not looking.
The advantage of using LLMs for coding, as distinct from most other domains, is that you can usually just directly check whether the code it's giving you is correct, by running it. And if it's not, the LLM is often good at fixing it once the issue is pointed out.
Languages, syntax, flags, and the details... I too have touched so many different technologies over the years that I understand at a high level, but don't remember the minutiae of. I have almost turned into a "conductor" rather than an instrumentalist.
Especially for debugging issues that could previously take days of searching documentation, Stack Overflow, and obscure tech forums. I can now ask an LLM, and maybe 75% of the time I get the right answer. The other 25% of the time it still cuts down on debugging time by helping me try various fixes, or it at least points me in the right direction.
You put words to what I've been thinking for a while. When I'm still new to some technology, it's a huge time-saver. I used to need to go bother some folks somewhere on a Discord / Facebook group / Matrix chat to get the one piece of context I was hung up on. Sometimes it took hours or days to get that one nugget.
I feel more interested in approaching challenging problems in fact because I know I can get over those frustrating phases much more easily and quickly.
I came here to write essentially the same comment as you. Instead of going into a chatroom where people tell you you're lazy because you are unclear on ambiguous terms in documentation, these days I paste in portions of documentation and ask GPT for clarification on what I'm hazy about.
I use it like a dictionary (select text and look it up), and based on what I looked up and the answer it gives, I judge for myself how correct the answers are; they are usually on point.
It has also made building small, pure-vanilla HTML/JS tools fun. It gives me a good-enough prototype which I can mold to my needs. I have written a few very useful scripts/tools in the past few months which I would otherwise never even have started, because of all the required first steps and basic learning.
(never thought I would see your comment as a user)
I've been using it for all kinds of stuff. I was using a dryer at a hotel a while ago and I wasn't sure about the icon it was displaying regarding my clothes, so I asked GPT and it told me correctly. It has read all the manuals and documentation for pretty much everything, right? Better than Googling it; you just ask for the exact thing you want.
I used LLMs for something similar recently. I have some old microphones that I've been using with a USB audio interface I bought twenty years ago. The interface stopped working and I needed to buy a new one, but I didn't know what the three-pronged terminals on the microphone cords were called or whether they could be connected to today's devices. So I took a photo of the terminals and explained my problem to ChatGPT and Claude, and they were able to identify the plug and tell me what kinds of interfaces would work with them. I ordered one online and, yes, it worked with my microphones perfectly.
It's surprisingly good at helping diagnose car problems as well.
My washing machine went out because of some flooding, and I gave ChatGPT all of the diagnostic codes; it concluded that it was probably a short in my lid lock.
The replacement lid lock arrived a few days later, I put it in, and I'm able to wash laundry again.
The two best classes for me are definitely:
- "things trivial to verify", so it doesn't matter if the answer is not correct - I can iterate/retry if needed and fallback to writing things myself, or
- "ideas generator", on the brainstorming level - maybe it's not correct, but I just want a kickstart with some directions for actual research/learning
Expecting perfect/correct results is going to lead to failure at this point, but it doesn't prevent usefulness.
Right, and it only needs to be right often enough that taking the time to ask it is positive EV. In practice, with the Linux kernel, it's more or less consistently right (I've noticed it's less right about other big open source codebases, which checks out, because there's a huge written record of kernel development for it to draw on).
Exactly. It's similar in other (non programming) fields - if you treat it as a "smart friend" it can be very helpful but relying on everything it says to be correct is a mistake.
For example, I was looking at a differential equation recently and saw some unfamiliar notation[1] (Newton's dot notation). So I asked Claude why people use Newton's notation vs Lagrange's notation. It gave me an excellent explanation with tons of detail, which was really helpful. Except that every place it gave me an example of "Lagrange" notation, it was actually Leibniz notation.
So it was super helpful and it didn't matter that it made this specific error because I knew what it was getting at and I was treating it as a "smart friend" who was able to explain something specific to me. I would have a problem if I was using it somewhere where the absolute accuracy was critical because it made such a huge mistake throughout its explanation.
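For anyone else who mixes these up, here's my own quick cheat sheet (double-check me, not the LLM) of what the three notations look like for a derivative of y = f(x):

```latex
\dot{y},\ \ddot{y}                        % Newton: dots, usually for time derivatives
f'(x),\ f''(x)                            % Lagrange: primes
\frac{dy}{dx},\ \frac{d^{2}y}{dx^{2}}     % Leibniz: differential quotients
```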
[1] https://en.wikipedia.org/wiki/Notation_for_differentiation#N...
Once you know LLMs make mistakes and know to look for them, half the battle is done. Humans make mistakes too, which is why we put effort into validating thinking and actions.
As I use it more and more often, the mistakes are born of ambiguity. As I supply more information to the LLM, its answers get better. I'm finding more and more ways to supply it with robust and extensive information.
Yes, I like to think of LLMs as hint generators. Turns out that a source of hints is pretty useful when there's more to a problem than simply looking up an answer.
gpt: give me working html example of javascript beforeunload event, and onblur, i want to see how they work when i minimize a tab.
10 seconds later, I'm playing around with them.