
Improvements to static analysis in GCC 14

noam_k
59 replies
1d1h

Very cool stuff!

I haven't done much C development lately, so I'm curious how often `strcpy` and `strcat` are used. Last I checked they're almost as big no-nos as using goto. (Yes, I know goto is often preferred in kernel dev...) Can anyone share on how helpful the c-string analyses are to them?

sirwhinesalot
32 replies
1d1h

There's nothing wrong with simple usages of goto.

The strxcpy family on the other hand is complete garbage and should never be used for any reason. I'm horrified that they're used in the kernel at all. All of those functions (and every failed attempt at "fixing" them) should have been nuked from orbit.

laweijfmvo
28 replies
1d1h

What's wrong with `strncpy`?

i80and
17 replies
1d1h

strncpy won't always write a trailing nul byte, causing out of bounds reads elsewhere. It's a nasty little fellow. See the warning at https://linux.die.net/man/3/strncpy

strlcpy() is better and what most people think strncpy() is, but it still results in truncated strings if not used carefully, which can also lead to big problems.

sirwhinesalot
12 replies
1d1h

Speaking of strlcpy, Linus has some colorful opinions on it:

Note that we have so few 'strlcpy()' calls that we really should remove that horrid horrid interface. It's a buggy piece of sh*t. 'strlcpy()' is fundamentally unsafe BY DESIGN if you don't trust the source string - which is one of the alleged reasons to use it. --Linus

Maybe strscpy is finally the one true fixed design to fix them all. Personally I think the whole exercise is one of unbelievable stupidity when the real solution is obvious: using proper string buffer types with length and capacity for any sort of string manipulation.

jjav
10 replies
22h4m

the real solution is obvious

If it were obvious it would have been done already. Witness the many variants that try to make it better but don't.

using proper string buffer types with length and capacity

Which you then can't pass to any other library. String management is very easy to solve within the boundaries of your own code. But you'll need to interact with existing code as well.

sirwhinesalot
6 replies
20h29m

If it were obvious it would have been done already. Witness the many variants that try to make it better but don't.

Every other language with mutable strings, including C++, does it like that. It is obvious. The reason it is not done in C is not ignorance, it is laziness.

Which you then can't pass to any other library. String management is very easy to solve within the boundaries of your own code. But you'll need to interact with existing code as well.

Ignoring the also obvious solution of just keeping a null terminator around (see: C++ std::string), you should only worry about it at the boundary with the other library.

Same as converting from utf-8 to utf-16 to talk to the Windows API for example.

jjav
3 replies
13h46m

The reason it is not done in C is not ignorance, it is laziness.

Of course not. C has been around since the dawn of UNIX and the majority of important libraries at the OS level are written in it.

Compatibility with such a vast amount of code is a lot more important than anything else.

If it were so easy why do you think nobody has done it?

Ignoring the also obvious solution of just keeping a null terminator around

That's not very useful for the general case. If your code relies on the extra metadata (length, size) being correct and you're passing that null-terminated buffer around to libraries outside your code, it won't be correct since nothing else is aware of it.

sirwhinesalot
2 replies
9h8m

If it were so easy why do you think nobody has done it?

People have done it, there are plenty of strbuf implementations to go around. Even the kernel has seq_buf. How you handle string manipulation internally in your codebase does not matter for compatibility with existing libraries.

That's not very useful for the general case. If your code relies on the extra metadata (length, size) being correct and you're passing that null-terminated buffer around to libraries outside your code, it won't be correct since nothing else is aware of it.

You can safely pass the char* buffer inside a std::string to any C library with no conversion. You're making up issues in your head. Don't excuse incompetence.

jjav
1 replies
1h51m

People have done it, there are plenty strbuf implementations to go around.

Precisely!

Why plenty and why is none of them the standard in C?

sirwhinesalot
0 replies
1h31m

The TL;DR on that is basically "lazy, security unconscious assholes keep shutting it down".

Dennis Ritchie strongly suggested C should add fat pointers all the way back in 1990. Other people have pointed out the issues with zero-terminated strings and arrays decaying into pointers (and the ways to deal with them even with backwards compatibility constraints) for years.

One of the most prominent was Walter Bright's article "C's Biggest Mistake" back in 2009, and he was a commercial C/C++ compiler developer.

There is no excuse.

lelanthran
1 replies
13h8m

you should only worry about it at the boundary with the other library.

If this mitigation were applied consistently, it would solve all problems with nul-terminated strings: do strict, error-checked conversions to nul-terminated strings at all boundaries to the program, and then nul-terminated strings and length-specified strings are equivalently dangerous (or safe, depending on your perspective).

The problem is precisely that unsanitised input makes its way into the application, bypassing any checks.

sirwhinesalot
0 replies
9h1m

It's impossible to avoid "sanitizing" input if you have a conversion step from a library provided char* to a strbuf type. Any use of the strbuf API is guaranteed to be correct.

That's very different from needing to be on your toes with every usage of the strxcpy family.

jandrese
2 replies
12h8m

For me the "real" solution looks something like this:

    ssize_t strxcpy(char* restrict dst, const char* restrict src, ssize_t len)
Strxcpy copies the string from src to dst. The len parameter is the number of bytes available in the dst buffer. The dst buffer is always terminated with a null byte, so the maximum length of string that can be copied into it is len - 1. strxcpy returns the number of characters copied on success, but can return the following negative values:

    E_INVALID_PARAMETER: Either dst or src is NULL, or len < 1; no data was copied
    W_TRUNCATED: len - 1 bytes were copied but more characters were available in src.
strxcat would work similarly. I have not decided if the return value should include the terminating null or not.

jjav
1 replies
11h53m

How is this useful though? I mean yes, it is useful in avoiding the buffer overruns. But that's not the only consideration, you also want code that handles data correctly. This just truncates at buffer size so data is lost.

So if you want the code to work correctly, you need to check the return code, reallocate dst, and call the copy again. But if you're going to do that, you might as well check the length of src and allocate dst correctly before calling it so it never fails. And if you're already doing that, you can call strcpy just fine and never have a problem.

jandrese
0 replies
3h49m

Sometimes truncation is fine or at least can be managed. Yes, strdup() is a better choice in a lot of situations, but depending on how your data is structured it may not be the correct option. I would say my version is useful in any situation where you were previously using strncpy/cat or strlcpy/cat.

raverbashing
0 replies
7h44m

Wow, yeah, this seems to summarize well the usual API flakiness and shuffling in C

It seems people keep coming up with "one more improvement" that's broken in one way or another

Borg3
1 replies
23h4m

#define strncpyz(d,s,l) *(strncpy(d,s,l)+(l))=0

Of course this one is unsafe for macro expansion. But well, it's C :)

teo_zero
0 replies
12h44m

I'd rather put the final nul at d+l-1 than at d+l, so that l can be the size of d, not "one more than the size of d":

  strncpyz(buf,src,sizeof buf);

kevin_thibedeau
0 replies
17h15m

strncpy() also zero pads the entire buffer. If it's significantly larger than the copied string you're wasting cycles on pointless move operations for normal, low-security string handling. This behavior is for filling in fixed length fields in data structures. It isn't suitable for general purpose string processing.

jandrese
0 replies
1d1h

The problem with strlcpy is the return value: it returns the full length of src, so it always scans to the source's terminating null. You can be burned badly if you are using it to, for example, pull out a fixed chunk of string from a 10TB memory-mapped file, especially if you're pulling out all of the 32-byte chunks from that huge file and you just wanted a function to stick the trailing 0 on the string and handle short reads gracefully.

It's even worse if you are using it because you don't fully trust the input string to be null terminated. Maybe you have reasons to believe that it will be at least as long as you need, but can't trust that it is a real string. As a function that was theoretically written as a "fix" for strncpy, it is worse in some fundamental ways. At least strncpy is easy enough to make safe by always over-allocating your buffer by 1 byte and stuffing a 0 in the last byte.

lelanthran
6 replies
23h20m

It's not possible to use it safely unless you know that the source string fits in the destination buffer. Every strncpy must be followed by `dst[sizeof dst - 1] = 0`, and even if you do that you still have no idea if you truncated the source string, so you have to put in a further check.

    strncpy (dst, src, (sizeof dst) - 1);
    dst[(sizeof dst) - 1] = 0;
    int truncated = strlen (dst) - strlen (src);
Without the extra two lines after every strncpy, you're probably going to have a hard-to-discover transient bug.

actionfromafar
5 replies
21h1m

if you really want to use standard C string functions, use instead:

    int ret = snprintf(dst, sizeof dst, "%s", src);
    if (ret < 0 || (size_t)ret >= sizeof dst)
    {
        /* failed */
    }
or as a function:

    bool ya_strcpy(const char* s, char* d, size_t n)
    {
        int cp = snprintf(d, n, "%s", s);
        return cp >= 0 && (size_t)cp < n;
    }

kazinator
2 replies
12h29m

snprintf only returns negative if an "encoding error" occurs, which has to do with multi-byte characters.

I think for that to possibly happen, you have to be in a locale with some character encoding in effect and snprintf is asked to print some multi-byte sequence that is invalid for that encoding.

Thus, I suspect, if you don't call that "f...f...frob my C program" function known as setlocale, it will never happen.

lelanthran
1 replies
11h19m

Thus, I suspect, if you don't call that "f...f...frob my C program" function known as setlocale, it will never happen.

Of all the footguns in a hosted C implementation, I believe setlocale (and locale in general) is so broken that even compiler and library developers can't work around it to make it safe.

The only other unfixable C-standard footgun that comes close, I think, are the environment-reading-and-writing functions, but at least with those, worst-case is leaking a negligible amount of memory in normal usage, or using an old value even when a newer one is available.

kazinator
0 replies
10h47m

I see that in Glibc, snprintf goes to the same general _IO_vsprintf function, which has various ominous -1 returns.

I don't think I see anything that looks like the detection of a conversion error, but rather other reasons. I would have to follow the code in detail to convince myself that glibc's snprintf cannot return -1 under some obscure conditions.

Defending against that value is probably wise.

As far as C locale goes, come on, the design was basically cemented in more or less its current form in 1989 ANSI C. What the hell did anyone know about internationalizing applications in 1989.

lelanthran
0 replies
13h6m

I actually do use `snprintf()` and friends.

aulin
0 replies
12h27m

except no one does that return-code check, and worse, they often use the return value to advance a pointer when concatenating strings

spacechild1
0 replies
1d

As others have already pointed out, it doesn't guarantee that the result is null-terminated. But that's not the only problem! In addition, it always pads the remaining space with zeros:

    char buf[1000];
    strncpy(buf, "foo", sizeof(buf));
This writes 3 characters and 997 zeros. It's probably not what you want 99% of the time.

sirwhinesalot
0 replies
1d1h

It doesn't guarantee that the output is null terminated. Big source of exploits.

jlokier
0 replies
23h35m

`strncpy` is commonly misunderstood. Its name misleads people into thinking it's a safely-truncating version of `strcpy`. It's not.

I've seen a lot of code where people changed from `strcpy` to `strncpy` because they thought that was safety and security best practice. Even sometimes creating a new security vulnerability which wasn't there with `strcpy`.

`strncpy` does two unexpected things which lead to safety, security and performance issues, especially in large codebases where the destination buffers are passed to other code:

• `strncpy` does NOT zero-terminate the copied string if it limits the length.

Whatever reads the copied string in future is vulnerable to a buffer-read-overrun and junk characters appended to the string, unless the reader has specific knowledge of the buffer length and is strict about NOT treating it as a null-terminated string. That's unusual C, so it's rarely done correctly. It also doesn't show up in testing or normal use if the length limit is there "for safety" and nobody enters data that large.

• `strncpy` writes the entire destination buffer with zeros after the copied string.

Usually this isn't a safety and security problem, but it can be terrible for performance if large buffers are being used to ensure there's room for all likely input data.

I've seen these issues in large, commercial C code, with unfortunate effects:

The code had a security fault because under some circumstances, a password check would read characters after the end of a buffer due to lack of a zero-terminator, that authors over the years assumed would always be there.

A password change function could set the new password to something different than the user entered, so they couldn't login after.

The code was assumed to be "fast" because it was C, and avoided "slow" memory allocation and a string API when processing strings. It used preallocated char arrays all over the place to hold temporary strings and `strncpy` to "safely" copy. They were wrong: It would have run faster with a clean string API that did allocations (for multiple reasons, not just `strncpy`).

Those char arrays had the slight inconvenience of causing oddly mismatched string length limits in text fields all over the place. But it was worth it for performance, they thought. To avoid that being a real problem, buffers tended to be sized to be "larger" than any likely value, so buffer sizes like 256 or 1000, 10000 or other arbitrary lengths plucked at random depending on developer mood at the time, and mismatched between countless different places in the large codebase. `strncpy` was used to write to them.

Using `malloc`, or better a proper string object API, would have run much faster in real use, at the same time as being safer and cleaner code.

Even worse, sometimes strings would be appended in pieces, each time using `strncpy` with the remaining length of the destination buffer. That filled the destination with zeros repeatedly, for every few characters appended. Sometimes causing user-interactions that would take milliseconds if coded properly, to take minutes.

Ironically, even a slow scripting language like Python using ordinary string type would have probably run faster than the C application. (Also Python dictionaries would have been faster than the buggy C hash tables in that application which took O(n) lookup time, and SQLite database tables would have been faster, smaller and simpler than the slow and large C "optimised" data structures they used to store data).

rdtsc
1 replies
1d1h

There's nothing wrong with simple usages of goto

Indeed, I like a few gotos here and there for doing cleanup toward the end of the function.

sirwhinesalot
0 replies
1d

Or to break out of nested loops. The problem is with unstructured goto spaghetti making the code impossible to follow without essentially running it in your head (or a debugger).

Goto + Switch (or the GCC computed goto extension) is also a wonderful way to implement state machines.

randomdata
14 replies
1d1h

> Last I checked they're almost as big no-nos as using goto.

Huh? Why is goto a no-no? It is there for good reason. I think we all agree with Dijkstra that, in his words, unbridled gotos are harmful, but C's goto is most definitely bridled. I doubt any language created in the last 50+ years has unbridled gotos. That's an ancient programming technique that went out of fashion long ago (in large part because of Dijkstra).

bluGill
12 replies
23h45m

Languages other than C give you options for flow control so that you don't need goto for it. It is a spectrum: if you only use goto to jump to the end of a small function on error, it is okay, though I prefer something better in my language. I've seen 30,000-line functions with gotos used for flow control (loops and if branches), something you can do in C if you are really that stupid and which I think we will all agree is bad. Such 30,000+ line functions with gotos as flow control were a lot more common in Dijkstra's day.

lelanthran
4 replies
23h25m

Languages other than C give you options for flow control so that you don't need goto for that.

The idiom `if (error) goto cleanup` is about the only thing I see goto used for. What flow control replaces that other than exceptions?

sirwhinesalot
1 replies
22h58m

Jumping out of nested loops. Implementing higher level constructs like yield or defer. State machines. Compiler output that uses C as a "cross-platform" assembly language.

All of them are better served with more specialized language constructs but as a widely applicable hammer goto is pretty nice.

I don't expect C to have good error handling or generators any time soon but with goto I can deal with it.

nickpsecurity
0 replies
22h5m

Compiling HLL constructs in some of those scenarios ultimately produces a jump statement. So, it makes sense that a higher-level version of a jump would be helpful in the same situations.

randomdata
0 replies
23h15m

> What flow control replaces that other than exceptions?

defer has gained in popularity for that situation.

cozzyd
0 replies
19h19m

RAII + destructors

Though gcc supports cleanup functions, just not very ergonomically.

jjav
3 replies
22h1m

30,000 line functions with gotos

The problem there is the 30K line function, not the goto!

bluGill
1 replies
17h32m

30k-line functions are a problem, but they are manageable if goto isn't used in them. I prefer not to, but I have figured them out.

jjav
0 replies
13h53m

Wow! Longest single function I can think of having written is ~200 lines. I always feel bad when editing it but there's no useful way to break it down so I let it be. But a single 30,000 line function? Wow.

tazu
0 replies
3m

I'll take a 30k line function that does one thing over 30 1k line functions that are used once...

randomdata
2 replies
23h17m

We all agree that you shouldn't write bad code. Not using goto, not using any language construct.

But when unbridled gotos were the only tool in the toolbox, bad code was an inevitability in a codebase of any meaningful size. Not even the best programmer was immune. This is what the "Go to statement considered harmful" paper was about.

It was written in 1968. We listened. We created languages that addressed the concerns raised and moved forward. It is no longer relevant. Why does it keep getting repeated in a misappropriated way?

bluGill
1 replies
17h38m

In 1968 they had better languages and programmers were still using goto for control in them despite better options.

randomdata
0 replies
11h50m

Of course. The ideas presented in said paper went back at least a decade prior, but languages were still showing up with unbridled gotos despite that. But that has changed in the meantime. What language are you or anyone you know using today that still has an unbridled goto statement?

umanwizard
0 replies
1d

goto used in certain idiomatic ways (e.g. to jump to cleanup code after an error, or to go to a `retry:` label, or to continue or break out of a multiply nested loop) is fine. What's annoying is bypassing control flow with random goto spaghetti.

jandrewrogers
6 replies
1d1h

The use of goto is unambiguously correct and elegant in some contexts. Unwavering avoidance of goto can lead to unnecessarily ugly, convoluted code that is difficult to maintain. It usually isn't common but it has valid uses.

While use of functions like `strcpy` are less advisable, there are contexts in which they are guaranteed to be correct unless other strong (e.g. language-level) invariants are broken, in which case you have much bigger problems. In these somewhat infrequent cases, there is a valid argument that notionally safer alternatives may be slightly less efficient for no benefit.

xedrac
2 replies
23h48m

The use of goto is unambiguously correct and elegant in some contexts.

For C, absolutely. For C++, it's likely a footgun.

jandrewrogers
1 replies
22h43m

It has fewer use cases in C++ but it still has use cases where the alternatives are worse.

xedrac
0 replies
4h26m

What is a C++ use case where RAII doesn't solve the problem better? I imagine one exists, but I've never encountered it in 20 years. Conversely, I've seen it used inappropriately for cleanup many times (which would be fine in C).

sirwhinesalot
2 replies
1d

strcpy and friends don't really have any benefits beyond just being there. The "safer" versions are still unsafe in many cases, while being less performant and more annoying to use.

Writing a strbuffer type and associated functions isn't particularly hard and the resulting interface is nicer to use, safer, and more efficient.

bvrmn
1 replies
9h29m

I argue strview (non-owning) is almost always what is needed. Most string operations are searching and slicing.

sirwhinesalot
0 replies
9h4m

You also need a strview. Not really relevant for avoiding strcpy and strcat though.

saagarjha
1 replies
1d1h

gotos are fine if used judiciously. strcpy and strcat are “fine” in that they work when you know your code is correct and you have big problems if you don’t. But this describes most of C, unfortunately.

dmit
0 replies
19h38m

gotos are fine if used judiciously

Is there a language feature that is not? :)

lelanthran
0 replies
23h30m

Last I checked they're almost as big no-nos as using goto.

I don't think so. Gotos are fine; strcat and strcpy without a malloc of the correct size in the same scope are a code smell.

i80and
0 replies
1d1h

Some usage of goto is still idiomatic in C if used in ways logically equivalent to structured programming constructs C lacks. It requires some care, but I mean, it's C.

(I'm not however fond at all of longjmp)

quincepie
43 replies
22h40m

To me, -fanalyzer is one of GCC's killer features over Clang. It makes programming in C much easier by explaining errors. The error messages have also begun to feel similar to Rust's in terms of being developer-friendly.

mr_00ff00
31 replies
22h1m

I know Rust (esp on HN) is very hyped for its memory safety and nice abstractions, but I really wonder how much Rust owes its popularity to its error messages.

I would say the #1 reason I stop learning a technology is because of frustrating or unclear errors.

EDIT: Getting a bit off topic, but I meant more that I love C and would love it more with Rust-level error messages.

dist1ll
8 replies
21h46m

That's what makes me wary of modifying my NixOS config. A single typo and you get an error dump comparable to C++03 templates.

Quekid5
4 replies
21h21m

... but you do get an error. That's a lot better than what you typically get with C or C++. Assuming it's valid syntax, of course.

This is veering off topic, but I do agree that Nix-the-language has a lot of issues.

(You might suggest Guix, but I don't want to faff about with non-supported repositories for table stakes like firmware and such. Maybe Nickel will eventually provide a more pleasant and principled way to define Nix configurations?)

nh2
3 replies
19h36m

My favourite Nix error message is

    infinite recursion encountered, at undefined position

vintermann
1 replies
4h0m

I tried some kind of BBC micro at a computer museum, and found out that if you had an error anywhere in your BASIC program, it would just print "error". No line number, no hint at what the problem was.

danudey
0 replies
1h57m

I could understand some kind of ancient system not having the detail or knowledge to explain what happened in particular, but this is something that still happens in a lot of Microsoft software in particular.

Outlook has a consistent tendency to give you errors like "Couldn't get your mail for some reason", or Windows saying "Hey, networking isn't working". No "connection timed out" or "couldn't get an IP address" or "DNS lookup failed" or any other error message that is possible to diagnose. Even the Windows network troubleshooting wizard (the "let us try to diagnose why things aren't working for you" process) would consistently give me "yeah man idk" results, when the error is that I'm not getting an address from DHCP, which should be extremely easy to diagnose.

I get that in a lot of cases problems cut across lots of layers or areas of responsibility, and getting some other team making some other library to expose their internals to your application might be difficult in an environment like Microsoft, but it's just inexplicable that so much software, even these days, resorts to "nope, can't do it" and bails out.

Quekid5
0 replies
18h25m

Haha, reminds me of some Scheme interpreter that would just say something like 'missing paren' at position 0 or EOF depending on where the imbalance was :)

... but, yeah... I'm pretty sure there could be some hints as to whereabouts that infinite recursion was detected.

pxc
0 replies
20h58m

That's definitely the most painful part of iterating on Nix code for me, even in simple configs. You eventually develop an intuition for common problems and rely more on that than on deciphering the stack traces, but that's really not ideal.

lynx23
0 replies
13h34m

Actually, that's a reason why I never even touched Nix. Besides it being functional and all the hype, the syntax and naming of the language feel ad hoc enough that it never caught on for me...

crest
0 replies
2h17m

It's what got me pissed off enough with xmonad to discard it.

darby_eight
6 replies
18h21m

Clang already had decent error messages by the time Rust stabilized. There's simply not much you can do at runtime to explain a segfault.

GrumpySloth
4 replies
17h26m

Not when you called templated functions and were greeted with compile-time template stack traces. Or you called overloaded functions and were presented with 50 alternatives you might have meant. The language is inherently unfriendly to user-friendly error messages.

darby_eight
2 replies
17h25m

Rust doesn't have templates, mister c plus plus user.

Perhaps you might include an example of such a user-unfriendly message?

GrumpySloth
1 replies
17h20m

I’m talking about C++. You wrote that Clang already had friendly error messages. While they were less unfriendly than GCC, calling them friendly is a stretch.

Rust having traits instead of templates is a big ergonomic improvement in that area.

estebank
0 replies
12h19m

Funnily enough, trait bounds are still a big pain in the neck to provide good diagnostics for because of the amount of things that need to be tracked that are cross cutting across stages of the compiler that under normal operation don't need to talk to each other. They got better in 2018, as async/await put them even more front and center and focused some attention on them, and a lot of work for keeping additional metadata around was added since then (search the codebase for enum ObligationCauseCode if you're curious) to improve them. Now with the new "next" trait solver they have a chance to get even better.

It is still easier than providing good diagnostics for template errors though :) (although I'm convinced that if addressing those errors were a high priority, common cases of template instantiation could be modeled internally in the same way as traits, purely for diagnostics, and materially improve the situation; I understand why it hasn't happened, it is hard and not obviously important).

CoastalCoder
0 replies
5h44m

I agree, and I'd go a step further:

In my opinion, the complexity of the interactions between C++'s {preprocessor, overload resolution, template resolution, operator overloading, and implicit casting} can make it really hard to know the meaning of a code snippet you're looking at.

If people use these features only in a very limited, disciplined manner it can be okay.

But on projects where they don't, by golly it's a mess.

(I suppose it's possible to write a horrible mess in any language, so maybe it's unfair for me to pick on C++.)

pwdisswordfishc
0 replies
27m

ASan seems to do quite a lot.

hardwaregeek
5 replies
21h27m

Yeah Rust is popular because it's a practical language with a nice type system, decent escape hatches, and good tooling. The borrow checker attracts some, but it could have easily been done in a way with terrible usability.

darby_eight
4 replies
18h19m

The borrow checker attracts some, but it could have easily been done in a way with terrible usability.

Why would anyone use the resulting language over C? What you're describing is C with a slightly friendlier compiler.

Ar-Curunir
3 replies
17h24m

I have never heard C described as having a good type system.

tialaramex
0 replies
5h47m

"Strongly typed, weakly checked". Which is a funny way to say "Not strongly typed" or perhaps more generously "The compilers aren't very good and neither are the programmers but other than that..." (and yes I write that as a long time C programmer)

But hey, C does have types:

First it has several different integers with silly names like "long" and "short".

Then it has the integers again but wearing a Groucho mask and with twice as many zeroes, "float" and "double".

Then an integer that's probably one byte, unless it isn't, in which case it is anyway, and which doesn't know whether it's signed or not, "char".

Then a very small integer that takes up too much space ("_Bool" aka bool)

Finally though, it does have types which definitely aren't integers; unfortunately they participate in integer arithmetic anyway and many C programmers believe they're integers, but the compiler doesn't, so that's... well, it's a disaster. I speak of course of the pointers.

jjgreen
0 replies
6h12m

To this day, many C programmers believe that strong typing just means pounding extra hard on the keyboard.

Peter van der Linden, "Expert C Programming"

darby_eight
0 replies
17h8m

You could try to argue this is the only source of Rust's popularity... or you could admit that the borrow checker is in fact a reason why folks use Rust over C.

Quekid5
3 replies
21h26m

The hard problem with C is that it's hard to tell if what the programmer wrote is an error. Hence warnings... which can be very hit or miss, or absurd overkill in some cases.

(Signed overflow being a prime example where you really either just need to define what happens or accept that your compiler is basically never going to warn you about a possible signed overflow -- which is UB. The compromise here by Rust is to allow one to pick between some implementation defined behaviors. That seems pretty sensible.)

uecker
2 replies
21h13m

For signed overflow I use -fsanitize=signed-integer-overflow.

Quekid5
1 replies
21h3m

Good. I wonder how many people do and also if their compilers support it. (One would hope so, of course. I assume clang and GCC do.)

... but the question is really what you ship to production.

Btw, possible signed overflow was just an example of things people do not want warnings for. OOB is far more dangerous, obviously... and the cost of a sanitizer in that case is HUGE... and it doesn't actually catch all cases AFAIUI.

gpderetta
0 replies
9h57m

For OOB you can enable bounds checking in the C++ standard library. That's relatively cheap. Of course it won't help with C raw pointers and C arrays.

estebank
0 replies
19h53m

Elm is acknowledged as being the initial inspiration for focusing on diagnostics early on, but Rust got good error messages through elbow grease and focused attention over a long period of time.

People getting used to good errors and demanding more is part of the virtuous circle that keeps them high quality.

Making good looking diagnostics requires UX work, but making good diagnostics requires a flexible compiler architecture and a lot of effort, nothing more, nothing less.

darby_eight
0 replies
18h20m

Rust's eye towards errors predates Elm entirely.

jonathankoren
1 replies
21h10m

I would say the #1 reason I stop learning a technology is because of frustrating or unclear errors.

Overly verbose error messages that obscure more than illuminate are a chief complaint against C++.

Honestly, they can just sap all the energy out of a project.

cogman10
0 replies
21h6m

"You violated a template rule. Here's a novel on everything that's broken as a result"

It's why the Constraint system was important for C++.

chc4
4 replies
21h52m

I have had the exact opposite experience: clang constantly gives me much better error messages than GCC, implementations of some warnings or errors catch more cases, and clang-tidy is able to do much better static analysis.

kolbe
3 replies
15h38m

"Copilot explain this error" has made this whole discussion irrelevant for me.

estebank
2 replies
11h53m

An issue is immediacy: problems are better the earlier they are pointed out (which is why inline errors are better than compile errors, which are better than CI errors, which are better than runtime errors). Having to copy-paste an error adds a layer of indirection that gets in the way of the flow.

Another is reproducibility and accuracy: LLMs have a tendency to confidently state things that are wrong, and to say different things to different people; the compiler has the advantage of being deterministic and generally having a better understanding of what's going on to produce correct suggestions (although we still have cases of incorrect assumptions producing invalid suggestions, I believe we have a good track record there).

If those tools help you, more power to you, but I fear their use by inexperienced rustaceans being misled (an expert can identify when the bot is wrong, a novice might just end up questioning their sanity).

Side note: the more I write the more I realize that the same concerns I have with LLMs also apply to the compiler in some way and am trying to bridge that cognitive dissonance. I'm guessing that the reproducibility argument, ensuring the same good error triggers for everyone that makes the same mistake and the lack of human curation, are the thing that makes me uneasy about LLMs for teaching languages.

tialaramex
0 replies
6h0m

Certainly for the only new diagnostic I wrote for Rust, I expect an LLM's hallucinations are likely to have undesirable consequences. When you write 'X' where we need a u8, my diagnostic says you can write b'X' which is likely what you meant, but the diagnostic deliberately won't do this if you wrote '€' or '£' or numerous other symbols that aren't ASCII - because b'€' is an error too, so we didn't help you if we advised you to write that, you need to figure out what you actually meant. I would expect some LLMs to suggest b'€' there anyway.

kolbe
0 replies
5h38m

FYI, in VS Code, you highlight the error in the terminal, right click and select "copilot explain this." One less layer of indirection. In C++, I ultimately only end up using it for 10% of the errors, but because it's the type of error with a terrible message, copilot sees through it and puts it in plain English.

I was so impressed with GPT-4's ability to diagnose and correct errors that I made this app to catch Python runtime errors and automatically have GPT-4 inject the correction: https://github.com/matthewkolbe/OpenAIError

Peter0x44
0 replies
2h58m

I've found it to have quite poor defaults for its analysis (things like suggesting "use annex k strcpy_s instead of strcpy"). fanalyzer is still by far the easiest to configure.

snarfy
1 replies
5h37m

This reminds me one of the reasons I hated C++ so much. 1000+ lines of error messages about template instantiation, instead of 'error: missing semicolon'.

danudey
0 replies
1h50m

In our programming class in high school we were using Borland C++; I had a classmate call me over to ask about an error they were getting from the compiler.

"Missing semicolon on line 32"

I looked at it, looked at them, and said "You're missing a semicolon on line 32". They looked at line 32 and, hey! look at that! Forgot a semicolon at the end. Added it and their program worked fine.

Even the best error messages can't help some people.

darby_eight
1 replies
18h22m

I'm quite surprised that clang doesn't have static analysis! That doesn't seem right, but I don't program much in C anymore.

bluGill
0 replies
17h28m

It does. However, it catches some different things.

1udfx9cf8azi0
6 replies
1d

    if (nbytes < sizeof(*hwrpb))
        return -1;
    
    if (copy_to_user(buffer, hwrpb, nbytes) != 0)
        return -2;
The fix that was done was:

    if (nbytes > sizeof(*hwrpb))
But I think the correct fix is:

    if (copy_to_user(buffer, hwrpb, sizeof(*hwrpb)) != 0)
It never makes sense to copy out of the hwrpb pointer any size other than sizeof(*hwrpb).

pwagland
5 replies
1d

Right, but the size of the buffer is given, it doesn't make sense to stomp over end of the callers buffer either, so you can't use pass in something longer than `nbytes` either.

1udfx9cf8azi0
4 replies
1d

That's what the original check is for:

    if (nbytes < sizeof(*hwrpb))
If the buffer isn't large enough to hold *hwrpb, then it already fails. The original check was good, only needed to change the amount of bytes copied to sizeof(*hwrpb).

tom_
1 replies
23h16m

The original less-than check was deemed incorrect, and was replaced entirely. For good or for ill, it seems the author deems it valid to pass in a value smaller than sizeof *hwrpb, and that many bytes will be dutifully copied. This might form part of some barebones API versioning mechanism.

1udfx9cf8azi0
0 replies
23h6m

The original less-than check was deemed incorrect

It was only deemed incorrect because of an information leak. Not because it's a valid use-case for user space to copy smaller portions of *hwrpb into user space. https://github.com/torvalds/linux/commit/21c5977a836e399fc71...

sltkr
1 replies
23h7m

No, because if nbytes > sizeof(*hwrpb), your version causes the kernel to only write part of the buffer, and then when the app accesses fields at the end of the struct, it would read uninitialized data, which is very bad.

Recall that the API is intended to be used like this:

    struct hwrpb buf;
    getsysinfo(GSI_GET_HWRPB, &buf, sizeof(buf), /* .. */);
At first glance, it might seem unnecessary to pass the buffer size at all, because in theory the user and kernel should agree on what sizeof(struct hwrpb) is. But the reason it is passed is that there are various reasons why the separately compiled kernel and user binaries might disagree (e.g., incorrect compiler flags, wrong header file being used, struct has changed between different versions, etc.), and it's useful to detect that. So you can make an argument that the most conservative check is:

    if (nbytes != sizeof(*hwrpb)) return -1;
After all, if the user and kernel disagree on the correct size of the struct, then something is wrong! But allowing nbytes < sizeof(*hwrpb) has the benefit that the kernel developers can add fields at the end of the struct without breaking backward compatibility with older applications.

I would agree with you if the kernel had some other mechanism to pass the size of the buffer that was actually filled to the client (like e.g. the read() syscall does), but the getsysinfo() API doesn't return that data, so the kernel must either fill the buffer entirely or return failure.

1udfx9cf8azi0
0 replies
22h48m

No, because if nbytes > sizeof(*hwrpb), your version causes the kernel to only write part of the buffer, and then when the app accesses fields at the end of the struct, it would read uninitialized data, which is very bad.

I would agree with you if the kernel had some other mechanism to pass the size of the buffer that was actually filled to the client (like e.g. the read() syscall does) but the getsysinfo() API doesn't return that data, so the kernel must either fill the buffer entirely or return failure.

As you mention, this struct is versioned. Userspace can tell how much of the struct was filled by checking the size field (hwrpb->size).

But allowing nbytes < sizeof(*hwrpb) has the benefit that the kernel developers can add fields at the end of the struct without breaking backward compatibility with older applications.

That's a related but separate issue. Backward compatibility can be handled by switching on nbytes or by copying fewer bytes with a carefully designed struct. It's not clear that backward compatibility was the original intention of this code, the original intention more seems to be sanitizing tainted input. This struct has not changed in at least 16 years.

aulin
3 replies
12h35m

now we want a GCC language server!

arp242
0 replies
1h31m

A few years before that, Stallman personally sabotaged this kind of tooling "because someone might abuse it". LWN did a write-up: https://lwn.net/Articles/629259/

So it's not surprising gcc devs weren't especially interested in it, since Lord Stallman can come in and decree it unethical on a whim out of misguided fears.

mgaunard
2 replies
22h54m

-Wstringop-overflow is the first warning I disable because of all the false positives.

I doubt the analyze variant would fare any better.

bregma
1 replies
18h35m

Isn't that sort of like pulling the battery out of your carbon monoxide detector because the constant beeping is giving you a headache and making you sleepy?

gpderetta
0 replies
8h31m

No. -Wstringop-overflow is really broken with a huge amount of false positives.

At $JOB we disable it on a line by line basis, but I'm not sure it is worth the effort.

Davidbrcz
1 replies
1d

I wish there was a better output format for the analysis, because this is hell for screen readers.

dmalcolm
0 replies
23h46m

FWIW I implemented SARIF output in GCC 13 which is viewable by e.g. VS Code (via a plugin) - though the ASCII art isn't.

You can see an example of the output here: https://godbolt.org/z/aan6Kfxds (that's the first example from the article, with -fdiagnostics-format=sarif-stderr added to the command-line options)

I experimented with SVG output for the diagrams, but didn't get this in good enough shape for GCC 14.

saagarjha
0 replies
1d1h

Very nice. I’m glad to see these all have detailed reports explaining what’s wrong!

perihelions
0 replies
1d

36 more comments in this other thread:

https://news.ycombinator.com/item?id=39918278 ("GCC 14 Boasts Nice ASCII Art for Visualizing Buffer Overflows (phoronix.com)", 2 hours ago)

crest
0 replies
2h20m

It's hard to believe that more and more compiler writers realise that language lawyering alone isn't going to improve anything but runtime on an unchanging set of microbenchmarks. I still remember the bad old GCC 4.x error messages and those defending them explaining why they should stay like this despite a single template error easily filling ten unintelligible screen pages.

When clang was new users switched to it just for the error messages and promises not to fuck them over too hard e.g. start exploiting that signed integer overflows are "ackchyually undefined". Which is of course correct, but not what users complained about. They complained that what they considered a bugfix release broke code because the defaults changed and -fwrapv didn't even catch all the cases that used to compile to what the user needed/expected.

bvrmn
0 replies
9h34m

It's really great. The sheer amount of work is huge. It seems the difficulty level is on par with introducing fat pointers/array views into the stdlib and the C standard.

Eager
0 replies
1m

A few months ago I made a neat little linux utility.

It was a drop in replacement shim for an arbitrary executable that would pretend to be the original when invoked, fork off the original and hook up to its stdout and stderr.

The error output was then fed to a custom GPT assistant that knew what program the errors came from. That assistant was tasked with turning the original errors into friendly human readable form. The output from the assistant was then sent out of the shim stderr.

It worked very well, but then I got really sick and wasn't able to work on it anymore.

I was using it for GCC / Clang errors because I had become tired of staring at heavily nested compiler dumps for concept/template issues, but you could use it for anything of course.

It would be a nice project for someone to build again, do it properly and generalize it since it doesn't look like I am going to be bouncing around again for a while.