
Progress toward a GCC-based Rust compiler

vlovich123
139 replies
1d2h

The claims in the article feel kinda weak as to the motivation.

Cohen's EuroRust talk highlighted that one of the major reasons gccrs is being developed is to be able to take advantage of GCC's security plugins. There is a wide range of existing GCC plugins that can aid in debugging, static analysis, or hardening; these work on the GCC intermediate representation

One more reason for gccrs to exist is Rust for Linux, the initiative to add Rust support to the Linux kernel. Cohen said the Linux kernel is a key motivator for the project because there are a lot of kernel people who would prefer the kernel to be compiled only by the GNU toolchain.

That explains why you’d want GCC as the backend but not why you need a duplicate front end. I think it’s a bad idea to have multiple front ends, and Rust should learn from the mistakes of C++, which even with a standards body has to deal with a mess of switches, differing levels of language support across compilers making cross-platform development harder, platform-specific language bugs, and so on.

A lot of care is being put into gccrs not becoming a "superset" of Rust, as Cohen put it. The project wants to make sure that it does not create a special "GNU Rust" language, but is trying instead to replicate the output of rustc — bugs, quirks, and all. Both the Rust and GCC test suites are being used to accomplish this.

In other words, I’d love gccrs folks to explain why their approach is a better one than rustc_codegen_gcc considering the latter is able to achieve this with far less effort and risk.

gkbrk
50 replies
1d1h

Rust should learn from the mistakes of C++

Rust should learn from the mistakes of C++ and C, which are among the longest-lasting, biggest-impact, most widely deployed languages of all time?

It's confusing when people think language standards are bad, and instead of saying this code is C99 or C++11, they like saying "this code works with the Rustc binary / source code with the SHA256 hash e49d560cd008344edf745b8052ef714b07595808898c835f17f962a10012f964".

pornel
26 replies
1d1h

C and C++ are widely used despite their language, compiler, and build system fragmentation. Each platform/compiler combo needs ifdefs and workarounds that have been done for so long they’re considered a normal thing (or people say MSVC and others don’t count, and C is just GCC+POSIX).

There’s value in multiple implementations ensuring code isn’t bug-compatible, but at the same time in C and C++ there’s plenty of unnecessary differences and unspecified details due to historical reasons, and the narrow scope of their specs.

gkbrk
25 replies
1d1h

Yes, that's how the story goes. Languages with specs are widely deployed despite being fragmented and bad, not because people find value in multiple implementations. Must be a coincidence that C, C++, JavaScript, C#, and Java all fall under this umbrella.

legobmw99
16 replies
1d1h

Java and C# seem to have actually gotten the idea of multiple implementations correct, in the sense that I have never needed to worry about the specific runtime being used as long as I get my language version correct. I have basically never seen a C/C++ program of more than a few hundred lines that doesn’t include something like #ifdef WIN32 …

bluGill
13 replies
1d1h

Is there more than one Java implementation that is usable? All I can find are obsolete products that never got very far.

gkbrk
11 replies
1d1h

Here are a few Java implementations that I've used recently.

- My Android phone

- OpenJDK on my laptop

- leJOS for a university robotics course to run on Lego robots

- Smart cards, I have a couple in my wallet

Perhaps I'd call the Lego robot stuff obsolete, certainly not the Android userspace or smart cards though.

vlovich123
10 replies
1d

Your Android phone and the latest Java share very little commonality. Android only recently added support for Java 11, which is 5 years old at this point. The other non-OpenJDK implementations you mentioned are much more niche (I imagine the smart cards run JavaCard, which is still probably going to be an OpenJDK offshoot).

pjmlp
4 replies
22h22m

Java 17 LTS is supported from Android 12 onwards.

PTC, Aicas, microEJ, OpenJ9, Azul, and GraalVM are a few alternative JVM implementations.

vlovich123
3 replies
20h37m

Again. I’m not claiming that alternative implementations don’t exist, just that they’re not particularly common/popular compared with OpenJDK/Oracle (which is largely the same codebase). Android is the only alternative implementation with serious adoption and it lags quite heavily.

BTW GraalVM is based on OpenJDK so I don’t really understand your point there. It’s not a ground-up reimplementation of the spec.

pjmlp
2 replies
20h31m

GraalVM uses an entirely different infrastructure (its own JIT compiler and GC), which affects runtime execution and existing tooling.

Doesn't matter how popular they are; they exist because there is a business need and several people are willing to pay for them, in some cases lots of money, because they fulfill needs not available in OpenJDK.

the-smug-one
1 replies
17h28m

I don't think what you're describing here is accurate re: GraalVM. GraalVM is HotSpot but with C1/C2 not being used, instead calling out to Graal through a generic compiler interface, AFAIK.

pjmlp
0 replies
12h21m

Only if you are describing the pluggable OpenJDK interfaces for GraalVM, which were removed a couple of releases after being introduced, as almost no one used them.

I have been following GraalVM since it started as the Maxine VM at Sun Research Labs.

gkbrk
4 replies
1d

Who said anything about the latest Java? Since Java has versioned specs, platform X can support one version, and you can target it based on that information even if other versions come out and get supported by other platforms.

For example C89 and C99 are pretty old, and modern C has a lot that is different from them. But they still get targeted and deployed, and enjoy a decent following. Because even in 2024, you can write a new C89 compiler and people's existing and new C89 code will compile on it if you implement it right.

sunshowers
3 replies
22h36m

But as a developer I do want to use (some of) the latest features as soon as they become available. That's why most of my crates have an N-2 stable version policy.
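In practice a policy like that usually shows up as Cargo's `rust-version` field, so consumers on an older toolchain get a clear "requires newer Rust" error instead of a confusing build failure. A minimal sketch, with illustrative numbers (assuming current stable were 1.75, so N-2 is 1.73):

  # Cargo.toml (sketch; name and versions are made up)
  [package]
  name = "my-crate"
  version = "0.1.0"
  edition = "2021"
  # Minimum Supported Rust Version; cargo rejects older toolchains
  rust-version = "1.73"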

vlovich123
2 replies
20h36m

And since Rust does a release every 6 weeks, we’re talking a ~4 month lag. That’s unheard of for C/C++.

lanstin
1 replies
13h20m

It is the sort of thing that people aimed for in the 90s, when there was more velocity in C and C++. If Rust lives long enough, its rate of change will also slow down.

pjmlp
0 replies
8h29m

In the 90s if you wanted new features you had to pay for them.

hansjorg
0 replies
1d1h

Eclipse OpenJ9 (previously IBM J9) is in active development and supports Java 21.

vintermann
1 replies
22h25m

Well, there's Android... and Unity, which as I recall is stuck on an old version of C# and its own way of doing things. I also had the interesting experience of working with OSGi at work for a couple of years.

pjmlp
0 replies
8h28m

And Meadow, and Capcom, and Godot 3.

vlovich123
6 replies
1d1h

C# has multiple compilers and runtimes? Mono used to be a separate thing, but if I recall correctly Mono has been adopted by MS and a lot has been merged between the two.

JavaScript itself is a very simple language, with most of the complexity living in disparate runtimes, and the fact that there are multiple runtime implementations is a very real problem, requiring complex community-maintained polyfills that age poorly. For what it’s worth, TypeScript has a single implementation and it’s extremely popular in this community.

Java is probably the “best” here, but really there are still only the Sun & OpenJDK implementations, and OpenJDK and Oracle’s JDK are basically the same if I recall correctly, with the main difference being the inclusion of proprietary “enterprise” components that Oracle can charge money for. There are other implementations of the standard, but they’re much more niche (e.g. Azul Systems). A point against disparate implementations is how Java on Android is now basically a fork & a different language from modern-day Java (although I believe that’s mostly because of the Oracle lawsuit).

Python is widely deployed & CPython remains the version that most people deploy. Forks find it difficult to keep up with the changes (e.g. PyPy for the longest time lagged quite badly although it seems like they’re doing a better job keeping up these days). The forks have significantly less adoption than CPython though.

It seems unlikely that independent Rust front-end implementations will benefit its popularity. Having GCC codegen is valuable, but integrating that behind the Rust front end sounds like a better idea and is way further along: gccrs is targeting a 3-year-old version of Rust and still isn’t complete, while the GCC backend is being used to successfully compile the Linux kernel. My bet is that gccrs will end up closer to gcj, because it is difficult to keep up.

gkbrk
3 replies
1d

C# has multiple compilers and runtimes?

Yes. Roslyn, Mono and some Mono-like thing from Unity to compile it into C++.

Mono used to be a separate thing

Mono is still a thing. The last commit was around 3 months ago.

multiple runtime implementations is a very real problem requiring complex polyfills

You can target a version of the spec and any implementation that supports that version will run your code. If you go off-spec, that's really on you, and if the implementation has bugs, that's on the implementation.

TypeScript has a single implementation

esbuild can build typescript code. I use it instead of tsc in my build pipeline, and only use tsc for type-checking.

[Typescript is] extremely popular in this community

esbuild is extremely popular in the JS/TS community too. The second most-popular TS compiler probably.

[Java has] only the Sun & OpenJDK implementations

That's not true. There are multiple JDKs and even more JVMs.

Java on Android is now basically a fork & a different language from modern-day Java

Good thing Java has specs with multiple versions, so you can target a version that is implemented by your target platform and it will run on any implementation that supports that version.

Python is widely deployed & CPython remains the version that most people deploy. The forks have significantly less adoption than CPython though.

That is because Python doesn't have a real spec or standard, at least nothing solid compared to the other languages with specs or standards.

It seems unlikely that independent Rust front end implementations will benefit it’s popularity.

It seems unlikely that people working on an open-source project will only have the popularity of another open-source project in mind when they spend their time.

vlovich123
1 replies
23h57m

Yes. Roslyn, Mono and some Mono-like thing from Unity to compile it into C++.

Roslyn is more like the next-gen compiler and will be included in Mono once it’s ready to replace mcs. I view it as closer to Polonius, because it’s an evolutionary step that upgrades the previous compiler into a new implementation. It’s still a single reference implementation.

Mono is still a thing

I think you misunderstood my point. It started as a fork, but then Microsoft adopted it by buying Xamarin. It’s not totally clear to me whether it’s actually still a fork at this point or whether it has been merged; I could be mistaken, but Mono and .NET Core these days share quite a bit of code.

esbuild can build typescript code

Yes, there are plenty of transpilers, because the language is easy to desugar into JavaScript (intentionally so - TS stopped accepting new language syntax extensions and follows ES 1:1 now, and all the development is in the typing layer). That’s very different from a forked implementation of the type checker, which is the real meat of the TS language compiler.

The second most popular TS compiler probably

It’s a transpiler and not a compiler. If TS had substantial language extensions on top of JS that it was regularly adding, all these forks would be dead in the water.

That’s not true. There are multiple JDKs and even more JVMs

I meant to say they’re the only ones with any meaningful adoption. All the other JDKs and JVMs are much more niche and often live in niches that are behind on the adoption curve (i.e. still running Java 8 or something, or willing to stay on older Java versions because there’s some key benefit in that implementation that is operationally critical).

Good thing Java has specs with multiple versions, so that you can target a version…

Good for people implementing forks, less good for people living within the ecosystem, who have to worry about which version of the compiler to support with their library. For what it’s worth, Rust also has language editions, but those are more like LTS versions of the language, whereas Java versions come out more frequently & each implementation is on whatever year it wanted to snapshot against.

Rohansi
0 replies
22h38m

FYI Mono has been shipping Roslyn as its C# compiler for a few years now. Mono's C# compiler only fully supports up to C# 6 while Roslyn supports C# 12, the latest version.

Mono shares a lot of code with .NET (Core), but it is mostly limited to the standard libraries and compiler. Mono is still its own separate implementation of the CLR (the runtime/"JVM") and supports many more platforms than .NET (Core) does today.

tester756
0 replies
19h43m

Yes. Roslyn, Mono and some Mono-like thing from Unity to compile it into C++.

Mono has approx. 0.x% market share outside Unity. Also Mono is used by .NET Core for Blazor WASM iirc.

Let's not compare this relatively sane scenario with the mess that exists in the C++ world.

lioeters
1 replies
1d

TypeScript has a single implementation

It's probably a matter of time until there's a TypeScript compiler implemented in Rust. But the surface area of the language is pretty big, and I imagine it will always lag behind the official compiler.

Forks find it difficult to keep up with the changes

It's interesting to think of multiple implementations of a language as "forks" rather than spec-compliant compilers and runtimes. But the problem remains the same: the time and effort necessary to constantly keep up with the reference implementation, the upstream.

vlovich123
0 replies
1d

There have been plenty of attempts to implement TS in another language. They all struggle with keeping up with the pace of change, because the team behind TS is quite large. There was an effort to do a fairly straight port into Rust, which actually turned out quite well, but then the “why” question comes up - the reason would be better performance, but improving performance requires changing the design, which a transliteration approach can’t give you, & the more of the design you change, the harder it is to keep up with incoming changes, putting you back at square one. I think Rust rewrites of TS compilers (as long as TS is seeing substantial changes, which it has been) will fare worse than PyPy, which is basically a neat party trick without serious adoption.

bluGill
0 replies
1d1h

Or do languages get specs because they are widespread and becoming fragmented? That clearly applies to C and C++ - both were implemented first, and a formal spec was then written in response to fragmentation.

flooow
13 replies
1d1h

Or, y'know, "rustc 1.74.0" like a normal person.

legends2k
10 replies
22h53m

That's beside the point. Adhering to a language standard is much clearer than specifying behavior by a compiler version. Behaviour is documented in the former, while in the latter one has to observe the output of a binary (and hope that the side effects are understood in their full gravity).

estebank
8 replies
21h37m

But no-one writes code against the standard. We all write code against the reality of the compiler(s) we use. If there's a compiler bug, you either use a different version or a different vendor, or you side-step the code triggering the issue. The spec only tells you that the vendor might fix it in the future.

_gabe_
7 replies
17h2m

This is definitely not true. Whenever I have a question about a C++ language feature I typically go here first[0], and then if I’m looking for compiler specific info I go to the applicable compiler docs second. Likewise, for Java I go here[1]. For JavaScript I typically reference Mozilla since those docs are usually well written, but they reference the spec where applicable and I dig deeper if needed[2].

Now, none of these links are the specifications for the languages listed, but they all copiously link to the specification where applicable. In rare situations, I have gone directly to the specification. That’s usually if I’m trying to parse a subset of the language or understand an obscure language feature.

I would argue no one writes code against a compiler. Sure we all validate our code with a compiler, but a compiler does not tell you how the language works or interacts with itself. I write my code and look for answers to my questions in the specification for my respective languages, and I suspect most programmers do as well.

[0]: https://en.cppreference.com/w/

[1]: https://docs.oracle.com/javase/specs/index.html

[2]: https://developer.mozilla.org/en-US/docs/Web/JavaScript

estebank
6 replies
15h4m

If the compiler you use and the spec of your language disagree, what do you do?

The Project is working on a specification. The Foundation hired someone for it.

A Rust spec done purely on paper ahead of time would be the contents of the accepted RFCs. The final implementation almost never matches what was described, because during the lengthy implementation and stabilization process we encounter a multitude of unknown unknowns. The work of the spec writers will be to go back and document the result of that process.

For what it's worth, the seeming foot-dragging on this is because the people who would be qualified and inclined to do that work were working on other, more pressing matters. If we had had a spec back in, let's say, Rust 1.31, what would that have changed, in practice?

throwaway17_17
3 replies
14h44m

Can you articulate what a spec for a programming language entails in your mind? I am sure it is a niche opinion/position, but I base any discussion of formal specification of a language on ‘The Definition of Standard ML’. With that as a starting point, I don’t see how the implementation process of a compiler could do anything that would force a change to the spec: once the formal syntax is defined, the static and dynamic semantics of the language are laid out, and those semantics are proved approximately consistent and possessing ‘type safety’ (for some established meaning of the words), any divergence or disagreement is a failing of the implementation. I’m genuinely interested in what you expect a specification of Rust to be if, as your comment suggests, you have a different point of view.

estebank
2 replies
14h33m

A specification can be prescriptive (stating how things should work) or descriptive (stating the observable behavior of a specific version of the software). For example, earlier in Rust's post-1.0 life the borrow checker changed behavior a few times, some changes to fix soundness bugs, some to enable more correct code to work (NLL). An earlier spec would have described behavior different from what Rust has today (but of course, the spec can be updated over time as well).
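A minimal sketch of the kind of behavior change in question (this pattern was rejected by the old lexical borrow checker and accepted once NLL landed in Rust 1.31):

  fn main() {
      let mut v = vec![1, 2, 3];
      let first = &v[0];     // shared borrow of `v`
      println!("{}", first); // last use of `first`
      // The pre-NLL checker kept the borrow alive until the end of the
      // scope, so this push was an error; NLL ends the borrow at its
      // last use, so current rustc accepts it.
      v.push(4);
  }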

How should the Rust specification represent the type inference algorithm that rustc uses? Is an implementation that can figure out types in more cases than described by the specification conformant? This is the "split ecosystem" concern some have with the introduction of multiple front ends: code that works on gccrs but not rustc. There's no evidence that this will be a problem in practice; everyone involved seems to be aligned on that being a bad idea.
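As a hypothetical illustration of that conformance question: rustc rejects the following with "type annotations needed" (E0282), but a front end with a more aggressive defaulting rule could accept it, silently growing the language it compiles:

  fn main() {
      // Nothing constrains `T` in `Vec<T>`: `len()` works for any
      // element type, so rustc reports an inference failure here.
      let v = Vec::new();
      println!("{}", v.len());
  }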

Any divergence or disagreement is a failing of the implementation.

Specs can have bugs too, not just software.

throwaway17_17
1 replies
13h43m

I think it was probably a rhetorical question, but with regard to type checking: as with ML, type inference is an elaboration step in the compiler that produces the correct syntax of the formal language defined in the language definition. Specifying the actual implemented algorithm that translates from concrete syntax to abstract syntax (with the accompanying elaboration to explicit type annotations) is a separate component of the specification document that does not exert any control or guidance over the definition of the formal language in question.

I think this may be a large point of divergence with regard to my understanding of, and position on, language specifications. I assumed that when posts said Rust is getting a spec, that included a formal, ‘mathematically’ proven language definition of the core language. I am aware that is not what C or C++ or JS includes in a spec (and I don’t know if that is true for Ada), but I was operating under the assumption that, given Rust’s inspiration from OCaml and its limited functional-esque stylings, the language would follow the more formalized definition style I am used to.

estebank
0 replies
13h22m

Look at Goals and Scope in https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio... for what the Rust spec will be.

gkbrk
1 replies
14h51m

If the compiler you use and the spec of your language disagree, what do you do?

If the compiler claims to follow the specified version of the spec, and it doesn't, you file a compiler bug.

And then use the subset that it supports, perhaps by using an older spec if it supports that fully. Perhaps looking for alternative compilers that have better/full coverage of a spec.

"Supports the spec minus these differences" is still miles better than "any behaviour can change because the Rust 2.1.0 compiler compiles code that the Rust 2.1.0 compiler compiles".

estebank
0 replies
14h21m

If the compiler claims to follow the specified version of the spec, and it doesn't, you file a compiler bug.

And then use the subset that it supports, perhaps by using an older spec if it supports that fully. Perhaps looking for alternative compilers that have better/full coverage of a spec.

If you encounter rustc behavior that seems unintentional, you can always file a bug in the issue tracker against the compiler or language teams[1]. Humans end up making a determination whether the behavior of the compiler is in line with the RFC that introduced the feature.

"Supports the spec minus these differences" is still miles better than "any behaviour can change because the Rust 2.1.0 compiler compiles code that the Rust 2.1.0 compiler compiles".

You can look at the Rust Reference[2] for guidance on what the language is supposed to be. It is explicitly not a spec[3], but the project is working on one[4].

1: https://github.com/rust-lang/rust/issues?q=is%3Aopen+is%3Ais...

2: https://doc.rust-lang.org/reference/introduction.html

3: https://doc.rust-lang.org/reference/introduction.html#what-t...

4: https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...

wongarsu
0 replies
20h53m

But barely anyone gets to write C++11. You write C++11 for MSVC2022, or C++11 that compiles with LLVM 15+ and GCC 8+, and maybe MSVC if you invest a couple hours of effort into it. That's really not that different from saying you require a minimum compiler version of Rust 1.74.0.

gpm
1 replies
1d1h

And in the rare instances where you're using in-development features "rust nightly-2023-12-18"

Literally the only reason to specify via a hash* would be if you were using such a bleeding edge feature that it was only merged in the last 48 hours or so and no nightly versions had been cut.

*Or I suppose you don't trust the supply chain, and you either aren't satisfied with or can't create tooling that checks the hash against a lockfile, but then you have the same problem with literally any other compiler for any language.
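For what it's worth, the usual mechanism for that kind of pinning is a `rust-toolchain.toml` checked into the repository, which rustup picks up automatically; a minimal sketch:

  # rust-toolchain.toml (sketch)
  [toolchain]
  channel = "nightly-2023-12-18"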

rcxdude
0 replies
1d1h

Indeed. If I'm specifying a hash for anything I'm definitely not leaving things up to the very, very wide range of behaviours covered by the C and C++ standards.

ragnese
3 replies
1d1h

It's confusing when people think language standards are bad, and instead of saying this code is C99 or C++11, they like saying "this code works with the Rustc binary / source code with the SHA256 hash e49d560cd008344edf745b8052ef714b07595808898c835f17f962a10012f964".

I don't know if that's totally fair. I remember that it took quite a while for C++ compilers to actually implement all of C++11. So it was totally normal at the time to change which subset of C++11 we were using to appease whatever version of GCC was in RHEL.

staunton
1 replies
21h31m

Technically, no compiler ever implemented "all" of C++11... You'd have to implement garbage collection, for example :D

the_why_of_y
0 replies
20h8m

Not to mention "export", which was in C++98 but only ever supported by Comeau C++.

schuyler2d
0 replies
17h29m

Standards are good. Canonical implementations managed/organized by a single project are also good (Python, Go, Ruby, ...).

sophacles
1 replies
1d1h

the longest lasting, biggest impact, widely deployed languages of all time

Can be true at the same time as:

C and C++ have made mistakes, and have had issues as a result of bad choices

The latter should be learned from by any language in a position to not make those same mistakes. We call this technological progress, and it's OK.

bluGill
0 replies
1d1h

C and C++ both learn from the mistakes of others too. Of course, as mature languages they have a lot they cannot change. However, when they do propose new features it is common to look at what other languages have done. C++'s thread model is better than Java's because they were able to look at the things Java got wrong. (In hindsight; those choices looked good at the time to smart people, so let's not pick on Java for not predicting how modern hardware would evolve. Indeed, it is possible that in a few years hardware will evolve differently again, and C++'s thread model will then be just as wrong, despite me calling it good today.)

sunshowers
0 replies
22h38m

I think people generally specify their MSRVs as version numbers for libraries, and often a pinned toolchain file for applications. I haven't seen anyone use a hash for this, though I'm sure I might have missed something.

I don't think language standards are "bad".

rcxdude
0 replies
1d1h

The long-lasting and widespread nature of C and C++ is why their mistakes are the ones most worth learning from.

Guvante
0 replies
22h10m

That phrase doesn't mean what you think it does.

"Learn from the mistakes of X" doesn't mean X is bad, it means X made mistakes.

ndiddy
46 replies
1d1h

Having another Rust implementation allows for an "audit" to help validate the Rust spec and get rid of any unspecified behavior. It would also give users options. If I hit a compiler bug in MSVC, I can file a report, switch to GCC and keep working on my project until the bug is fixed. With Rust, that's not currently possible.

nindalf
25 replies
19h28m

Your sentiment is a commonly expressed one, but not usually by people who have adopted Rust. It’s usually Clang/MSVC/GCC users who have decided this is the optimal flow and want to replicate it in all future codebases they work in, regardless of language.

In reality if you hit a compiler bug in Rust or Go (or any other language with one main impl like Python, Ruby…) you would file a report and do one of two things - downgrade the compiler (if it’s an option) or write the code a different way. Compiler bugs in these languages are rare enough that this approach works well.

That said, for people who are really keen on a spec, there is one being worked on (https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...).

But this GCCRS effort doesn’t get you any closer to your ideal C/C++ style workflow because they are committing to matching the semantics of the main compiler exactly. Bugs and all. And that’s the way it should be.

The Rust ecosystem becomes worse if I have to install a different toolchain and learn a different build system with a different compiler for every new project I interact with. And after all that extra effort it turns out there are subtle differences between implementations. My developer experience is just worse at that point. If I wanted to code like this, I would code in C++ but I don’t.

moregrist
15 replies
18h22m

Your sentiment is a commonly expressed one, but not usually by people who have adopted Rust.

Perhaps because it’s currently not possible in Rust, so users don’t really see the advantages yet.

Multiple implementations are always a good thing. They bring in diversity of thought and allow for some competition in the implementation. Having multiple slightly different implementations also gives more clarity to a spec.

This isn’t the kind of stuff that users tend to think about until it’s already there.

MindSpunk
8 replies
16h45m

I agree there is value to multiple implementations. We wouldn't be where we are today with open compilers and efficient machine code if it weren't for the competing implementations of LLVM and GCC.

However there isn't enough benefit to multiple language _frontends_ that I can see that outweighs the many downsides.

Not only do you have to contend with the duplication of effort of implementing two or more language frontend projects, you also create ecosystem splits when the implementations inevitably fail to match each other's behavior.

The biggest example of multiple frontends, C++, is not one language. C++ is Clang-C++, GNU-C++, MSVC-C++, and so on. All of these implementations have subtle incompatibilities in how they handle identical code, or they have disjoint feature matrices, or they have bugs, and so on. Multiple frontends is IMO one of the _worst_ parts of C++, because it makes writing truly portable code almost impossible. How many new C++ features are unusable in practice because one of the main 3 toolchains hasn't implemented them yet?

With Rust, on the other hand, everything is portable by default. No #ifdef CLANG, no wrangling with subtly incompatible toolchains. If I write code, I can trust it will just work on any architecture Rust can build for, setting aside implementation bugs and explicitly platform-specific code.
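A minimal sketch of what "explicitly platform specific" looks like in Rust (the function and strings are made up); the cfg conditions are part of the language itself rather than vendor macros, so every conforming frontend has to resolve them the same way:

  // The same source builds everywhere; platform differences are
  // opt-in and visible at a glance.
  #[cfg(target_os = "windows")]
  fn platform_name() -> &'static str {
      "windows"
  }

  #[cfg(not(target_os = "windows"))]
  fn platform_name() -> &'static str {
      "something unix-ish"
  }

  fn main() {
      println!("running on {}", platform_name());
  }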

I have a hobby game engine I work on in my spare time that was developed exclusively on x86-64 for years, and I ported it to run on Apple Silicon ARM in two afternoons. Rust just _worked_; all the time was spent on my C/C++ dependencies and the new code needed for macOS support.

I simply don't see how the nebulous, difficult-to-quantify benefits of competing implementations can outweigh the practical advantages of truly portable code that come from a single implementation: the ecosystem benefits of a uniform language, and the benefit to my sanity of being able to trust my code to compile everywhere I need it to.

adrianN
6 replies
11h41m

Writing truly portable C++ is not all that hard. In the worst case you need a handful of ifdefs, even if you write systems code.

messe
3 replies
9h7m

#ifdefs are a sign of ported code, not portable code.

adrianN
2 replies
8h53m

If you need a few hundred lines of ifdef in a codebase with millions of lines it is portable in the sense that you are able to port it with reasonable effort.

nindalf
1 replies
8h18m

That's nice, you're welcome to continue writing your ifdefs. Nothing you or anyone else has said has built a compelling case for alternate frontends. You're saying "it's not much effort". Sure, but I prefer 0 effort.

What we're saying is that everyone actually using Rust today is fairly happy with how the language is. That's why Rust has had an 80%+ approval rating among Rust developers since its creation and C++ is at 49% (https://survey.stackoverflow.co/2023/#section-admired-and-de...).

This suggestion of alternate frontends still has no good technical reason driving it, just people pattern matching on their past experience. If it happens, all it will do is lower Rust's approval to C++'s level.

adrianN
0 replies
7h10m

If only alternative frontends were the only reason that C++ developers were unhappy with the language...

smaudet
1 replies
47m

Only #if you ignore the hundred other issues with building portable C++ code:

* Libraries
* ABIs
* Build system(s)
* Platform compiler support...

In compiled/interpreted land, most if not all of these are solved problems. Even for libraries, usually dependencies can be reliably downloaded and recompiled (if they are not compiled on the fly).

I don't really care if you can create a "portable header" library when the ecosystem is sharded and "portable" only holds true for some subset of C++ compilers/platforms.

adrianN
0 replies
18m

That seems orthogonal to having multiple frontends. I have in fact worked on several systems that compiled under three compilers and ran on multiple architectures and/or operating systems and it was not a hot mess of ifdef spaghetti.

usrusr
0 replies
7h54m

It's bad enough already when there's a single implementation, but people can't agree which version increment has left the newfangled stage.

rblatz
2 replies
17h17m

All I hear is "fragmented ecosystem".

crotchfire
1 replies
16h55m

Have you had your ears checked lately?

Klonoar
0 replies
8h47m

Nah, they’re right.

newZWhoDis
1 replies
14h13m

That is dumb and counterproductive. Your entire scenario presupposes unlimited labor capacity and funding (monetary or otherwise), so “diversity is better!”

In reality, the people wasting their time to produce a redundant ground-up rewrite of existing functionality could have spent their time and effort on something else that actually moves the world forward.

moregrist
0 replies
5h30m

I think the only thing I’m assuming here is that there are people who want to do the work. Perhaps because it’s fun to write a compiler (it is!), or because they don’t view LLVM as the pinnacle of compiler technology, or maybe their employer sees value in it.

And apparently there are people who want to do the work, because we have work on a couple of implementations.

You can choose to use your one true implementation, no one is stopping you. But it’s kind of weird to assume that people working on another one are detracting from yours. That’s not usually the way open source software works.

ilayn
0 replies
12h2m

That's only true if you are not distributing your code. If you want to kill portability just involve MSVC in anything.

crotchfire
8 replies
16h57m

Monopoly is never the optimal flow.

Sincerely,

- Rust adopter

TheDong
7 replies
14h40m

Monopoly rings hollow when it comes to a community open source project.

Monopolies are bad when they stifle progress, but rust's written under a free software license. At any point, anyone can take the project and fork it into a new one, can carry on the progress under a new flag.

Unlike traditional monopolies, the rust project's priorities aren't extracting profits or holding power, but rather in improving the language, and they're open to anyone (well, anyone who's tolerant of others) joining and helping.

The argument here is that we will end up with a better outcome if we work together as a community, rather than if we fragment just for some perceived ideal of multiple implementations.

While you're criticizing the rust monopoly, you might as well go and also yell at the group of friends voluntarily picking up trash off the sidewalk for free since they're working together as one monopoly of friends instead of creating two competing cleaning groups.

fsckboy
4 replies
11h31m

rust's written under a free software license

the way you are using "free" is undefined behavior. "Free" as in speech software, the F in FOSS, is associated with copyleft, the FSF, and GNU, not with licenses like BSD, MIT, and Apache, the open source in fOSs, which is what Rust uses.

gpderetta
2 replies
8h6m

The FSF recognizes BSD, MIT and Apache as Free Software (but not Copyleft) licenses.

fsckboy
1 replies
6h58m

no, they recognize those licenses as compatible with the GPL, but not as free software

https://www.fsf.org/

"Free software means that the users have the freedom to run, edit, contribute to, and share the software."

those licenses allow derivatives which users cannot edit and contribute to, so they don't guarantee the 4 freedoms of free software.

gpderetta
0 replies
6h45m

That's literally contrary to what FSF says.

Please read https://www.gnu.org/licenses/license-list.html.

For example:

  Apache License, Version 2.0 
  This is a free software license, compatible with version 3 of the GNU GPL.
Or:

  Expat License [this is the MIT license]
  This is a lax, permissive non-copyleft free software license, compatible with the GNU GPL.

The fact that a license allows derivative works to take away the 4 freedoms doesn't make the license not a free software license. An end user who receives permissively licensed free software can make full use of the 4 freedoms.

TheDong
0 replies
8h13m

The english language is full of ambiguity if you choose to be pedantic, yet somehow most people manage to communicate.

The fact that you knew I was referring to Rust's license means my English was in fact not undefined, but exactly as clearly defined as one can hope English to be, i.e. we both took it to mean the same thing.

In my opinion, in context, "free software license" is also a clear and common phrase, not undefined behavior.

Essentially, I think everyone has agreed that "free software", unless you add more qualifiers, will refer to the FSF definition of free software https://www.gnu.org/philosophy/free-sw.html

The FSFs definition includes both the licenses rust is licensed under: https://www.gnu.org/licenses/license-list.html#Expat

The FSF would happily call BSD, MIT, and Apache "Free Software Licenses", and in fact do so.

imadj
1 replies
10h17m

Monopoly rings hollow when it comes to a community open source project.

Last I heard, `open source` was used to refer to code, not projects.

Every project has its "decision makers" (no matter the structure or titles) that have the power to dictate the direction and vision. You can literally see it every day in many projects, not trying to single out Rust or anything. The power is not distributed evenly, not even within the elite inner circle.

There's no `together as a community` when it's followed by a clause like `we need to follow the one direction (X) has set`.

What power does `the community` exactly have? To open a PR and get denied? To fork? Can the community elect leaders and change course? You said it yourself: anyone who wants their voice heard has to climb up the ladder and join the inner circle. You're describing a power elite, not a community. Even forks don't work in real life; what usually happens is the project dies slowly and everyone moves on to the next thing.

I'm not dismissing the structure itself, it's a good pragmatic approach, it's just that you're twisting and polishing it. In short:

  You: Everyone is welcome. No need to fragment. Rust is run by community.

  Rust's governance: We rule. We also pick any new members. Community doesn't have much say about anything.
So pursuing efforts like the one in the post is exactly how the community is empowered, and what might enable Rust to survive any challenges it faces in the future. People doing their own things is how the open source ecosystem began and continues to flourish. So can you just welcome their effort and let them be?

germandiago
0 replies
8h24m

Reminds me of the iron law of oligarchy, which is basically true of any group.

VBprogrammer
10 replies
1d

I don't have a lot of experience in C or C++ but I wonder if this ever works in practice for a non-trivial codebase? I'd be really surprised if, without diligently committing to maintaining compatibility with the two compilers, it was easy to up sticks and move between them.

jandrewrogers
1 replies
22h21m

Many complex C++ codebases have full parallel CI pipelines for GCC and LLVM. It encourages good code hygiene and occasionally identifies bugs in the compiler toolchain.

If you are using intrinsics or other architecture-specific features, there is a similar practice of requiring full CI pipelines for at least two CPU architectures. Again, it occasionally finds interesting bugs.

For systems I work on we usually have 4 CI pipelines for the combo of GCC/LLVM and ARM/x86 for these purposes. It costs a bit more but is generally worth it from a code quality perspective.

izacus
0 replies
20h24m

Adding CI pipeline running compiles on MSVC was one of the big shakeouts of undefined behaviour and bugs in our C++ codebase - while making that compiler happy is annoying if you come from Unixy land, it did force us to shed quite a few bad habits and even find several bugs in that endeavor.

(And then we could ship the product on Windows, so that was nice.)

zik
0 replies
17h54m

It's easy to write C or C++ code which works with both gcc and clang. Generally if there's a problem it's a standards compliance issue in your code so it's good to know and fix.

MSVC is a bit more quirky... At times its standards compliance has been patchy but I think it's ok right now.

ska
0 replies
23h43m

It works but you have to keep it at least "semi-active". Some shops have CI services setup to cross compile etc. already. Mainly I've seen this not as a tool to maintain a "backup" but as a way to shake out bugs and keep non-portable stuff from creeping into the codebase.

You could probably do most of it with conservative linting and some in-house knowledge of portability issues, non-standard compiler extensions, etc.

It's typically a lot easier to do this for different compiler, same target, than different targets.

logicchains
0 replies
1d

I'd be really surprised if, without diligently committing to maintaining compatibility with the two compilers, it was easy to up sticks and move between them

Many places deliberately compile with multiple compilers as part of the build/test pipeline, to benefit from more compiler warnings, diagnostics etc.

jupp0r
0 replies
23h54m

There are lots of libraries that need to compile on a variety of platforms (i.e. different versions of LLVM for Android and macOS, MSVC for Windows, and GCC for some embedded targets not well supported by LLVM).

jamesfinlayson
0 replies
14h19m

In a presentation a few years ago Valve Software said that getting the Source engine working on Linux and Mac helped them find some issues in their Windows code (I don't remember the exact timeline though - I assume they had an Xbox 360 build pipeline at the same time, but maybe not yet a PlayStation 3 build pipeline).

gpderetta
0 replies
22h42m

It is quite common. In most places I have worked we did GCC and clang builds. Some did GCC/clang/msvc/ICC.

And of course plenty of OSS libraries support many compilers even beyond the main three.

cozzyd
0 replies
1d

It's pretty easy to move between gcc/clang/icc for most codebases. Though there are some useful features still that are gcc only. (And probably some that are clang only, though I pretty much only use gcc...)

Guvante
0 replies
22h47m

Our codebase works on all three, we compile on MSVC for Windows, GCC for Linux, and Clang for Mac.

It isn't easy and honestly the idea of "just don't use MSVC for a while" is strange to me. Sure you can compile with any of them but almost certainly you are going to stick to one for a given use case.

"This release is on a different compiler" isn't something you do because of a bug. Instead your roll back a version or avoid using the unsupported feature until a fix is released.

The reason is as much as they are supposed to do the same thing the reality is bugs are bugs, e.g. if you invoke undefined behavior you will generally get consistent results with a given compiler but all bets are off if you swap. Similarly it is hard not to rely on implementation defined behavior without building your own standard library which specifically defines that behavior across compilers.

KolmogorovComp
3 replies
22h20m

Rust currently has no formal specification, so what would you use as an arbiter?

Also, the article says otherwise:

The project wants to make sure that it does not create a special "GNU Rust" language, but is trying instead to replicate the output of rustc — bugs, quirks, and all

lambda
2 replies
22h10m

You use the same process as for deciding if changes to rustc are compliant or not; the judgement of the language and compiler teams.

And running into these kinds of questions, both in a single project that evolves over time (rustc) and a separate project, help to feed information back into what you need for a formal specification; which is something that is being planned out. Having a second implementation can help find these areas where you might need to be narrower or broader in your specification, in order to clarify enough for it to be implementable or broaden it enough to accommodate reasonable implementation differences.

They are not trying to diverge in language implementation, but there will always be simple compiler bugs, which may crop up in one implementation but not another. For instance, some LLVM optimization pass may miscompile some code; gccrs wouldn't necessarily be trying to recreate that exact bug. I think that "bugs, quirks, and all" really means that they aren't trying to fix major limitations of rustc, such as introducing a whole different borrow checker model which might allow some programs that current rustc does not. They're trying to fairly faithfully recreate the language that rustc implements, even if some aspects might be considered sub-optimal, but they aren't going to re-implement every ICE and miscompilation, those are places where there could be differences.

KolmogorovComp
1 replies
21h15m

I agree with you that it has benefits; what I’m wondering is whether more bugs (though not the same ones) would be fixed by contributing directly to rustc rather than by the massive effort of building a new compiler.

Narishma
0 replies
19h51m

It's about finding those bugs in the first place. Working on a different implementation is one way of doing that.

vlovich123
0 replies
1d

The C++ spec regularly has all sorts of errata and misdesigns. Not sure I buy this argument.

tester756
0 replies
19h39m

Having another Rust implementation allows for an "audit" to help validate the Rust spec and get rid of any unspecified behavior. It would also give users options. If I hit a compiler bug in MSVC, I can file a report, switch to GCC and keep working on my project until the bug is fixed. With Rust, that's not currently possible.

Theory is cool, but in practice the other compiler has its own quirks too.

How about having one compiler that is used by all developers, which increases the chances of bugs getting caught and fixed faster, while you just use a workaround in the meantime?

slashdev
0 replies
20h44m

I hit a compiler bug in rust and downgraded it to an earlier stable version. That’s usually enough.

nicoburns
0 replies
18h6m

That works until you have a popular project that needs to support all the compilers. Then you need to work around n sets of bugs and incompatibilities.

justinclift
0 replies
17h23m

Not sure about that, as the article has this:

The project wants to ... replicate the output of rustc — bugs, quirks, and all.

bryanlarsen
10 replies
1d1h

Rust should learn from the mistakes of C++

They are. You quoted them doing it, the care taken not to become a superset. The problem with C/C++ stemmed from compiler vendors competing with each other to be "better" than their peers.

Multiple front ends for a language usually shake out a bunch of bugs and misimplementations. That's the primary benefit of having multiple front ends, IMO.

delfinom
8 replies
1d1h

The problem with C/C++ stemmed from compiler vendors competing with each other to be "better" than their peers.

Yea but this is Linux and OSS. NIHisms are fucking rampant everywhere.

I give it a year before gccrs announces a new "gnurs" mode with language extensions.

saghm
3 replies
1d1h

I'd be surprised about this mostly because I can't imagine anyone using gcc-specific Rust extensions. Not only does rustc have an extraordinary amount of traction by being basically the only choice up until this point, Rust doesn't really have the reputation of moving slowly to adopt new features; if anything, there's been as much skepticism about new features added over the years as excitement. I can honestly imagine more people adopting a Rust implementation with a commitment _not_ to add new features that are added to rustc than one that adds its own separate features.

Even as someone who probably will never use gccrs, I think having more than one implementation of Rust is a good thing for the language (for all the usual reasons that get cited). In the long term, I'd love for Rust to eventually be able to specify its semantics in a way that isn't tied to rustc, which is a moving target and gets murky when you consider implementation-specific bugs.

nequo
2 replies
1d

Rust doesn't really have the reputation of moving slowly to adopt new features

There are some exceptions to this. Although there are possibly good reasons for such features moving slowly, a competing compiler could popularize them before they are stabilized in rustc. One example is generators[1] but there's a longer list in The Unstable Book.[2]

[1] https://github.com/rust-lang/rust/issues/43122

[2] https://doc.rust-lang.org/stable/unstable-book/language-feat...

estebank
0 replies
23h40m

Prevalent use of nightly Rust purely for specific unstable features would do the same. It's been a while since a "must have" feature has kept people confined to nightly, though.

Guvante
0 replies
22h14m

I feel like having a distinct implementation that is unstable is more likely.

Stabilizing features from rustc's unstable set is a recipe for disaster: if rustc then builds something different, you are in a bad place.

Guvante
3 replies
22h42m

NIH is usually "using product X is almost as hard (or expensive) as building it".

Linux tends to shun unfree software which is a very different take. Buy or build is roughly money vs time (aka money). Software freedom is not the same thing.

Also why does software freedom lead to divergences? Certainly GCC partook in the arms race around C++ but recently they have been pretty good about aiming for full standard support as the goal. (The exception being proposed features but that is unavoidable when C++ prefers battle tested proposals)

pjmlp
2 replies
22h28m

Actually, one of the problems we are having with compilers catching up with ISO is that since C++11, features aren't battle-tested; rather, they are designed on paper or in some special compiler branch, and eventually adopted by everyone.

Guvante
1 replies
22h7m

Honestly, depending on the feature, that might be for the best. If all three compilers had distinct on-by-default divergent settings, that would be terrible.

My point is that outside of instances where everyone agrees something has to change, you need a compiler branch at minimum. This means the compiler you changed will get that feature first.

Given it takes years to implement the full standard this leads to divergence between the compilers in standards compliance.

Honestly all totally workable but makes talking about "standard C++" hard.

pjmlp
0 replies
20h38m

There are more than three compilers, and many of the changes, if done at all in an existing compiler, are a mix of private branches and ongoing unstable features, hardly battle-tested the way the first standards were, when existing practice is what came into the standard.

schuyler2d
0 replies
17h32m

I worry that once gccrs is marginally used, it implicitly gives permission for Microsoft to create some MSVRust, and then we really do descend into standards-body hell.

edelsohn
8 replies
21h20m

Based on that logic, why did the LLVM community develop Clang, Clang++, libc++, etc. instead of continuing with DragonEgg? There already were GCC, G++, libstdc++ , as well as EDG C++ front-end.

GCC, Clang, MSVC, and other compilers complement each other, serve different purposes, and serve different markets. They also ensure that the language is robust and conforms to a specification, not whatever quirks a single implementation happens to provide. And multiple implementations avoids the dangers of relying on a single implementation, which could have future problems with security, governance, etc.

The GNU Toolchain Project, the LLVM Project, and the Rust project have all experienced issues, and it's good not to rely on a single point of failure. Redundancy and anti-fragility are your friends.

vlovich123
7 replies
20h40m

LLVM saw growth for a number of reasons, but none of them were because it was actually beneficial for the C++ ecosystem:

* A C++ codebase. At the time GCC was written in C which slowed development (it’s now a C++ codebase adopting the lessons LLVM provided)

* It had a friendlier license than GCC which switched to GPLv3 and thus Google & Apple moved their compiler teams to work on LLVM over time.

* Libc++ is a combination of friendlier license + avoiding the hot garbage that was (maybe still is?) libstdc++ (e.g. there were incompatible design decisions in libstdc++ that inhibited implementing the C++ spec like SSO). There were also build time improvements if I recall correctly.

* LLVM provided a fresher architecture which made it more convenient as a research platform (indeed most compiler academics target LLVM rather than GCC for new research ideas).

Basically, the reason LLVM was invested in instead of DragonEgg was a mixture of the license & the GCC community being quite difficult to work with, which drove huge investments by industry and academia into LLVM. Once those projects took off, even after GCC fixed its community issues, it still had the license problem, and the LLVM community was strongly independent.

Compilers don’t typically generate security issues so that’s not a problem. There are questions of governance but due to the permissive license that Rust uses governance problems can be rectified by forking without building a new compiler from the ground up (e.g. what happened with NodeJS until the governance issues were resolved and the fork reabsorbed).

It’s funny you mention the different C++ compilers, considering that Clang is well on its way to becoming the dominant compiler. It’s already targeting being a full drop-in replacement for MSVC, and it’s fully capable of replacing GCC on Linux (unless you’re on a rarer platform; GCC has a bit richer history in the embedded space). I think over the long term GCC is likely to die, and it’s entirely possible that MSVC will abandon their in-house compiler and instead use Clang (same as they did abandoning IE & adopting Blink). It will be interesting to see if ICC starts porting their optimizations to LLVM and making them freely available - I can’t imagine ICC licenses really bring in enough money to justify things.

nicoburns
2 replies
17h57m

I think I read somewhere that recent ICC is LLVM based, just not freely available because of course the LLVM licence doesn’t require that. Can’t remember the source though, so take it with a pinch of salt.

gpderetta
0 replies
7h53m

is it llvm based or clang based?

edit: it is both!

claudex
0 replies
8h46m
pjmlp
0 replies
8h31m

it’s now a C++ codebase adopting the lessons LLVM provided

LLVM isn't the first compiler written in C++, and it isn't the first framework for building compilers either.

The most famous one, certainly.

naasking
0 replies
17h55m

I doubt very much that using C slowed development. To my recollection, it was more that for years the FSF pushed for a GCC architecture that resisted plugins, to avoid proprietary extensions. LLVM showed the advantages of a more extensible architecture, and they followed suit to remain competitive.

girvo
0 replies
14h2m

GCC has a bit richer history in the embedded space

Yeah, as someone who works in the embedded space, I do wish that LLVM had more microcontroller support. Maybe one day! Just as a recent example, it was weirdly hard to get up and running with the STM32L4 and LLVM, whereas GCC was easy. Apparently I can pay ARM for a specific LLVM implementation?

alecthomas
0 replies
5h50m

It’s funny you mention the different C++ compilers consider that Clang is well on its way of becoming the dominant compiler.

I'd be interested to know what metric you're basing this on.

Outside the Apple ecosystem, LLVM seems to have made very little inroads that I can see. No major Linux distribution that I'm aware of uses LLVM, and Windows is still dominated by MSVC.

bayindirh
7 replies
1d

The main motivation is having a GPL licensed, independently developed complete Rust compiler which is not dependent on LLVM.

cozzyd
6 replies
1d

Yes this is important. Otherwise in the future you'll see proprietary rust distributions for various devices or with specific features

GrumpySloth
3 replies
23h23m

C++ and Java, which have many implementations, also have proprietary distributions, see Apple’s fork of Clang, Microsoft’s MSVC, Azul VM, Jamaica VM, ...

And some of them are pretty nice. Life is good.

bayindirh
2 replies
22h59m

When you don’t have a GPL implementation, the only good implementation might be a closed-source fork of the LLVM version, effectively forking Rust into Rust and Rustium. Will life be good then?

sunshowers
1 replies
22h34m

This seems like an astoundingly remote possibility.

bayindirh
0 replies
20h52m

Given the companies backing LLVM and the surrounding ecosystem, I think it’s an astounding possibility.

Companies love permissive licenses for a reason.

swsieber
1 replies
21h45m

You'll still see those though? A fully GPL implementation doesn't prevent the current implementation from being used in a proprietary rust distribution... unless of course the GPL implementation were to fork from the current implementation... which is the main thing people against the GPL implementation are worried about

bayindirh
0 replies
20h54m

I think the GCCRS project takes the rustc implementation as its reference, and targets 100% compatibility with it as a fundamental goal and requirement. There is no schism involved.

rhdunn
4 replies
1d1h

So by that definition you would only want one web browser as there are currently multiple HTML+CSS+JavaScript "front ends". In which case, you end up with a Chrome monopoly where Google gets to decide what the web looks like!

gpm
2 replies
19h13m

A monopoly where a party whose interests conflict with the users' is the sole arbiter is obviously problematic. A monopoly run by a group of individuals that do not have obvious conflicts of interest, and who work for different organizations to mitigate the chance of non-obvious conflicts of interest, is far more interesting. Rust has the latter.

It's not plausible that a similar entity will take over the HTML/CSS/JS space, because of the amount of money competing with users' interests there due to things like advertising, the desire to invade users' privacy, etc. Still, if somehow we could magic up this monopolizing organization without conflicts of interest, that would be far superior to the current state of affairs...

edelsohn
1 replies
17h1m

The Rust community is not perfect. Neither is the LLVM community nor the GCC community, nor any open source community. Consider the recent drama / growing pains that have occurred within the Rust community. Everyone has biases and conflicts of interest. Anyone who doesn't recognize the benefit of alternatives will learn the hard way.

The Rust community rationalization that they don't need / want alternatives for "reason" is self-serving and all about control. I don't care if someone is the BDFL, they aren't right 100% of the time and not always doing things for altruistic reasons. The Rust community has imbued the leadership with some godlike omniscience and altruism because it makes them feel good, not because it's sound policy.

gpm
0 replies
16h42m

There's a long distance between "perfect" and "conflicts of interest"; the latter bears more resemblance to corruption than to imperfection.

Of course Rust leadership is going to make and has made mistakes; I'm confident every BDFL ever has also made and will continue to make mistakes, and the same goes for the 'herd' of clang/gcc/msvc/... and the herd of browser makers. The target is not perfection, but merely being better than the alternative.

I think that, in the absence of conflicts of interest, single-source-of-truth models (e.g. what rust/python/java/... does) are likely to do better than 'herd of competing implementations' models (C/C++/javascript) at making a good language. The latter probably does better at working despite conflicts of interest, but that's not a problem for most programming languages, where there is relatively little opportunity (compared to the browser ecosystem) for a powerful corporation to push their interest to the detriment of others.

I think the rust community is quite clear that the rust leadership is flawed, but that's not very interesting without a way to make leadership better. If you can convince people you have that way - you'll get a lot of interest.

mtrower
0 replies
19h49m

A goodly number of people appear to want just that.

I don’t agree, mind you, but someone reading your comment with this mindset will probably be reinforced rather than swayed.

matheusmoreira
2 replies
14h5m

The project wants to make sure that it does not create a special "GNU Rust" language

That's disappointing. GNU's extensions to the C language are extremely valuable features. I'd like to see what sort of contributions they would make.

awestroke
0 replies
11h41m

Hah

Kwpolska
0 replies
11h16m

They may be valuable C features, but Rust isn't designed by committee, and if it's missing some useful feature, you could probably convince the Rust team to include it, without having to invent GCC-specific extensions.

mhh__
1 replies
23h49m

There are arguments for and against, but for colour: D in GCC uses a shared frontend, which means it's not hard to maintain the frontend aspects — but it also means the frontend is a monoculture, which is not very healthy at the moment.

If someone would pay for it, I would do a new frontend — it's surprisingly little work to get to something you can bootstrap with.

p0nce
0 replies
20h5m

Having maintained both C++ and D code across different compilers, it's way easier to do in D, since the front-end features are the same (modulo builtins) and the stdlib is 99% the same.

cbmuser
1 replies
13h7m

I don’t understand why this question keeps being asked.

Is Rust holy in some sense, such that it’s not allowed to have its frontend rewritten?

It’s still a PITA to bootstrap Rust on a new architecture and there is also no working Rust compiler for architectures not supported by LLVM.

I know that there is rustc_codegen_gcc, but that one has the same bootstrap problems as the original Rust compiler and also requires architecture support to be landed in various parts of Rust, which Rust maintainers have been hesitant about (been there, done that).

So, I am absolutely glad that there will be an out of the box usable Rust compiler in the near future which will finally allow me to build rustified libraries again on architectures such as alpha without much pain.

fl0ki
0 replies
2h24m

With GCC itself being written in C++, what is better about GCC's own bootstrap story except that it's been part of operating system distributions for longer? I am not aware of anyone using clang or MSVC to bootstrap GCC on Linux distributions for example, they use GCC's own lineage traceable back to C. It has the same bootstrap problem, and it isn't a problem in practice because vendors distribute binary toolchains. Even BSD and Gentoo start out with an initial set of binaries before rebuilding the world.

The architecture support angle is a better one for why a GCC backend is helpful, but that will also be addressed by rustc_codegen_gcc so it isn't a relevant argument here. And by the looks of it, that will be a usable solution much sooner than gccrs, and it will remain usable with a much lower long-term resource commitment. This is an argument more in favor of rustc_codegen_gcc than gccrs.

The strongest argument I can see about bootstrapping issues here is that Rust moves much faster and Rust's own compiler code requires a relatively recent Rust version, so in the extremely specific circumstance where binaries are not available and cross compilation is also not possible, bootstrapping will take more steps than it would for a gccrs frontend limited by C++'s slower rate of evolution. I don't see this being anywhere near as much of a problem as gccrs trying to catch up to & keep pace with Rust's frontend evolution. The former we can solve with the same build farms and cross compiles we need anyway, the latter requires continuous ongoing human effort with all of the disadvantages of C++.

neuromanser
0 replies
1d1h

I think it’s a bad idea to have multiple front ends and Rust should learn from the mistakes of C++ which even with a standards body has to deal with a mess of switches

I do not disagree, yet at the same time, having the same set of switches across different languages is nice, too.

thesuperbigfrog
39 replies
1d1h

Rust needs a language standard:

https://blog.m-ou.se/rust-standard/

https://rust-lang.github.io/rfcs/3355-rust-spec.html

https://github.com/rust-lang/rfcs/pull/3355

There are many organizations and industries that will not adopt Rust until it has a standard.

C, C++, C#, and even JavaScript (ECMAScript) have language standards. Why shouldn't Rust have one too?

C: https://www.iso.org/standard/74528.html

C++: https://isocpp.org/std/the-standard

C#: https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...

JavaScript / ECMAScript: https://ecma-international.org/publications-and-standards/st...

faitswulff
13 replies
22h56m

Mara’s blog post (your first link) says essentially that Rust does not need a standard since it already has means for adding features and maintaining compatibility.

thesuperbigfrog
12 replies
22h21m

Mara's blog post also describes the benefits of standardizing Rust.

Since she created the RFC for standardizing Rust (https://github.com/rust-lang/rfcs/pull/3355) and is also on the team that is working on Rust standardization (https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...), I think she was making the point that Rust has good controls in place for adding features while maintaining compatibility, not that "Rust does not need a standard".

If she really believed that Rust does not need a standard, why would she create the RFC and join the team working on the effort?

Rust is a great language. There is no reason why it should not have a standard to better formalize its requirements and behaviors.

faitswulff
8 replies
21h39m

Yes, it’s a nuanced blog post. But that also means it doesn’t come out swinging hard for needing a standard, either. It seems there is just as strong an argument to be made that “Rust is a great language. There is no reason why it needs a standard.”

See the Ferrocene compiler, which has been qualified under ISO 26262 and IEC 61508. It’s essentially a standard Rust 1.68 compiler with a lot of added documentation. If you need a Rust compiler for safety-critical environments, it’s reasonably priced and requires essentially zero changes to the Rust compiler that they didn’t just upstream. Without a standard.

Yes, it would be nice to have a standard for reducing ambiguity. But does the language need a standard? And if so, then for what purpose?

thesuperbigfrog
7 replies
20h51m

They created a specification for Ferrocene because Rust does not yet have a language standard:

https://spec.ferrocene.dev/

> But does the language need a standard?

Yes, Rust needs a standard.

> And if so, then for what purpose?

For the same purpose that all standards have--to formally define it in writing.

Ferrocene's web site (https://ferrous-systems.com/ferrocene/) shows that it meets the ISO 26262 standard (https://en.wikipedia.org/wiki/ISO_26262).

Why does ISO 26262 matter? What purpose does it serve? Couldn't a vehicle manufacturer just say "our vehicles are safe"?

Which would you trust more: a vehicle that is verified to meet ISO 26262 standards, or a vehicle whose manufacturer tells you "it's safe" without formally defining what "safe" means?

I stated it above, but I will re-state it here: Without a language standard, there are many organizations and industries that will not use Rust. Not because Rust is not a fantastic tool for the job, but because laws, regulations, etc. require standardization and qualification of components.

This means that I can use a qualified C compiler and toolchain to write safety-critical code, but I can't use Rust despite the fact that Rust is a better choice and will help to prevent problems. Standards do matter. Rust needs a language standard.

faitswulff
6 replies
20h37m

For the same purpose that all standards have--to formally define it in writing.

This is tautological. It's equivalent to saying "it needs a standard to be written because it needs a written standard."

I mean what use case is there for Rust language users that isn't already met by the Ferrocene project? And the Ferrocene project is not a standard as in "other implementations will be found lacking," but a description of the 1.68 compiler as-is. That is a specification, not a standard. Ferrous Systems did not need Rust to have a standard in order to qualify the compiler for ISO 26262 and IEC 61508.

thesuperbigfrog
5 replies
20h26m

> "it needs a standard to be written because it needs a written standard."

Yes. And if the law in your country requires it to be standardized for specific use cases, then a language standard is needed.

> what use case is there for Rust language users that isn't already met by the Ferrocene project?

Can you legally use Rust for the control software in aircraft? (https://en.wikipedia.org/wiki/DO-178C)

What about the safety systems for railroads? (https://ldra.com/ldra-blog/software-safety-and-security-stan...)

What about the control systems for nuclear reactors? (https://www.nrc.gov/docs/ML1300/ML13007A173.pdf)

faitswulff
4 replies
20h23m

And if you need a ISO 26262 qualified Rust compiler, one exists. Hurrah.

Since you edited your post…simply having a standard won’t immediately qualify the language for those industries. There is only a tenuous link between having a standard and qualifying the language for industrial use.

thesuperbigfrog
3 replies
3h56m

> simply having a standard won’t immediately qualify the language for those industries. There is only a tenuous link between having a standard and qualifying the language for industrial use.

Not having a language standard disqualifies Rust.

Administrators say that it shows that Rust is "not serious" and "not ready" for critical work.

faitswulff
2 replies
3h45m

Not having a language standard disqualifies Rust.

Then explain how Ferrous Systems qualified a stock Rust compiler.

thesuperbigfrog
1 replies
3h8m

Ferrocene is not a "stock Rust compiler". The "stock Rust compiler" is not qualified for safety critical work. If Ferrocene did not add value above what is offered by the stock Rust compiler, why would anyone buy Ferrocene?

Ferrocene is qualified for some safety critical work and plans to have more qualifications soon.

Ferrous Systems wrote a blog post about the process: https://ferrous-systems.com/blog/qualifying-rust-without-for...

faitswulff
0 replies
2h42m

The blog post you link to says it's an unmodified fork. Here's a Ferrous Systems employee saying as much:

Ferrocene is upstream rustc but with some extra targets, long term support, and qualifications so you can use them in safety critical contexts. This is what was stopping things like automotive companies from moving to Rust for things like engine control units, etc.

It basically costs some money for the support and the qualification documents, but they will be all you need to prove qualification to any pertinent regulatory body so that your software can be certified for use in a real vehicle or whatever.

...Ferrocene is just unmodified rustc

https://old.reddit.com/r/rust/comments/17qi9v0/its_official_...

Basically the value add was to expand the support and documentation, which was required for qualification.

Again...no "standard" needed.

I think you are conflating standards and specifications. Ferrous fleshed out the specification, the description of the 1.68 compiler as-is. That means Rust 1.68 as-is was good enough for ISO qualification. Without a standard.

A standard is a minimum bar for languages to meet in order to be considered compliant. That's not a problem right now because there is, for all intents and purposes, a single canonical compiler and that is not likely to change.

3836293648
2 replies
18h35m

Spec ≠ Standard

Rust absolutely does not need a standard. Having a standard is a completely outdated way to design software in the internet era. Having a specification is a great idea and is what most people actually mean.

GrumpySloth
1 replies
8h50m

What’s the difference?

3836293648
0 replies
2h18m

A standard is a way of writing and ratifying a specification. A spec is just a formal document describing the requirements for something; a standard is when a large coalition of relevant players accepts the spec as a committee. In this context it's also implied that it's an ISO standard, which is extremely slow, rigid, and formal, and is a downgrade in every way from the current system of anyone who wants to voice their opinion just chiming in on an RFC on GitHub.

jen20
9 replies
1d1h

There are many organizations and industries that will not adopt Rust until it has a standard.

Counterpoint: rust is doing fine without those organizations and industries. Why change what is working well?

yjftsjthsd-h
3 replies
1d

If rust wants to replace memory-unsafe languages, it needs to cover their use cases.

junon
2 replies
19h20m

Which use cases does it not cover?

yjftsjthsd-h
1 replies
17h34m

>> There are many organizations and industries that will not adopt Rust until it has a standard.

So if you want those orgs/industries to get memory safety via rust, you either need a standard, or to convince them to not require that.

junon
0 replies
10h32m

Those are bureaucratic reasons, not anything specifically lacking in Rust.

thesuperbigfrog
2 replies
1d

> Counterpoint: rust is doing fine without those organizations and industries. Why change what is working well?

Because Rust is a game changer.

Wouldn't it be better to have Rust used for the code that runs on automobiles and aircraft? Or you would prefer that they keep using (subsets of) C and C++?

Is the security of your Internet-of-Things (IoT) devices good enough or could they be better?

What do you have against Rust being used in more places and for more purposes?

faitswulff
1 replies
23h2m

Ferrous Systems has a basically bog standard Rust 1.68 compiler that’s been certified for use in the most safety critical environments: https://ferrous-systems.com/blog/officially-qualified-ferroc...

This happened without a standard.

thesuperbigfrog
0 replies
22h30m

> This happened without a standard.

They created a specification for Ferrocene:

https://spec.ferrocene.dev/

While it is technically not a Rust language standard, it serves a similar purpose for Ferrocene.

gkbrk
1 replies
1d

Is it doing fine without those? It seems like every time someone makes a personal or professional project in C/C++, the Rust community floods the comments section and talks about how it's irresponsible to use C/C++ and how the author should just throw away the whole project and re-write it in Rust.

It happens so often it became a meme at this point.

sgift
0 replies
23h35m

It happens so often it became a meme at this point.

No, the meme is kept alive by C++ people who claim this happens without it actually happening. It happened a long time ago, a few times. Since then it's either an active discussion about languages where, for some reason, talking about Rust is a problem but every other language is okay, or it's someone feeling attacked by the mere idea that projects which previously could only be done in C++ can now also be done in another language, and crying about how the Rust community supposedly floods every topic.

There's no longer an area where C++ is the only available option. Get over it. The rest of the world did a long time ago.

mjw1007
4 replies
1d

That RFC is accepted, and this is starting to happen.

Progress has been disappointingly slow, but the project is alive, and has potential to speed up next year.

https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...

starlevel003
3 replies
20h53m

That RFC is accepted, and this is starting to happen.

Progress has been disappointingly slow,

I don't think there's ever been a more concise summary of Rust.

Ar-Curunir
2 replies
18h34m

LOL some people complain that Rust is moving too slow, and others that Rust is moving too fast and adds too many features. Can never make everyone happy...

scottlamb
0 replies
13h43m

I just filled out the Rust survey [1] and may have done both—iirc there were checkboxes for missing features and concerns the language is getting too complicated. It's a hard balance to find.

[1] https://blog.rust-lang.org/2023/12/18/survey-launch.html

SV_BubbleTime
0 replies
12h15m

Depending on the scenario, I don’t hear mutual exclusion there.

It can be moving too slow on important things, and too fast on unimportant or volatile things.

gkbrk
3 replies
1d

Go has a really nice spec and multiple implementations too.

https://go.dev/ref/spec

loeg
2 replies
21h58m

Does any other go implementation support the full language? I thought gccgo lagged significantly.

eikenberry
1 replies
21h8m

The official GCC releases tend to lag a bit, as they are released on a longer cadence than the standard Go compiler, but they track upstream. The current release is a bit further behind than normal due to the complexities of implementing the generics back end, but it is being worked on (https://groups.google.com/g/golang-dev/c/5ZKcPsDo1fg).

pjmlp
0 replies
8h26m

Thanks for the heads up, I was starting to think it would join gcj in the retirement home.

ajross
2 replies
1d1h

Yeah. The culture clash here is shockingly dissonant. The people you'd normally expect to be the biggest voices for documentation robustness are...

... suddenly finding themselves in the "Actually, language standards are bad" camp, all because of a tribal opposition to the FSF?

Write the standard. Then argue that gccrust doesn't do it right. Don't refuse to document the language just to hamstring a competitor.

Also, please start with the borrow checker semantics. I don't think I've ever met anyone who could explain exactly what the rules are regarding what it can/can't prove.
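As a concrete illustration, here is a minimal sketch of a well-known sound program that the current (NLL-based) borrow checker rejects, and that the in-progress Polonius formulation is meant to accept:

    use std::collections::HashMap;

    // Rejected today: the early `return v` makes the checker extend the
    // shared borrow of `*map` to the whole function body, so the later
    // `insert` is flagged as a conflicting mutable borrow even though
    // the program is sound.
    fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
        if let Some(v) = map.get(&0) {
            return v;
        }
        map.insert(0, String::new());
        map.get(&0).unwrap()
    }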

estebank
0 replies
1d

There's been active work on a Rust specification for a while now. It'll happen.

https://blog.rust-lang.org/inside-rust/2023/11/15/spec-visio...

Rusky
0 replies
1d

What culture clash? The Rust project has a language reference, is working on expanding it into a more formal spec, and there are efforts like Ferrocene's to qualify the existing compiler for use in safety critical environments.

The argument is not that language standards are bad, it's that a C++-like ISO standard is unnecessary (when the quality documentation can exist in another form) and C++-like implementation fragmentation is bad.

(Have you read the NLL RFC? The Polonius work?)

ivanjermakov
1 replies
22h44m

This is really odd to me. Language design should start with specification. Your compiler is just a reference implementation.

sunshowers
0 replies
22h33m

Rust language design does start from specifications, namely RFCs. What is being discussed here is producing a more formal specification than the ones that currently exist.

pie_flavor
0 replies
22h1m

The Ferrocene spec permits Rust to be used in those industries.

rayiner
11 replies
1d2h

I’m surprised at the negative reaction to GCC-RS. If a language doesn’t have multiple implementations, it’s a pretty sad excuse for a language.

PoignardAzur
6 replies
1d1h

I’m surprised at the negative reaction to GCC-RS. If a language doesn’t have multiple implementations, it’s a pretty sad excuse for a language.

That used to be the common wisdom (especially because of C/C++), but it's a lot more debated these days.

The consensus in the Rust community is that the current situation (one canonical-by-definition compiler, lots of documentation, a minimal spec for safety-critical industries, and specs for some modular sub-parts) gets most of the advantages of multiple implementations without the drawbacks.

not2b
3 replies
1d1h

I think that the people who are debating it are missing some things. It often happens that only when a second implementation is attempted are unspecified holes in the documentation exposed and standards tightened up. Any differences between the two compilers will mean either a bug in the new implementation (most likely), a bug or unclear passage in the documentation, or a bug in the mature implementation (it happens).

And no, the official compiler isn't canonical by definition. If it were, it would mean it has no bugs, and if there's a crash or a wrong result that's what the language is supposed to do.
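As a concrete sketch of the kind of underspecification involved: the default (repr(Rust)) struct layout is documented as unspecified, so code that accidentally relies on rustc's current field ordering could behave differently under another compiler, while repr(C) pins the layout down:

    // `repr(C)` guarantees declaration-order layout: `a` at offset 0,
    // one byte of padding, `b` at 2, `c` at 4, padded to 6 bytes total.
    #[repr(C)]
    struct Pinned {
        a: u8,
        b: u16,
        c: u8,
    }

    // The default layout is explicitly unspecified: today's rustc
    // reorders the fields down to 4 bytes, but another conforming
    // implementation would be free to lay this out differently.
    struct Unpinned {
        a: u8,
        b: u16,
        c: u8,
    }

    fn main() {
        println!("{}", std::mem::size_of::<Pinned>());   // 6
        println!("{}", std::mem::size_of::<Unpinned>()); // 4 with current rustc
    }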

vlovich123
2 replies
1d1h

Everything in engineering is trade-offs. A single front end ends up in a stronger position for the community (i.e. the users of the language) because bug reports are easier (only one project to report them to), collaboration is easier/simpler for OSS maintainers (no issues where “crate foo works fine on rustc but doesn’t on gccrs” to triage/maintain), and language features come quicker (no need to synchronize/debate with other implementations). The downsides of underspecification are much smaller by comparison. As far as documentation goes, that’s a red herring, because gccrs is reusing the single standard library implementation, which means any documentation issues would still exist (I don’t think I’ve even once needed to look up documentation for the compiler, and language-doc issues would be shared as well, since rustc in this model still remains the canonical ground truth).

not2b
1 replies
23h21m

At this point, gccrs is quite immature, so you can simply ignore it. If we get to the point where its quality is good enough, this will change, but the likelihood is that any problems found will result in improvements to documentation (if something is underspecified). There could also be optimization bugs in LLVM that aren't present in GCC, so we could find bugs in rustc that aren't in gccrs at some point, but I think that will only be significant if gccrs greatly improves.

For now, anyone who finds that “crate foo works fine on rustc but doesn’t on gccrs” can just report a bug to gccrs.

vlovich123
0 replies
20h51m

I’m not saying that gccrs contributors should stop. If they want to invest their time & energy into it, kudos. I happen to think the costs outweigh the benefits, but you’re right that I can just ignore it to no ill effect right now. Where I would urge caution, though, is when the Rust project starts needing to make accommodations to help gccrs (which is something described in the article). It’s possible that some accommodations help Rust anyway / do no harm, but it’s useful to be mindful that more & more of these accommodations can accumulate a negative cost over time & thus impact Rust as a whole. These costs are obnoxiously hard to quantify, and because people like to get along there’s a general preference to be more accommodating. That’s the real danger that gccrs poses, and the intangible benefits that you lay out are likely not that significant in the long term compared with the cost to build gccrs / modify Rust to accommodate gccrs. Of course, we’re just arguing over opinions, since it’s so hard to quantify any of this.

binary132
0 replies
1d1h

sounds like cope for the fact that there is not a good spec (a "Standard", perhaps?) tbh

Longlius
0 replies
21h15m

it's a lot more debated these days

By who specifically? I only ever see arguments against standardization and multiple implementations from the Rust community.

sylware
1 replies
1d1h

The problem is the long run: syntax stability, avoiding feature creep via extensions/attributes, like we actually have with C (C++ is beyond saving due to its absurd and grotesque complexity).

Without that, you won't have real-life alternatives.

ajross
0 replies
22h16m

avoiding feature creeps with extensions/attributes,

That seems like a purely semantic argument? Rust is adding features extremely rapidly! You're just saying it's "development" if it's done by one entity but "creep" if it's done by someone else?

duped
1 replies
1d

Just personally I can see the virtue of multiple/different implementations. But the issue is building on top of gcc. The GNU toolchain is a dumpster fire, and I legitimately don't know how anyone can develop on it.

Not just ideologically, I mean literally - I don't understand how one sets up a development environment for GCC itself. I've had the misfortune of bootstrapping it a handful of times and it's the single worst behaved piece of software I've ever seen.

crotchfire
0 replies
16h49m

Bootstrapping GCC is not hard at all:

https://github.com/NixOS/nixpkgs/blob/master/pkgs/developmen...

(there are a ton of infrequently-used options in there; if you ignore them what's left is quite simple. replace all the options with `false` and do the dead-code elimination.)

Hint: don't use glibc. The real problem comes from glibc and the way it depends on gcc internals, which depend on libc (i.e. circularly). Bootstrapping GCC on Musl is quite easy.

glibc is the real dumpster fire.

johnklos
8 replies
1d

So we'll finally see Rust support for all the architectures that gcc supports that LLVM doesn't, like Alpha, SuperH and VAX, for starters. That'll be nice!

crotchfire
3 replies
16h40m

And mips64, which rustc recently dumped support for after their attempt to extort funding/resources from Loongson failed:

https://github.com/rust-lang/compiler-team/issues/648

This is the biggest problem with the LLVM mentality: they use architecture support as a means to extract support (i.e. salaried dev positions) from hardware companies.

GNU may have annoyingly-higher standards for merging changes, but once it's in there and supported they will keep it for the long haul. It's like buying vs renting. It takes a lot more dev hours to get support into GCC, but once it's there, it stays there.

khuey
1 replies
16h23m

This is the biggest problem with the LLVM mentality: they use architecture support as a means to extract support (i.e. salaried dev positions) from hardware companies.

I have a hard time seeing this as a bad thing. Hardware companies seem like the most logical people to pay for maintaining support for the architectures they sell.

crotchfire
0 replies
5h51m

The pain is inflicted not on the hardware companies or future customers, but on their past customers.

Their customers pay full cost up front. Vendors can pay full dev cost up front too. GCC's model encourages this.

estebank
0 replies
14h50m

Support wasn't dumped, it was demoted from Tier 2 to Tier 3 to better reflect the level of support that backend effectively already had. As mentioned in that thread, if someone steps up to maintain it, it can be bumped again to Tier 2.

Be aware that GCC doesn't have a similar level of specificity about the state of each platform they support. There are packages in Debian that "compile" but when trying to execute them on "exotic" platforms you encounter bugs immediately. Supporting a platform in a static codebase of a compiler is easy, but in codebases actively being worked on, like GCC and LLVM, keeping things working is not trivial.

Asraelite
2 replies
22h56m

I would assume it also means additional configuration options for already-supported architectures.

For example, I recently discovered that with RISC-V, GCC supports the RV32E target but LLVM doesn't.

monocasa
1 replies
21h43m

Are you sure about that?

I was pretty sure that llvm has supported RV32E for years now. https://reviews.llvm.org/D70401?id=395048

Asraelite
0 replies
21h10m

Oh, last time I checked clang didn't support it.

In any case, there are a lot of other compiler flags that are exclusive to gcc.

segfaultbuserr
0 replies
22h54m

Can't wait to see some PDP-11 machine code from Rust (the last time I checked, freestanding C compiling still worked on GCC).

charcircuit
5 replies
1d1h

the Linux kernel is a key motivator for the project because there are a lot of kernel people who would prefer the kernel to be compiled only by the GNU toolchain.

Linux can already be compiled with clang if you want an all LLVM based toolchain. The duplicate effort of developing and maintaining this does not sound worth it to have GNU "purity."

mtrower
2 replies
19h40m

I think you may be misunderstanding here. They aren’t trying to keep the kernel GNU-exclusive; they merely want the option for a pre GNU toolchain.

charcircuit
1 replies
18h38m

What is a pre GNU toolchain?

ColonelPhantom
0 replies
3h24m

They presumably mean "pure GNU toolchain", not "pre".

edelsohn
1 replies
16h56m

It's not about purity, it's about options. The ClangBuiltLinux community advocated that Linux should not be dependent upon a single compiler. But when Rust came along, many of the same people suddenly decided that a single compiler was okay.

rightbyte
0 replies
5h23m

I always understood it as them wanting control over "best practice".

A GNU compiler could add convenient features that the hardcore Rust users don't want other Rust users to be able to have.

Like ideological purity.

gumby
4 replies
23h8m

A lot of care is being put into gccrs not becoming a "superset" of Rust, as Cohen put it. The project wants to make sure that it does not create a special "GNU Rust" language, but is trying instead to replicate the output of rustc — bugs, quirks, and all.

In my experience, the part in italics is a significant mistake.

Rust does not have a specification; there is a reference, but it is explicitly not normative. A language undocumented except for a single reference implementation (as is today's fashion) has a long-term weakness. What is the motivation for slavishly trying to maintain compatibility with the bugs and accidental quirks of another implementation? To guarantee that existing code will work in both implementations. That is a sensible goal, but enforcing it in this way comes at enormous cost, because it is enforced in the wrong place.

The problem is that sometimes decisions are wrong, and sometimes bugs are written. But when you promise that all implementations will be bug-compatible as part of compatibility, you are also signing up to fossilize those bugs whether you want to or not.

A good example of someone who embraced this (to their credit!) is Microsoft: they spend a lot of person-power making sure that old programs continue to run while trying to fix security and reliability bugs. Rust need not and should not sign up for this burden so early in its lifespan. They should learn from history.

If they want the language to evolve they should embrace QA and QC. Famously "you cannot test quality into a product". You need QA: architecture, design, design and code reviews, etc to ensure that things will work properly and when not, that "failure heads in the appropriate direction". Then later in the development cycle QC (test cases) tries to see if you missed. This doesn't just apply to product development -- it applies to language development especially.

The strong standards (e.g. Common Lisp, C++, FORTRAN) embraced this belief. The weak, de facto ones (most notably Python, but plenty of others) can still become popular, but change is difficult. Look at how long the Python 2->3 transition took, and how few Python implementations there are.

nicoburns
3 replies
17h49m

If they find a bug then presumably they’ll report it upstream and both implementations can be changed.

gumby
1 replies
16h42m

The point is that fixing that bug may break existing, running code that depends on it.

estebank
0 replies
15h13m

That's why we run crater against all of crates.io: there are changes that should be allowed that we hold off on because of breakage, and fixes we assumed we couldn't land that in reality had no real-world impact. As time goes on, the confidence a "clean" crater run gives us goes down, due to adoption in closed-source environments, but it is still an invaluable signal. Depending on the bug, we can keep the current behavior around for prior editions but fix it for future ones.
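As a small sketch of that mechanism at work, the array into_iter change (stabilized in Rust 1.53) is the classic example:

    fn main() {
        let arr = [1, 2, 3];
        // In editions 2015/2018 this method call resolved to the slice
        // impl and yielded `&i32`, to avoid breaking existing code; in
        // edition 2021 it resolves to the by-value array impl and
        // yields `i32`. Old code keeps its old meaning, new code gets
        // the fix.
        let sum: i32 = arr.into_iter().sum();
        assert_eq!(sum, 6);
    }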

akira2501
0 replies
15h37m

Presumably they'll both agree 100% of the time as well.

mkesper
1 replies
22h44m

Thanks for posting the lwn.net link, reminded me of renewing my subscription!

sophacles
0 replies
21h25m

It's a good subscription to have. I've gotten far more value from my LWN subscription than I spent, and recommend everyone that does lowish level work get one.

dj_gitmo
1 replies
17h9m

Cohen listed a few things that gccrs is already useful for. According to him, the Sega Dreamcast homebrew community uses gccrs to create new games for the Dreamcast gaming console, and GCC plugins can already be used to perform static analysis on unsafe Rust code. The Dreamcast community's interest stems from the fact that rustc's LLVM backend does not support the Hitachi SH-4 architecture of the console, whereas GCC does; even in its incomplete state, gccrs is helpful for this embedded use case.

This is delightful.
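For flavor, a minimal sketch of the kind of freestanding (no_std) Rust a console homebrew target involves; whether today's gccrs accepts every attribute here is an assumption on my part:

    #![no_std]
    #![no_main]

    use core::panic::PanicInfo;

    // Freestanding targets provide no unwinding runtime, so the program
    // must supply its own panic handler.
    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    // An entry point with C linkage so the platform's startup code can
    // find it; the name and signature are illustrative.
    #[no_mangle]
    pub extern "C" fn main() -> i32 {
        0
    }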

habitue
0 replies
11h24m

This is a little misleading. You don't need to have a gcc frontend for this, just a gcc backend