Rust – Faster compilation with the parallel front-end in nightly

insanitybit
89 replies
1d4h

I know it's early days on this, but compilation speed is the downside to Rust, IMO. Having worked in a Rust monorepo, my number one complaint was compilation speed. It made CI/CD more expensive, and it could really slow down dev time when we needed to remove the cache (which happened sometimes - not cargo's fault, actually a Docker bug, but still).

Glad to see this progress.

rayiner
52 replies
1d2h

It’s unlikely that it’ll ever get dramatically better. It’s already been heavily optimized, and the Rust compiler now has more parallelism than pretty much any other mainstream compiler. Language design choices make Rust more challenging to compile than a language (like Go) that is specifically designed for fast compilation.

insanitybit
23 replies
1d1h

I don't agree. There are a lot of things on the table, performance-wise.

1. The compiler could ship binary artifacts, which would avoid all compilation of build scripts / proc macros, and allow those to be compiled with performance optimizations enabled. This would be huge on its own.

2. Cranelift could potentially improve backend codegen compile times significantly as well.

3. Link times are still suboptimal, mold is promising here.

We can definitely still get significant wins out of the compiler.

Pretty sure compile times can get cut in half (or better) with those changes.
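On item 3, wiring mold into a Rust project is usually a two-line config change. A minimal sketch, assuming a Linux x86_64 target and clang as the linker driver (adjust for your toolchain), in `.cargo/config.toml`:

```toml
# Assumed fragment: have rustc link through mold on x86_64 Linux.
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```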

rayiner
10 replies
1d1h

Maybe, but even twice as fast would still make it a “slow compiling language” in comparison to a “fast compiling language” like Go or Pascal.

This is not a knock on Rust—I doubt it’s possible to do what Rust does—including zero overhead abstractions—in a fast compiling language. Go certainly pays a performance penalty with things like boxed generics.

insanitybit
3 replies
23h48m

Twice as fast (or more) is just what I'm aware of in terms of "things that are possible to do today but aren't the default / would take work to hack in". I don't even know what other options there are beyond that.

But sure, twice as fast isn't fast, it's just faster. My point is that we're not at the point of serious diminishing returns, there's tons of stuff left to do.

estebank
2 replies
23h25m

If there were a magic pot of gold, it would be technically possible to precompile every crate version with every rustc version on every supported platform and distribute those prebuilt rlibs to users through cargo. That would help with first-compile times when using the standard tooling, and not just for proc macros.

adastra22
1 replies
22h0m

Not with incremental compile times, which is what people are complaining about.

estebank
0 replies
20h39m

Different people with different use cases have different complaints. I haven't quantified it, but I have certainly seen complaints about both cases from different people.

foldr
2 replies
22h33m

Go’s generics aren’t boxed. At least, not in the sense that Java’s are. For example, you can write generic functions that operate over slices of unboxed values.

neonsunset
1 replies
22h0m

Still worse than true monomorphized generics in C#, which also has fast compilation times (by nature of being JIT-compiled, but even the AOT target is still faster to compile than Rust once you download dependencies).

foldr
0 replies
1h9m

Go’s implementation strategy for generics essentially is monomorphization plus obvious code-size optimizations (e.g. don’t generate different code for different pointer types, given that they all have the same underlying representation). Do you have a specific scenario in mind where Go’s implementation strategy carries a significant performance penalty? I think there are possibly some misconceptions in this thread about how Go’s implementation actually works.

cxr
2 replies
23h44m

This is not a knock on Rust

It is a knock on Rust. The circumstances of Rust's state of existence in 2023, as a language created in this millennium but not in the last decade, are absurd.

I doubt it’s possible to do what Rust does—including zero overhead abstractions—in a fast compiling language

People packaging releases for software written in Rust, and other passive consumers who find themselves downloading some project repo to compile from source for whatever reason (e.g. because the creators don't do binary releases themselves), don't need the Rust toolchain to do the things that active contributors to a given project (who want type system diagnostics, etc.) need from it.

I'd call this oversight a massive lack of imagination on the part of TPTB, but that would be wrong, because there is no need to imagine the differences between these use cases. They exist. An adequate toolchain for dealing with projects written in Rust—despite the deliberate decisions made during language design that led to these problems—does not.

insanitybit
1 replies
23h9m

The circumstances of Rust's state of existence in 2023, as a language created in this millennium but not in the last decade, are absurd.

Sorry, could you be more vague?

Anyway, your post seems to be about packaging software? Or something? Confusing, since that has nothing to do with the language...

cxr
0 replies
22h46m

Sorry, could you be more vague?

Yes, I can, since "2023", "this millennium", and "in the last decade" are all concrete, well-defined things.

mariusor
8 replies
1d1h

It would be great if people would pay Rui to make mold versions for Windows and Mac, which ideally would be required before making it a part of the official Rust toolchain.

satvikpendem
7 replies
1d

He did monetization the wrong way around, IMO. Most CI is on Linux, but most developers are on Windows or macOS, so he should've made the Linux builds paid while keeping the local developer builds on Windows and macOS free.

mariusor
4 replies
1d

I doubt that anyone cares all that much about linking times in CI. And even if someone does, it's probably an individual developer or team, i.e., someone without the decision power to pay for something as niche as a linker.

Also, mold was designed as an alternative to gold / lld, therefore it would need to be open-source and free on its main platform: Linux.

insanitybit
2 replies
23h46m

I care deeply about linking times on CI. It's very frustrating to have your code build and pass tests locally, only to wait a long time for it to clear all of the CI barriers. Plus, CI builds often go stale much faster, so you're looking at much longer build times without caches.

mariusor
1 replies
23h24m

Sure, but you're not really contradicting me unless you're able to get your company to pay for faster tooling. And if you can, why haven't you already?

satvikpendem
0 replies
9h59m

Well, yes, that is the crux of this argument: that one can convince their employer to use mold. Otherwise, what is the point of using it? Desktop users by and large will not notice a small 3-5% improvement in compile times, while those who pay for CI will.

satvikpendem
0 replies
1d

Well, CI is where the costs are, and if the application is big enough, even a few percent reduction via faster linking times would equate to lower costs, while in contrast, developers won't really care or notice a few percent reduction on their local machine.

It's AGPL on Linux now, and they sell commercial licenses for companies that won't touch that license, and they were contemplating earlier making mold only available under a non-free source available license like BSL, so there's no "requirement" as such that it be free and open source, even on Linux.

jylam
1 replies
22h39m

Most CI is on Linux, but most developers are on Windows or macOS

Do you have any data on this? Maybe that's industry dependent, but I hardly know any Windows developers (not even talking about macOS; that's almost nil) outside of video games and web dev. 100% of the Rust devs I know use Linux, to keep on the subject.

satvikpendem
0 replies
10h0m

Data that most people don't use Linux as their day-to-day desktop OS for development? I suppose you can just look at desktop Linux statistics, which shows <5% usage. In my experience, most use macOS, or Windows via WSL2, which does use Linux but I am not sure if that is actually reflected in any desktop OS statistics.

hobofan
2 replies
21h20m

2. Cranelift could potentially improve backend codegen compile times significantly as well.

I've been told that the Cranelift team (at least for the time being) doesn't intend to focus on the optimizer to the degree where it would be competitive with LLVM's optimizers (which would also be a huge effort). So if you want faster compile times, you would have to take a significant performance hit (which, for a lot of code compiled in CI, is not a trade-off people are willing to take).

insanitybit
0 replies
21h8m

Yes, to be clear, Cranelift would be suitable for dev and test builds; you'd likely use LLVM for release builds. So in your CI builds you'll almost certainly stick to LLVM.

(1) and (3) are still very significant, fwiw.
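For reference, opting a dev profile into Cranelift currently looks roughly like this - a sketch, assuming a nightly toolchain with the `rustc-codegen-cranelift-preview` rustup component installed - at the top of Cargo.toml:

```toml
# Nightly-only sketch: Cranelift for dev builds, LLVM remains
# the backend for release builds.
cargo-features = ["codegen-backend"]

[profile.dev]
codegen-backend = "cranelift"
```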

estebank
0 replies
20h43m

Beyond specific optimization and implementation details of a compiler, the three variables of "compilation speed", "generated code optimization" and "language expressiveness" are fundamentally in tension. In order to move one axis you have to affect one or both of the other two.

kaba0
12 replies
1d1h

It’s not really that Go is better designed for fast compilation - it is just a plain language where the compiler can spit out vaguely optimized code and call it a day.

Rust’s defining feature itself fundamentally depends on extensive static analysis. It’s not a design choice; it is pretty much what Rust is - a low-level language without a GC that is still memory safe. The price for that is hefty compile times.

zeroxfe
5 replies
1d1h

It’s not really that go is better designed for fast compilation

One of the explicit goals, by Go's creators, was fast build times. I still remember Rob Pike introducing Go during an all-hands at Google, where he talked about the very long build times for C++ and Java in Google's monorepo, and then showed some promising demos. (Most of us rolled our eyes at it then, because it was just a "hello world", but it's quite impressive how the language has evolved and remained true to its goals.)

- it is just a plain language where the compiler can just spit out vaguely optimized code, and call it a day.

It's a simple language, but I wouldn't call it plain, nor characterize the optimizers that way.

kaba0
4 replies
1d1h

It is not faster at compilation than Java, which was not particularly designed for such.

Also, as can be seen, Go is not a well-designed language, having language warts we've known about for 50 years. I would take the creators’ claims with a huge grain of salt.

Mawr
1 replies
18h55m

Java is compiled to bytecode, for later compilation to machine code at runtime (JIT). Go is compiled AOT, straight to machine code. It makes no sense to compare them.

Unless you meant that Java's AOT compilation is faster than Go's?

kaba0
0 replies
9h44m

The parent comment explicitly mentioned that Java is slow at compilation, which is just false.

Also, there are single-pass compilers that produce machine code; they are not fundamentally slower than a bytecode generator. Of course, extensive optimizations will be more expensive.

riku_iki
0 replies
23h44m

It is not faster at compilation than Java

Why do you think so? Their goal was to be much faster, but I can't find many benchmarks.

rayiner
0 replies
1d

But Java is inspired by Smalltalk, which is a late-binding language that defers most things to runtime. I believe in Java you can generate bytecode directly as you’re parsing the source file.

mirashii
5 replies
1d1h

Profiling the compilation process suggests that this isn't the case. Rust's higher level passes are rarely the dominant part of execution time.

Check out https://github.com/lqd/rustc-benchmarking-data/tree/main/res... and the other benchmarks in that repository for some data on how real-world crates' compilation time is spent. You'll find that backend code generation and optimization dominate most crates' compile times. There are a few exceptions: particularly macro-heavy crates, and a couple of crates with deeply nested types that hit some quadratic behavior in the compiler. But overall, the backend is still the largest piece.

rayiner
2 replies
23h35m

The front end is time-consuming enough that replacing the backend with something lightweight like Go’s wouldn’t get you a 5-10x improvement, which is what I think you’d need to really move the needle on user perception. Moreover, a lot of the backend slowdown is due to front-end choices like monomorphization, which generates large amounts of intermediate code that must then be optimized away.

pcwalton
1 replies
19h26m

I doubt that a hypothetical version of Rust that avoided monomorphization would compile any faster. I remember doing experiments to that effect in the early days and found that monomorphization wasn't really slower. That's because all the runtime bookkeeping necessary to operate on value types generically adds up to a ton of code that has to be optimized away, and it ends up a wash in the end. As a point of comparison, Swift does all this bookkeeping, and it's not appreciably faster to compile than Rust; Swift goes this route for ABI stability reasons, not for compiler performance.

What you would need to go faster would be not only a non-monomorphizing compiler but also boxed types. That would be a very different language, one higher-level than even Go (which monomorphizes generics).

mirashii
0 replies
11h10m

Just wanted to note that Go only does partial monomorphization: it monomorphizes per GC shape, not per type. This severely limits the optimization potential and adds a runtime cost to dispatch, at least in its initial implementation.

https://github.com/golang/proposal/blob/master/design/generi...

kaba0
1 replies
1d1h

Then there is an open niche for a “development mode”, that outputs barely optimized binaries with proper error handling, fast. (I do know about debug, etc).

I think Zig is correct in having different modes.

insanitybit
0 replies
23h44m

That is what Cranelift is optimizing for.

ragnese
8 replies
1d

I agree with this assessment, despite the optimism of some others. C++ has had slow compile times since forever, and so will Rust. Rust does a lot more work at compile time than most other popular languages. And it's largely stuff that's fundamental to the language. For example, besides borrow checking, the de facto default way to do polymorphism/generic programming in Rust is at compile time via what is essentially code-gen. In Java if you write `void useFoo(Foo foo)`, it'll compile quickly and will use runtime polymorphism to make sure that the argument is a subtype of `Foo`; in Rust if you write `fn use_foo(foo: impl Foo)`, the compiler is going to spit out a `use_foo` definition for each concrete type that is passed to `use_foo`. That takes time.
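A minimal sketch of the difference (trait and type names here are illustrative, not from the thread): `impl Trait` in argument position is compiled once per concrete argument type, while `dyn Trait` compiles to one definition dispatched through a vtable, closer to the Java behavior described above.

```rust
// Illustrative trait and types (hypothetical names).
trait Speak {
    fn speak(&self) -> &'static str;
}

struct Dog;
struct Cat;

impl Speak for Dog {
    fn speak(&self) -> &'static str { "woof" }
}

impl Speak for Cat {
    fn speak(&self) -> &'static str { "meow" }
}

// `impl Speak` is sugar for a generic parameter: rustc emits a
// separate, statically dispatched copy for Dog and for Cat
// (monomorphization) - more code to generate and optimize.
fn use_speak(s: impl Speak) -> &'static str {
    s.speak()
}

// The runtime-polymorphism alternative: a single definition,
// dispatched through a vtable at the call site.
fn use_speak_dyn(s: &dyn Speak) -> &'static str {
    s.speak()
}

fn main() {
    assert_eq!(use_speak(Dog), "woof");
    assert_eq!(use_speak(Cat), "meow");
    assert_eq!(use_speak_dyn(&Dog), "woof");
}
```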

That being said, I definitely find the trade-off worth it. Though, I've never been the kind of programmer that desires the constant iteration and feedback of something like "REPL driven development".

insanitybit
5 replies
23h49m

C++ has had slow compile times since forever, and so will Rust.

Rust has a massive advantage, which is having a 'sanctioned' package manager and build-time capabilities. A huge part of Rust's slowdown is due to:

a) Having to compile build scripts

b) Those build scripts being built without optimizations (100s of times slower at runtime)

If cargo + crates.io supports pre-built dependencies, that is a massive optimization.

This isn't theoretical or optimistic, it's just a fact - we can already see this by compiling build and proc-macro crates with optimizations; it's just not the default, and they still have to be compiled once. If you remove that compilation time, again, it's not theoretical: it's turning N time spent on those deps into 0 time spent.

There is easily a 200% performance win available, just from the known optimizations that are on the table.
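Opting into optimized build scripts and proc macros today is a per-profile setting; a minimal Cargo.toml sketch:

```toml
# Build scripts, proc macros, and their dependencies are compiled with
# optimizations even for dev builds (compiled once, but run far faster).
[profile.dev.build-override]
opt-level = 3
```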

vlakreeh
3 replies
22h50m

we can already see this by compiling build and proc macro crates with optimizations, it's just not the default and they still have to be compiled once.

I'm hopeful something like watt (https://github.com/dtolnay/watt) will land in Cargo that'll allow us to ship pre-compiled wasm blobs for proc-macros so we can just have sandboxed binaries.

insanitybit
2 replies
22h41m

That'll be great for proc-macros, although it won't work for build scripts.

adastra22
1 replies
22h1m

Why not?

insanitybit
0 replies
21h10m

It could, in theory. But build scripts can do arbitrary things so the wasm sandbox would need to allow for that.

Rusky
0 replies
22h4m

Rust has another advantage in the language itself: generic code can be type-checked and (partially) optimized before being instantiated.

When you export a generic function in C++, every file that pulls it in has to re-parse it, and every instantiation has to re-type-check it. C++20 modules should help with the first part, but they can't help with the second (and neither can concepts). Further, separate translation units can wind up duplicating the same instantiations, which the linker has to deduplicate.

When you export a generic function in Rust, by the time it gets pulled in somewhere else, it takes the form of pre-parsed, pre-type-checked MIR. It can also be pre-optimized, so type-independent optimization work is shared between instantiations. The compiler can also tell, before instantiation, which type parameters a function does not actually depend on, and essentially erase them ("polymorphization"). Further, Rust's compilation model reduces the redundant duplicate instantiations C++ does, both by using larger translation units and by automatically sharing any instantiations in dependencies with their dependents (though you can do this by hand in C++).

(Incidentally, these differences also apply to inline functions - in C++ you wind up putting their definitions in headers and recompiling them from scratch over and over; in Rust they are shared in MIR form.)

jvanderbot
1 replies
22h48m

C++ compile times are awful inasmuch as you have to compile multiple times, because the "template barf" makes finding root causes very challenging, especially with multiple problems.

Rust makes the problems easier to fix, IMHO. So, maybe even with same (or slightly longer) compile times, you'll hopefully have faster time to delivery.

ragnese
0 replies
22h1m

you'll hopefully have faster time to delivery.

In fact, in my experience, Rust has faster time to delivery than any other language I've used. It takes forever to compile, but I have so many fewer runtime bugs that have to be caught (hopefully) by testing, that it still comes out ahead, overall (again, for me and my various projects).

I also find write-time to not be as slow as others complain about, except when it comes to async/futures where it is, indeed, pretty rough. But, if I sit and think about how many times I have to flip back and forth between my code and some library code to try and guess what exceptions it may or may not throw in other languages or whether something could be null or not, I find that the dev times aren't so much better in these other languages as people sometimes claim.

Sure, if you're a fulltime JavaScript dev with 10 years of experience, you might remember things like: calling the Array constructor with 0 or >1 arguments creates an array with those values, but calling it with exactly 1 number creates an empty array with that capacity. But since I have to switch between many languages regularly, my time to delivery is significantly reduced by nonsense like that. Likewise, it's reduced by NPEs in Java, double-frees in C++, Kotlin's inane idea to use exceptions for errors and coroutine control-flow, etc, etc.

nindalf
4 replies
1d

unlikely it’ll ever get dramatically better

This is a HN thread about a blog post about how compile times have become dramatically better thanks to newly introduced parallelism in an area that was completely single threaded.

PoignardAzur
3 replies
21h26m

From the post:

However, at this point the compiler has been heavily optimized and new improvements are hard to find. There is no low-hanging fruit remaining. But there is one piece of large but high-hanging fruit: parallelism.

From discussions I've seen, there's not much high-hanging fruit left either, short of rewriting the entire compiler for better incremental compilation.

insanitybit
2 replies
20h24m

I think if you're talking about the compiler getting faster at what it does today, how it does it today, that's true. But that's a heavy constraint. If we got support for binary dependencies, that wouldn't be a compiler optimization in the same sense as parallelism is, but it would radically improve compile times for the average project.

PoignardAzur
1 replies
1h41m

Yeah, but binary dependencies or watt-style precompiled macros aren't going to improve the build times people really care about: incremental build times. The parallel frontend is plausibly the last major improvement we'll see on that front for years.

Incremental matters more than clean build times because (A) you're likely to do a lot more of them, (B) they break developer flow more than waiting on CI does, and (C) at least in theory, you can always add more cores to your CI and get reasonable speedups, less so for incremental.

insanitybit
0 replies
59m

Yeah, but binary dependencies or watt-style precompiled macros aren't going to improve the build times people really care about, incremental build times.

Why not? If I add a new struct with `#[derive(serde::Serialize)]` I'll benefit from serde being compiled with optimizations.

they break developer flow more than waiting on CI does

Eh, depends.

dralley
0 replies
19h37m

How do you define "dramatically"?

It might not get 10x better, but 3x isn't outside the realm of possibility. Just swapping the LLVM backend for cranelift can cut compile times in half.

The low-hanging fruit is gone but there are lots of hard but likely-significant improvements left on the table.

lucasyvas
10 replies
1d3h

As someone who has only done small projects in Rust, I'm curious how many LoC we're talking about? And were you splitting your project into crates where it made sense?

eminence32
8 replies
1d3h

One thing I've observed is that even smallish projects (~10k LoC) can take a while to build if you're working on slow network filesystems.

When using local disks, a 10k LoC project takes ~5 seconds to build

When using network FS, the same project takes ~23 seconds to build

lolinder
2 replies
1d2h

Wouldn't this be true of any compiler? Do other compilers have optimizations that avoid excess disk usage when on a slow file system?

ragnese
0 replies
1d

I wouldn't be surprised if Rust/Cargo does more disk IO than other build tools, though. Rust does a lot of compile time code gen and caches a lot of stuff on disk.

eminence32
0 replies
1d1h

Yes, I think that's right, but I don't have any data to compare rust to other compilers.

insanitybit
2 replies
1d1h

FWIW we did this to significantly reduce storage usage

    strip = "debuginfo"
This was a ~20x reduction in size, 279M -> 19M. If you have slow storage it could really help performance (don't recall, personally).
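For context, `strip` is a Cargo profile setting; assuming it was applied to dev builds (the profile choice is an assumption here), the fragment sits in Cargo.toml roughly like this:

```toml
# Assumed placement: drop debug info from dev-build artifacts.
[profile.dev]
strip = "debuginfo"
```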

eminence32
1 replies
1d1h

Good hint, thank you. I'm trying this out and I do see an improvement in performance. The ~23 second build now takes about ~14 seconds

insanitybit
0 replies
23h44m

Hell yeah! I'm so glad I could help, that's a big win.

FWIW it does break debuggers since you won't have the symbols. Just comment it out when you need a build that has all of that info.

the8472
1 replies
1d2h

That doesn't seem particularly surprising. Lots of software is slow when your storage is slow.

eminence32
0 replies
1d

You're right that some slowdown is expected, but for me personally I hadn't realized how bad this particular FS was, nor had I expected how much it impacted build times

insanitybit
0 replies
1d3h

We split each service into crates, plus some libraries. There were some native dependencies as well, which could really impact compile times, as well as some codegen for things like protobuf.

From a quick tokei, looks like 49kloc of Rust.

runeks
8 replies
1d1h

How slow are we talking here? How many minutes to build your monorepo?

Don't hesitate to be brutally honest — nothing can shock me as my go-to compiler is GHC.

insanitybit
6 replies
1d1h

It was over a year ago, so it's a bit hard to recall... Maybe 20 minutes clean? 1 or 2 minutes cached. Things have probably improved since then, but idk. We did stuff like protobuf, we had a few proc macros of our own, plenty of serde, and I think ~3 native dependencies (zstd, librdkafka, something else I don't recall). The native dependencies could be brutal; they caused long serial stalls, if I recall correctly. Linking took up a lot of time as well, but I forget why we didn't use mold, there was a reason at the time... but, again, over a year ago.

Here's the code: https://github.com/grapl-security/grapl/tree/main/src/rust

Dowwie
5 replies
23h58m

what was the caching strategy?

insanitybit
4 replies
23h42m

We did our builds in docker, for various reasons. So we relied on the docker buildx cache and some other tricks that I don't recall because I didn't work a ton on the build system.

Yoric
3 replies
20h26m

Oh... Docker-based Rust builds are extremely slow in my experience.

I seem to remember that caching was really, really bad.

insanitybit
2 replies
20h21m

Docker isn't the issue. I just did a clean build, 19 minutes 44 seconds (with a ~year old compiler though).

estebank
1 replies
20h13m

Just out of curiosity, can you check the time for the same project with a more recent toolchain?

insanitybit
0 replies
14h32m

    Finished dev [optimized + debuginfo] target(s) in 16m 58s

Neat.

thinkharderdev
0 replies
1d1h

Depends on the code and whether you are doing a release or debug build. I work on a very large Rust project (~1m LoC) with a lot of dependencies. We've split it into multiple crates, and the compile times don't really frustrate my dev workflow (incremental compilation works well, and debug builds are pretty fast anyway). But building in our CI pipeline, where we do a fully optimized build (single codegen unit, LTO enabled), takes a while (~30m), which is annoying when you are waiting for a hotfix to be ready. It's also incredibly resource intensive (mainly linking with LTO enabled), so we've been the bane of our platform team's existence, since we need something like 50GB of memory in our build container to do a full release build :)

jicea
4 replies
21h16m

I really disagree here: I'm a maintainer of a medium-sized open source Rust project [1] and I'm always surprised by Rust's compilation speed (locally). On a MacBook Pro, it's a matter of seconds, in debug. Release compilation and CI/CD are slower, but since the beginning of my Rust journey (2 years ago), Rust compilation has just seemed very fast.

To balance / explain my point:

- my day job is Java / Kotlin with Gradle. Now, we can talk about glacial compilation times

- on my open source Rust project, we try to minimise dependencies, don't use macros (apart from #[derive(Debug, Clone)] etc.), and have very moderate generics usage

If you take the time to `cargo build` my project, I'll be happy to have feedback on compilation times

[1]:https://github.com/Orange-OpenSource/hurl

insanitybit
2 replies
20h35m

https://github.com/grapl-security/grapl/

I just did a clean build `cargo build`, 19 minutes 44 seconds.

I added 1 line (`dbg!("foo")`) and it took 14.76s

jicea
1 replies
19h57m

In grapl/src/rust:

- cloc shows that there are ~70,000 lines of Rust

- with `cargo tree`, I see that the project depends on ~600 crates

In my toy project:

- cloc shows that there are ~40,000 lines of Rust

- with `cargo tree`, I see ~40 crates

I don't know the scope of grapl, but ~600 (transitive) crates seems like a lot to me. Maybe that explains why this particular build is so long. I haven't managed to build it (it seems to have prerequisites on protobuf stuff).

insanitybit
0 replies
15h0m

Yes, it'll require a protoc installation to actually compile, as well as some native dependencies.

Naturally more crates means more time on compilation. Grapl is a pretty large project, lots of services that do different things, so it isn't too surprising that it has a lot of dependencies relative to what I assume is a more tightly scoped project.

For example, Grapl talks to multiple different databases, AWS services, speaks HTTP + JSON and gRPC (with protobuf), has a cli, etc etc.

pdimitar
0 replies
41m

21.33s on my iMac Pro (Xeon-2150B, 10 core / 20 thread). Unoptimized.

Optimized: 1m 25s

zozbot234
3 replies
1d3h

Rust-analyzer is even more of a resource hog than rustc itself. Not sure how directly applicable this work might be, but hopefully we'll see big improvements there as well. It's something that's clearly needed for state-of-the-art IDE support.

sigmonsays
2 replies
1d

I agree. rust-analyzer often eats up more RAM and CPU than all my web browser tabs (Firefox).

pdimitar
0 replies
2h18m

To be fair, I don't care. RA is extremely valuable, and I am _not_ one of the people who think 16GB of RAM and 250GB of SSD is good for a programmer machine.

chubot
0 replies
1d

That’s interesting/surprising. I remember this in the Eclipse days, and it was often attributed to Java’s allocation-heavy style, garbage collection, and lack of value types

Also Java style of tiny classes and tiny files.

It’s an issue on both the implementation side and the thing-being-implemented

I would have thought Rust would be better on both fronts.

How many lines are the Rust codebases and their dependencies?

echelon
2 replies
1d2h

We're using a Rust monorepo too.

async-stripe takes over two minutes to build due to codegen. We're considering switching to dolladollabills.

Our core API server takes a minute to build, and we have about a dozen services and command line apps, a bunch of little shared library crates, two desktop apps, and a Bevy app.

Our Github actions docker build takes ~10 minutes if you don't include the tests, but we're starting to shave off more time. (Our monorepo is 105589 Rust LOC total)

lucasyvas
0 replies
1d2h

We're considering switching to dolladollabills

I thought this was a joke. That has to be one of the best library names I've ever seen.

FridgeSeal
0 replies
20h41m

async-stripe takes over two minutes to build due to codegen. We're considering switching to dolladollabills.

Ooohh interesting. We also use async-stripe; definitely going to have to check out dolladollabills though. Also in the Rust monorepo camp: our prod release takes ~5 mins from clean, tests are about 6 mins. We've invested a bit of effort in getting our build time down: we don't build in a docker container, we just copy the final artefact in; this shaved the most time off our builds. More parallel codegen units too.

stjohnswarts
1 replies
23h19m

I never understood the "gotta clean the cache every time" with CI/CD. I'm sure it makes sense sometimes, but you can compromise. I worked at two places where we only cleaned up our C++ caches on the build systems on the weekend. We did that "just in case" caching was hiding a problem, but would only have a small, 1-week setback at most. We were not on heavy release cycles, though, so we could afford that. We never had a single problem traced back to the cache or a build hiding something. This was for internal company software. Not sure why people are willing to pay the cost of a full rebuild every time if that full rebuild takes a long time (Rust or C++). I'm sure there are cases for it, just that it doesn't have to be done everywhere.

insanitybit
0 replies
22h38m

It's not that you have to, it's that you have many different builds that are going to stomp on each other's caches, plus your build services are often ephemeral - especially since I was at a small startup where we wanted to shut systems down overnight to save money.

adastra22
1 replies
22h3m

I’m continuously amazed that this opinion is so prevalent. I maintain both a C++ and a Rust project, and incremental compile times on Rust are vastly better. It is, I believe, one of the fastest statically typed compiled languages. Go is faster, but I think that’s it.

What improvements are you expecting?

metaltyphoon
0 replies
18h30m

C# AOT seems to be faster than Rust too.

papaver-somnamb
45 replies
1d4h

What I find most impressive about Rust is the marketing.

It's not the language itself. It's not the safety and other attributes. And it certainly can't be the adoption (currently low % according to StackOverflow [0]).

It's how hyped Rust is. It's how effusive every blurb and sound bite seems to be. Glowing articles frequently make the front page of HN. Famous for penetration into Linux kernel development. What is this mechanism? Who is behind it? Is this veritable storm of hitting me on the head with Rust-this-Rust-that coordinated behind the scenes by some powerful entity? A hyperactive grassroots cheerleader squad? Does it infect C/C++ programmers who've dared to sample it once, turning them into noisy advocates, a la addictive drugs or parasitic fungi? Is Rust merely the It-Thing at the moment that people are mimetically/socially driven to latch onto?

We didn't see this with Lua, Ruby (that was mainly RoR anyways), Python, Swift, C#, certainly not newer-spec C and C++, or any of the others, even Java back in the day.

I don't know Rust, maybe it's deserving of the adulation. But I gotta say, the Rust marketing machine is one of the most superlative campaigns in IT I've ever seen.

[0]https://survey.stackoverflow.co/2022#most-popular-technologi...

Footnote: Some folk seem to be taking offense at my question where none is offered. I'm not for or against Rust, merely ambivalent & curious. Seen many of these waves come through, Rust is definitely the current wave, and the biggest so far! That is something I wish to learn from.

dijit
13 replies
1d4h

new systems languages are much more rare than new scripting languages, and the focus on usability of the surrounding toolchain makes it a particular darling for anyone who touches it.

The other arguments (memory safety et al.) are sort of on the side imo; I really enjoy writing, reading and running Rust because the developer experience is just so solid.

When I say "developer experience" I mean the crates system and cargo, not necessarily the language itself (which I find a bit ugly to be frank).

ReleaseCandidat
12 replies
1d4h

new systems languages are much more rare than new scripting languages

No, they aren't, and never have been. C and later C++ are (were) just _that_ prominent that most people never heard of the alternatives (except for Pascal and/or Ada, Objective-C and maybe D).

Nowadays there is Zig (most people have heard about that, I guess), Carbon, Cppfront, Odin, Jai, Vale, Austral (and some more I've forgotten about).

dijit
7 replies
1d4h

For your 7 systems languages I can name 150 new scripting languages.

Objective-C requires a runtime, it's not a systems language because of that, I think Pascal is also requiring a runtime but that's not important right now.

D is a great example, I tried very hard to make D work and it just didn't.

I was even trying to use D-Langs own mailing list frontend (written in D) and it was nearly impossible for an outsider.

Contrast that to Rust, and aside from picking the right toolchain (nightly or stable); "cargo build" after following the quickstart on the internet will build practically all rust projects. With the minor caveat that if there are any bindings to system libraries then you need those too (libssl-dev for debian being a common requirement for openssl-sys for example).

ReleaseCandidat
6 replies
1d4h

For your 7 systems languages I can name 150 new scripting languages.

Please don't take that as an offence, but please name at least 5 or 10 (which aren't "just" compiled to JS, I've actually forgotten about them ;). I've only heard about Elixir and Verse (yes, because of SPJ) which actually are somewhat used "in the wild".

But my point is exactly that Rust _is_ really an outsider by being significantly better than the (many or not so many does not matter that much ;) alternatives to C++.

nicoburns
3 replies
1d3h

I can't name 150, but since C++ was released in the 80's there have been: JavaScript itself, Java, C#, PHP, Python, Ruby, Go, Swift, Kotlin, and many other very popular languages in the "higher level programming languages" category. And only really Rust has reached a comparable level of popularity in the "systems level language" category (with perhaps D and Zig in the next "tier").

pjmlp
2 replies
1d2h

Category systems programming.

Cedar, Ada, Modula-2, Modula-2+, Modula-3, Oberon, Oberon-2, Component Pascal, Object Pascal, Turbo Pascal, Delphi, Active Oberon, OCaml, Objective-C, Oberon-07

nicoburns
1 replies
1d1h

A lot of those are even older than C++. And of your list, only Objective-C has seen adoption on the scale of Rust, and it both predates C++ and requires a much heavier runtime than C/C++/Rust.

pjmlp
0 replies
22h55m

CFront was released in 1983.

I bet you can manage yourself to find out when the other ones came into the world of computing.

NeXTSTEP drivers were written in Objective-C, macOS DriverKit is named in homage to the original NeXTSTEP DriverKit.

Metal is written in Objective-C, with C++14 as basis for Metal Shading Language.

As for adoption, moving goalposts, I thought we were talking about system languages being created, not about taking the computer market by storm.

dijit
1 replies
1d4h

I mean, from the top of my head; Crystal, V-Lang, Hack (based on PHP), Futhark, Raku & Mojo

pjmlp
0 replies
1d4h

Crystal, V-Lang, Futhark, Mojo are also targeted to the C and C++ space, and aren't scripting languages.

insanitybit
2 replies
1d4h

People make new interpreters/managed languages all the time, it's not even close.

ReleaseCandidat
1 replies
1d4h

Yes (even I did and do), but I thought we were talking about languages that are actually used by more than 10 users.

insanitybit
0 replies
1d4h

Lots of languages, like Python, started off as random toy projects.

pornel
0 replies
1d4h

For a very long time there has been no viable alternative in the strictly-no-GC space. Every new systems-adjacent language concluded "GC is fine for 99% of programs" (which is true), and then excluded themselves from the no-GC niche. Rust almost did the same thing — early design assumed per-thread GC.

Fiahil
6 replies
1d4h

The answer is in the survey you linked: https://survey.stackoverflow.co/2022#technology-most-loved-d...

It's the most loved language, by a wide margin, and has been for several years.

PS: and to be honest, I love programming in Rust. So I will defend it tooth and nail even if I can't use it in my day-to-day work at the office.

nindalf
3 replies
1d4h

Thanks for writing the comment I was going to.

I'll echo your point about loving programming in Rust. I've programmed and continue to program in other languages (Java, PHP, Go), but nothing gives me the same joy as programming in Rust. I know it'll run the first time and likely work correctly as well. And what's more, it'll be faster than any code I could have written in another language.

Not everyone will feel this way, certainly. They, like GP, might come to this bizarre conclusion that no one could like Rust this much, and therefore there is a shadowy cabal promoting Rust for unknown reasons. I don't think there's much we can say to convince them otherwise.

But one thing I do when I see folks complaining that they've never seen such promotion on HN: I click on their profile to check how old their account is. Invariably it's 2016 or later, which means they never saw the cycles of Ruby, JS and Go promotion. This will come and go. In a few years we'll be complaining about, I don't know, the relentless promotion of Mojo.

papaver-somnamb
2 replies
1d4h

| I click on their profile to check how old their account is. Invariably it's 2016 or later. Which means they never saw the cycles of Ruby, JS and Go promotion.

Provided that was the account they've been using since they started on HN ;)

nindalf
0 replies
1d4h

And their memory isn't failing them :)

lesuorac
0 replies
1d2h

Not that I really paid attention during those other waves, but weren't they justified? Sure, today you might have a bunch of alternatives you'd prefer, but it wouldn't surprise me if somebody made a new language 20 years in the future based on Rust but without XYZ mistakes.

i.e. JS allowed webpages to do stuff without needing to reload the page every time. It basically lets you make mods for the browser! Like jQuery was really useful at the time and not now, because every browser implemented its methods.

i.e. Ruby (Rails), IIUC it was super easy to do CRUD sites without spending as much time on the tedious plumbing/infrastructure stuff.

mplanchard
0 replies
1d3h

I do use rust professionally, and I also love it! I’m 2.5 years in, and I would be sad if I had to switch jobs and mostly use a different language.

Nothing is perfect, but Rust is definitely more pleasant to work with (for me at least) than C++, JS, TS, Python, Go, or anything else I’m likely to get a job using. I do think it’d be fun to work in a Clojure or Elixir shop, but I’m still hobby-level on those, so probably it’s just the perception of green grass.

cxr
0 replies
22h3m

[Rust is] the most loved language, by a wide margin, and has been for several years

"...among users that respond to StackOverflow surveys."

kibwen
4 replies
1d3h

> We didn't see this with Lua, Ruby (that was mainly RoR anyways), Python, Swift, C#, certainly not newer-spec C and C++, or any of the others, even Java back in the day.

I think this is an error of perspective. The hype for Python and Ruby in the mid-2000s was off the charts. And the corporate marketing blitz for Java in the 90s was so beyond extreme that it will never be replicated in the space of programming languages.

Rust has zero marketing budget. The vocal adulation is a result of a confluence of coincidental factors that will never be replicated: an open-source, volunteer-run project from perhaps the only company with the proper combination of funding, technical chops, and open-source cachet to pull it off; an industry landscape that has so long rejected some of the best ideas from academic languages (e.g. tagged unions and pattern matching) that any language that can successfully express them to a non-academic audience will be seen as visionary; a specific niche (safe systems programming) whose exemplar never took off in the realm of FOSS, and which has failed to appeal to most segments of industry for whatever reason, with a ripe potential audience eager for a modern champion; slow-moving competitors in the systems space who had become complacent from lack of competition, and who are prevented from effectively competing in the safety niche without breaking backwards compatibility; a relatively friendly production-ready compiler backend in LLVM that suddenly makes competing in the high-performance cross-platform systems space at all feasible; an audience of newly-minted web devs looking to dip their toes into the systems space without needing to offer the traditional pound of flesh; a focus on standardized tooling that makes onboarding easy and going back to other languages painful; and a totally fortuitous, somewhat accidental, fairly brilliant realization that safe systems programming could be possible in the first place, thanks to a novel combination of affine types and region-based memory management, that worked so well that it took even the creators by surprise. Rust is lightning in a bottle.

zozbot234
2 replies
1d3h

That wasn't just corporate marketing; Java was the first memory safe language to be widely used in enterprise code, and it led to a lot of C/C++ code being rewritten to address the issue of memory safety bugs. The same may happen with Rust, or perhaps C++ will add facilities for a memory safe subset of the language.

pjmlp
0 replies
1d2h

Java enterprise tooling is based on Smalltalk for a reason....

Likewise the famous Gang of Four book, targeted at the enterprise space, also uses Smalltalk alongside C++, for the same reason.

Also Visual Age for Smalltalk was the ".NET" of OS/2.

kibwen
0 replies
1d2h

I want to divorce the marketing from the implementation for a second here; Java was widely used in the enterprise as a C++ replacement because of the marketing blitz, not really because of its technical prowess (with the benefit of hindsight, it's safe to say that Java's marketing vastly overpromised what Java could actually deliver for the first decade or so of the language's life). Note that PHP was the beneficiary of a similar (though smaller) enterprise marketing blitz, and PHP took far longer to get its act together than Java did; these companies were adopting based on what they read in magazines and saw on TV (yes, there were TV commercials for Java!).

pohl
0 replies
1d1h

beautiful summary

neonsunset
3 replies
1d3h

It's simple - the praise is predicated not upon Rust being good, but upon how comparatively shit everything else in the same category is: C, C++, higher level languages for system programming (Go is extremely inadequate, and slow).

Rust gets a lot, a lot of small things right. The things you usually use as an excuse as to why one language or another is better - I found Rust does much more of them in a good way. In most other languages you can have let's say good package management but not fast iterator expressions, or you have compile-time iterator expressions but they are ass to write and package management does not exist, or you have both but all other features are missing, and etc.

Arguably, because Rust is also verbose and sometimes a bit ceremony-heavy, it's not a perfect language, which is why I use C# daily (which is similar and familiar enough with the tooling, package management and critical features like generics and async). But when I need lean and mean applications, there is simply no reason to pick anything but Rust except maybe out of curiosity.

saberd
0 replies
19h32m

I feel like shitting on golang is uncalled for if the systems you write can be done in C#.

I often see comments like this on hn, but In my experience golang works great and I'm writing real time systems with it.

papaver-somnamb
0 replies
1d3h

Thanks neonsunset -- so there is this relativistic argument to be made in favor of Rust.

Previously, I'd been involved in writing up samples of a CLI tool in each of Golang, Ruby, Haskell, C++, Python, Java, Clojure. From there we would select one language ecosystem to marry ourselves to and move forward. Every single one left us wanting in terms of either team capability/emotional levels, language expressiveness, distribution, speed, tooling, package management, etc. And I learned here that Rust seems to have each of these down pat to satisfactory degrees.

Next time I'm faced with birthing yet another CLI tool, I think I'm gonna try Rust first.

PartiallyTyped
0 replies
1d3h

With rust I can sleep at night. It's sooooo much easier to trust a compiler that is THAT pedantic and makes it its business to not let you do stupid things than something that can't even show if you will hit a nullptr.

mjw1007
2 replies
1d4h

I think a lot of it is that there was pent-up demand for a language like Rust.

Rust feels like the language that in 1990 I was promised I would have in the year 2000.

pjmlp
0 replies
1d4h

During those days I was quite happy with Turbo Pascal and Turbo Vision,...

At least Pascal syntax is fashionable again, across all newer languages.

ekidd
0 replies
1d4h

One of my more cherished scraps of notes dates from the 90s, when I tried (and miserably failed) to invent something much like Rust.

I've been waiting a long time for a language that's fast, safe and allows me to dabble in functional-style programming.

hobofan
1 replies
1d4h

Does it infect C/C++ programmers who've dared to sample it once

Yes, that is a big part of it.

From my very subjective impression, having attended many Rust meetups with a constant influx of newcomers, I would say the two biggest groups that are really longing for Rust are:

- C (and sometimes C++) programmers that are looking for a breath of fresh air with modern tooling (e.g. package management) that isn't the result of decades of patchwork upon patchwork

- People that would like to work "close to the metal", but in the past were too tormented by C/C++/Go segfaults (/other memory issues) to approach the subject. (That's also the group that I fall in)

We didn't see this with Lua, Ruby (that was mainly RoR anyways), Python, Swift, C#, certainly not newer-spec C and C++, or any of the others, even Java back in the day.

I'm pretty sure we saw a similar hype with Ruby. If you go back ~10 years in the HN archives, you will see about as many "... in Ruby" posts as you see today with Rust. All the other languages listed are too old (I would guess "too old" means predating widespread social media), or have something obvious that alienates a big chunk of developers (e.g. Swift and .NET languages being essentially single-OS languages).

A hyperactive grassroots cheerleader squad?

If anything the opposite. In the early days of Rust there existed the self-aware inside joke of the "Rust Evangelism Strike Force". Once people actually tried to meme too much with that (e.g. brigading subreddits), that was strongly rejected from inside the "community".

papaver-somnamb
0 replies
1d4h

Thanks hobofan, this is the kind of insight I am looking for. Your input hit the spot!

coopierez
1 replies
1d3h

I think a lot of people are stuck with C++ due to the fact that there are a lot of legacy C++ codebases (many decades old), more so than legacy Python or JavaScript codebases.

Rust does a ton of things better than C++ as other people here are mentioning. For example, at my 20-man C++ shop, we have around 2 people's worth of full-time cmake work, that is, just maintaining the build system. This work would largely go away if it was a Rust codebase.

pjmlp
0 replies
1d2h

Only if it would be a pure Rust codebase.

tubthumper8
0 replies
1d2h

I guess I'm not sure what the "marketing machine" is here. The Rust team published an article _on their own blog_ and someone posted here on HN, then people discuss it if they're interested.

I mean every language has some amount of "marketing" - people speaking at conferences, etc., but it's not like you're getting product advertising here in the form of commercials on TV or advertisements on the side of web pages. I think what you're seeing here is genuine interest.

Unless you're implying that HN itself has some sort of algorithm bias to push Rust posts to the top?

tialaramex
0 replies
1d3h

People don't know what they want well enough to specify it. However, if you built something they like, they will recognise that even if they struggle to articulate why it's good.

I don't think that's part of marketing, unless you'd see for example a furniture company deciding to use a higher quality wood for their new tables as "marketing" because the tables will be nicer and customers will like that.

resource0x
0 replies
1d3h

Is Rust merely the It-Thing at the moment that people are mimetically/socially driven to latch onto?

I think so. Sociological phenomenon initially bootstrapped by a small number of "influencers". Same with golang 10 years back. Same with frameworks (angular, react). Do you remember the hype around "XML revolution" around 2000? It was arguably bigger than rust's.

Somebody has to write a book on the history of software cults - it will make a fun read. :-)

insanitybit
0 replies
1d4h

People really want Rust to be popular because people really like writing Rust. There is no campaign, no background force. There's some intelligence to it - people know to post something like this on a Friday morning, and not a Thursday night, but that's it.

You're just seeing a project with genuine excitement behind it.

freeopinion
0 replies
1d2h

It seems you are not the type to run out and adopt the shiny new thing on day one. I commend you for this. You haven't bought a hydrogen car yet? That's ok.

You haven't tried a cordless drill? Well, alright. You don't have to. You can survive without one. But rather than ask others what the big deal is at this point, why don't you just try one the next time you have to screw a picture to the wall. Borrow one if you don't want to commit to owning. Figure out for yourself whether having a charger and having to swap batteries is worth it. Maybe you'll decide it isn't. Maybe you'll be disgusted that you have to buy a new drill every 10 years while your 40 year old corded model still works fine.

But don't be too shocked if everybody else makes the leap. Sometimes the shiny new fad really is the future.

And you can still keep your corded drill around for those times when it really is the better tool for the job.

eviks
0 replies
1d4h

Given the lack of knowledge and openness to "it's deserving of the adulation", what's wrong with it being the answer?

allo37
0 replies
1d4h

Does it infect C/C++ programmers who've dared to sample it once, turning them into noisy advocates, a la addictive drugs or parasitic fungi?

It definitely did that to me. I remember trying out Rust and was amazed at how much abuse I'd put up with from C++ for all these years. Now I just want to try out Rust in a large enterprise project to see if it will just be replaced with a different kind of abuse...

MSFT_Edging
0 replies
1d2h

Something I think helps it, is it has the allure of re-writing a project from scratch, thinking about your mistakes ahead of time, and lacking most of the technical debt.

This isn't to say it lacks any technical debt, but that the language feels very transparent and thoughtful. Contrary to, say, a language built by a big company (.NET, Go) or built by strong opinions (Python), Rust in comparison feels modest in its delivery but ambitious in its design.

It's terrible reasoning for picking a language to write a project in, but Rust feels trustworthy I think.

darthrupert
9 replies
1d5h

I've been away from doing Rust semi-actively for a few years, and have been working in other environments like python and Typescript. Now I tried it for a project for a while again and the compilation speed is pretty much instant. It's always great when things get better, but things are pretty damned good already.

Also, these days it's possible to use cheat codes aka ChatGPT to flounder through almost all the difficult Rust problems that might have been show stoppers a few years ago. It's looking pretty great on that side of the fence.

mpartel
6 replies
1d5h

Compilation times depend very heavily on the amount/size/complexity of dependencies.

the_duke
2 replies
1d4h

And on having a fast CPU, RAM and disk.

It makes a huge difference.

For what it's worth, I'm working on a project with about 1000 dependencies and incremental debug build time is about 4 seconds on my laptop.

That's already pretty good.

epage
1 replies
1d3h

I know you said 4s is good but have you tried changing the linker?

Number of dependencies likely won't affect incremental build times except for linking and replacing it might offer some good gains for incremental builds.

the_duke
0 replies
1d2h

Oh that time is already with mold.

Default linker is quite a bit slower.
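For reference, switching to mold is a small config change; a sketch of what that might look like in `.cargo/config.toml` on Linux, following the pattern mold's own documentation suggests (the target triple and the choice of clang as the linker driver are assumptions for a typical x86_64 setup):

```toml
# .cargo/config.toml -- use clang as the linker driver and tell it
# to invoke mold instead of the default ld/lld.
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]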

nicoburns
0 replies
1d4h

And also processor speed.

mpartel
0 replies
7h9m

To clarify, dependencies significantly affect incremental builds too. Seems loading information about compiled dependencies into the compiler and/or resolving stuff about them can take significant time.

darthrupert
0 replies
9h28m

My project isn't large or even mid-sized, but it has over a hundred dependencies. Building the dependencies certainly takes some time on my raspberry pi 4, but after that initial hit, every change to the project builds a release in about 15 seconds, and a debug build in about 10.

And on my Macbook Air M2, where I actually develop, these things happen fast enough to call them instant. Perhaps I'm a bit spoiled there due to the excellent hardware. As a comparison, a Typescript project I'm working on using a more powerful Macbook always takes about 5-10 seconds to build.

I don't doubt that actually large Rust projects take a long time to build, though, but even these small and mid-sized were rather slow to build a few years ago.

Hamuko
1 replies
1d5h

My compilation times are otherwise great except when I make cross-arch Docker images in GitHub Actions. Then I'm seeing Docker image builds take 60 to 90 minutes. Has made me quite aware of just how many dependencies my projects have in total.

madiele
0 replies
1d3h

If you are using buildx+QEMU for compiling, you're much better off cross-compiling on the native architecture of GitHub Actions and then exporting the result to a build step that emulates the architecture: I went from cross-compilation that took 2 hours to 10 minutes total.

You can follow my Dockerfile of my project as an example on how to do it
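The general shape of that approach, cross-compiling in a stage pinned to the runner's native architecture and only emulating the final assembly stage, might look roughly like this (base images, the aarch64 target triple, and the `myapp` binary name are all illustrative):

```dockerfile
# Build on the runner's native arch; buildx sets $BUILDPLATFORM.
FROM --platform=$BUILDPLATFORM rust:1.73 AS builder
RUN rustup target add aarch64-unknown-linux-gnu
# A cross linker is needed; gcc-aarch64-linux-gnu is one common choice.
RUN apt-get update && apt-get install -y gcc-aarch64-linux-gnu
ENV CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc
WORKDIR /app
COPY . .
RUN cargo build --release --target aarch64-unknown-linux-gnu

# Final stage runs under emulation only to assemble the image,
# so no compilation happens inside QEMU.
FROM debian:bookworm-slim
COPY --from=builder /app/target/aarch64-unknown-linux-gnu/release/myapp /usr/local/bin/myapp
CMD ["myapp"]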

rowanG077
7 replies
1d9h

I'm not sure this is an interesting direction. Isn't Rust compilation already highly parallel at the file level? Sure, if a single file compiles faster, that's nice. But won't this steal resources from the top-level file parallelism? I find it quite concerning that there are no numbers given for that.

jakubadamw
2 replies
1d8h

Today, Rust compilation is parallel in the front-end at the crate level, not file or even module level. Large crates will therefore not benefit from parallelism, and splitting a large crate into smaller ones comes with its own costs too.

This, however, doesn't apply to the code generation phase with LLVM which is already parallelizable at the codegen unit level (the number of codegen units is configurable), but that's the “back-end”, whereas this new parallelization applies to the “front-end” of the compiler.

littlestymaar
1 replies
1d6h

Today, Rust compilation is parallel in the front-end at the crate level, not file or even module level. Large crates will therefore not benefit from parallelism,

Sort of, it's managed at the crate level and not file or module indeed, but the compiler then splits crates into smaller chunks called “codegen units”.

See:https://doc.rust-lang.org/rustc/codegen-options/index.html#c...

This flag controls the maximum number of code generation units the crate is split into. […] When a crate is split into multiple codegen units, LLVM is able to process them in parallel. […] The default value is 16 for non-incremental builds. For incremental builds the default is 256 which allows caching to be more granular.

afdbcreid
0 replies
1d5h

Your parent already addresses this:

This, however, doesn't apply to the code generation phase with LLVM which is already parallelizable at the codegen unit level (the number of codegen units is configurable), but that's the “back-end”, whereas this new parallelization applies to the “front-end” of the compiler.

the_duke
0 replies
1d5h

Did you read the post?

mort96
0 replies
1d6h

Anyone who has tried to compile Rust projects on a system with many cores while running htop could tell you that it spends a whole lot of time using only a few cores. Remember, Rust is not like C where the interface definitions (header files) exist on disk so all files can be compiled completely independently.

mattrighetti
0 replies
1d6h

Existing interprocess parallelism

When you compile a Rust program, Cargo launches multiple rustc processes, compiling multiple crates in parallel.

goldsteinq
0 replies
1d8h

It’s not. Rust compilation is currently parallel at the _crate_ level (i.e. one crate is one translation unit). Speeding up compilation of large crates could lead to nice speedups.

But won't this steal resources from the top-level file parallelism?

It won’t. rustc uses jobserver protocol to coordinate parallelism with cargo, so the total amount of threads doing compilation of the whole project doesn’t exceed CPU count.

codeflo
5 replies
1d3h

Is there a way to make it use the number of CPU cores instead of hardcoding a fixed value into a config file that’s used on different machines?

wonrax
2 replies
1d3h

RUSTFLAGS environment variable, it's mentioned in the post too:

$ RUSTFLAGS="-Z threads=8" cargo build --release

seritools
1 replies
1d3h

That doesn't answer the parent's question.

sodality2
0 replies
1d2h

Using $(nproc) as the value might work (on Linux).
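A sketch of that idea (Linux-only; `-Z threads` is nightly-only, so the actual cargo invocation is shown commented out rather than run):

```shell
# Detect the machine's logical core count at invocation time
# instead of hardcoding a thread count in a shared config file.
THREADS="$(nproc)"
echo "threads=$THREADS"
# Then, on a nightly toolchain:
#   RUSTFLAGS="-Z threads=$THREADS" cargo +nightly build --release
```

On macOS, `sysctl -n hw.ncpu` plays the same role as `nproc`.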

lights0123
0 replies
23h9m

Because it uses the jobserver protocol which Cargo initializes to the number of cores by default, I'd imagine you could set the new flag to some unreasonably high number (e.g. 10000) and it should limit usage to free cores.

epage
0 replies
1d3h

Keep in mind, this is still an experiment and is restricted to nightly, so it's not configured for general use.

I'd expect the stabilized default to be the number of cores. No idea where this effort is at, but at one point they were going to use the jobserver to coordinate across cargo's rustc invocations, at which point cargo's job count will be used, which defaults to the number of cores (and supports counting down from that with negative numbers).

sfink
3 replies
22h6m

Stupid question: does the back-end have to wait for the front-end to do borrow checking? If so, why?

(I'm not suggesting that it's doing anything wrong. I'm just wondering if borrow checking establishes invariants that the back-end depends on for more than correctness, such that you couldn't do speculative back-end work that you would discard on a borrow checking error.)

hobofan
1 replies
21h36m

Well, there is mrustc[0], a Rust compiler that doesn't include a borrow-checker, so it's possible to compile (at least some versions of) Rust without a borrow checker, though it might not result in the most optimized code.

AFAIK there are some optimizations like the infamous `noalias` optimization (which took several tries to get turned on[1]) that use information established during borrow checking.

I'm also not sure what the relation with NLL (non-lexical lifetimes) is, where I would assume you would need at least a primitive borrow-checker to establish some information that the backend might be interested in. Then again, mrustc compiles Rust versions that have NLL features without a borrow-checker, so it's again probably more on the optimization side than being essential.

[0]:https://github.com/thepowersgang/mrustc

[1]:https://stackoverflow.com/a/57259339

pornel
0 replies
20h14m

I don’t think you need lifetimes for the noalias optimization, because it’s a guarantee of &mut, every single one of them, not their exact lifetimes.

Also the slowness is not because of lifetime checking. `cargo check` is very fast compared to `cargo build`.
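A small sketch of that `&mut` guarantee (`add_both` is a made-up illustration, not compiler code): within a function body, two `&mut` parameters can be assumed not to alias, because safe Rust rejects any call that would pass overlapping mutable borrows:

```rust
// The compiler may mark `a` and `b` as `noalias` for LLVM: safe Rust
// cannot produce two live `&mut` references to the same location.
fn add_both(a: &mut i32, b: &mut i32) {
    *a += 1;
    *b += 1; // writes through `b` cannot invalidate the earlier `*a`
}

fn main() {
    let mut x = 1;
    let mut y = 10;
    add_both(&mut x, &mut y);
    assert_eq!((x, y), (2, 11));
    // add_both(&mut x, &mut x); // rejected: cannot borrow `x` mutably twice
}
```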

PoignardAzur
0 replies
21h22m

I don't think it strictly does, no.

sylware
2 replies
1d3h

Come on, could we get a minimal bootstrapping Rust compiler instead of 350MB on x86_64 Linux? Namely, the Rust-written compiler executable to generate a simple .o ELF object and that's it. And bootstrapping: namely a static PIE ELF executable.

Rust has the opportunity to be serious, not lost like gcc.

the8472
1 replies
1d2h

procmacros are loaded as dylibs, so dynamic linking is necessary. if you want a bootstrap procedure take a look at what guix does, they go through mrustc.

sylware
0 replies
23h46m

I don't want all that kludge: a simple machine-code generator, a Rust-written Rust compiler, on x86_64 Linux, a static PIE ELF executable. How hard can it be? I give it a Rust compilation unit and it outputs an ELF relocatable object.

nu11ptr
0 replies
1d4h

Nice! One thing I have noticed is that, unlike the library crate ecosystem, my binary crates by default would be large and monolithic (I now divide them into multiple library crates). This means toward the tail end of compilation not only can compilation not be parallelized, but also that the largest crates tend to be serialized, so this is a very welcome change!

denysvitali
0 replies
1d

Finally!!

boredumb
0 replies
1d2h

Hooray! I used Rust eons ago when even toy examples were fairly slow to compile, and after coming back recently I've started to really love it and have been using it whenever possible without even thinking about compile times. But I do have one project that has grown a bit, and I started getting deja vu: when it takes 5+ seconds to compile a simple change, I start avoiding saving files (which triggers my analyzer) until I've worked some other things out, so that my laptop doesn't turn into an aircraft engine.

Very exciting as this is the one pain point for me personally so any and all progress is much appreciated

Vecr
0 replies
13h42m

Is there any way I can disable the parallel compiler option without rebuilding the compiler? I don't need to use it (I have codegen units set to 1 anyway), and it's possible it's causing an ICE that I don't want to debug. To be clear, I know it's set to 1 thread by default, but I want it all the way off.

ReleaseCandidat
0 replies
1d4h

In multi-threaded mode there are some known bugs, including deadlocks. If compilation hangs, you have probably hit one of them.

Ok, I guess I wait a bit longer before using `-Z threads` ;)