
Mako – fast, production-grade web bundler based on Rust

spankalee
26 replies
5d12h

This supports all kinds of non-platform-standard features that may tie your project to this specific bundler, and will certainly tie it to bundlers in general.

It would be much better to have projects that work without bundlers, that can use them as an optimization step.

ollybee
9 replies
5d9h

Bundlers also tie clients to developers without them realizing it. I work for a webhost, and many people still assume that if they have access to their hosting then they have their "source code". We often see people migrate a site after breaking ties with a developer, only to find that what they have may function, but is unmaintainable.

7bit
5 replies
5d8h

Sounds like a contractual thing, not a bundler thing. The client should always include a clause in the contract that the client must hand all work over after closing the partnership.

satvikpendem
2 replies
5d2h

Yep, just ask for their source code, don't presume that the hosted work is sufficient.

SOLAR_FIELDS
0 replies
4d22h

Yeah, this is no different than receiving a binary rather than the source; bundled code is close enough for this comparison (though it would probably be easier to unbundle code than to decompile a binary, it's still a fair amount of work).

8n4vidtmkvmk
0 replies
4d18h

Client should ask, but ideally this will be written into the contract beforehand. I don't mind sharing source code but not for re-use in other projects.

brailsafe
1 replies
4d21h

> The client should always include a clause in the contract that the client must hand all work over after closing the partnership.

I think you might have meant that the developer should hand over all source material after the agreement has been fulfilled.

7bit
0 replies
2d23h

Indeed, thanks.

iamleppert
0 replies
5d3h

Source code costs extra, everyone knows that! Get the bag!

hypeatei
0 replies
5d5h

The same could be said for compilers, too. Deployed binaries and code have never been the "source" of your app.

evilduck
0 replies
5d3h

You'd struggle to extract a maintainable codebase from a C# or Golang web server after the fact too. As an industry we've been making simple websites on shared hosting for over a generation, clients who ignore the entire world of information about the dangers and pitfalls on this topic are squarely to blame as negligent. It ranks up there with not paying taxes and then acting shocked when the government comes knocking.

While Javascript could potentially be contractually mandated to be written in a way to facilitate production codebase recovery, if you knew enough to ask for that you wouldn't, you'd require them to use your source control and to provide build/deployment scripts instead.

interstice
9 replies
5d12h

As soon as the browser specs catch up to what the bundlers are doing I’d drop them in a heartbeat. Not holding my breath though

cornedor
7 replies
5d11h

You can get pretty far by using import maps. You won't have tree shaking or a single bundled file, but it works pretty well. JSDoc can be used to add types to your project (which can be typechecked using TypeScript). I'm currently building a hobby project using preact, htm and jspm for packages. It's pretty nice to just start building without starting a build tool, waiting for it to finish, making sure it hasn't crashed, etc. But indeed, I won't use this for production.
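
For example, a minimal setup with an import map looks something like this (a sketch - the CDN URLs and versions are illustrative):

  <script type="importmap">
  {
    "imports": {
      "preact": "https://esm.sh/preact@10",
      "htm": "https://esm.sh/htm@3"
    }
  }
  </script>
  <script type="module">
    // bare specifiers now resolve through the map, no bundler needed
    import { h, render } from "preact";
  </script>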

The only thing I'm still missing is an offline JSPM/esm.sh.

spankalee
3 replies
5d

You do have tree shaking: the browser only loads the modules that are imported. Only import what you use (and don't use barrel files) and you're golden.
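
For example (illustrative paths):

  // a barrel file (utils/index.js) re-exports everything, so importing
  // through it makes the browser fetch every module it touches:
  // import { formatDate } from "./utils/index.js";

  // importing the module directly fetches only that one file:
  import { formatDate } from "./utils/format-date.js";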

curtisblaine
2 replies
4d23h

If you don't have any external library (e.g. npm) dependencies, you're golden. Unfortunately, this means that you now have to write all your code from scratch, which is OK if you're writing a very light website, but unsustainable if you do anything non-trivial.

spankalee
1 replies
4d23h

Plenty of npm dependencies are published as browser-compatible standard JS modules.

curtisblaine
0 replies
4d19h

You mean EcmaScript Modules? The situation is quite complicated. Some libraries don't publish ESM at all (React doesn't iirc), and the ones that do often publish CJS and ESM side by side. In that case, you need to read the package.json and decide which file to use, which is not trivial (see Conditional Exports for example: https://nodejs.org/api/packages.html#conditional-exports). In almost any non-trivial case you need to write tooling to make it work, so you might as well use a bundler.
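
For reference, conditional exports in a package.json look something like this (paths illustrative); tooling has to pick the right entry per environment:

  {
    "name": "some-lib",
    "exports": {
      ".": {
        "import": "./dist/index.mjs",
        "require": "./dist/index.cjs",
        "default": "./dist/index.mjs"
      }
    }
  }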

rty32
1 replies
5d6h

It works well for a small website. For anything that requires more than a few dependencies, the package management is hell and load time will be insufferable. Also, not everything you grab from npm can just run in the browser even if written in ESM -- things get complicated quickly.

threetonesun
0 replies
4d17h

Websites using lots of JavaScript were built pre-npm and pre-bundler, and the load times were not insufferable. If anything, today it would all be easier.

meiraleal
0 replies
5d8h

For offline esm.sh you can use the service worker cache, no?
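
Something like this should work (a sketch; the cache name is illustrative):

  // sw.js: serve CDN modules cache-first so they keep working offline
  self.addEventListener("fetch", (event) => {
    const url = new URL(event.request.url);
    if (url.hostname !== "esm.sh") return;
    event.respondWith(
      caches.open("esm-cache").then(async (cache) => {
        const cached = await cache.match(event.request);
        if (cached) return cached;
        const response = await fetch(event.request);
        cache.put(event.request, response.clone());
        return response;
      })
    );
  });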

Also, why not use this config in production? HTTP/2 should give the same performance for many small files as for one big bundle, and it's much better for caching.

spankalee
0 replies
5d

The browser specs largely have.

CSS has advanced enough that I haven't used Less or Sass in many years. Modules make loading easy. Import maps let you use bare module specifiers (or you could use a simple transform in a dev server). CSS modules let you import CSS into JavaScript.

I never use a bundler during development.

crabmusket
5 replies
5d5h

Imagine if you could just scp your source code tree onto a CDN and it would automatically bundle it based on how clients import it.

shepherdjerred
2 replies
5d1h

This sounds similar to https://esm.sh/

curtisblaine
0 replies
5d

Unfortunately singleton peer dependencies (like react) are quite complicated with esm.sh. When esm.sh rewrites a module to import react from the CDN, it kinda "decides" which version of react it is at the moment the module is built on the CDN for the first time. That's why react is "special" and gets a stable build in esm.sh (essentially pointing to a fixed version no matter which version you specify): to avoid the dreaded "two copies of react" error.

crabmusket
0 replies
4d21h

Yep definitely. There are a few of these like JSPM that help convert libraries into browser-importable URLs.

ericyd
1 replies
5d4h

Isn't this the idea behind CI/CD workflows?

crabmusket
0 replies
4d21h

Yes, the difference is I'd like to offload the work of having to think about optimising code delivery.

benrutter
23 replies
5d9h

I don't work in web, and possibly live under a rock. I'm a little confused about what bundlers actually do.

I'd sort of assumed it was a TypeScript build thing before; Mako's page gives me enough info to realise I'm wrong, but it seems to assume people are working with some base knowledge I don't have.

Any pointers to information on exactly what bundlers do? The emphasis on speed makes it sound like it's doing a whole bunch of stuff; what are the bottlenecks? Package version resolution?

throwAGIway
6 replies
5d8h

Bundlers take many - usually at least hundreds, often tens of thousands - individual source files (modules) and combine them into one or few files. During that, they also perform minification, dead code elimination and tree shaking (removal of unused module exports).

It's orthogonal to TypeScript - the bundler will invoke a TS compiler during the process and also functions as a dev server, but that's just for nicer DX.

Package version resolution is done by the package manager, not the bundler.

benrutter
5 replies
5d4h

When you say dead code elimination, do you mean that if I import some huge library just to use a single function, the bundler will shimmy things about so only the single function is included in the package and not the big library?

If so, that's amazingly helpful, I'm mostly over in python data land and I wish that existed for applications, although admittedly there's less need.

throwAGIway
2 replies
5d4h

Yes, exactly. Pulling a huge npm dependency is usually not a problem if they didn't go out of their way to make it super hard to analyze at build time.

This is tree shaking though; dead code elimination means it will find code that isn't used at all and remove it - for example, you might have if (DEV) {...}, and if DEV is statically false at build time, the whole if is removed.

So first it performs dead code elimination, then it removes unused imports, and then it calculates what is actually needed for your imports and removes everything else.
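
A small before/after sketch (DEV stands for any constant the bundler is told about at build time):

  // source
  import { used, unused } from "./lib.js";

  if (DEV) {
    console.log("debug only:", unused());
  }
  export const result = used();

  // after DEV is replaced with false, the if-block is provably dead
  // and eliminated; tree shaking then drops `unused` from the bundle
  // because nothing references it anymore:
  import { used } from "./lib.js";
  export const result = used();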

benrutter
1 replies
4d10h

That's very cool! I already knew that this was something compilers did, but somehow never even considered you might do the same for an interpreted language like js.

Makes me wonder why some js bundles are still so big, am I over hyping what dead code elimination and tree shaking might achieve? Do some teams just not use it?

Either way, I've come away from my question with a pretty big reading list. This is exactly what I love about HN.

tubthumper8
0 replies
4d9h

I think it's not so much about interpreted vs. compiled but more about the delivery of client code to the user - every time any user visits any website the browser may have to download the code (if not cached), then parse it, then execute. The less code that needs to be shipped, the faster time to interactivity and also less bandwidth usage.

Some bundles may still be big if teams don't use it, and some libraries are not structured in a way that facilitates dead code elimination.

Consider libraries that use `class`, such as moment.js, where all functionality is made available as methods on the Moment class. If you only use one method, you still have to bring in the whole class. Whereas if a library is structured as free functions and you only use one, then only that function gets included and the rest is eliminated.
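
Illustrated with hypothetical libraries:

  // class-based: using one method still pulls in the entire class
  import { DateLib } from "class-style-lib";
  DateLib.format(new Date()); // every other method ships too

  // function-based: unused functions get shaken out
  import { format } from "function-style-lib";
  format(new Date()); // parse(), diff(), etc. never reach the bundle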

ahzhou
1 replies
5d3h

Conditionally yes. There are many libraries that cannot be tree shaken for various reasons. Libraries typically need to stick to a subset of full JS to ensure that the code can be statically analyzed.

throwAGIway
0 replies
5d3h

Basically the only forbidden thing is dynamically calculating import paths, or dynamically generating the module.exports object.
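
For example, both of these defeat static analysis (illustrative):

  // the bundler can't know which file this resolves to at build time
  const plugin = await import("./plugins/" + name + ".js");

  // CommonJS: exports built dynamically, so nothing is provably unused
  for (const key of Object.keys(handlers)) {
    module.exports[key] = handlers[key];
  }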

brabel
6 replies
5d8h

Are you familiar with Java?

If so, a web bundler is like a build tool which creates a single fat jar from all your source code and dependencies, so all you have to "deploy" is a single file... except the fat jar is just a (usually minified) js file (and sometimes other resources, like a css output file that is the "bundled" version of multiple input CSS files, and other formats that "compile" to CSS, like SCSS [1], which used to be common because CSS lacked lots of features, variables for example, but today is not needed as much).

Without a bundler, when you write your application in multiple JS files that use npm dependencies (99.9% of web developers), how do you get the HTML to include links to everything? It's a bit tricky to do by hand, so you get a bundler to take one or more "entry points" and then anything that it refers to gets "bundled" together in a single output file that gets minified and "tree-shaken" (dead code elimination, i.e if you don't use some functions of a lib you imported, those functions are removed from the output).

Bundlers also process the JS code to replace stuff like CommonJS module imports/exports with ESM (the now standard module system that browsers support) and may even translate usages of newer features to code that uses old, less convenient APIs (so that your code runs in older browsers). And of course, if you're writing code in Typescript (or another language that compiles down to JS) your bundler may automatically "compile" that to JS as well.

I've been learning a lot about this because I am writing a project that is built on top of esbuild [2], a web bundler written in Go (I believe Vite uses it, and Vite is included in the benchmarks in this post). It's extremely fast - so fast I don't know why anyone would bother writing something in Rust to go even faster; I get all my code compiled in a few milliseconds with esbuild!
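
For a taste, a whole production build with esbuild's JS API can be as little as this (a sketch; file names are illustrative):

  import * as esbuild from "esbuild";

  await esbuild.build({
    entryPoints: ["src/app.ts"], // entry point(s) to crawl imports from
    bundle: true,                // follow imports and combine the modules
    minify: true,
    sourcemap: true,
    target: ["es2018"],          // down-level newer syntax for old browsers
    outfile: "dist/app.js",
  });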

Hope that helps.

[1] https://sass-lang.com/documentation/syntax/

[2] https://esbuild.github.io/

bovermyer
2 replies
5d7h

I'll admit to being a little outdated on front-end design evolution. Sass/SCSS is no longer needed? Does CSS support nested blocks now?

dartos
0 replies
5d6h

Very recent addition, but yes!

At long last

iJohnDoe
0 replies
5d8h

Thank you. Extremely helpful.

benrutter
0 replies
5d4h

Thanks! I really appreciate the detailed explanation- makes a whole lot of sense.

alex_suzuki
0 replies
5d5h

I already knew what bundlers do, but I’ll just say thank you anyway for writing such an approachable explanation. I might refer to it in the future when someone asks ME what a bundler does :-)

flohofwoe
4 replies
5d8h

Bundling is the equivalent of static linking, typically combined with dead code elimination (which is called "tree shaking" in the web world) plus optionally other optimizations and code transformations.

throwAGIway
3 replies
5d7h

Dead code elimination is related to but distinct from tree shaking - it also means that unused code branches get removed; for example, constants like NODE_ENV get replaced with a static value, and if you have a static condition that always evaluates to true, the else branch is removed.

flohofwoe
2 replies
5d6h

In my book that's all covered by the term 'dead code elimination', e.g. removing (or not including in the first place) any code that can be statically proven to be unreachable at runtime. Some JS minifiers (like Google's Closure) can do the same thing in Javascript on the AST level (AFAIK Closure essentially breaks the input code down into an AST, then does static control flow analysis on the AST, removes any unreachable parts, and finally compiles the AST back into minified Javascript). Tree-shaking without this static control flow analysis doesn't make much sense IMHO since it wouldn't be able to remove things like a dynamic import inside an if that always resolves to true or false.

throwAGIway
0 replies
5d5h

Yep, that's how it works - you first perform dead code elimination and then tree shaking exactly because it wouldn't remove everything otherwise. Agreed that you need both done one after another in most cases; however you can usually disable either one in bundler configuration and it's a separate step.

pshu
0 replies
5d3h

https://makojs.dev/blog/mako-tree-shaking explains how Mako does its tree shaking, but in Chinese.

My two cents: the tree shaking is more focused on removing unused exports in ES modules at the top level. It's a mix of dead code elimination and link-time optimization.

aaaaaaabbbbbb
1 replies
5d6h

If you are also looking for broader context beyond what a bundler is, I have written a broader exposition on frontend builds here, which may be useful in understanding how bundlers compare to adjacent build tools: https://sunsetglow.net/posts/frontend-build-systems.html.

benrutter
0 replies
5d4h

Thanks, that's actually exactly what I was after without realising it!

darby_nine
0 replies
5d8h

I've always liked the analogy of a compiler/linker for web assets, personally.

acemarke
0 replies
5d

Yes, here's a few excellent articles that explain what problems build tools solve and why they exist:

- https://sunsetglow.net/posts/frontend-build-systems.html

- https://www.innoq.com/en/articles/2021/12/what-does-a-bundle...

- https://www.swyx.io/jobs-of-js-build-tools

Loosely put, they're the equivalent of all of `gcc` or `rustc`: compile the source code, run type checking, output object files, transform into the final combined executable output format.

pjmlp
15 replies
5d11h

Everyone is making the point that using JavaScript on the server was a BIG mistake, with these ongoing rewrites.

We already had our bundlers in Java and .NET land, before nodejs came to be, and life was good.

berkes
11 replies
5d11h

The fact that people keep releasing new bundlers, minifiers, transpilers, package managers and so on, for JavaScript is a loud and clear warning that something is amiss.

People (re)write such tools either for fun or to solve a problem (or best: both). Apparently after so many rewrites the problems haven't been solved. To me, this indicates fundamental problems. I'm not familiar enough with the ecosystem to know what those would be, let alone how to truly solve them.

But the very fact that we see new builders, transpilers, and bundlers every few months is enough to conclude that we aren't solving the problems at the correct level, or that maybe they cannot be solved at all. Because otherwise one of the many attempts would've solved the problem and "everyone" would be using that.

samaltmanfried
6 replies
5d10h

The situation with the tooling constantly changing isn't nearly as bad as the front-end frameworks themselves. I've been updating my knowledge of front-end, and it's an absolute shambles. The official React documentation (https://react.dev/learn/start-a-new-react-project) is telling me that in order to use their framework, I need to use another framework to solve (quote) "common problems such as code-splitting, routing, data fetching, and generating HTML"... At their suggestion I've picked NextJS, which is a "full-stack" React framework. This means that it has its own back-end which does most of the heavy lifting. So not only will our company have a traditional back-end, we'll also have a BFF (another thing the kids nowadays want), and a back-end that is actually our front-end application. At this point I've forgotten what problem we set out to solve.

NextJS' documentation is also *terrible*. This situation is made all the worse by any material online about NextJS that's more than 3 months old being totally inapplicable because the framework changes so often.

meiraleal
4 replies
5d7h

Nextjs is the trojan horse sent to destroy React and it worked

samaltmanfried
2 replies
5d6h

Is this how other people feel about NextJS? I've been trying to keep an open mind about it, but its entire design seems so antithetical to what I'm trying to accomplish. Is there a better mainstream alternative? From what I've seen NextJS is pretty commonly used.

meiraleal
1 replies
5d4h

The mainstream alternative is still to not have a "backend-for-the-frontend". If you use something like Rails, django, nodejs, use React connected to them. Or directly to something like supabase. NextJS is the extra complexity nobody needs.

It is marketed as the solution to slow starts, but React is slow, so the solution is terribly over-engineered.

A much better fix is to remove React and use something that is already fast, like SolidJS or Lit. I have seen much better UI kits in Lit than in React, and in the end it is just JS, so the same people who can code React can code Lit and SolidJS.

samaltmanfried
0 replies
4d19h

Thank you for the suggestions. Unfortunately, the only reason I'm writing React at all is because so many companies want React experience, and I figure I'd better stay up to date. If I was given the opportunity to choose technologies for myself all of the time, I'd steer clear of React at this point!

anonzzzies
0 replies
5d5h

Yep, it’s terrible and everyone is going for it.

aaaaaaabbbbbb
0 replies
5d6h

Next.js et al. provide a set of opinionated packages designed to enable a specific paradigm. For Next.js, that's server-side rendering. For Remix, that's progressive enhancement.

If you are happy with client-side rendering and do not desire React on the server, there is not a strong reason to use Next.js; it introduces complexity and churn.

dgb23
1 replies
5d9h

Esbuild did solve a major problem, which was very slow builds.

Vite wraps esbuild. Not sure what it provides itself.

Then there came several specialized (Rust) tools. For the same reason esbuild was made.

I think ultimately they try to solve the same issue:

JS is supposed to be a productivity gain over compiled languages. But with ultra slow builds that goes out of the window.

berkes
0 replies
3d7h

> JS is supposed to be a productivity gain over compiled languages. But with ultra slow builds that goes out of the window.

But what business problem are we solving? Why do we need compiles, transpiles and so on, in a dynamic language, in the first place¹? And if so, is compiling the right solution to that problem?

My point was mostly to question if we are solving the right problem. And if the direction in which we are solving it, is the right one. After some 20+ years we still haven't converged around a single solution. But instead we keep firing out "new" solutions and solutions to the problems that those solutions then introduce on an almost weekly basis.

To me that shows we are either simply looking in the wrong place, or have a much deeper, fundamental problem that simply cannot be solved. And should probably either stop looking for the solution or just abandon the whole stack.

¹ I'm not looking for an answer to this question. I know several reasons why we build, compile, transpile, minify and whatnot in JS. But all those are also solutions to deeper problems. Problems that can be solved in several ways, only one of which is "compile pipelines".

pjmlp
0 replies
5d10h

Definitely, otherwise Netscape LiveWire would have been a huge commercial success.

goosejuice
0 replies
5d3h

I believe we see such diversity because of the unique environment in which JS has come to exist. Folks who are writing all these tools are trying to solve problems from the outside in for their small corner of an incredibly large ecosystem.

It's a snowball rolling down an infinitely long mountain. I believe this may never settle.

Mainstream browsers have already coalesced on a no-build solution, but it's profitable, by fame or fortune, to continue building solutions that require bundle and compile steps. Then others use those because off-the-shelf libs require them and save time and money.

zoul
2 replies
5d10h

Having the same language on the client and on the server is a huge productivity booster for me. I can’t imagine writing so many things twice again. Have you tried it?

pjmlp
0 replies
5d9h

Unfortunately I have to, thanks to the Next.js- and React-only SDKs that many SaaS products now have as their extension mechanism.

Also if it is such a great experience, people wouldn't be rewriting the node ecosystem in Dart, Go and Rust.

dgb23
0 replies
5d8h

It’s not necessary to use the same language for that.

In many cases you can achieve the same with a clearer separation, with data driven methods and by generally not running so much JS.

The typical example is input validation. You want immediate feedback on the client, but obviously you validate on the server.

But instead of running literally the same specific code twice, you can use json-schema or your own general data description of what valid input is. You move the specifics from code into data.
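
For example, with JSON Schema and a validator like Ajv, the same description runs in the browser and on the server (a sketch; the schema is illustrative):

  import Ajv from "ajv";

  // one declarative description of valid input, shared as data
  const signupSchema = {
    type: "object",
    properties: {
      email: { type: "string", minLength: 3 },
      age: { type: "integer", minimum: 13 },
    },
    required: ["email"],
    additionalProperties: false,
  };

  const validate = new Ajv().compile(signupSchema);

  if (!validate({ email: "hi", age: 7 })) {
    console.log(validate.errors); // same errors client- and server-side
  }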

phplovesong
14 replies
5d12h

How does it compare to esbuild or swc? It's good we have alternatives, and I'm still mentally scarred from the JavaScript ecosystem, where almost everything is slow and buggy. But when you compare to an already-native tool (like esbuild) you start getting diminishing returns.

megaman821
5 replies
5d3h

...or Turbopack or RSpack or Rolldown? Too many choices. I will be sitting this round out until a winner emerges.

pshu
4 replies
5d3h

according the current situation in bundlers written in JS,there is no "really" winner in my opinion。 webpack or rullup,which one is winner is a very personal thought。 So i think there maybe some similar situation in bundlers written in Rust.

satvikpendem
2 replies
5d2h

Are you Japanese? Those period symbols are interesting.

pshu
1 replies
5d2h

'。' is a Chinese full stop, equivalent to a period in English

satvikpendem
0 replies
4d21h

Ah that's cool. So it includes spacing inherently in the character it looks like, rather than English's ". " which are two characters.

8n4vidtmkvmk
0 replies
4d18h

Webpack for web apps, rollup for libraries. Very much depends on what you're doing; the tools usually aren't good at all of them. There are 1 or 2 other use cases I'm forgetting.

throwAGIway
4 replies
5d8h

SWC doesn't bundle at all. Esbuild is a pretty good bundler but works well only if your code and dependencies use ESM, it's not as good as other options with CommonJS.

oefrha
1 replies
5d7h

That's not the biggest problem with esbuild. Esbuild has poor support for code splitting (it's the first priority on their roadmap [1]) and a limited plugin interface, which makes it a poor choice for complex projects. These are the reasons that Vite, for instance, can't use esbuild for production builds.

While I haven’t tried Mako, it seems to have support for advanced code splitting[2]. No idea how powerful its plugin system is.

[1] https://esbuild.github.io/faq/#upcoming-roadmap

[2] https://makojs.dev/docs/features#code-splitting

kurtextrem
0 replies
5d3h

Also, the Vite team, in collaboration with a few others, is building https://rolldown.rs/ to replace esbuild and rollup in Vite. Its goal is to be faster than esbuild, with extended chunking options and so on.

mikojan
1 replies
4d8h

> Esbuild [...] works well only if your code and dependencies use ESM

I cannot attest to that. We are using Esbuild plus CJS at $DAYJOB no problem. Why would that be an issue?

throwAGIway
0 replies
4d7h

It's an issue because CommonJS allows stuff that's forbidden in static ESM imports/exports, and it was normal to use it. Newer code is usually fine, but there are many older backend libraries that can cause issues with Esbuild. Webpack had to learn how to deal with it because it existed at the time CommonJS was most popular; Esbuild didn't.

tinco
1 replies
5d8h

This is built on swc, and they compare themselves to vite, which is built on esbuild. So the answer to your question is that they claim to be roughly twice as fast as esbuild (-based bundlers) in the benchmark in this article.

kurtextrem
0 replies
5d3h

I'm not entirely sure we can really tell anything about esbuild from that comparison, as vite's production build time is 1300ms (which uses rollup), while its dev startup time is 1100ms (which uses esbuild to prebundle). It seems like vite itself has overhead.

The only bench I'm aware of was presented in November 2023: https://x.com/boshen_c/status/1719596594985681275?t=x8FaB9Aw..., where esbuild was faster.

yuzuquat
0 replies
5d11h

looking at the docs, this uses swc under the hood

cjpearson
13 replies
5d11h

The old joke is that there's a new JavaScript framework every month. That's not really true — we've had the same big three for a decade — but there has been an explosion of new bundlers: vite, esbuild, turbopack, farm, swc, rome/biome, rspack, rolldown, mako. And of course plenty of projects are still using rollup and webpack.

Some competition is a good thing, and it seems to have led to a drive for performance in all these projects which I'm not complaining about, but I wonder if more could be gained by working together. Does every major company or framework need their own bundler?

bilekas
5 replies
5d10h

Haven't you heard? Rebuilding everything in Rust is the new meta. To be quite honest, call me old fashioned, but the fact that we need so many bundlers that we are considering which are more performant is a symptom, and not a blessing.

norman784
2 replies
5d9h

For me, the fact that we need a bundler at all is the underlying issue. I would love for bundlers to become first-class citizens and ship with the JavaScript runtime, similar to how Bun and, to some degree, Deno do (AFAIK their bundlers are intended to bundle apps for the server, not the browser).

vmfunction
1 replies
5d9h

Or change the specs of ES/JS to introduce types. That would eliminate the use of many projects, and even TypeScript.

Seems like something to bring to WinterCG? [1]

[1] https://wintercg.org/

mickael-kerjean
0 replies
5d9h

That was the initial promise of Dart when it was first released, but somehow it never really got there.

LunaSea
1 replies
5d10h

Aren't we doing the same with compilers?

bilekas
0 replies
5d10h

I would say not really; at least compilers are an essential component of a compiled language, in my eyes. JavaScript is transpiled, and I know you can say the same for all compiled languages in a roundabout way.

Thinking about it only recently: Go fits in nicely with fast compile times for 'builders' - esbuild comes to mind. But Rust... crazy.

koito17
4 replies
5d9h

> The old joke is that there's a new JavaScript framework every month. That's not really true — we've had the same big three for a decade

Yup. I know a few people who were using React 10 years ago and still use it today. What has changed frequently is the tooling. e.g. Bower going away in favor of NPM; Gulp/Grunt going away in favor of Webpack, which is slowly going away in favor of Vite; CoffeeScript going away in favor of TypeScript; AMD/CJS/UMD going away in favor of ES modules, and so on.

ClojureScript has a great deal of stability in both the language itself and tooling, but nowadays I can't give up the developer experience of TypeScript and Vite. The churn in the tooling of the JS/TS ecosystem is wild, but since about 2021 I have found ESM + TypeScript + Vite to provide fast compile times, fearless refactoring, and a similar level of hot-reloading that I enjoyed in Clojure(Script). Can't say I miss Webpack, though!

ReleaseCandidat
3 replies
5d8h

> ClojureScript has a great deal of stability in both the language itself and tooling

Does it still use Google's Closure (they've chosen it just for the name, right?) compiler? Is that still supported by Google?

koito17
2 replies
4d23h

Major parts of the compiler have been unchanged since its original public release. It still uses Google Closure Compiler (GCC), but the community understands that was the wrong choice of technology in retrospect. The compiler is still actively developed and used internally by Google. What is going away is the Google Closure Library (GCL), since modern JavaScript now has most of what GCL offered, and it's become easier to consume third party libraries that offer the rest of GCL's functionality.

The reason ClojureScript has not moved away from GCC has to do with the fact it performs optimizations -- like inlining, peephole ops, object pruning, etc. -- that ensure ClojureScript's compiler output becomes relatively fast JavaScript code. The closest alternative to GCC's full-program optimization would be Uglify-JS, but it doesn't perform nearly as many optimizations as GCC does.

For a concrete example, consider the following code. I am intentionally using raw JS values so that the JS output is minimal and can be pasted easily.

  (ns cljs.user)

  (defn f [x]
    (let [foo 42
          bar (- foo x)
          baz (+ foo bar)]
      #js {:bar bar
           :baz baz}))

  (defn g [x]
    (let [result (f x)]
      (when (pos? (.-bar result))
        (js/console.log "It works"))))

  (g 0)
The ClojureScript compiler will compile this code to something like this

  var cljs = cljs || {};
  cljs.user = cljs.user || {};
  cljs.user.f = (function cljs$user$f(x){
    var foo = (42);
    var bar = (foo - x);
    var baz = (foo + bar);
    return ({"bar": bar, "baz": baz});
  });
  cljs.user.g = (function cljs$user$g(x){
    var result = cljs.user.f.call(null,x);
    if((result.bar > (0))){
      return console.log("It works");
    } else {
      return null;
    }
  });
  cljs.user.g.call(null,(0));
Paste this into `npx google-closure-compiler -O ADVANCED` and the output is simply

  console.log("It works");
On the other hand, `npx uglify-js --compress unsafe` gives us

  var cljs=cljs||{};cljs.user=cljs.user||{},cljs.user.f=function(x){x=42-x;return{bar:x,baz:42+x}},cljs.user.g=function(x){return 0<cljs.user.f.call(null,x).bar?console.log("It works"):null},cljs.user.g.call(null,0);
This is quite a bit larger, and possibly slower, than the output of GCC.

ReleaseCandidat
1 replies
3d1h

Thanks for your reply! Btw. my question was really out of interest and not to criticise ClojureScript.

koito17
0 replies
2d20h

You're welcome. I am not sure why you were downvoted, but I think your question was valid.

My response can be summarized as follows:

- Google indeed uses and supports the compiler

- Google is moving away from the library that shipped with their compiler

- ClojureScript made a wrong bet on technology

- The design of ClojureScript necessitates the full-program optimization of Google's compiler

norman784
0 replies
5d9h

With all of them being written in Rust, they can reuse each other's packages (crates). AFAIK the Vite team is writing Rolldown (to replace Rollup) and they are using crates from the oxc project; not sure about the others.

emadabdulrahim
0 replies
5d2h

You forgot Parcel, which is working on v3 https://parceljs.org/

rty32
12 replies
5d6h

Every bundler these days boasts "Rust" and "fast". What people really want is webpack feature parity. For a large enough organization with complex use cases and resource management, I have yet to see a real webpack equivalent.

(Meanwhile, swc parser can't yet pass all tests in test262 according to their website: https://docs.rs/swc_ecma_parser/latest/swc_ecma_parser/ )

rtpg
8 replies
5d5h

How's esbuild? I've yet to hit something I had in webpack that isn't a couple-line plugin away from being present in esbuild.
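
For example, a webpack-style path alias is only a few lines as an esbuild plugin (a sketch; the "@app/" prefix and paths are illustrative, and imports are assumed to carry their file extension):

  import * as esbuild from "esbuild";
  import path from "node:path";

  const aliasPlugin = {
    name: "alias",
    setup(build) {
      // map "@app/x.js" to "<cwd>/src/x.js", like webpack's resolve.alias
      build.onResolve({ filter: /^@app\// }, (args) => ({
        path: path.join(process.cwd(), "src", args.path.slice(5)),
      }));
    },
  };

  await esbuild.build({
    entryPoints: ["src/main.js"],
    bundle: true,
    plugins: [aliasPlugin],
    outfile: "dist/main.js",
  });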

jampekka
7 replies
5d5h

Interestingly, esbuild isn't included in their benchmarks.

Esbuild is the only current build tool that keeps one sane. The serve mode is excellent and elegant, with no brittle, constantly breaking hacks like HMR or file watching.

Sadly, configuring the serve mode in particular is a bit badly documented, and it's not usable via CLI flags if one needs plugins.

inbx0
5 replies
5d4h

To each their own. I can't imagine doing UI development without HMR anymore. Makes it so much faster to iterate.

chuckadams
2 replies
5d3h

That's what Vite is for: all the zip of esbuild plus HMR that works. Usually works anyway... looking at one project where I never have to reload, and another that I'm doing that every ten saves or so. Much sloppier legacy sources in the second tho, Vite really pays off when you write more modern code from the start.

jampekka
1 replies
5d2h

I'd rather hit Ctrl-R on each iteration than worry about whether I've hit the 1/10 buggy state on every change. With esbuild the reload is practically instant.

chuckadams
0 replies
4d22h

It was pretty infuriating wondering why my changes weren't taking effect, but like I said, it only hit me for that one project, and now I'm ready for it (I had to manually refresh every time beforehand anyway). It's a legacy codebase, I'm already used to intermittent nonsense like that. HMR never fails on the other project -- but now I've jinxed it for sure!

jampekka
1 replies
5d2h

I don't find hitting Ctrl-R or F5 to be much of a hindrance to iteration. Especially when you don't have to worry whether the system has been left in some incorrect state by HMR.

8n4vidtmkvmk
0 replies
4d18h

You mustn't be working on forms then. Or anything state heavy.

zem
0 replies
4d17h

> Interestingly, esbuild isn't included in their benchmarks.

i noticed, and was very surprised by that. surely esbuild is the "standard" fast bundler these days; everyone knows webpack is slow so doing better, even significantly better, than it isn't a very large claim.

jokethrowaway
0 replies
5d5h

What features do they want? In my experience people always welcome webpack alternatives, and not having CPU starvation issues or having to wait minutes for webpack to finish.

The problem is that we already have n-thousand alternatives, so it's a slightly different setup every time - but generally, as long as it's not webpack, it's all good.

Recently someone disabled turbopack on a next.js project because one new dependency wasn't supported, and the developers started complaining right away that the app was unbearably slow. The team couldn't work on latest for a week; they were just reverting the latest changes that broke turbopack support, working, and then pushing.

dpoljak
0 replies
5d6h

Isn't rspack[0] trying to handle full feature parity? I've just come across it earlier today so I'm not an expert but I'm looking forward to the full 1.0 release

[0] https://www.rspack.dev/

chaosprint
0 replies
5d5h

no one mentions rolldown by the author of vue and vitejs?

berkes
11 replies
5d11h

I was confused by the "Rust" in the title and presumed it was an alternative builder to compile Rust for the web (wasm?). It's "yet another" bundler for JavaScript. Built in Rust.

padjo
10 replies
5d9h

If they didn’t tell us it was built in Rust how would we ever know how smart the developers are?

michaelmior
5 replies
5d7h

Personally, I appreciate knowing when something is written in Rust. I know it is very likely I can easily install it and try it out immediately and that it is likely faster than any non-native tool I'm currently using. However, I do find "based on Rust" instead of "written in Rust" to be an odd choice of terms.

necovek
4 replies
5d5h

Just looking at their benchmarks, it's not particularly fast. esbuild looks much better in benchmarks, but it's not written in Rust. It seems they wanted a tool in Rust just because (experience on the team, preference for the language...), and then only compared against those.

As for the language "based on Rust", it's likely bad wording due to them not being native English speakers.

IshKebab
3 replies
4d23h

esbuild is written in Go, which has similar "probably quite fast and easy to install" properties to Rust.

Compare that to the expected experience if it was written in C++ or JavaScript or Python or Java or ... All of those are either likely to be slow or painful to use.

pas
0 replies
4d6h

> A native executable includes only ... the language runtime, and ...

How small is that compared to the JRE? Also I guess this means the program cannot load arbitrary classes?

michaelmior
0 replies
4d2h

One of the reasons few people do that is because the build process becomes much more complicated. It's also much more complicated to do any sort of dynamic loading, which is not terribly uncommon.

jokethrowaway
3 replies
5d5h

Yeah, that's how I ended up using prisma... until I realised they didn't have joins

All this aside, knowing something is in Rust tells me:

- It's fast

- It's maintainable (imagine the same project but in C)

hu3
2 replies
5d

prisma not having SQL JOINs for a long time is how I know I should just ship it when it comes to my projects.

efilife
1 replies
4d20h

It's probably just me, but I read this sentence over 3 times and I still don't understand what you meant. Care to explain?

hu3
0 replies
4d19h

Sorry, I could have been much clearer. Was typing amidst cooking.

I meant that Prisma got so much traction despite not supporting JOINs early on.

And then there's me postponing projects because 80/20 doesn't cut it for me. I need to get each and every feature completed before launching.

ecmascript
9 replies
5d11h

Can't people figure out some other tooling besides bundlers? I mean, how many do we really need?

It's probably fine, but so are all the others as well. The authors have probably spent a fair amount of time on this project, so I don't want to be negative, but it's just hard to be excited when it brings nothing new to the table.

Why should I use this over Vite or esbuild? Because it's written in Rust? I don't understand why that even matters. Even if it was 10 times faster I wouldn't use it, because Vite is fast enough and has all the plugins I would ever need.

rty32
1 replies
5d6h

None of those tools you quoted are production ready, based on my investigation, in the sense that if you manage the JS infrastructure of a company of 2000 developers, you would stick with webpack. Lots of Rust-based tooling is still half baked and missing things here and there, so much so that you wish these people would work together to create one (or at most two) tools that are comparable to webpack.

dsff3f3f3f
0 replies
5d3h

> None of those tools you quoted are production ready, based on my investigation

This is very true and almost all of them are taking far longer to develop than they initially thought. swc/turbopack is being pushed by Vercel and it has been a huge ongoing disaster.

ecmascript
0 replies
5d9h

Yeah okay, but that's not the reason why people write it in the title. They write it in the title because they know that many engineers like Rust and think people will immediately be drawn to it.

But the language itself is not a goal, or at least shouldn't be, IMO. Thus it has the opposite effect on me; I don't care what language my bundler is written in.

If I did, it still wouldn't have any competitive advantage since, as you point out, Vite will soon also be based on Rust.

jack_riminton
3 replies
5d10h

You'll get downvoted but I completely agree, it seems rewriting things in Rust and tinkering with bundlers is the new in-vogue thing to do. Lord knows why

timrichard
2 replies
5d8h

I didn't enjoy the Rust hype on here in years past, but I'm always glad of any better tooling. Just an example from the other week... I swapped out NVM for FNM (Rust) and now I don't have to put up with performance issues, especially slow shell startup times.

ecmascript
1 replies
5d8h

Just me being curious since I have used nvm for years without any issues. What do you mean by slow shell startup times? In what way do you use nvm in order to experience any slowness?

timrichard
0 replies
5d7h

I followed the standard nvm install process, to get it loaded from my .zshrc

I noticed a second or two of lag between launching the terminal and getting a shell prompt. Commenting out the nvm load as a test removed the delay. I installed fnm, aliased it to nvm, and everything is snappy. It's also nicer if you use tooling to 'nvm use' when changing into a project directory.

There are a few issue threads such as this one : https://github.com/nvm-sh/nvm/issues/2724

BTW, this blog post was great for finding the culprit if there is zsh startup latency : https://stevenvanbael.com/profiling-zsh-startup

podgorniy
0 replies
5d9h

I've read some of the FAQ and docs.

Their reason is to have a fast builder with the flexibility needed for their business cases. In other words, they are making internal tooling publicly available.

Being faster than esbuild is not a goal; getting people excited about speed is not a goal. Having control over the tooling, flexibility, being fast enough, and being open source are the goals.

wellyeahkinda
1 replies
5d9h

There is no way this isn't the thing they're both referencing.

cpburns2009
0 replies
5d6h

Mako templates use a shark, presumably a mako shark, in their logo. I doubt it's referring to mako, the magic light of FF7.

opan
0 replies
5d10h

This is what I thought of right away.

qmmmur
0 replies
5d7h

It's also a shark. What is your point? I see these droll takes every time someone announces something. Overlaps are going to happen.

mgaunard
6 replies
4d23h

I'm not a web developer, though I still develop web apps regularly.

What exactly is the point of a bundler in the rapid development cycle? If you want your web app to load up fast, it's better if you only need to redownload the parts that actually changed, so you're better off not bundling them.

mikojan
2 replies
4d23h

You need to use some kind of automation to fingerprint your files for optimal caching.

Where applicable there simply does not exist a better caching strategy than fingerprint plus Cache-Control: immutable
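
For example, with Express serving bundler output whose file names carry a content hash (a sketch; paths illustrative):

  import express from "express";

  const app = express();
  // app.3f9c2a.js changes name whenever its content changes,
  // so it can safely be cached forever
  app.use("/assets", express.static("dist/assets", {
    immutable: true, // adds `immutable` to the Cache-Control header
    maxAge: "365d",  // far-future expiry; safe because of the hash
  }));
  app.listen(3000);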

mgaunard
1 replies
4d22h

Well but I might have a hundred files, only one of which changed.

The 99 other ones are still in the browser cache and don't need to be re-(down)loaded.

If I bundle everything, then I have to scrap and reload everything, which is probably great for the final user, but not the developer actively modifying it.

mikojan
0 replies
4d20h

A bundler does not necessarily produce a single file. I have not tried Mako. But from the docs it appears to do code splitting just like the others.

chuckadams
0 replies
4d13h

In a dev environment you can use the Vite dev server, which serves every module separately, compiles them on the fly as they’re requested, and hot-reloads them when they change. All at the granularity of single files. Bundling then only happens when building the final output.

aaaaaaabbbbbb
0 replies
4d23h

If you have a lot of files, the initial (dev server) page load time increases linearly with the number of files you have.

With a slow bundler, that tradeoff made sense, but with a fast bundler, it is suboptimal.

Also, typically the application is split into multiple smaller bundles, so only a slice of the application is rebundled on change.

8n4vidtmkvmk
0 replies
4d18h

The best solution (as always) is a hybrid. You want to bundle up a bunch of the small files, and split usually along loading boundaries such as page navs.

For development, you don't need to "bundle" at all but you still need to transpile.

jauntywundrkind
6 replies
5d12h

Rspack (ByteDance) just shipped 1.0. There's Farm too. This is from Ant Group. Major influx of build tools all built in Rust, made in China.

Turbopack is supposed to be coming, as a total rebuild of bundling. Rolldown seems solid, as a Rust rollup redo.

aaronlinzx
1 replies
5d12h

They need to create new tracks for promotions and KPIs; recreating the wheel in Rust will achieve just that. It's referred to as technology investment, but it's really speculation.

pshu
0 replies
5d2h

It's maybe a Nash equilibrium to invest in Rust tools in big-tech productivity races.

spoiler
0 replies
5d10h

Rsbuild has been really nice to use. I migrated a bunch of webpack projects to Rsbuild and it reduced config, and improved DX.

One of my favourite features is probably that it understands a tsconfig file: https://rsbuild.dev/config/source/tsconfig-path

hardwaresofton
0 replies
5d11h

Clearly Rust is catching on as a more approachable, safe and performant C/C++.

I personally also think about it as a more-likely-to-make-it-to-production Haskell, with how robust the type system, tooling, and other things are (not to rag on Haskell -- it's a fantastic language and there's lots of overlap in the communities).

csomar
0 replies
4d13h

I wonder if this is part of the Chinese de-risking/de-coupling. Major Chinese tech companies seem to be spawning their own open source developer tools.

alvincodes
0 replies
5d7h

I hadn't heard of any of these, apart from rolldown, thanks!

Hopefully docusaurus gets on that train soon

BaculumMeumEst
6 replies
5d3h

As someone who highly values minimalism and simplicity in software, seeing another web bundler paraded around as if it's something to celebrate does not spark joy.

chuckadams
5 replies
5d3h

We have bundlers because for a long time we didn't have a module standard due to browsers hanging on to their minimal and simple model of `script src=`. Even now modules are pretty minimal fare. Plus there's all the transpiling and asset transformation, but hey we should all be using document.write and not those "bloated" frameworks on top of JS, right? Maybe jQuery if we want to get really bougie?

Culonavirus
3 replies
4d21h

A bundler is a necessary evil and should be thought of and developed that way. Not celebrated. There should be like two or three flavors, e.g. like C++ compilers (gcc/msvc/intel), ideally with big-corp backing, and they should be rock solid and not change much.

The amount of bundlers I've seen in my time is borderline obscene. Nowadays it's even worse, as every javascript framework developer's actual secret fetish is to build their own bundler. Ideally in Rust because that's hip I guess.

Webpack, Snowpack, Parcel, Rollup, Esbuild, Vite, Turbopack... just stop. Enough.

spoiler
2 replies
4d19h

All of these are lessons learned from previous iterations. Also, some of these were probably in development for a while. If you're close to finishing a product, do you just stop and abandon months of work just because a challenger appeared? I wouldn't! Especially if you think you're doing something better than the competition.

I've managed to cut down build times from ~1min (sometimes up to 3, but I couldn't even tell you why) when using Webpack and Babel to less than 200ms using just Rsbuild.

So, I welcome the improvement! The fact multiple people/orgs felt the need for this clearly means they felt the pain of slow builds too.

8n4vidtmkvmk
1 replies
4d18h

I've abandoned lots of projects when a worthy competitor appeared. They can take the burden of maintenance, yes please.

spoiler
0 replies
4d6h

Ha, I think I know what you mean, but I'm not sure we're talking about the same thing. For me personally, I'm glad that Rsbuild didn't stop development when something like esbuild popped up.

I'm sure people tied up in the rollup ecosystem think the same about Vite and Rolldown!

All of these do things subtly differently, in ways they think are correct. Maybe one will come out on top, or maybe something else comes along that integrates lessons from both, eventually wins, and all maintenance moves there.

The JS ecosystem is complex (for better or worse), and bundling for it isn't as simple as people believe. So it makes sense that there are multiple things trying to tackle the same problem!

chrstphrknwtn
0 replies
4d22h

Import maps and type="module" are pretty good.

I prefer to spend my time building against that instead of another bundler.

dluan
5 replies
5d10h

What happens when we reach the peak of bundling? Once you're in ms territory (like esbuild is), then what are the really creative things you can do if, say, every browser had a little WASM mako or some bundler in it?

It's very cool though and seems like a lot of effort went into this.

tinco
4 replies
5d8h

It's in the ms for small projects. These improvements are not there to shave a couple of ms off some small codebase, but to shave seconds off of really large projects. The codebase I'm working on right now isn't really large - about 5 years of development with on average 2-3 developers working on it - and in vite (esbuild) the build time is 20.78 seconds on my M1 MBP. This project claims to be twice as fast as vite, so it would shave off 10 seconds; that's a significant gain. It would probably have a nice impact on our CI/CD pipeline, if the benchmark is representative of real-world codebases.

dluan
1 replies
5d7h

I ripped out webpacker and replaced it with esbuild in a big legacy rails app for the front end, probably 2-3 years ago, and its been fantastic and I haven't looked back. It's more or less made front end bundling an afterthought. Going from 3s to 1.5s on my M2 (esbuild to mako) isn't a gamechanger, so for me it feels like it's already getting close to the peak, whatever that might mean.

But I was more just asking what's the theoretical limit for this kind of optimization, and at the very least with rust. O(n)?

tinco
0 replies
5d7h

Ah, that would be hard to say. A lower bound would be reading your entire project and the dependencies that are used once, and writing the bundled/minified code once. Possibly some parts of that could be done at the same time, as you determine the bounds of the dependencies while you read in the code. So at least O(n), where n is operations over the lines of code in your project.

There are probably trade-offs too. Like, do you bother with tree shaking to make your end product smaller, or do you skip it to bring your build performance closer to that optimal read-once, write-once lower bound?

aaaaaaabbbbbb
0 replies
5d6h

Note that while Vite transpiles with esbuild, it bundles with Rollup, which is single-threaded JS.

Vite also uses esbuild to prebundle dependencies for the dev server, but this is separate from production builds.

8n4vidtmkvmk
0 replies
4d18h

20 to 10 seconds for a production build sounds very insignificant. How often are you building for prod?

For dev, you definitely want subsecond recompiles but prod can take a few minutes.

garbanz0
4 replies
4d19h

Feel the need to push back against the predictable nay-saying in here.

Announcing with Rust in the title is not because of a hype train; it's a way to communicate that this bundler is in the new wave of transpilers/bundlers, which is much faster than the old wave (Webpack, Rollup) that was traditionally written in JavaScript and painfully slow on large codebases.

While the JS ecosystem continues to be a huge mess, the solution to the problem is not LESS software development ("Just stop making more bundlers & stop trying to solve the problem - give up!"). Or even worse - solve the problem internally, but don't make me hear about it by open sourcing your code.

The huge amount of churn and development in this space has a good reason... it's a desperate attempt to solve the problems that browsers have created for web developers. The fact is that most business has moved to the web, a huge amount of web development is needed, but vanilla JavaScript compounds and compounds in complexity in the absence of a UI framework and strict typing. So now you've added transpilation and dependency management into the mix - and the user needs to download it in less than a second when they open a web page. And your code needs to work on at least 3 independent browser engines with varying versions.

SwiftUI devs are not a more advanced breed of developer than web developers. So why don't you see a huge amount of SwiftUI churn and framework/compilation hell in native iOS development? The answer should be obvious: these problems are handed down from on high.

The browser/internet/javascript ecosystem despite its glaring warts is actually one of the most amazing things humanity has created... a shareable document engine grew into a global distributed computing platform where you can script an application that can run on any device in less than a second with no installation. Not bad.

spoiler
3 replies
4d19h

I fully agree with you, and want to add: JS/TS, due to its accessibility, is one of the largest ecosystems. Hell, whether you are or aren't a developer, you're part of it through using a browser.

People often scoff at complexity in frontend projects, but they need to handle various types of accessibility, internationalisation, routing and state including storage of those, and due to its popularity the frontend is also a very frequent attack surface. With the advent of newer technologies (I don't just mean web dev ones), more has been put into the browser as well, which compounds complexity even more. There's various authentication and authorisation standards most things need to handle as well (not isolated to JS, but it's not free of it either). Not to mention the versatility and complexity of the DOM and CSS, which sit on some of the most complex rendering engines, with layers of backward-compatible standards. Like you mentioned already, these engines are all subtly different. Also you have to handle bizarre OS+browser quirks. And things can move between displays with different DPIs, which can cause changes in antialiasing. There's browser extensions that fuck with your code too. Then there's also the possibility that the whole viewport can change. Networks change. People want things to work online and offline so they don't lose work while on a train... all in an environment that wasn't explicitly designed to support that.

Christ, I'm exhausted just typing this. Most of these people complaining probably barely understand what they're complaining about.

skydhash
2 replies
4d14h

> People often scoff at complexity in frontend projects

The complexity is there because everyone is trying to reinvent everything.

> accessibility, internationalisation, routing and state including storage of those

Do multi-page apps and most of these become really trivial due to the number of solutions that exist.

> There's various authentication and authorisation standards

That's also more of a server concern than a browser one.

> these engines are all subtly different

It isn't the old IE days (which Chrome is trying to replicate). More often than not, I hear this when people expect to implement native-like features inside a web app. It's a web browser. The apps I trust, I download them.

> People want things to work online and offline so they don't lose work while on a train

Build a desktop app.

> Most of these people complaining probably barely understand what they're complaining about

Because it's like watching Sisyphus pushing the stone up again and again. The same problem is being solved again and again and if you want to use the latest, you have to redo everything.

spoiler
0 replies
4d4h

I think you're hand waving a lot of problems away without giving them the thought and attention they deserve. And sometimes using arguments that aren't really unique to the JS ecosystem.

> The complexity is there because everyone is trying to reinvent everything.

That's not just JS. That's literally everywhere. People reinvent ideas in every codebase I've seen. Sometimes it's a boon, sometimes it's a detriment. But again, not something that's unique to JS.

> Do multi-page apps and most of these become really trivial due to the number of solutions that exist.

None of these are trivial, even with existing solutions. They're only trivial for trivial cases. Like, I'm sure we both understand people aren't building to-do demos.

> It isn't the old IE days

It probably happened accidentally, but that kinda misconstrues what I'm saying. There are issues between rendering engines, and variety in how much/how quickly they adopt some features. Hell, you still need code branches just for Safari in some cases because of how it handles things like private browsing.

> Build a desktop app.

You're trading one world of complexity for another world of complexity (or I guess we could say it's trading one set of platform quirks for a larger set of platform quirks)

> Because it's like watching Sisyphus pushing the stone up again and again. The same problem is being solved again and again and if you want to use the latest, you have to redo everything.

I understand where you're coming from, but just because Svelte was released doesn't make React (and spin-offs) or Vue less relevant. You're not forced to use them.

Regarding the bundling topic, again, you're not forced to switch to a different bundler if you're happy with your existing one, or if the project isn't at a scale where it matters.

I think the pressure is internal, not external.

spoiler
0 replies
4d6h

I think you're hand waving a lot of problems away without giving them the attention they deserve.

cmrdporcupine
4 replies
5d5h

Can I kick it off programmatically inside a Cargo build via build.rs? I tried to go down this road with SWC and ... failed.

To be clear: I have JS/HTML artifacts in my repo alongside Rust source. I want to bundle them then ship them inside a produced Rust binary, or at least with it. With one build step using Cargo.

cmrdporcupine
0 replies
5d1h

Well yes, I'm using something like that to embed the HTML and JS directly. But I want to embed a webpacked entity, and have it run through a TypeScript compiler. And I would like something driven from build.rs.

brabel
1 replies
5d

I was looking to see if it provided a Rust crate as a lib, similar to how esbuild is just a Go lib (if you want to use it like that), but no luck.

cmrdporcupine
0 replies
4d22h

Found the same thing with swc. They have all this tooling, written in Rust, but no way to invoke it as a lib so it could be used inside a Cargo build.rs. Not easily at least. I made some progress then gave up.

rascul
0 replies
5d4h

A couple more:

https://wayland.emersion.fr/mako/

https://makoframework.com/

It can be hard sometimes to come up with names that aren't already in use. I think as long as it's clear in the description what it is, and the same name isn't shared for two projects that do approximately the same thing, maybe it's not so bad. There could also be an issue where command names might be the same so one would have to be changed. I recall this may have been a small issue when the Go language was new, as there was also a game of go available in some distro repositories. I believe that's generally solved now.

frenchman99
0 replies
5d5h

That's what happens when you give your project a common word as a name.

ToJans
2 replies
5d11h

I've recently taken a legacy TypeScript client-side codebase that was using webpack to generate tens of JS packages, and got its builds from minutes down to seconds by using bun [0] for both dev and build:

For dev I replaced webpack with "bun vite": it loads scripts ad hoc and is thus super fast at startup, and it still supports hot reloading.

For build I use "bun build". I've created a small script where I don't specify the output folder, but just intercept it, and do some simple search & replace things. It works perfectly, although it's a bit of a hack (simplified code):

   const result = await Bun.build({
     entrypoints: [srcName],
     root: "./",
     minify: true,
     naming: `${configuratorName}.[hash].[ext]`,
   });
   for (const r of result.outputs) {
     let jsSource = await r.text()
     jsSource = jsSource.replaceAll("import.meta.hot","false")
     Bun.write(outdir + r.path.substring(1), jsSource);
   }
It might not be pretty, but it works super fast, and it only took me a couple of hours to convert the whole thing...

Update:

For the record, the real script also uses the HtmlRewriter from cloudflare (included by default in bun) to alter some basic things in HTML templates as well...

[0] https://bun.sh

tombl
1 replies
5d9h

Bun.build actually has a `define:` option that does the same thing as your replace. If you use it, it'll even propagate the value, and treeshake away any `if(import.meta.hot)` you have.
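
Applied to the snippet above, that would look something like this (a sketch; `srcName` and `configuratorName` come from the parent comment):

   const result = await Bun.build({
     entrypoints: [srcName],
     root: "./",
     minify: true,
     naming: `${configuratorName}.[hash].[ext]`,
     define: {
       // replaced at build time, so `if (import.meta.hot) { ... }`
       // branches become dead code and are shaken away
       "import.meta.hot": "false",
     },
   });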

ToJans
0 replies
5d5h

Very good tip; thank you!

rk06
1 replies
5d4h

> NOTICE: Plugin system is still under development, and the API may change in the future.

A killer feature of Vite is that it leverages the existing plugin ecosystem of Rollup.

Do you have plans to build a compat layer for the existing ecosystem?

Other build tools are doing it, e.g. Rspack can use webpack plugins, and Farm can use Vite plugins.

giancarlostoro
0 replies
4d23h

Mako is in Python, so I would be surprised if it is in any way related.

mark38848
0 replies
5d

Why not use a fast language like C, Odin, Hare or Zig?

efilife
0 replies
4d20h

Another one?

bartimus
0 replies
4d22h

This would be super interesting if I were a state actor.

aleksandrh
0 replies
5d2h

Time to reset the clock. 0 days since a new web bundler was released (in Rust!!).

So tired of this ecosystem and its ceaseless tide of churn and rewrites and hypechasing.