
Oxlint – JavaScript linter written in Rust

Aissen
44 replies
5d5h

50-100 Times Faster than ESLint

Our previous linting setup took 75 minutes to run, so we were fanning it out across 40+ workers in CI. By comparison, oxlint takes around 10 seconds to lint the same codebase on a single worker[…]

So it's in fact 18000 times faster on this embarrassingly parallel problem (but doing less for now).

msoad
27 replies
5d5h

In a very large codebase, how common is it to run the linter for the entire repo? Is this an optimization worth spending time on?

sapiogram
16 replies
5d5h

Yes, because you lint everything in CI. Otherwise, linter warnings will start creeping into your codebase immediately, and the tool becomes much less useful.

mathverse
8 replies
5d4h

Wouldn't you lint only the files that changed?

erikaww
3 replies
5d4h

I'm not sure if ESLint has this, but there can be cross-file lints (e.g. unused variables). If some file changes, you may need to relint its dependencies and dependent files, and this could trickle recursively.

I'm not sure if ESLint does this either, but indices or some form of incremental static analysis sound like they could help the linter minimize rechecks or reuse previous state.
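A minimal sketch of that indexing idea, assuming a precomputed reverse-import graph (the file names and the `dependents` map are made up for illustration; this is not an ESLint feature):

```javascript
// Reverse-dependency index: for each file, the files that import it.
// A real tool would build this from the module graph.
const dependents = {
  "util.js": ["a.js", "b.js"],
  "a.js": ["app.js"],
  "b.js": [],
  "app.js": [],
};

// Expand a changed-file set transitively: anything that directly or
// indirectly imports a changed file may need relinting too.
function filesToRelint(changed) {
  const result = new Set(changed);
  const queue = [...changed];
  while (queue.length > 0) {
    const file = queue.shift();
    for (const dep of dependents[file] || []) {
      if (!result.has(dep)) {
        result.add(dep);
        queue.push(dep);
      }
    }
  }
  return [...result].sort();
}

console.log(filesToRelint(["util.js"])); // [ 'a.js', 'app.js', 'b.js', 'util.js' ]
console.log(filesToRelint(["b.js"]));    // [ 'b.js' ]
```

A leaf change stays cheap; touching a widely-imported file still fans out to much of the repo.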

msoad
1 replies
5d4h

If you have one file that every single file across the repo imports, and you make changes to that file, you might have to run the linter for the entire repo. But again, how likely is this scenario?

erikaww
0 replies
5d1h

If the index or incremental static analysis object were designed well enough, I don't think you would need to lint every file; you would just need to look at the files that consume that variable. Maybe you would look at every index?

I'm not sure how well this could scale across (600-1000?) different lints though. I should look into static analysis a bit more.

ehutch79
0 replies
5d3h

You can tell ESLint about globals in its config. But if you're somehow using variables that aren't declared in the file, that might be an issue you want to look at in general. That's a potential footgun a linter should be balking at.

indymike
2 replies
5d3h

74 minutes of linting vs 1.3 seconds of linting?

If a file has been linted and is unchanged since, there's literally no need to lint it again. Much like if you only need to process one record, you don't query the whole table to get it.

AndrewDucker
1 replies
5d3h

File A depends on File B. File B moves. File A is now wrong, because it is unchanged.

indymike
0 replies
1d3h

Static analysis != linter.

rwilsonperkin
0 replies
5d3h

As the sibling comment mentions, you may have lint rules that depend on checking for the existence of, or properties of, another file. A popular set of rules comes from https://www.npmjs.com/package/eslint-plugin-import which validates imports, prevents circular dependencies, etc

sanitycheck
5 replies
5d4h

I think if my CI was taking 45 mins to lint I'd look at linting only the files changed since the previous build instead of splitting it across 40+ workers. Or writing a new linter in Rust.

But I'm generally working in a (human & financially) resource-constrained environment.

throwup238
3 replies
5d3h

TypeScript lints are type-aware, so you can't just lint changed files; you have to relint the entire codebase to check whether any type changes have impacted the unchanged code.

pcthrowaway
1 replies
5d3h

Wouldn't an issue with a type change be caught at the TypeScript compile/check step?

I'm not aware of ESLint rules that would complain about some other untouched file if types have changed in ways such that the program still compiles.

anamexis
0 replies
5d2h

Too
0 replies
4d10h

Is there no incremental lint mode? When developing you need that for instant feedback; the same mechanism should work for CI.

arp242
0 replies
5d1h

One problem is that a change in a.js may trigger a new error in b.js.

ESLint could also cache things fairly trivially:

  hash = hash_file_contents()
  if previously_seen_hashes.contains(hash)
      report_previous_results()
  else
      run_lint_and_cache_results()
  end

Maybe that already exists. But it has the same problem.

When you've got enough hardware to throw at it, then "just run it on the full code" is the safest.

msoad
0 replies
5d4h

I thought it would be obvious that in large codebases you only lint changed files in CI

HelloNurse
9 replies
5d5h

Do you have some source files that are somehow exempt from bugs and would be a waste of the linter's time?

Probably not, but it's a trick question: if you try to look for exceptions to the rule, you have already wasted so much time that running a linter on all files would be faster.

thfuran
8 replies
5d4h

Do you have some source files that are somehow exempt from bugs and would be a waste of the linter's time?

Every file not touched in any given diff

Shish2k
4 replies
5d3h

If I change a function signature, then my code is fine - but all the other files which import and use my function will break

zdragnar
3 replies
5d3h

That's a job for TypeScript, not eslint.

kristiandupont
2 replies
5d

Linter rules can rely on the type system

zdragnar
1 replies
4d23h

What eslint rule would apply to the caller of a function after that function's signature changes that wouldn't also be picked up by TypeScript?

In particular, the call site itself hasn't changed, as this thread assumes the linter is only run on changed files

kristiandupont
0 replies
4d22h

Anamexis has a couple of examples in this response: https://news.ycombinator.com/item?id=38655101

OJFord
1 replies
5d4h

What if the diff adds a new linter rule, should we only run it on the linter config file?

What if the linter uses more context than a single file, a type-checker for example or even just checking the correct number of arguments (regardless of type) are passed to an imported function - or that that symbol is indeed even callable? Should we only run the linter on the caller's file, or the callee's, when they haven't both changed?

ehutch79
0 replies
5d4h

Run the linter on the whole code base then, when you make that change? Not on every check-in on the off chance a rule changed. Or add some logic so that CI runs it against the whole code base only when the rules changed, and otherwise just against the files relevant to the commit/PR.

Also, ESLint doesn't do type checking. That's TypeScript's job, and apparently TypeScript's runtime isn't an issue.

spenczar5
0 replies
5d3h

If a different (unchanged) file depends on the one you changed, you could have changed the API in a way that makes the unchanged file unacceptable to your linter.

davedx
9 replies
5d4h

75 minutes to generate a bunch of mostly irrelevant nitpicks.

What a colossal waste of compute resources. (1)

IME if you’re using TypeScript then ESlint’s real value mostly approaches zero. For pure JS projects it’s useful for finding nullref type bugs.

(1) > Our previous linting setup took 75 minutes to run, so we were fanning it out across 40+ workers in CI

This. Is. Insane.

padjo
3 replies
5d3h

“rules of hooks” linting alone prevents a ton of bugs in your average React codebase and TS will provide no help there

davedx
2 replies
5d2h

Ah yes the “exhaustive dependencies” rule that can trigger huge unnecessary refactors for absolutely zero value.

Linting has some value, it’s just that in my professional experience its costs outweigh its benefits

padjo
1 replies
4d8h

If you leave dependencies off a hook by mistake, you're creating bugs and unintended behaviour, and making your code a nightmare to modify. That's always going to be worth linting for. Can you over-lint a codebase? Sure, I guess so; if your lint stage is taking hours it probably needs optimization. But your assertion that type checking is enough is incorrect.

davedx
0 replies
4d

Having to specify every single closure in dependencies does nothing. Not every dependency has to be specified. Not specifying a dependency is not always a bug.

Honestly if not exhaustively putting some closures in your useEffect deps list is your idea of a “maintenance nightmare” then maybe you should stay away from real production code bases? There are plenty of hairier mistakes and patterns out there than that.

clarkdave
1 replies
5d2h

I think the typescript-eslint plugin in particular has some high value eslint rules that complement TypeScript.

For example, the no-floating-promises[0] rule catches some easily-made mistakes involving promises in a way that TypeScript doesn't on its own.

Other rules can be used to increase type safety further. There are various rules relating to `any`, like no-unsafe-argument[1], which can be helpful to prevent such types sneaking into your code without realising it; TS has `noImplicitAny`, but it'll still let you run something like `JSON.parse()` and pass the resulting any-typed value around without checking it.

[0] https://typescript-eslint.io/rules/no-floating-promises

[1] https://typescript-eslint.io/rules/no-unsafe-argument
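A minimal illustration of the mistake class no-floating-promises targets (the `save` function is a made-up example):

```javascript
async function save() {
  throw new Error("disk full");
}

// Writing a bare `save();` here would "float" the promise: nothing ever
// observes the rejection, and recent Node versions crash the process on
// unhandled rejections. no-floating-promises flags exactly that call.

// The fix the rule pushes you toward: await the call and handle failure.
async function main() {
  try {
    await save();
  } catch (e) {
    console.log("caught:", e.message);
  }
}
main(); // prints "caught: disk full"
```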

seanwilson
0 replies
5d2h

For example, the no-floating-promises[0] rule catches some easily-made mistakes involving promises in a way that TypeScript doesn't on its own.

Is there a fast linter that checks for this? I find this error easy to make as well, and it usually causes weird runtime behaviour that's hard to track down.

smt88
0 replies
5d2h

TypeScript is still permissive because it has to maintain compatibility with JS.

We use eslint for formatting and other legal-but-likely-a-mistake behavior and it does catch bugs.

eyelidlessness
0 replies
5d1h

I get a ton of value from ESLint with TypeScript, and in particular from @typescript-eslint. And yes, 75 minutes is absolutely bonkers. It would have me rethinking a lot of things well short of that time. But automated quality checks wouldn’t be anywhere near the top of that rethinking list. And partly, but not only, because of irrelevant nitpicks. Having humans do those nitpicks is vastly worse in time elapsed, and likely in compute time in many scenarios as well. The more human time is spent on the things linters help with, the more that time is not spent on reviewing and ensuring correctness, performance, design, maintainability, user- and business-implications, etc.

ehutch79
0 replies
5d4h

Disagree on ESLint vs TypeScript. ESLint's and TypeScript's jobs should have minimal overlap.

ESLint's primary job is linting. It should be finding footguns and code style issues: things that are absolutely valid in the language but could lead to potential issues. Because of that, it's totally valid that you're not finding as much value in it; it depends on the rules you enable, etc. And yeah, it can feel super nitpicky when it's yelling at you for not having a radix in parseInt().
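That parseInt() nitpick does catch one genuinely nasty footgun: map passes (element, index) and parseInt takes (string, radix), so the array index silently becomes the radix:

```javascript
// Without an explicit radix, the index leaks in as the radix argument:
// parseInt("1", 0) -> 1, parseInt("7", 1) -> NaN, parseInt("11", 2) -> 3.
const broken = ["1", "7", "11"].map(parseInt);
console.log(broken); // [ 1, NaN, 3 ]

// With the explicit radix the rule asks for, the surprise goes away:
const fixed = ["1", "7", "11"].map((s) => parseInt(s, 10));
console.log(fixed); // [ 1, 7, 11 ]
```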

TypeScript's 'compile' step (or whatever) is doing type checking and making sure your code is valid. If you're using bare JS, your IDE should be doing this job, not ESLint.

(but yes, anything more than a few minutes to lint even a large code base is insane.)

joeldo
4 replies
5d5h

75 / (1/6) = 450. Still very exciting!

Aissen
3 replies
5d4h

You forgot the 40 workers vs 1 worker.

anamexis
2 replies
5d3h

The way I read it, it was taking that amount of time before they split it into workers.

rwilsonperkin
1 replies
5d3h

Correct, it was 75 minutes total compute time. That was spread across workers to make the walltime more reasonable

Aissen
0 replies
4d22h

Indeed, I really need to improve my reading comprehension.

Kyro38
0 replies
5d3h

How much of those 75min are due to @typescript-eslint ?

Requiring the TS AST adds a massive overhead.

pzmarzly
42 replies
5d5h

If I understand it right, we have 3 large projects that each aim to replace most JS tools on their own: Bun[0], Oxc[1] and Biome[2]. Bun's package manager is great, Biome's formatter recently reached 96% compatibility with Prettier, and now Oxlint is apparently good enough to replace ESLint at Shopify. Exciting times ahead.

But it gives the impression that these projects could perhaps be better off collaborating instead of each of them aiming to eat the world on its own?

EDIT: I'm not saying it's wrong to write competing tools; it's open source anyway, so please do whatever you like with your time and have fun. But it looks like, out of these 3 projects, one has a startup behind it and one receives funding from a bigger company. I assume that money will stop coming in if these tools don't gain adoption fast enough, and nobody would want to see that happen, especially with so much potential here.

[0] https://bun.sh/

[1] https://oxc-project.github.io/

[2] https://biomejs.dev/

wg0
31 replies
5d5h

More like JS folks are discovering compiled languages.

Now, instead of a new JS framework daily, it's going to be a new reimplementation of an existing tool daily. For a while.

shzhdbi09gv8ioi
20 replies
5d5h

About time; JS CLI apps were never a good idea.

maccard
19 replies
5d4h

They exist because it's significantly easier to distribute JS apps than it is to distribute a compiled app. npm install works on Linux, Mac and Windows, regardless of what libc, MSYS, or CRT you have installed. It could have been Python, but pip is a usability nightmare.

bluejekyll
11 replies
5d4h

Cargo and crates.io are easily as simple as npm for installation and distribution. I find them to be more reliable than npm in general. It's generally very easy to write system-agnostic software in Rust, as most of the foundational libraries abstract that away.

So when you say “compiled app” you might be referring instead to C or C++ apps, which don’t generally have as simple and common a distribution model. Rust is entirely different, and incorporated a lot of design decisions about how to package software from npm and other languages.

andygeorge
10 replies
5d4h

Cargo is still a dev tool and isn't a great distribution solution.

bluejekyll
9 replies
5d3h

I disagree. Cargo is a great distribution tool for Rust projects. I just tell people: first install Rust, then just `cargo install`.

Second, this was in response to a comment saying npm is simpler; npm and cargo are absolutely the same category of tool.

andygeorge
5 replies
5d3h

I just tell people, first install rust, then just `cargo install`

local compilation may work for you and other individuals, but "just cargo install" can immediately run into issues if you're trying to deploy something to things that aren't dev workstations

npm and cargo are absolutely the same category of tool

as a dev tool? absolutely. as a production distribution solution? definitely not

arp242
2 replies
5d1h

The overlap between people who want to run something like ESLint and people with dev workstations is very close to 100%.

maccard
1 replies
4d5h

There's a significant difference between a machine that compiles rust quickly and a machine that can execute JS.

arp242
0 replies
4d5h

You just need to compile it once in a while. It's slow, yes, but really not that big of a deal.

bluejekyll
1 replies
5d

as a production distribution solution? definitely not

If you're talking about distributing Rust projects, sure, it's fine. Generally though, if you're orchestrating a bunch of other things outside the Rust software itself, I'd turn to `just`.

npm is still mainly used in JavaScript and TypeScript scenarios, so I think you're kind of splitting hairs if you're suggesting it's a general-purpose tool.

andygeorge
0 replies
5d

there's a reason `cargo install` is usually the last distribution option that maintainers of rust software provide ¯\_(ツ)_/¯

satvikpendem
2 replies
5d3h

I actually recommend `cargo install cargo-binstall` first, then `cargo binstall <crate>`. This is because it is quite annoying to compile packages every time you want to install something new, whereas binstall fetches prebuilt binaries instead, which is much faster.
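Concretely, the two-step flow being described (assumes the crate publishes prebuilt release binaries; cargo-binstall can fall back to compiling from source when none exist):

```shell
# One-time: install cargo-binstall itself (this one does compile).
cargo install cargo-binstall

# From then on, installs resolve to prebuilt binaries where available,
# skipping local compilation entirely:
cargo binstall <crate>
```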

bluejekyll
1 replies
5d1h

Feels like we need a single command for that (maybe binstall should be included in Cargo). I have two goals for my workflow:

1) What's the easiest way to give people access to a tool I just wrote? `cargo publish`.

2) What's the easiest way for someone to use it, in as few steps as possible? Right now it's `install rust` && `cargo install`.

Once I get to three or more steps on 2, I tend to turn to `just` or `make` depending on the context.

shzhdbi09gv8ioi
0 replies
4d10h

You should combine steps 1 and 2 in CI. Just tag a version in your git repo, push to the remote, and have CI auto-build a release for you.

Use GitHub Actions or another setup for other backends.

(This is language-agnostic and a reasonable thing to learn as a dev.)

Or if you must live in the cargo command, go nuts with cargo-release.

https://github.com/crate-ci/cargo-release

https://github.com/cargo-bins/release-pr

boredumb
3 replies
5d4h

Not particularly true, especially in this case. You can get a Rust binary and run it anywhere, regardless of libc or whether cargo is installed on the user's machine. A JavaScript CLI requires Node.js and npm to be installed before running it.

maccard
1 replies
5d4h

In this particular case, you wouldn't be installing oxlint unless you had npm installed already?

boredumb
0 replies
5d1h

For their main use case they do package it up for npm, but the crates folder has each portion available to build/distribute as a standalone binary you can run against JavaScript without Node or npm installed.

rob74
0 replies
5d4h

Same goes for Go, BTW. I find it even easier to install Go (I haven't done it for Rust that often yet) and compile a binary (of a "pure" project that doesn't involve C libraries or other complications) than to install node/npm/nvm/whatever to get something up and running...

nsonha
0 replies
3d5h

I think it's more that tools for a language tend to be written in that language. Obviously the author needs to care enough about the target language, and if they support plugins then it's also desirable for the plugins to be written in the target language.

lixy
0 replies
5d4h

I wish the Nix programming language weren't so rough, because it can be pretty great at this problem. Being able to compile from source while just listing out package dependencies is powerful.

kbknapp
0 replies
5d4h

I've had significantly fewer issues with `cargo [b]install`ed compiled Rust programs than `npm install`ed ones. Getting nodejs/npm installed (and at an appropriate version) is not always trivial, especially when programs require different versions.

OTOH, precompiled Rust binaries have the libc version issue only if you're distributing binaries to unknown/all distributions, but that's pretty trivially solved by just compiling against an old glibc (or MUSL). Whereas `cargo install` (and targeting specific distributions) does the actual compiling and uses the current glibc, so it's not an issue.
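The MUSL route mentioned above looks like this with a rustup-managed toolchain (the target name shown is for x86-64 Linux; adjust per platform):

```shell
# Add the musl target, then build against it: the result is statically
# linked and doesn't care which glibc the user's distro ships.
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
```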

nonethewiser
4 replies
5d4h

Can you elaborate? Typescript has existed for a long time and has been the standard over vanilla js for a long time. Bun, oxlint, and biome are all replacing existing tools with build steps. How could it be that their popularity signifies some new appreciation of compiled languages?

wg0
3 replies
5d2h

TypeScript is not a compiled language. It is a "transpiled" language, transpiled to another interpreted language, JavaScript, which in turn is not a compiled language either.

dragonwriter
2 replies
5d2h

Typescript is not a compiled language.

Compilation or not isn't a feature of languages but of language implementations, but, yes, the primary TypeScript implementation is compiled.

It is a "transpiled" language.

Transpilation is a subset of compilation.

It's not compiled to native machine code for the target system, but that doesn't make it not-compiled.

wg0
1 replies
5d2h

If we go with that lax definition and concept wrangling, Python is also a compiled language: Python source code can be compiled, the bytecode can be cached, and then the Python runtime can load it.

Just like TypeScript compiles the source to JavaScript, which is then loaded by V8/Node etc.

And thus programming languages could only ever be of one type: compiled.

recursive
0 replies
5d

Being compiled or not isn't a property of the language. It's a property of whether you compile it or not. Pure interpreters can exist. They're not very common for "practical" languages. Parse to AST, then call evaluate(ast). No target language necessary.

natrys
3 replies
5d4h

If only we are so lucky. Still waiting for a faster typescript compiler.

satvikpendem
1 replies
5d3h

STC, by the SWC author, should be coming along, I hear. It will still take a while though.

robinson7d
0 replies
5d2h

Semantic nit: STC is a type checker, SWC already compiles TypeScript well. TSC does both (unless flagged to do one or the other) so it depends on what needs replacing.

Why it matters: in GP’s case it sounds like compiling is the problem, so migrating to using SWC as the compiler but keeping TSC as the checker (noEmit flag) in a lint step may ease that pain a bit. Though it might be nicer to migrate both in parallel.

_fat_santa
0 replies
5d3h

Bun, Oxc and Biome are all great, but a TypeScript compiler in Rust is something I'm really looking forward to. Right now the web application I've been building just crossed 25k lines of TS code, and running `tsc` is becoming a pain point. What used to take 2-3 seconds now takes upwards of 10s, even with incremental compilation enabled in some cases.

scotty79
0 replies
5d3h

JS seems like a great language for discovering what's worth writing. I think rewriting stuff in some compiled language is the sweet spot of "build the first one to throw away".

djbusby
3 replies
5d5h

This happens loads of times. There is some in-built human tendency whereby folk see a thing they could improve, but then decide to go off and build their own moon base rather than work on someone else's project.

vintermann
1 replies
5d5h

Well, when they pretty much succeed at building their moon base, I say good on them.

bluGill
0 replies
5d4h

Except that this is not as good as the original, by their own admission. If they had collaborated they could likely have got more done in the same amount of time (not twice as much, but more).

Maybe this is a better design than the other projects. Maybe people cannot get along and so are forced to fork. There are many other good reasons not to contribute to an existing project. However, we should always look at such claims with skepticism: it is easy to start your own project, and since you are in control, the amount of work you get done is higher. Working together, while it makes everyone individually slower, normally results in many more features and higher-quality code over the long term.

So please, when you have an itch technology can solve, look to see if you can contribute to someone else's project first. It won't be as fun, but the world and you will be better for it.

throwaway894345
0 replies
5d5h

In my experience, project maintainers are frequently uninterested in changes to their project, especially if those changes are a significant departure from their current vision or if it involves pivoting away from tools that they like. You're often expected to make years of contributions to the project to earn the rapport to bring significant suggestions before the maintainers. It's often just easier to 'build your own moonbase' instead of politicking.

Just a couple days ago, the curl maintainer published a blog post about why he wouldn't rewrite curl in Rust, and a big part of the reason was that he and the other maintainers weren't good at it and weren't the right people to lead a project that used it; he said that he encouraged other people to start their own project in Rust. But then when people follow that advice, they're chided for not contributing to the more established project! To be clear, I'm not a "just rewrite it in Rust" guy, but I think people underestimate the difficulty and frustration involved in petitioning an established project to make the reforms necessary for significant improvements.

pzmarzly
2 replies
5d4h

To clarify: I'm also not advocating for merging the codebases together, that would be mostly counterproductive (especially since Bun is in Zig, and Oxc and Biome in Rust).

When I think about why Rust was successful at establishing community-accepted standard tooling (clippy, rust-analyzer), 2 things come to mind:

- Project developers were always promoting each other's tools, pointing them out in docs or blog posts

- Good tools were being pulled into rust-lang GH org (for visibility) and rustup CLI distribution (for ease of system-wide installation)

Both of these things are not technical challenges, they are rather more "political" (require agreements between parties). In JS ecosystem, what would it take for Oxc to say on their website "we are not writing a formatter, please install Biome" and for Biome homepage to say "we are not writing a linter, please install Oxlint"?

thatxliner
0 replies
5d4h

Except Biome can also function as a linter

conaclos
0 replies
4d3h

Biome is the continuation of Rome Tools. It has existed for several years and has always featured a linter and a formatter.

If I remember correctly, OXC was born out of its author's desire to learn Rust and his feeling that Rome Tools/Biome had made complex technological decisions (mainly the use of CST instead of an AST). Rome Tools/Biome chose a CST to bring first-class IDE support: you can format and lint malformed code as you are writing it.

I hope for more collaboration between Biome and OXC in the future. However, the inherent difference comes from technological choices.

conartist6
1 replies
5d3h

[3] https://github.com/bablr-lang

I'm its author, and I focus solely on the collaboration picture. I don't generate much press because I only build internal APIs for tooling and language authors, whereas the projects you've shared all opted to prioritize fulfilling specific real use cases over generalizing their core technology.

Cruel as it is, I think all of them have planted the seeds of their own failure by failing to protect their organization's mission and day-to-day work from being jailed by a set of specific opinions about code style, which cannot possibly be "right" or "wrong" but must instead be argued about forever.

I see the core challenge as shifting all editors and tools to share a common DOM representation and be interoperable in a per-node way, where the current solution is to use siloed and reimplemented tools which interoperate mostly in a per-file way, with each tool parsing the text, doing some work, then emitting text for some other tool to parse...

conartist6
0 replies
5d3h

For example:

"The Oxc AST differs slightly from the estree AST by removing ambiguous nodes and introducing distinct types. For example, instead of using a generic estree Identifier, the Oxc AST provides specific types such as BindingIdentifier, IdentifierReference, and IdentifierName."

Already this is getting into matters of style! It is one style, yes, but JavaScript's shorthand syntax `({ foo })` already breaks the mental model: the identifier `foo` is technically doing the work of both an IdentifierName and an IdentifierReference. OXC chooses IdentifierReference, so any system built on top of it would need additional logic to be able to identify all the sites in code that are used as identifier names.
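The shorthand case in question, concretely (a toy snippet, not Oxc's actual representation):

```javascript
const foo = 42;

// In shorthand syntax the single token `foo` is both the property name
// on the object and a reference to the surrounding binding:
const obj = { foo }; // equivalent to { foo: foo }

console.log(obj.foo); // 42
```

An AST that classifies that token as only one of the two roles has to special-case the other.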

PoignardAzur
0 replies
5d4h

There's also Deno:

https://deno.com/

lucideer
12 replies
5d4h

The spate of rewrites of JS tools in compiled languages continues. Here's my problems with them:

1. The need for a 50-100x perf bump is indicative of average projects reaching a level of complexity and abstraction that's statistically likely to be tech debt. This community needs complexity analysis tools (and performant alternative libraries) more than it needs accelerated parsers that sweep the complexity problem under a rug.

2. (more oft cited) The most commonly and deeply understood language in any language community is that language itself. By extension, any tools written in that language are going to be considerably more accessible to a broader range of would-be contributors. Learning new languages is cool, but diverging on language choices for core language tooling is a recipe for maintainer burnout.

apantel
3 replies
5d4h

The need for a 50-100x perf bump is indicative of average projects reaching a level of complexity and abstraction that's statistically likely to be tech debt.

I don’t think this is the right way to look at it. The issue is that JavaScript developers have been writing servers, build tools, dev ops tools, etc, in JavaScript because that’s the language they are expert in, but JavaScript was never the right choice of language for those types of programs. The whole industry is caught in a giant case of “If all you have is a hammer…”.

I do web development in JavaScript because JavaScript is the language of the browser. But I write all of my own build and devops tools in Java, including Sass compiling, bundling, whatever you want. There's no contest between the Java runtime and the JavaScript runtime for that kind of work.

I think it’s backwards to see this as a 50-100x performance boost because Rust was used. That same performance increase could be had in a number of languages. The real issue is a 50-100x performance hit was taken at the outset simply by using JavaScript to write tooling.

Edit: just to put it in perspective, a 50-100x speed up in build time means that what would currently take a minute and a half using JS tooling could be accomplished in a second using a fast runtime. A minute and a half of webpack in the blink of an eye.

lucideer
1 replies
5d2h

There’s no contest between the Java runtime vs the JavaScript runtime for that kind of work.

I don't mean to be facetious here, but... citation needed.

There are a lot of assumptions about language performance being made throughout comments threads on this page that seem more based on age-old mythology rather than being grounded in reality.

apantel
0 replies
5d

Here is a presentation by a team that did benchmarking of different runtimes:

https://youtu.be/sRCgu1ng6Bo?si=SV_Mcinuqh_c-nuX

JavaScript is ~8x slower and Python ~30x slower on average vs Java / Go / C++, which are all quite close.

A funny aside: I always believed that Java is slow because I heard it repeated so many times. I internalized that bit of age-old mythology. But lately as I’ve gotten more focused on performance, I’ve come across a lot of hints in various talks and articles that Java has become one of the go-to languages for high-performance programming (e.g. high frequency trading). So, I hear you about the mythology point.

jerf
0 replies
5d2h

As I almost always think to myself whenever I see some program braying about its 25x speed improvement in some task, the reason you can have a 25x speed improvement is because you left that much on the table in the first place.

I don't want to be too hard on such projects; nobody writes perfect code the first time, and stuff happens. But this does in my mind tend to tune down my amazement level for such announcements.

And your last edit is really the important point. That level of performance improvement means that you are virtually certain to move up in the UI latency numbers: https://slhenty.medium.com/ui-response-times-acec744f3157 Unless everything you were doing is already in the highest tier, this kind of move is significant.

the_duke
2 replies
5d4h

The rewrites mostly are for tools that run for a short amount of time and do lots of AST processing.

JavaScript is just inherently suboptimal for this.

* The JIT needs to warm up

* AST data structures can be implemented much more efficiently with better control over memory layout

lucideer
0 replies
5d3h

I get what you're saying but you've missed my point.

You're optimising your execution, but there are trade-offs: you need to think about optimising your software development model holistically. There's little point in having the most efficient abandonware.

A JS tool may be technically suboptimal but that's not a problem unless AST size is a bottleneck.

AST data structures can be implemented much more efficiently with better control over memory layout

I assume you're right but I'm not sure I fully understand why this is the case - can you give examples of how a data structure can be implemented in ways that aren't possible in JS?

conartist6
0 replies
5d3h

To be fair the AST structure can also be implemented more efficiently without better control over memory layout. The JS ecosystem standardized on polymorphic ASTs, which in retrospect seems dumb, but is not a result of any fundamental limitation in JS.

E.g. in ESTree evaluating such a common expression as `node.type` is actually really expensive -- it incurs the cost of a hashmap lookup (more or less) where you'd expect it to be implementable with simple pointer arithmetic.
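A toy sketch of the two representations (names are hypothetical, not oxlint's or ESLint's actual internals): in the ESTree style every node carries a string `type` property, so a kind check is at worst a property lookup plus a string comparison, whereas a numerically tagged layout dispatches on a small integer (in Rust, an enum discriminant).

```typescript
// ESTree-style node: the kind is a string tag on a plain object.
interface EstreeNode {
  type: string;
  name?: string;
}

// Tagged alternative: kinds are small integers.
enum NodeKind {
  Identifier,
  Literal,
  CallExpression,
}
interface TaggedNode {
  kind: NodeKind;
  name?: string;
}

// Property lookup + string comparison on every check.
function isIdentifierEstree(node: EstreeNode): boolean {
  return node.type === "Identifier";
}

// Integer comparison; in Rust this would be a match on an enum discriminant.
function isIdentifierTagged(node: TaggedNode): boolean {
  return node.kind === NodeKind.Identifier;
}

console.log(isIdentifierEstree({ type: "Identifier", name: "foo" })); // true
console.log(isIdentifierTagged({ kind: NodeKind.Identifier, name: "foo" })); // true
```

JS engines do optimize hot property accesses with inline caches, so the real-world gap is smaller than the worst case, but the string-tagged shape still leaves performance on the table relative to a flat integer tag.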

rafaelmn
2 replies
5d4h

How often does an average X developer delve down to compiler details and contribute to static analysis tooling ?

Metaprogramming and compilers/language analysis tooling is a jump above your run of the mill frontend code or CRUD backends.

Sort of elitist, but IMO devs capable of tackling that complexity level won't be hindered by a different language much.

And Rust is really tame compared say C/C++. Borrow checker is a PITA, but it's also really good at providing guardrails in the manual memory management land, and the build tooling is really good. Don't know enough about Zig but I get the impression that rust guardrails would help developers without C/C++ background contribute safe code.

You could argue Go is an alternative for this use case (and similar languages) but it brings its own runtime/GC, which complicates things significantly when you're dealing with multi-language projects. There's real value in having simple C FFI and minimal dependencies.

lucideer
0 replies
5d2h

Sort of elitist, but IMO devs capable of tackling that complexity level won't be hindered by a different language much.

Not elitism, just an honest appraisal, though I think a flawed one, as competency isn't linear, it's heterogeneous - you'll find the most surprising limitations accompanying the most monumental talent. Language fixation is a common enough one, but even beyond that, the beginner-expert curve on each language shouldn't be underestimated regardless of talent or experience.

In particular when it comes to JavaScript there's a tendency to believe the above by virtue of the community being very large & accessible - bringing in a lot of inexpert contributors, especially from the web design field. This isn't fully representative of the whole though: there are significant solid minorities of hard JS experts in most areas.

arp242
0 replies
5d1h

How often does an average X developer delve down to compiler details and contribute to static analysis tooling?

I've done this a few times for Go. One of the nice things about Go is that this is actually pretty easy. I've written some pretty useful things with this and gotten good mileage out of it. Any competent Go programmer could do this in an afternoon.

I don't really know what the state of JS tooling on this is, but my impression is that it's a lot harder, partly because JS is just so much more complex of a language, even just on the syntax/AST level. And TypeScript is even more complex.

davedx
1 replies
5d4h

Disagree with 1. Most large JS projects I’ve worked on have been relatively high in necessary complexity; probably because many JS projects are relatively simple applications and relatively new (by the standards of enterprise software).

There is also abundant complexity analysis tooling for JS too. When I worked as an architect at a large telco we had this tooling in CI. It revealed some code smells and areas needing refactoring but didn’t really signal anything especially terrible.

Software tooling is more productive than ever and product requirements have grown to use that capacity. It’s definitely not a load of tech debt.

lucideer
0 replies
5d2h

Not sure where you've worked or what you've worked with but everything you've described is the opposite of the JS projects I've encountered (multiple companies, multiple 100s JS projects).

There is also abundant complexity analysis tooling for JS too.

I would highly appreciate recommendations here; I wonder whether your review indicates that the projects being analysed had little wrong, or that the tools were not very good at identifying problems.

d3w4s9
10 replies
5d5h

it serves as an enhancement when ESLint's slowness becomes a bottleneck in your workflow

Well, when I need to batch fix errors in files, yes it can take a while to run eslint. But that almost never happens. I have the plugin and fix errors as I go (which I believe is what most people do), and I never feel performance is an issue in this workflow. I really doubt how (actually) useful this is.

sapiogram
5 replies
5d4h

Their main motivation seems to be CI, where people often lint the entire repo on every PR.

msoad
4 replies
5d4h

which is a really weird problem to have. Only lint files that have changed? How hard is that? Our monorepo is 3m lines of code and running lint is not a bottleneck by any means...

And once in a while that we have to run lint for entire repo (ESLint upgrade for example) we can afford to wait 1 hour ONCE

ForkMeOnTinder
1 replies
5d3h

Only lint files that have changed? How hard is that?

Quite hard, especially since type-aware rules from e.g. https://typescript-eslint.io/ mean that changing the type of a variable in file A can break your code in file B, even if file B hasn't changed.
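A toy illustration of that cross-file effect (file names and the async change are invented for the example): editing only `a.ts` makes unchanged code in `b.ts` wrong, so a type-aware linter has to re-check `b.ts` too.

```typescript
// --- a.ts (the only file that changed) ---
// Before: export function getUser(): { name: string } { return { name: "Ada" }; }
// After: someone makes it async, so it now returns a Promise.
async function getUser(): Promise<{ name: string }> {
  return { name: "Ada" };
}

// --- b.ts (untouched) ---
// This call site still treats the result as a plain object. The value is
// now a Promise, which is exactly what a type-aware rule (or tsc) would
// flag in b.ts, even though b.ts itself never changed.
const u = getUser();
console.log(u instanceof Promise); // true: the old `u.name` access is broken
```

Linting only the changed file would miss the breakage entirely, which is why type-aware setups tend to re-check the whole project (or at least the dependents of changed files).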

msoad
0 replies
4d8h

Well, this solution won't help with those rules since your bottleneck is now tsc.

mrkeen
0 replies
5d3h

we can afford to wait 1 hour ONCE

If you lint the entire repo, fix every issue in one try on the first go, and then lint the entire repo to double-check, that's two hours.

But my workflow is usually: lint the repo -> fix one thing -> repeat

Aeolos
0 replies
5d4h

50-100x faster would turn that 1 hour into 1 minute.

It's not that you can't wait 1 hour, it's that you don't have to wait. Think of all the wasted cycles that could be put to better use...

zanellato19
1 replies
5d3h

Are you kidding? Having something run faster in the editor is a huge gain. I can't believe people are saying this isn't useful.

d3w4s9
0 replies
4d21h

Faster by how much in absolute time? Currently I'm not feeling ANY delay in the IDE, so I assume for a regular size file linting takes less than 50ms -- likely much shorter than that. Let's say it reduces 50ms to 2ms. Guess what? It still has absolutely no effect on my everyday work.

jcelerier
1 replies
5d5h

It's still using more battery

d3w4s9
0 replies
5d4h

True, but eslint energy use would be one of the last things I worry about if I am looking for longer battery life. Chances are the TypeScript service used for IntelliSense costs more electricity.

austin-cheney
10 replies
5d7h

It will be awesome when this gains support for custom rules as I have a bunch of custom ESLint rules. The thing that annoys me the most about ESLint is that it has too many NPM dependencies.

kristiandupont
4 replies
5d6h

This feels like the most important thing about new linters (including the one Bun has and others).

If you just use linting for checking a bit of stylistic policy, any replacement might be fine. However, linting is much more than that and if you are depending on third party rules or writing your own (https://kristiandupont.medium.com/are-you-using-types-when-y...), there is no way around ESLint.

WhitneyLand
3 replies
5d

Not sure if I’d be comfortable taking it as far as your example.

Adding logic into linters blurs separation of concerns, adding unnecessary complexity akin to an extra programming language.

Linting in essence should be orthogonal to development — a layer that enhances code quality without being fundamental to the code’s functionality. By overextending linting, we risk creating a maintenance burden and an additional learning curve for developers.

Linting is a great tool, but as with any great hammer it’s easy for lots of things to start to look like nails.

kristiandupont
0 replies
4d22h

Linting in essence should be orthogonal to development

I guess that's what I disagree with. Yes, it adds complexity of its own, just like types do. And I still favor solutions that are based on types for most things, but more and more I try to go this route. They are surprisingly easy to write.

jitl
0 replies
3d18h

I use custom linters as a “continuous codemod” that transforms old code and engineering habits from form X to new form Y. Combined with a way to “ratchet” the number of rule violations towards zero, we can gradually and relatively painlessly roll out any number of whole-codebase migrations in parallel over weeks or months.

Two examples:

- we have an API like dbModel.getValue() that subscribes the current view to any change in an entire database row. We noticed this led to UI performance issues from components over-rendering. To deprecate, I wrote a rule to transform dbModel.getValue().specificProp to dbModel.getSpecificProp(). We can’t remove the getValue method since there are times you really do need it, but we can automatically switch new code to the more performant specific call in many cases.

- We use a lint rule to enforce that API endpoints and queue worker jobs have ownership and monitoring rules specified. We could use the type system to strictly enforce this, but we want to support gradual migration as well as suggest some inferred values based on the identity of the author. Using a lint with ratcheting means newly added cases are enforced but easy to add, and old cases can/will adopt over time.

I want to find the time to write a blog post about this, I think it’s a pretty handy pattern.
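The ratchet idea above can be sketched in a few lines (function and field names are hypothetical, not the actual implementation): store a baseline count of violations per rule, fail the check if a change pushes the count up, and tighten the baseline whenever the count drops so the codebase can only move towards zero.

```typescript
// Hypothetical ratchet check: not the actual implementation described above.
interface RatchetResult {
  ok: boolean;        // whether the change is allowed
  newBaseline: number; // baseline to persist for the next run
}

function ratchet(baseline: number, current: number): RatchetResult {
  if (current > baseline) {
    // New violations were introduced; reject the change.
    return { ok: false, newBaseline: baseline };
  }
  // Count held steady or dropped; tighten the baseline so it can't regress.
  return { ok: true, newBaseline: current };
}

// A PR that fixes 5 of 120 existing violations passes and lowers the bar.
console.log(ratchet(120, 115)); // { ok: true, newBaseline: 115 }
// A PR that adds a violation gets rejected.
console.log(ratchet(115, 116)); // { ok: false, newBaseline: 115 }
```

In CI the baseline would typically live in a checked-in file, so the ratchet tightens automatically as engineers fix old violations.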

bakkoting
0 replies
4d23h

eslint and typescript are the de-facto static analysis tools for JavaScript. TypeScript isn't extensible. So if you want to do any custom static analysis, you're doing it as a custom eslint plugin.

It might be better to have some other tool to do pluggable static analysis, but the fact is that there isn't one. And eschewing project-specific static analysis entirely would be giving up far too much.

dm33tri
1 replies
5d5h

I think they have custom rules in the works, using `trustfall` query engine and yaml definitions

https://github.com/oxc-project/oxc/tree/main/crates/oxc_quer...

obi1kenobi
0 replies
5d3h

Trustfall queries are also how the Rust semver linter `cargo-semver-checks` works. It's cool to see more projects putting the engine to good use!

I'm the Trustfall maintainer, happy to answer questions about the query engine or how oxlint or cargo-semver-checks use it.

I also recently gave a talk at P99 CONF on how cargo-semver-checks used Trustfall's optimizations API to get a 2000x speedup: https://www.youtube.com/watch?v=Fqo8r4bInsk

DanielHB
1 replies
5d5h

Not only that, but you also need deps to get eslint to support your specific flavour of pre-transpiled JS. Not only typescript, but new standard JS syntax (like ?. or ??) often requires updating the eslint parser.

robertlagrant
0 replies
5d5h

I'm not sure there's a way around that.

crossroadsguy
0 replies
5d1h

Isn’t that an NPM/Node thing? I mean, I sometimes look at two React Native projects and a web project. The dependency situation there is downright anxiety-inducing, and I say that as an Android developer, so please know that I’m well acquainted with dependency messes.

frou_dh
8 replies
5d6h

"ruff" for Python which is displacing the flake8 linter (and in fact the "black" code formatter too) shows that this kind of thing can work fantastically well.

drexlspivey
5 replies
5d5h

I am hoping that ruff goes after type checking next to replace mypy which is pretty slow. One tool to rule them all

C-Saunders
1 replies
5d5h

Have you checked out Pyright[1]? It's not one tool to rule them all, but it is nice and fast.

[1]https://github.com/microsoft/pyright

sztomi
0 replies
5d4h

Pyright is neat but the CLI output makes me want to poke my eyes.

simicd
0 replies
5d5h

Have you by any chance used Pyright? If not, I can highly recommend it. The VS Code extension makes writing Python almost as if it's a statically typed language (+ there is a CLI if you want to check types in CI). The docs are claiming that it's 3-5x faster than mypy - I haven't run performance benchmarks myself, all I can say is that for all my code bases it is very fast after the first cold start.

Comparison to mypy: https://github.com/microsoft/pyright/blob/main/docs/mypy-com...

imron
0 replies
5d5h

dmypy [0] (installed when you install mypy) will give you a 10x speedup when running mypy after small regular edits (e.g. during general development).

But yeah, I'm also looking forward to the day when I only need a single speedy tool for python linting, type-checking and formatting.

0: https://mypy.readthedocs.io/en/stable/mypy_daemon.html

VeejayRampay
0 replies
5d5h

they've successfully replaced pylama and black so yeah I really hope it's their next target (though handling types is a whole different beast altogether)

ehutch79
0 replies
5d3h

Ruff is better than flake8 for reasons other than speed.

1) It works better as an LSP/VS Code plugin, so I don't need to save to get errors popping up.

2) It respects pyproject.toml and doesn't need to litter my root dir with another dot file.

3) As an intangible, its errors just feel better.

agumonkey
0 replies
5d5h

rust gravity field is getting stronger

Alifatisk
8 replies
5d5h

Did anyone notice? We now have 5 different ways to install this package.

thiht
3 replies
5d4h

So? You don’t have to use, or even know all five.

conartist6
2 replies
5d3h

Sure, but if you don't know what all the ways are, you'll be prone to "just following instructions", and you may not notice that, a few years apart, you followed instructions for different ways of installing or uninstalling things and now your system is a mess

ramon156
1 replies
4d18h

I can't fathom why you would argue availability is bad. You're right about keeping things implicit for devs, but if all five work I don't see an issue

shpx
0 replies
4d16h

Becoming aware of different ways to do things costs time (to read about it and form opinions on things like which ones are/might be useful to you) and space (in your brain to remember these options and opinions). It's not necessarily bad, but it's a cost.

jussij
1 replies
5d5h

So, which one of those five options is the simple download?

Alifatisk
0 replies
4d23h

I’d go with pnpm

never_inline
0 replies
5d3h

These appear to be the same packaging format but different installers. Not uncommon these days.

I am a junior developer writing a side project in Golang which probably no one on the internet sees, and there are 3 ways to install it already.

1. Compiled executable

2. Language package manager (go install)

3. Docker image (super trivial to create a distroless docker image).

Same can be said of many python tools (pip, pipx, docker image, homebrew or whatever)

It's not that we are doing more work. It's just that we have more tools these days. :)

Waterluvian
0 replies
5d5h

I want to complain, but this abundance of developers willing to implement the same thing many times over is why we get a linter that’s 100x faster or a webpack alternative that’s 50x faster.

msoad
7 replies
5d4h

This is HackerNews so brace for criticism!

Why would a team of talented engineers focus on solving ESLint's performance issues? Where is the value in this? If your project is small, ESLint is fast enough. If it's super large like ours (3 million LOC) then you spend a little time making local and CI linters smarter to run only on changed files. Rewriting in Rust seems cool and novel, but now you lose the entire wealth of the ESLint plugin ecosystem, and you have to keep maintaining this new linter, which has to be updated very frequently, at least for new syntax, and so on...

We could put this effort into looking into why ESLint is not fast enough and fix the bottlenecks IF we had extra time in our hand...

If it was my team, I would not let them spend time on this. I don't see the value to be honest.

Trufa
6 replies
5d4h

They developed a new tool that reduces their CI from 75 minutes to 10 seconds and are offering it for free and open source and you really don’t see the value? I know you warned this is HN but I find this posture ridiculous. If you don’t find value for yourself, that’s one thing but I honestly don’t get this place sometimes.

msoad
4 replies
5d4h

I'm not complaining about software offered to the world for free. I'm curious how a leader would justify an investment like this. I have engineers reaching out to me and asking for all sorts of things. My job is to justify it for the business. Building this costs more than a million dollars if their engineers are paid like ours. So how do you do this? How do you get the budget for it?

gregsadetsky
1 replies
5d4h

75 minutes to 10 seconds is no joke in terms of speedups. Imagine that this time is saved for a small team of 4? 10? people who can then inspect/qa/iterate on the build in a PR-preview staging environment. Imagine this kind of time saving across many teams at Shopify’s scale.

Imagine that your pushes to production can happen an hour faster. At Shopify’s scale.

Do you see the pure economic value?

3836293648
0 replies
4d13h

That's not the point. It has value, but it pales in comparison to rewriting the actual codebase in an actually appropriate language, rather than fixing tooling for a language that was a mistake to use in the first place

kbknapp
0 replies
5d4h

I'm not a fan of trying to put hard numbers on unknowns like this because it biases against uncertainty, but if they shaved ~74 minutes off their CI time and assuming it runs multiple times a day, that very quickly equates to a small team's cost savings over a year.

However, I think trying to find the actual numbers is dumb because there's also the intangibles such as marketing and brand recognition bump by doing this both for the company and individuals involved.

That's not to say all greenfield endeavors should be actioned, but ones with substantial gains like this seem fine given the company is big enough to absorb the initial up front cost of development.

jerf
0 replies
5d2h

How big is your business? Facebook has poured immense resources into speeding up PHP. It makes sense for them. It doesn't even remotely make sense for me.

However, people tend to underestimate this sort of thing in general. Even since before programming... we have adages about the importance of sharpening the axe precisely because people have been hacking away with metaphorical and literal dull axes hoping to avoid needing to sharpen them for a long time. Sometimes you just won't be able to convince a business person of the importance of stopping work for a moment to sharpen the axe, because all they see is the work stopping. I don't have a solution to that level of lack of wisdom in a leader. These are the people who save $200 per programmer on computer hardware at the cost of 5 hours of productivity lost... per week. Some battles just come pre-lost.

wredue
0 replies
5d4h

It’s because every time someone says “performance”, something clicks in most developers’ brains, forcing them to respond with weird things like “being 50x slower is actually fast enough”.

lloydatkinson
6 replies
5d5h

This can only be good news. Normally I, like anyone else experienced with the JS ecosystem, despair when new tools come out like this. However, consider:

- setting up eslint isn't actually that simple

- if you're using typescript you need eslint-typescript too

- there are sets of rules in both eslint and eslint-typescript that conflict with each other, so I have countless rules in my config like this:

        'comma-dangle': 'off',
        '@typescript-eslint/comma-dangle': ['error', 'always-multiline'],
- then if you're doing React there's another set of JS and TS rules to apply, I still never figured out how to correctly apply airbnb rules

- this is a pretty garbage developer experience

- you can quite literally spend hours or days getting a "good" linting/formatting configuration setup, and you often can only use pieces of the configs you wrote for other repos because over time the rules and settings seem to change

- I hope this will eventually support things such as .astro files, which are actually a combination of TypeScript and TSX blocks

At this stage, oxlint is not intended to fully replace ESLint; it serves as an enhancement when ESLint's slowness becomes a bottleneck in your workflow.

I also hope that eventually it does become a full replacement. I like eslint, but holy shit, I cannot bring myself to create a new config from scratch that wrestles all the required extras and the frequently changing dependencies.

Also, wanted to give a sort of shout out to Deno here. Deno comes with a linter/formatter built in that is barely configurable (just double vs single quote, 2 or 4 space indentation, minor things) and it too is very fast and simply "just works".

---

Update: I just gave it a quick try and I am immediately impressed by it. Not only was it incredibly fast like it claims, it appears to already have all of the rules I was complaining about built in.

   eslint-plugin-react(jsx-no-useless-fragment): Fragments should contain more than one child.
    ╭─[src/design/site/preact/MobileNavigationMenu.tsx:18:1]
 18 │     return (
 19 │         <>
    ·         ──
 20 │             <MenuButton isOpen={isOpen} onChange={setIsOpen} />

    Finished in 17ms on 90 files with 70 rules using 16 threads.
    Found 13 warnings and 0 errors.

namtab00
2 replies
5d3h

I don't do JS/TS, but I have no idea why all this hasn't converged on editorconfig rules.

I write C# and do "linting" via editorconfig + ReSharper file layout formatting at dev time and via precommit hook with their CLI tool

I'm surely missing something crucial in that ecosystem that editorconfig can't handle...

lloydatkinson
0 replies
5d1h

I also write C# and well... good question. Much of it is linting though, not just formatting.

leipert
0 replies
5d2h

editorconfig is mostly about formatting. Parts of the JavaScript ecosystem have converged on prettier for that.

These linters do checks on the abstract syntax tree, and so they can statically analyze that e.g. you don’t use certain unsafe APIs or do things that might introduce performance issues or bugs.

lakpan
0 replies
3d15h

Anyone who picks and chooses from the entire list of rules is doing it wrong.

Pick a config that roughly matches your ideals and just use it. On older projects you’ll have to customize it a bit, on new ones you’ll probably just adapt to it.

I’ve been using eslint-config-xo-typescript for several years, plus some plugins with their “recommended” presets.

c-hendricks
0 replies
5d5h

You're right that initially setting up the rules takes time, that won't go away with any linter though. But once I set up my company's rules 4 years ago, it's just been adding the odd rule every year or so, and upgrading various dependencies, then publish. I use it across work and personal projects, never really noticed "only use pieces of the configs you wrote for other repos because over time the rules and settings seem to change"

Fragments should contain more than one child.

What an annoying rule

ahuth
0 replies
5d5h

I am defending eslint/JS's honor in other replies, but you're right... setting up eslint is too complicated (and more complicated in TS).

art0rz
5 replies
5d5h

I would really like to speed up my workflow with a faster ESLint alternative, but my ESLint configs are often very customized, with rules and plugins that are not available (yet) in the alternative solutions, making them a non-starter for me. It'll take a while for these alternatives to reach plugin/rule parity.

maccard
3 replies
5d4h

Would you consider removing your customisations to be closer to the workflows supported by these tools? One of the great things about go is that you're free to have an opinion, but if you disagree with go fmt or go build, your opinion is wrong.

bsnnkv
1 replies
5d1h

This is one of the real productivity superpowers of ecosystems like Go and Rust imo

IshKebab
0 replies
4d19h

Python and JavaScript have similarly good formatters (as long as your idiot colleagues don't insist on using yapf instead of Black, despite yapf producing non-deterministic output!). In fact I would say Rust is probably behind Prettier in terms of auto formatting. The rustfmt output is less pretty (subjective I know), the devs have made several strange decisions and it seems to be semi-abandoned (maybe partly because the devs were ... shall we say not as friendly and welcoming as the Rust community likes to bleat on about).

There are a couple of alternative formatters:

* https://github.com/andrewbaxter/genemichaels

* https://github.com/jinxdash/prettier-plugin-rust

Still, all of them are better than clang-format!

art0rz
0 replies
5d2h

No. A linter does more than formatting. Besides, some rules may simply not be relevant to what I'm working on while other rules are. Prettier works well enough for most people because it only covers syntax, and not whether or not you can use await in a loop, or should add tracks to your video element, or if jsx should be in scope, etc.

brundolf
0 replies
5d3h

Yeah. There have been lots of Rust or Go linters popping up with impressive benchmarks, but I don't think any will take over the world until they have drop-in parity

silverwind
3 replies
5d5h

Likely not worth using currently as it only has like 200 rules, while typical eslint setups have 600 or more.

leipert
2 replies
5d1h

Why not run both? Run the 200 rules from this one and the 400 other rules with eslint.

recursive
0 replies
5d

Now you have 2 problems.

lakpan
0 replies
3d15h

I don’t think that’s practical at all.

IMO oxlint currently only fits the niche “I’m starting a new project and I don’t want to install 100 dependencies and configure eslint”.

Without ts-eslint and unicorn rules this is DOA for me (but I’m hopeful)

thatxliner
2 replies
5d3h

I don’t understand how this is better than Biome. Does it support more rules than Biome?

romanhotsiy
1 replies
5d3h

Compatibility with ESLint. They implement the most common ESLint rules, and it looks like ESLint config support is a work in progress.

conaclos
0 replies
4d3h

Biome implements more ESLint rules than OXC: about 90 [0] versus about 60, which brings Biome closer to parity with ESLint. However, Biome has renamed some rules, uses camelCase rule names instead of kebab-case ones, doesn't provide some rule configurations (to avoid configuration nightmares), and slightly changes some rule behavior (as OXC does).

[0] https://github.com/biomejs/biome/discussions/3

klageveen
2 replies
5d5h

This is cool of course. But so was Rome. Which only existed for about two years. It’s one thing to build a cool tool, it’s something else entirely to sustain one over time. I need a bit more proof that this is sustainable before I rebuild our toolchain, _again_.

JimDabell
1 replies
5d4h

The Rome project continued as Biome:

https://biomejs.dev/blog/annoucing-biome/

klageveen
0 replies
5d2h

Ok, wow, why did I miss that. Thanks!

hexmiles
2 replies
5d3h

Say what you want about the "rewrite in rust" meme, but it really seems that rust started a trend of really caring about performance in everyday tools.

bfrog
1 replies
5d3h

I think it's amazing that all of these tools being rewritten in Rust are likely being built by people who don't typically code in C/C++ but could write something like this in Rust.

To me that says Rust is more accessible as a language than C/C++, and that's great for our environment (speed is green) and our joy in computing (speed is happiness).

ku1ik
0 replies
5d2h

This.

WhereIsTheTruth
0 replies
5d4h

I would be surprised and worried if a native tool were slower than a JavaScript one, even with a JIT, which is useless for short-lived programs anyway