
The Bun Shell

jchw
20 replies
17h32m

We've implemented many common commands and features like globbing, environment variables, redirection, piping, and more.

Of course on paper that sounds fine. However, something that is missing from here is some assurances of how compatible it actually is with existing shells and coreutils implementations. Is it aiming to be POSIX-compliant/compatible with Bourne shell? I am going to assume that not all GNU extensions are available; probably something like mkdir -p is, but I'd be surprised if GNU find with all of its odds and ends are there. This might be good enough, but this is a bit light on the details I think. What happens when the system has GNU coreutils? If more builtin commands are added in the future, will they magically change into the Bun implementation instead of the GNU coreutils implementation unexpectedly? I'm sure it is/will be documented...

Also, it's probably obvious, but you likely would not want to surprise-replace a Bourne-compatible shell like zsh with this in most contexts. This only makes sense in the JS ecosystem because there is already a location where you have to write commands that need to be compatible with all of these shells anyway, so standardizing on some more-useful subset of Bourne-compatible shell is mostly an upgrade: it'll be a lot more uniform, and your new subset is still going to be nearly 100% compatible with anything that worked across most platforms before, except it will work across all of the platforms as intended. (And having the nifty ability to use it inside JS scripts in an ergonomic way is a plus too, although plenty of JS libraries do similar things, so that's not too new.)

abhinavk
12 replies
11h34m

I have recently switched to using Nushell as my default shell. They were also writing their own coreutils, but recently decided instead to begin incorporating github.com/uutils/coreutils (a Rust rewrite of GNU coreutils). They target uutils to be a drop-in replacement for the GNU utils; differences from GNU are treated as bugs.

pdimitar
6 replies
6h59m

A commendable effort, but to me they are not going far enough. I'd honestly just start over, implement what seems to make sense, and only add extra stuff on top if there's huge demand for it and that demand is well argued.

I get why they don't want to do that and I respect their project a lot. But to me imitating this ancient toolchain is just perpetuating a problem.

appplication
3 replies
4h9m

Agree, I had the same thought reading the above comments. GNU is not holy correctness, it’s a first draft that worked well. Opinionated reimplementation with divergence isn’t a bad thing.

jchw
2 replies
2h42m

Trust me, if we were all starting from scratch, I would agree. However, I am not ready to drop compatibility with GNU coreutils at the moment.

pdimitar
1 replies
2h31m

Nobody is forcing you to. We can have alternative stacks for as long as we like. Any new stack is strictly opt-in.

jchw
0 replies
2h25m

I mention "GNU compatibility" for Bun Shell specifically because there are some incredibly commonly used GNU extensions even in the JS ecosystem like mkdir -p, and yes, even the GNU specific find extensions. I don't think we need total compatibility for everything. However, OTOH, Nushell is targeting being the default system shell, not just something off to the side. They could decide to be not GNU compatible and it's not like I'd complain, but I agree with their choice to be GNU compatible 100%, and it makes me more likely to consider it on my own machines.

I don't feel as though anyone is forcing me to do anything though, that's definitely not the tone I intended to convey.

ktm5j
1 replies
14m

I get where you're coming from, but there's an enormous ecosystem of software written for POSIX. You wouldn't just be starting over with new standards... you'd be tossing out a whole world of software that we already have.

pdimitar
0 replies
7m

Well, I was more talking about just having an extra terminal program that launched an alternative shell (like oilshell / nushell etc.) and occasionally migrate one of your legacy scripts to that and see if it fits.

I am definitely not advocating for a switch overnight. That would of course be too disruptive and is not a realistic scenario.

In terms of POSIX I'd start with just removing some of the quirkiest command line switches and function arguments. Just remove one and give it 3 months. Monitor feedback. Rinse and repeat.

That's what I would do.

Aerbil313
4 replies
8h45m

Yeah, this is nice but also sad. GNU coreutils is ancient at this point. I know this is probably critical for gaining user share for Nushell, there aren't enough dev resources, etc., but I wish they were innovating on this front too, with simpler and less bloated coreutils, as they are already completely changing the shell paradigm.

lf-non
2 replies
6h16m

It doesn't have to be an either-or proposition, yes?

People are free to experiment with alternative cli utils which are not burdened by backward compatibility while nushell also remains easily adoptable by users who are accustomed to coreutils.

hnlmorg
0 replies
4h45m

I agree. I've written about this before, but this is what murex (1) does. It reimplements some of coreutils where there are benefits in doing so (e.g. sed- and grep-like parsing of lists that are in formats other than flat lines of text, such as JSON arrays).

Murex does this by naming these utilities slightly differently from their POSIX counterparts. So you can still use all of the existing CLI tools, but additionally have a bunch of new stuff too.

Far too many alt shells these days try to replace coreutils and that just creates friction in my opinion.

1. https://murex.rocks

Aerbil313
0 replies
4h50m

Well yeah, but there’s always something to forcing people to move on to the next thing. I know I’m asking for too much.

abhinavk
0 replies
4h53m

I agree. I also increasingly find myself using bat, ripgrep, eza etc even with zsh.

sureglymop
2 replies
11h18m

Wait but find isn't a builtin command right? What do you mean by the odds and ends of GNU find not being there? That doesn't depend on the shell, it's an external program being called.

jackhalford
1 replies
8h57m

It does depend on the shell here; Bun is reimplementing basic commands to make them cross-platform. For example, rm -rf is not running the rm binary, because that doesn't work on Windows.

sureglymop
0 replies
8h24m

Ahh I see now! I thought they were only doing what you're describing for shell builtins. That does seem like a big effort though, now that you mention it...

airstrike
2 replies
4h37m

Is it aiming to be POSIX-compliant/compatible with Bourne shell?

No? It never claimed to be aiming to be POSIX-compliant. It seems like it's just making it easier to write "scripts", or do the equivalent of writing a script, in JS.

And if you're NOT using this, then you're also not guaranteed to have a POSIX-compliant shell since you may be on Windows, for example

jchw
1 replies
2h16m

To be honest, it absolutely should aim to be at least a strictly compatible subset of POSIX, even if it doesn't implement everything. There is really no good reason to XKCD 927 this on purpose, but the stance on this is not written anywhere that I saw. I think the approach to compatibility ought to be documented in more detail. What is considered a "bug"?

avgcorrection
0 replies
1h21m

Silence is equivalent to a “no” (posix comp.). That’s documentation enough.

netghost
0 replies
15h45m

Scanning to the bottom, it seems like the most likely use is to improve the ergonomics of simple scripts that need to shell out in some cases and also to streamline some of the more mundane package.json scripts, like deleting a directory when cleaning.

Personally, I think it seems like a nice tool blending JavaScript and shell scripting.

Jarred
17 replies
17h28m

I work on Bun - happy to answer any questions/feedback

helsontaveras18
6 replies
17h22m

How do you ensure cross platform compatibility under the hood?

Jarred
5 replies
17h12m

We implement a handful of the most common commands like cd, rm, ls, which, pwd, mv. Instead of using the system-provided ones, it uses ours.

Unlike zx/execa, we have our own shell instead of relying on a system-installed one.

reactordev
2 replies
17h5m

Is this done in zig in the core bun runtime or is it implemented as part of the standard bun lib? How much perf is there? Small commands like cd or ls I'm less interested in. You say you provide your own shell... bsh, zigsh, what?

Jarred
1 replies
16h54m

It’s nearly all in Zig.

The parser, lexer, interpreter, process execution and builtin commands are all implemented in Zig.

There’s a JS wrapper that extends Promise but JS doesn’t do much else.

The performance of the interpreter probably isn’t as good as bash, but the builtin commands should be competitive with GNU coreutils. We have spent a lot of time optimizing our node:fs implementation and this code is implemented similarly. We expect most scripts to be simple one-liners since you can use JS for anything more complicated.

reactordev
0 replies
15h42m

Good to hear you guys made the right decisions. Bun is awesome and the more performance you guys can squeeze with zig, the better. Keep it up! Bang up job already.

natrys
1 replies
11h29m

Think this should be highlighted in the article, because that's actually pretty cool, but the article gave me the impression that it's just simple sugar over child_process.

airstrike
0 replies
4h36m

I had the exact opposite impression, FWIW. I understood implicitly (assumed?) that it was implementing its own commands

oblio
3 replies
14h2m

I think Bun is written in Zig. Is Zig stable as in 1.0.0, LTS?

laserbeam
2 replies
12h36m

Nope. Zig is still changing. In my understanding, Bun is generally quick to adapt to these changes, and is one of the projects the Zig team keeps an eye on when breaking changes are introduced.

brabel
1 replies
9h50m

I've played around with Zig a few times and quickly ran into compiler bugs, things that should work but are not yet implemented, lots and lots of things completely absent in the stdlib (and good luck finding custom zig libraries for most things)... given all that, I just can't fathom how they managed to write Bun mostly in Zig (I see in their repo they do use many C libs too - so it's not just Zig, but still it's a lot of Zig)... and I wonder how horrible it must've been to go from 0.10 to 0.11 with the numerous breaking changes (even my toy project was a lot of work to migrate).

cassepipe
0 replies
8h55m

Probably because they are the kind of people who don't rely on libraries and are able to fix compiler bugs.

bryzaguy
2 replies
13h56m

This is so cool! Is it too late to change the import name? I immediately thought of jquery when seeing "$".

mst
0 replies
4h52m

    import { $ as sh } from "bun";
(if you're coming from old school client side javascript I can see the momentary confusion (and in fact I had to blink myself), but in a shell script $ making you think of a shell prompt nonetheless seems like a pretty reasonable default to me)

geuis
0 replies
13h42m

Hmm you have a point but I don't think it's a problem since this is for the cli instead of the browser. Plus $ is pretty common in the shell to indicate the prompt.

yard2010
0 replies
8h48m

Jarred thank you for making the ecosystem better in your own special way like nobody else does

IshKebab
0 replies
9h45m

Do you worry that people will use this in their actual programs, rather than just in development scripts? You've at least quoted variables, which is several steps better than most Bash scripts, but even so Bash tends to be hacky and fragile. The original JavaScript code using native APIs is more verbose but better code.

8n4vidtmkvmk
0 replies
10h1m

This is super cool, but can you fix bun -i so that it actually auto installs missing libs? That would really help with having self-contained scripts. Then I can finally start replacing my shell scripts with bun.

1vuio0pswjnm7
13 replies
15h17m

"JavaScript is the world's most popular scripting language."

Perhaps, based on usage.

But shell must be the world's most ubiquitous scripting language.

Not every computer has a Javascript engine but most have a shell.

Many, many computers have no browser, let alone a GUI. Some small form factor computers might have embedded Javascript engine but that's a minority.

No browser on the router.

pasc1878
3 replies
9h49m

Not true - shell does not run on Windows, iPhones, etc.

Even macOS has issues for those who assume Linux is the only Unix: Apple's bash is very old and does not run many scripts.

Unfortunately, JavaScript does get installed everywhere.

aragilar
2 replies
9h37m

Uh, MacOS has zsh (which has far more features than most other POSIX shells), and even old bash has POSIX compatibility.

Your TV/IoT device likely has busybox, as does your router.

You install git on windows, it's got a (POSIX) shell.

The number of places that lack a shell is tiny.

Node/deno/bun are rare, and browsers whilst being more common, still require the device to have some kind of GUI.

pasc1878
0 replies
6h21m

Please read my comment on macOS - it is explicitly about dealing with Linux people who write scripts assuming new versions of bash.

Now, if Linux used zsh, then your comment would be valid.

Bash shell scripts do not necessarily run in zsh.

Most Windows users do not install git. iPhones don't have a shell.

marwis
0 replies
9h32m

You install git on windows, it's got a (POSIX) shell.

I don't think it's in the path by default so if some program like npm calls exec("rm") it's still going to fail I think.

dumbo-octopus
3 replies
15h6m

I'd argue the opposite: more computers have an end-user accessible JavaScript engine (a browser) than an end-user accessible shell.

leptons
2 replies
13h35m

It really depends on how you define "computer".

dumbo-octopus
0 replies
51m

Can you give an example of a device that has an end-user accessible shell, but not an end-user accessible browser? Every iOS device is the opposite.

SCUSKU
0 replies
10h15m

Let's not forget, "3 billion devices run Java"

kamikaz1k
2 replies
14h17m

Lots and lots of IoT devices running node…you might be shocked

leptons
1 replies
13h34m

"Lots and lots" sounds like a guess.

brabel
0 replies
9h55m

I have no idea in general, but based on error messages from my Ikea Dirigera Hub, at least its REST API is implemented in node!

hnlmorg
0 replies
5h26m

But shell must be the world's most ubiquitous scripting language.

“Shell” isn’t a language. It’s a collection of languages. And not even a consistent one:

- Most BSDs don’t ship Bash as part of base. They default to ksh

- macOS does ship bash but an ancient version and defaults to Zsh

- Some Linux distros don't ship sh, instead symlinking it to dash or bash.

- Windows doesn’t have any of the above as part of its base install.

IshKebab
0 replies
9h47m

Basically all computers have a shell, but "shell" is not a language so that is irrelevant.

I assume you are actually talking about Bash or maybe POSIX shell? That's only available on ~20% of desktop computers.

viraptor
11 replies
15h39m

I really have a "scientists asked if they could, not if they should" feeling about this one. I've seen and tried lots of solutions like this in different languages, but now believe it's the wrong level of abstraction. If you want to provide some cross-platform way to execute ls, providing an "ls()" function is much cleaner. Otherwise you start accumulating issues like: which flags are supported, does it support streaming, what about newlines in file names, how do you deal with non-UTF filenames, what happens with colour output, is a tty attached, etc. These are new problems which you didn't have when using the native JS filesystem functions. And when they bite you, it's not trivial to see where/why.

None of the examples really look that hard to replace either. The current solutions are not great, but shell-in-JS is putting familiar lipstick on a pig without addressing the real issues.

Also, the clock is ticking for the first "string got interpolated instead of templated" security issue. It's inevitable.
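That class of bug comes down to the difference between templating and plain string interpolation. A sketch with a hypothetical quoting tag (`sh` here is an illustration of the technique, not Bun's actual implementation):

```javascript
// A hypothetical quoting tag: interpolated values are single-quoted so a
// hostile string stays a single shell word instead of being re-parsed.
function sh(strings, ...values) {
  const quote = (v) => "'" + String(v).replace(/'/g, "'\\''") + "'";
  return strings.reduce(
    (out, s, i) => out + s + (i < values.length ? quote(values[i]) : ""),
    ""
  );
}

const file = "a.txt; rm -rf /";
console.log(sh`rm ${file}`); // rm 'a.txt; rm -rf /'  (one argument)
console.log("rm " + file);   // rm a.txt; rm -rf /    (injection)
```

The security issue appears the moment someone builds the command with `+` or a plain template literal instead of going through the tag, since the quoting step is then silently skipped.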

skybrian
8 replies
15h21m

There have been many bad templating languages, but I think JSX is ok. There were many bad markup languages before Markdown, and many bad config file formats before JSON.

None of those are perfect, but they're good enough for many purposes.

Similarly, maybe it's not this one, but I suspect that someone will eventually get this right. I do think it needs to be properly standardized, as CommonMark did for Markdown.

tipiirai
6 replies
14h57m

JSON is a terrible configuration file format. Property names must be quoted, there are tons of brackets and commas, a misplaced comma breaks it, no comments are allowed, etc.

andyfleming
4 replies
14h54m

JSON5 is a more reasonable format for config files, in my opinion.

tipiirai
3 replies
14h51m

I prefer YAML on my Markdown front matter. It's more readable because of no brackets, quotes, or commas.

jerbear4328
1 replies
14h39m

Seconding the sibling, YAML may look nice but it's absolutely full of awful confusing behavior. If you don't like JSON for human-written stuff, see TOML or the like. I think JSON is great for serialization, it's so simple, but I agree we need something more readable like TOML for human-written data.

https://ruudvanasseldonk.com/2023/01/11/the-yaml-document-fr...

tipiirai
0 replies
12h34m

Do you convert your Markdown front matter to TOML? Also for your clients?

andyfleming
0 replies
14h44m

I prefer YAML on my Markdown front matter. It's more readable because of no brackets, quotes, or commas.

YAML is full of pitfalls. I think the brackets/braces and quotes are worth giving up a small amount of readability to eliminate the ambiguity.

skybrian
0 replies
14h52m

That makes it mediocre, not terrible. There are workarounds. For terrible, see sendmail.

zer00eyz
0 replies
14h0m

>> There have been many bad templating languages, but I think JSX is ok.

A bad templating language would be worlds better than JSX.... "JSX may remind you of a template language, but it comes with the full power of JavaScript".

JSX is javascript.

This is the very sin that PHP spent its early years getting thrown under the bus for.

If you build a rich React app and then figure out later that you need a bunch of static, partially static, late-hydration pages, you're going to be running node/bun in production to generate those, because it's not like you can hand JSX to another, performant language.

And yes, I'm aware of things like packed. The problem is that JSX templates, to a large degree, are not compatible.

Code in templates was bad when PHP did it, when Perl did it, it's bad now.

8n4vidtmkvmk
1 replies
9h31m

I wouldn't ls in a random JS script either; use readdir exactly like shown in the article. But to hack something together quickly in package.json? Yes, absolutely. I'm not turning all my one-liners into standalone scripts just to avoid maybe using an arg that never got implemented. And now it's cross-platform too, so I only have to test it on one system.

viraptor
0 replies
8h58m

And now it's cross platform too so I only have to test it on 1 system.

Not so fast. Did you uppercase the first letter of the file and test on macOS and Windows? It will fail on Linux. Did you create a file called con.js and test it on a non-Windows machine? It will fail on Windows. Did you rely on sub-second-precision timestamps? It will fail on some Windows machines.

This is a leaky abstraction. People will run into problems.

brailsafe
8 replies
16h49m

Using Windows for development feels like using Linux for anything but server-side work, or macOS for gaming: it'll probably work if you have light requirements and don't use the shell that often. But when I think about the last time I tried it, it almost makes me feel fine paying $500 for a RAM upgrade on my next Mac.

LucasOe
2 replies
10h35m

If you want bash-like syntax, you can always run MSYS2 / Cygwin / WSL on Windows. But 99% of the time I just need to run basic commands like git and maybe pipe them to ripgrep or fzf, and frankly PowerShell is fine for that. For anything more complicated, I'll write a script in Python or maybe JavaScript anyway, so I don't really care what shell I use as long as I can customize it and it can run basic commands. And if you don't like PowerShell, there's Nushell.

marwis
0 replies
9h26m

Actually, PowerShell is terrible for piping anything native. It will damage whatever data you pipe.

That's because, unlike other shells where piping just passes through a binary stream, PowerShell is built around the concept of piping streams of .NET objects. It will try to parse the output of one native command into a list of .NET strings, one for each line, and then print them to the input of another command - not only making it extremely slow, but also changing newlines from \n to \r\n, and maybe other special characters.

RadiozRadioz
0 replies
9h52m

You could save yourself a lot of time by learning more bash so you didn't have to break out a programming language any time things get more complicated than piping into grep.

BirAdam
1 replies
15h40m

Plenty of people use Windows for development, Linux for development and gaming, and macOS for everything including servers. It’s all about preference.

Ygg2
0 replies
10h58m

Wanting to use a cool dev tool on Windows boils down to: can you host a Linux VM?

E.g. How do you profile Rust programs on Windows in RustRover/Clion? How do you run Coz on Windows? Basically WSL or a full VM.

slig
0 replies
9h31m

I had the same thought and had an Intel Mac, but then I tried WSL2 and it just works. Now my daily driver is a PC with specs that I wouldn't be able to afford if it was a Mac.

shzhdbi09gv8ioi
0 replies
5h10m

I've been developing on Windows using Go and Rust for a couple of years now.

I just use VS Code and native toolchains.

I don't even use WSL2, but I have a basically identical experience to what I have on my Linux desktop or my macOS desktop.

Windows and Mac are the slower of the trinity, but not by a huge margin.

With Windows you really must disable the Windows Defender stuff for your dev folder or performance will tank as it scans build artifacts for viruses all of the time.

I have successfully developed a large number of cross-OS apps and am currently working on a game.

I think the OS at this point is not relevant.

I mostly game in Linux these days, so the Windows install is used less and less.

cerved
0 replies
9h25m

I just do everything within WSL2 which works well enough for my needs

lucasyvas
7 replies
13h37m

This looks exactly like zx by Google. And that's probably a good thing.

https://github.com/google/zx

lambda
5 replies
10h37m

Being in the Google GitHub org doesn't mean "by Google", it means "by someone who works at Google."

ycuser2
3 replies
10h34m

But doesn't he get paid by Google to code this?

shubhamkrm
0 replies
4h19m

Not necessarily. You can write open source code in your own time and publish under Google org on GitHub. This is the recommended process if you don’t care about retaining the copyright to your code.

If someone does want to retain copyright, there’s another process for getting approval.

atorodius
0 replies
10h5m

They do, otherwise it won’t be in that repo

FiloSottile
0 replies
8h46m

No, something being under github.com/google means the person who started it was paid by Google, not paid by Google to code this. Google contracts (like most tech contracts in the US) have ridiculously broad IP assignment clauses, so unless you go through a lengthy process to request Google disown something, they own anything you code, and they insist you open source your things under github.com/google.

You decide your own definitions, but that's very different from "Gmail by Google" or even "Go by Google" in my book. Note how the main author has "Ex-Google" in their bio, too.

redder23
0 replies
8h58m

To me it's the same thing: they are paid by Google to code stuff that is put in their org and not their private accounts/orgs, so to me this IS in fact "by Google".

nateb2022
0 replies
13h21m

Self-Perfection
7 replies
10h22m

One of the selling points of this post is that bash is slow to start. But how fast is the Bun shell? Has anyone compared bash and Bun shell start times?

cerved
5 replies
9h29m

An absolutely ludicrous point: shells have some of the fastest startup times of all processes.

shzhdbi09gv8ioi
4 replies
5h21m

If you are mindful and optimize your shell config, yeah.

But common stuff like zsh with oh-my-zsh is known to be rather slow, as in several hundred milliseconds to start.

Depending on you, of course, that might be considered fast. I consider it insanely slow.

My shell of preference, "nushell":

Startup Time: 24ms 448µs 147ns

Ideally it would launch in < 16ms (one frame at 60Hz), but I can live with this ;-)

cerved
2 replies
2h23m

Why would you need to optimize the config? I'm not talking about running an interactive shell.

shzhdbi09gv8ioi
1 replies
1h43m

You commented on someone mentioning bash being slow to start.

So your parent was discussing interactive shells, and I assumed you were too, since you didn't state otherwise.

cerved
0 replies
54m

They were quoting the article, which complains that "shells are too slow to start", with examples of running echo in non-interactive shells.

Nobody is talking about the startup time of interactive shells.

billywhizz
0 replies
1h5m

on my crappy old i5 dell laptop running ubuntu 22.04 i see ~1.5ms for bash and ~1ms for sh. i dunno where these really bad numbers are coming from tbh.

vojvod
0 replies
6h49m

They're not claiming Bun is faster to start, only that for use cases where you might otherwise need to shell out hundreds of times, Bun only needs to start once.

worksonmine
5 replies
17h34m

It feels like the people behind bun are trying to differentiate from node so much that they sometimes don't stop to ask why.

I'm sure there's a use-case somewhere, but if I'm using JS I will just use a regex instead of reaching for grep. If I want the shell, I'll use the shell.
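The regex-instead-of-grep approach can be sketched in a few lines (the sample log lines are made up for illustration):

```javascript
// Filter lines in-process with a regex instead of piping through grep.
const lines = ["error: disk full", "info: ok", "error: timeout"];
const errors = lines.filter((line) => /^error:/.test(line));
console.log(errors.length); // 2
```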

neongreen
2 replies
17h29m

The most common usecase is probably “I have `rm` in the scripts section of package.json, and it doesn’t work on windows.”

LudwigNagasena
1 replies
12h31m

Bun provides a limited, experimental native build for Windows.

# WARNING: No stability is guaranteed on the experimental Windows builds

Now your scripts will simply break randomly on Windows and you won't even know why!

8n4vidtmkvmk
0 replies
9h38m

I don't think that's permanent. Eventually they'll have a stable release on Windows.

afavour
0 replies
15h52m

It does feel like Bun is trying to do a lot. And when the company depends on VC funding I think it’s fair to question whether you want to rely on them for a core project functionality.

8n4vidtmkvmk
0 replies
9h36m

I'd personally rather write my long shell scripts in js for my js-based project. And I wouldn't bring in grep to run a regex either but I'd use it to run a myriad of other tools that aren't implemented in js.

nathan_phoenix
5 replies
21h55m

For something which works across all JS runtimes (Deno, Node) and achieves basically the same, check out the popular JS library Execa[1]. Works like a charm!

Another alternative is the zx[2] JS library, though I haven't tested it.

[1]: https://github.com/sindresorhus/execa

[2]: https://github.com/google/zx

qazxcvbnm
2 replies
15h17m

One thing that surprised me about Node was how slow the default way to shelling out (child_process) could be (probably https://github.com/nodejs/node/issues/14917).

Although, according to the linked issue, it has been "fixed", I still ran into a problem with a batch script that was calling ImageMagick through a shell for each file in a massive directory. Profiling told me that starting (not completing - and yes, I was using the async version) the child process got increasingly slow, from sub-millisecond for the first few spawns to eventually hundreds of milliseconds or even seconds... In the end I had to resort to a single spawn of a bash script that in turn did all the shelling out.

It seems that the linked execa still relies on child_process and therefore has the same issue. It saddens me that the only package for Node that appears to actually fix this and provide a workaround seems to be https://github.com/TritonDataCenter/node-spawn-async, and it's unmaintained.

kvakil
1 replies
13h10m

I worked on that Node.js issue. If you can share a repro, I'd love to take a look: https://github.com/nodejs/node/issues/new?assignees=&labels=...

qazxcvbnm
0 replies
6h30m

That's very kind of you - I tried making a dead-simple repro just now with Node 20, and it seemed to run without the problem. I'll try reproducing it in a bit with my original use case of imagemagick and see if the issue still exists.

neongreen
0 replies
17h33m

I’m using zx and the API seems very similar to what is described in the post.

Which bun also acknowledges here:

https://github.com/oven-sh/bun/blob/main/docs/runtime/shell....

I suppose one significant difference is that bun reimplements shell built-ins. I believe that zx simply executes bash or powershell and fails if neither is available.

dsherret
0 replies
14h11m

For Deno there is https://github.com/dsherret/dax which is also zx inspired and has a cross platform shell built-in.

gnarlouse
5 replies
22h3m

Can somebody explain why they're attributing zsh to macOS? It's clearly cross-platform.

stephenr
2 replies
21h53m

The only guess I have is that it's the default interactive shell on macOS, while bash is probably more common on GNU systems.

But that also doesn't make much sense, given that this is about non-interactive scripts.

To be honest it's kind of crazy that for all the work that's gone into nodejs, it either doesn't have, or people don't know about, basic functionality that these examples are running a shell for.

gnarlouse
1 replies
21h43m

default interactive shell

Is that fairly new? I thought the default shell was bash.

stephenr
0 replies
21h10m

Since Catalina, released in 2019.

mejthemage
0 replies
1h31m

Thank you. As someone who avoids Apple at all costs but loves zsh, this really rubbed me the wrong way. Pretty sure MacOS used to use bash too.

chasil
0 replies
17h26m

It would be equally appropriate/wrong to say that mksh, the MirBSD Korn shell, is Android's system shell.

The manual page for mksh also mentions Android in the introduction for those who do not understand the role.

baudaux
5 replies
8h24m

Bash works well in https://exaequos.com. It is compiled to WebAssembly.

hnlmorg
4 replies
8h14m

Cool project, but it's completely impossible to navigate back. Something on that page is spamming my browser's back-button history.

baudaux
3 replies
7h47m

Strange, history is not used. Which browser are you using?

hnlmorg
1 replies
4h53m

Safari on iOS.

baudaux
0 replies
1h28m

Thank you for your feedback. I will check

OscarDC
0 replies
4h33m

I have the same back-button issues on both Firefox and Chrome (on Linux, if it matters) when going to this website. Multiple pages in the history are, e.g., just black screens.

askonomm
5 replies
17h52m

I didn't know, but apparently you can execute a function in JS without parentheses using backticks (`), e.g.:

  functionName`param`
and whatever is inside the backticks gets sent to the function as an array. It's also what Bun is doing with its $ (dollar sign) function for executing shell commands. There's so much weird syntax magic in JS.
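A minimal demonstration of what the tag function actually receives - the literal string pieces as an array, plus each interpolated value as a separate argument (`show` is a hypothetical name for illustration):

```javascript
// `show` just exposes what any tag function receives.
function show(strings, ...values) {
  return { strings: [...strings], values };
}

const cmd = show`echo ${1 + 1} world`;
console.log(cmd.strings); // [ 'echo ', ' world' ]
console.log(cmd.values);  // [ 2 ]
```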

n0w
4 replies
17h46m

MobiusHorizons
2 replies
17h20m

Tagged templates are really cool. They are a reasonably simple extension of template strings (the strings that begin and end with backticks ` instead of single or double quotes), which make constructing strings very easy by allowing arbitrary code inside a ${} block.

So if you think about it template strings are like a tagged template who's function just calls .toString() and concatenates each argument it is given. There are some really nice safe sql libraries that use this for constructing queries. They are useful basically anywhere you might want string interpolation and a bit of type safety, or special handling of different types.

Lit Element is also a very clever usage of tagged templates.
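As a sketch of the safe-SQL idea (not any particular library's API — the tag name and placeholder style here are illustrative): the tag can turn every interpolation into a placeholder and keep the values in a separate list, so user input is never concatenated into the query text.

```javascript
// Hypothetical sql tag: literal pieces become the query text,
// interpolated values become numbered placeholders plus a params list.
function sql(strings, ...values) {
  const text = strings.reduce((acc, part, i) => acc + "$" + i + part);
  return { text, params: values };
}

const id = "1; DROP TABLE users";
const q = sql`SELECT * FROM users WHERE id = ${id}`;
// q.text:   "SELECT * FROM users WHERE id = $1"
// q.params: ["1; DROP TABLE users"]
```

The injection attempt never touches the SQL string; it travels as a bound parameter instead.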

8n4vidtmkvmk
1 replies
9h49m

I just wish they had something like Python's triple quotes, heredocs, or C++ raw strings. A single backtick delimiter makes it hard to use backticks inside the string.
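For what it's worth, a literal backtick can still appear inside a template literal, either backslash-escaped or interpolated, though neither is as pleasant as a heredoc:

```javascript
// Two workarounds for a literal backtick inside a template string:
const a = `code: \`ls -la\``;         // backslash escape
const b = `code: ${"`"}ls -la${"`"}`; // interpolate the backtick
// both evaluate to: code: `ls -la`
```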

mst
0 replies
4h26m

I especially like recent perls' support for <<~ as a << that strips indentation so you can keep your HERE doc contents indented along with the rest of the code.

(and everybody with a HERE doc implementation that doesn't have that yet should absolutely implement it, people who can't stand perl deserve access to that feature too ;)

askonomm
0 replies
17h43m

Ah! There's a lot more to this than just executing a mere function it seems. Consider me educated!

simonjgreen
4 replies
9h59m

Minor tangent, but plucked from that article: why is ‘rimraf’ downloaded 60m+ times a week?! Why is that a thing that needs a library? (Asking as a systems guy, not a programmer)

mike_hearn
0 replies
4h19m

Which languages have a recursive delete in their standard library, other than shell? Do any? HShell (see other comments) also implements its own rm() function because the JDK standard library is too low level to support something like that.
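For what it's worth, Node's standard library did eventually grow this: `fs.rm`/`fs.rmSync` accept `recursive` and `force` options (available since Node 14.14), which together behave like `rm -rf`:

```javascript
import { mkdirSync, writeFileSync, rmSync, existsSync } from "node:fs";

// Create a nested directory tree, then remove it in one call.
mkdirSync("tmp-rm-demo/nested", { recursive: true });
writeFileSync("tmp-rm-demo/nested/file.txt", "hello");

// recursive + force is roughly `rm -rf tmp-rm-demo`:
// delete recursively, and don't error if the path is missing.
rmSync("tmp-rm-demo", { recursive: true, force: true });

const gone = !existsSync("tmp-rm-demo");
```

rimraf predates this API by many years, which is a big part of why it became so entrenched in the ecosystem.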

httgp
0 replies
9h57m

It’s quite often used in npm scripts to clean up stuff (say, between builds), and many developers prefer it over native solutions like `rm` and `del` as it gives them a cross-platform way of cleaning up files and folders.

goenning
0 replies
9h47m

And why is it not called `rmrf`

M4v3R
0 replies
9h56m

The OP already explained that - because people want their package.json scripts to be cross-platform, and "rm" does not exist on Windows. So instead you add rimraf to your dependencies and use that instead of rm in your scripts.
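A typical package.json then looks something like this (the script name and target directory are illustrative):

```json
{
  "scripts": {
    "clean": "rimraf dist"
  }
}
```

`npm run clean` then works identically on Windows, macOS, and Linux.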

jftuga
4 replies
16h28m

Their info about "rm -rf" not working in Windows is slightly misleading. In PowerShell, you can accomplish this by running:

rm -r -fo my-folder-name

ARandomerDude
1 replies
16h10m

How many Linux/Mac devs know that? We live in a left-pad world, of course people will install a package to get a simple job done.

charrondev
0 replies
14h25m

This is mostly useful to developers that don’t develop on windows. Typically server side and browser based JavaScript programs are deployed on Linux systems in production.

Today, if I want to reliably automate some scripting, I do NOT use shell scripting, because it introduces a bunch of implicit dependencies on the existing system.

Instead I write these utility scripts in either JavaScript or PHP depending on the project, and this seems to give JavaScript a slightly nicer, consistent interface for performing basic functionality, built directly into the runtime.

aarjithn
0 replies
15h12m

Well, it’s still not “rm -rf”, right? So why is it misleading?

The typical problem is having to run the same script (say, rm -rf dist) on both Windows and Mac systems; the command itself isn't the point.

Gabrys1
0 replies
9h31m

The reason I used rimraf was that it gave JS a way to delete all files in a directory. Why would I need to shell out to "rm -rf dir" and be responsible for argument escaping, error handling, different shells, etc.? If that's what the library does, ok, but it can do it in any way the library devs decided was best. I offloaded that decision to them (putting more trust in them to do it right than in myself).

hipadev23
4 replies
15h29m

On a Linux x64 Hetzner Arch Linux machine, it takes about 7ms:

    hyperfine --warmup 3 'bash -c "echo hello"' 'sh -c "echo hello"' -N
On my home machine and a mid-range AWS EC2 instance, the echoes run in ~0.5ms for bash and ~0.3ms for sh.

Next time don't run benchmarks on a garbage host like Hetzner. Their hardware is grossly oversold, their support is abysmal, and they null-route traffic anytime there's a blip.

jgalt212
3 replies
15h9m

It's been a long time since I read a post where someone bashes Hetzner. Usually they are well received. We use their VMs as backup servers, so we're not really pushing them hard. The most negative thing I've read about them is that they have much stronger KYC than AWS.

antoniojtorres
1 replies
14h15m

Agreed, have been a hetzner customer for years running a myriad of services there without issues.

fijiaarone
0 replies
14h3m

People were running high-powered servers and databases on Pentium IIs. Most cloud servers (and programming frameworks) don't exceed their performance.

hipadev23
0 replies
13h50m

Not trying to derail the thread, but having used a variety of dedicated, virtualized, and shared hosts since the mid 90's, Hetzner was hands-down the worst experience I've ever encountered. Their KYC process is indeed arduous but that's not my complaint, in fact I naively believed it meant they took things seriously.

They null-routed my server on launch day because their false-positive laden abuse detection thought it was being attacked. Despite filling out their attestation form and replying to support that my server was completely under my control and not being attacked, they still null-routed the box, and took ~8 hours to respond to my pleas (the first half of which was during normal CEST support hours) to re-enable traffic, along with an extremely patronizing tone when they did. After that event, looking at online review sites (e.g. trustpilot) and webhosting forums, these are common complaints when someone uses Hetzner and actually attempts to use the CPU, memory, or bandwidth resources included with their server.

After they killed my server, I quickly spun up the exact same services with a different provider and haven't had any issues since.

arrakeen
4 replies
15h55m

if you're writing "await" before every function call maybe that should be the default.

fijiaarone
1 replies
13h58m

Or maybe synchronous should be the default.

8n4vidtmkvmk
0 replies
9h30m

Isn't that what he's implying?

dumbo-octopus
0 replies
15h3m

No. Places where execution can be interrupted should be obvious and explicit.

cutler
0 replies
12h56m

Or maybe a wake-up call that something is off. Shell scripting is a domain where imperative/procedural code shines.

MuffinFlavored
4 replies
17h29m

This is like... eval? I thought eval was bad?

voiper1
0 replies
11h11m

Nope - there's at least one layer of safety:

For security, all template variables are escaped:

    // This will run `ls 'foo.js; rm -rf /'`
    const results = await $`ls ${unknown}`;
    console.log(results.stderr.toString());
    // ls: cannot access 'foo.js; rm -rf /': No such file or directory

seniorsassycat
0 replies
10h47m

Potential User input is separated from code in the tagged template. $`rm ${"dir"}` is not the same as $`rm dir`

lioeters
0 replies
15h30m

Eval is bad if you're passing it untrusted input. It can be useful in some situations if you know what you're doing.

As for Bun Shell, it runs what you tell it to, just like a shell script or command line in the terminal. It's similar to running file system functions or spawning child processes. It will let you do some damage, sure, but that's your responsibility, "with great power", etc.

ants_everywhere
0 replies
15h43m

Eval with an uncanny-valley shell whose commands behave similarly to the way you expect, but not necessarily exactly the way you expect.

forrestthewoods
3 replies
11h48m

I’m increasingly fed up with all shell scripts.

Sure shell scripts are great when they’re small. Except then they become not small. But they don’t get rewritten.

Piping strings of unstructured text between programs is an error-prone nightmare.

I want full debugger support, strong typing, cross platform support, and libraries not programs.

Python isn’t my favorite language. But I’ll take a debugable Python script over bash hell 100% of the time.

shzhdbi09gv8ioi
0 replies
5h4m

I've been using Go lately to replace shell scripts.

Pros:

  * LSP
  * faster compile times than the node startup time
  * cross-platform
  * strong types
  * great std + many libs available
  * not bash script
  * fits easily in CI
Cons:

  * not bash script :-)

mike_hearn
0 replies
6h13m

Take a look at my comment above about hshell. It has all those things you ask for. Feedback would be useful!

The problem with this space is incentives. HShell exists because it was easy to build given the structure of our main product, and I wanted it for our own internal use. But making it a stable long term product on which anyone can rely requires signing up for long term maintenance, and nobody pays for shells (or do they?). So it's got to be a labor of love.

8n4vidtmkvmk
0 replies
9h42m

That's part of the beauty of bun though. You can write it in typescript instead and run it directly with bun. And now with this you can weave in a call to a binary very easily if you need to.

RadiozRadioz
3 replies
9h58m

Great, it's approaching the ergonomics of what Perl has offered for decades. And Perl still does it better.

Culonavirus
2 replies
6h3m

Um, what? Perl in 2024 is just a (far) worse PHP. Or why not just use Python at that point?

mst
0 replies
4h31m

I ... am not sure which of the three languages you're familiar with, but I don't think that's remotely correct.

perl has block based lexical scoping and compile time variable name checking.

python and PHP both have neither, which continues to make me sad because I actually -do- believe that explicit is often better than implicit.

perl has dynamic scoping (including for variables inside async functions, using the newer 'dynamically' keyword rather than the classic 'local'), which I don't think PHP has at all, and which python context managers are -slowly- approaching feature parity with.

perl gives you access to solid async/await support, a defer keyword, more powerful/flexible OO than PHP or python, and a database/ORM stack to which, of those I've used in other dynamic languages, really only sqlalchemy is a meaningful competitor.

Sure, if you're writing perl like it's still 2004, it -does- kinda suck. But so did PHP 4.

The "why not use" argument is probably better made with respect to modern javascript (I'm really enjoying bun when I have the choice and I can live with node when I don't), since "let" and "use strict;" give you -close- to the same level of variable scoping, plus usable lambdas (though the dynamic scoping still sucks, hence things like React context being ... well, like they are), and the modern JS JITs smoke most things performance-wise.

Oh, and a bunch of people who used perl for systems/sysadmin type stuff have switched to go, which also makes complete sense - but using python after using perl -properly- has a significant tendency to invoke "but where's the other half of the language?" type feelings, and I think that's only somewhat unfair.

(python is still awesome in its own right, and PHP these days is at least tolerable (and I continue to be amazingly impressed by the things people -write- in PHP), but "worse php" is just a -silly- thing to say)

NB: If anybody wants specific examples, please feel free to ask, but this comment already got long enough, I think.

Aerbil313
0 replies
4h40m

Raku (Perl 6) is a unique and great language for single developer productivity.

simplyinfinity
2 replies
7h18m

In the .NET world, we have a namespace called System.IO that houses cross-platform implementations of functions for working with directories and files, including searching for files. Can't we just have a standard JS library in the same spirit, rather than half-emulating a shell just so someone can run rm -rf? All of this seems extremely unnecessary: time and energy wasted solving the wrong problem.

pc_edwin
0 replies
7h14m

There are so many minor (sometimes major) differences in how even macOS (zsh/bash) and Linux (bash) work, let alone Windows (cmd, PowerShell).

A layer that abstracts these differences away can be very useful for building CLIs, and just apps, with JavaScript.

mst
0 replies
4h21m

The article starts by mentioning the programmatic interfaces, but the point here is to be better able to write quick, clear scripts, not full programs.

It's solving a -different- problem, and it may not be a problem that you personally have, but as I think the various excited comments rather demonstrate, it absolutely -is- a problem plenty of people -do- have and it's a really nice thing to have available for us.

38
2 replies
16h18m
oblio
1 replies
13h57m

This is super confusing considering the text in the article.

8n4vidtmkvmk
0 replies
9h44m

There's an unstable windows build of bun. I imagine they're working out the final few kinks but want to make sure this new lib is ready to go now

skybrian
1 replies
17h38m

I guess this is too new for there to be any language documentation yet? Or perhaps I missed it.

I'm wondering if it's picked up any ideas from oil shell [1].

[1] https://www.oilshell.org/

dharmab
0 replies
16h16m

There's a short doc here https://bun.sh/docs/runtime/shell but it notes that the shell is not yet feature-stable.

postepowanieadm
1 replies
10h39m

js is the new perl

8n4vidtmkvmk
0 replies
9h49m

I wish they'd adopt pcre

pjmlp
1 replies
10h27m

Perl and Python already went through this path without much uptake.

mst
0 replies
4h17m

I've seen quite a lot of "shell but in perl" and "shell but in python" in the wild, but also I think this is primarily aimed at "this particular utility that ships with $library would most naturally be a shell script but it's a lot more convenient overall to have a nice way to write something similar-ish-looking that shares the interpreter with everything else."

If nothing else it'll make development-side package.json commands easier and nicer, which is still IMO a net win.

o11c
1 replies
16h2m

Note that the `hyperfine` example is actually measuring two nested shells. Unless hyperfine implements a shell-parser of its own, of course.

Jarred
0 replies
15h48m

The -N flag tells hyperfine to not run it in a shell, which means it is not nested.

mythz
1 replies
13h56m

Looks good, will consider it next time I need to create a complex shell script. For creating cross-platform scripts in package.json I've settled on shx [1].

[1] https://www.npmjs.com/package/shx

notpushkin
0 replies
5h53m

Maybe give bsx a try instead? (disclaimer: I'm the maintainer)

    pnpm add --dev bsx

    {
        "scripts": {"cleanup": "bsx rm -rf some-cache"}
    }
https://npm.im/bsx

This would use busybox-w32 on Windows, and the regular shell on other platforms. You do have your usual footguns, like some *nixes not having some tools installed out of the box, but for 95% of cases this should be fine, and it's only 536 kB (vs 1.5M for shx)!

cutler
1 replies
13h8m

JS everything. No thanks. Show me a one-liner in Bun which comes anywhere near your average bread-and-butter bash + Linux utils pipeline. Async may have its uses, but shell scripts ain't one of them. Shell scripts are imperative/procedural for a reason: sequential processing.

8n4vidtmkvmk
0 replies
9h25m

That's literally what this is, though. You can run your bash script using bun, and it might even run faster because the shell is implemented in Zig.

This post isn't super clear, but there are two things here: you can run your bash from inside JS, or you can run it directly if that's what you prefer.

blackhaj7
1 replies
15h59m

Love that bun just implements anything that could be useful.

They are busy building useful stuff whilst others pontificate about what they should/shouldn’t build

rco8786
0 replies
14h34m

Seriously. Like it’s just one continuous hack week over there.

anon-3988
1 replies
12h18m

I am pretty sure Python already has all of these. One could just write a Python CLI that wraps the Python stdlib to do all of this.

Cpoll
0 replies
11h59m

Python's os.system (and subprocess with shell=True) offload the work to your system's shell. Bun, on the other hand, runs these scripts in its own runtime.

tipiirai
0 replies
12h0m

I love Bun. I no longer use Node for development. Hardly any gotchas anymore. It's just faster all over. Especially `bun test`. Highly recommended. Thank you @Jarred!

natrys
0 replies
11h50m

$ hyperfine --warmup 3 'bash -c "echo hello"' 'sh -c "echo hello"' -N

Small nitpick but on Arch, /bin/sh is a symlink to bash so it's measuring the same thing.

On many systems like Debian, /bin/sh is dash instead (though default interactive shell remains bash) which is actually a few times faster, for start up and in general.

mjburgess
0 replies
10h19m
mike_hearn
0 replies
6h28m

It's a really good idea and one my company implemented on top of Kotlin Scripting as well. There's a lot of scope for competitors to bash. It's not really a public product (and not open source), but a while ago I uploaded a version and docsite to show some friends:

https://hshell.hydraulic.dev/13.0/

I'm not sure what to do with it, maintaining open source projects can be a lot of work but I doubt there's much of a market for such a tool. Still, Hshell has some neat features I hope to see in other bash competitors:

• Fully battle tested on Windows. The core code is the same as in Conveyor, a commercial product. The APIs abstract Win/UNIX differences like xattrs, permission bits, path delimiters, built in commands etc. The blog post talks about Windows but iirc Bun itself doesn't really work there yet.

• Fairly extensive shell API with commands like mv, cp, wget, find, hash and so on. The semantics deviate from POSIX in some places for convenience, for example, commands are recursive by default so there's no need for a separate "rm -rf" type command. Regular rm will do the right thing when applied to a directory. You can also do things like `sha256("directory")` and it'll recursively hash the directory contents. Operations execute in parallel by default which is a big win on SSDs.

• Run commands like this:

    val result = "foo --bar"()
Running commands has some nice features: you can redirect output to both files, the log and lambda functions, and the type of "result" is flexible. Declare it as List<String> and you get a list of lines, declare it as String and the stdout is all in one.

• Built in progress tracking for all long running operations, complete with a nice animated pulsing Unicode progress bar. You can also track sub-tasks and those get an equally nice rendering (see the Conveyor demo video for an example). There are extensions to collections and streams that let you iterate over them with automatic progress tracking.

• You can ssh to a remote machine and the shell API continues to work. Executing commands runs them remotely. If you use the built-in wget command it will run that download remotely too, but with progress callbacks and other settings propagated from the local script.

• You can define high quality CLIs by annotating top level variables. There are path/directory assertions that show spelling suggestions if they're not found.

• Can easily import any dependency from Maven Central.

And so on. We use it for all our scripting needs internally now and it's a real delight.

Compared to Bun Scripting there are a few downsides:

1. The kotlin compiler is slow, so editing a script incurs a delay of several seconds (running is fast). JS doesn't have that issue and Bun is especially fast to start. JetBrains are making it faster, and I want to experiment with compiling kotlinc to a native image at some point, but we never got around to it.

2. Bun's automatic shell escaping is really nice! I think we'd have to wait for the equivalent string interpolation feature to ship in Java and then be exposed to Kotlin. It's being worked on at the moment.

3. Obviously, Bun Scripting aims to be a product, whereas hshell is more an internal thing that we're not sure whether to try and grow a userbase for or not. So Bun is more practically useful today. For example the full API docs for hshell are still internal, only the general user guide is public.

4. Editing Kotlin scripts works best in IntelliJ and IntelliJ is an IDE more than an editor. It really wants files to be organized into projects, which doesn't fit the more ad hoc nature of shell scripts. It's a minor irritant, but real.

I think with some more work these problems can be fixed. For now, hopefully hshell's feature set inspires some other people!

metaltyphoon
0 replies
17h21m

So is this akin to PowerShell Core, but with JS as the language?

jcadam
0 replies
14h53m

Javascript shell?! It's like c shell, only worse.

ianwalter
0 replies
4h32m

Really cool. How would you use config files from other shells like .zshrc? We use direnv and mise to scope binary versions to project directories and just wondering how stuff like that would work.

guax
0 replies
9h37m

Shells are a solved problem!!

frompdx
0 replies
17h14m

Interesting. It reminds me a bit of janet-sh. I can see the utility if you are working with JavaScript or TypeScript. It might even work with ClojureScript using shadow-cljs.

https://github.com/andrewchambers/janet-sh

floof
0 replies
12h7m

Node's `execSync` is pretty much this easy to use as well.
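For comparison, a minimal Node equivalent (note that it runs the command through the system shell, so the usual quoting and escaping caveats apply; Bun's $ escapes interpolations for you):

```javascript
import { execSync } from "node:child_process";

// execSync blocks until the command exits and returns its stdout.
const out = execSync("echo hello").toString().trim();
```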

devnonymous
0 replies
9h25m

My understanding from reading the post is that this is a shell in the same way the python, perl, php, pgsql, or mysql prompt is a shell. This isn't an interactive shell, afaict.

For instance (I haven't tried it out): could someone who has tried this on Linux tell me what happens when I type Ctrl-Z while Bun is in the middle of running a command or pipeline? Do I get a Bun shell prompt?

binary132
0 replies
14h52m

This is neat, but a) it strikes me that what's powerful about shell scripting is that it lets you easily wrangle multiple independent utilities that don't need to be contained within the shell stdlib (maybe I'm missing something but I didn't see any emphasis on that), and b) that embedding a language as a string inside another language is very rarely a good UX. I like that it's a really portable shell though. Shell portability is actually a pretty big problem.

benpacker
0 replies
16h39m

I like this, and I like Bun, and I’m going to use this, but I’m nervous about whether Bun’s ultimate share of the server-side cloud Javascript will be big enough to sustain the maintenance surface area they are carving out for themselves.

Hope they succeed though!

TheAceOfHearts
0 replies
13h1m

When I need shell-like utilities from my JS scripts I've previously used shelljs [0]. It's neat that Bun is adding more built-in utilities though.

[0] https://github.com/shelljs/shelljs

CuriouslyC
0 replies
14h42m

This looks very cool on the surface. There are a lot of systems out there with a mishmash of javascript and shell, those systems are stitched together in arbitrary ways, and it can often make them hard to debug and test. This looks like it'll make it easier to write and test those integrations, which is a win.

My main concern is that when things don't work as expected, the added layer of complexity will make it harder to figure out why. Hopefully there aren't too many rough edges.