
JC converts the output of popular command-line tools to JSON

kbknapp
44 replies
3d3h

Really cool idea but this gives me anxiety just thinking about how it has to be maintained. Taking into account versions, command flags changing output, etc., it all seems like a nightmare to maintain, to the point where I'm assuming actual usage of this will work great for a few cases but quickly lose its novelty beyond basic ones. Not to mention using `--<CMD>` for the tool seems like a poor choice, as your help/manpage will end up being thousands of lines long because each new parser will require a new flag.

verdverm
35 replies
3d3h

This is one of the better use cases for LLMs, which have shown good capability at turning unstructured text into structured objects

ninkendo
9 replies
3d2h

If LLMs were local and cheap, sure. They’re just too heavyweight a tool to use for simple CLI output manipulation today. I don’t want to send everything to the cloud (and pay a fee), and even if it was a local LLM, I don’t want it to eat all my RAM and battery to do simple text manipulation.

In 20 years, assuming some semblance of moore’s law still holds for storage/RAM/gpu, I’m right there with you.

d3nj4l
7 replies
3d1h

On my M1 Pro/16GB RAM mac I get decently fast, fully local LLMs which are good enough to do this sort of thing. I use them in scripts all the time. Granted, I haven’t checked the impact on the battery life I get, but I definitely haven’t noticed any differences in my regular use.

mosselman
6 replies
3d1h

Which models do you run and how?

verdverm
2 replies
3d

https://github.com/ggerganov/llama.cpp is a popular local first approach. LLaMa is a good place to start, though I typically use a model from Vertex AI via API

mosselman
1 replies
1d22h

Thanks. I have llama.cpp locally. How do you use it in scripts? As in how do you specifically, not how would one.

d3nj4l
0 replies
1d1h

I have ollama's server running, and I interact with it via the REST API. My preferred model right now is Intel's neural chat, but I'm going to experiment with a few more over the holidays.
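
For reference, this kind of call is all it takes (a rough sketch, assuming a default ollama install listening on localhost:11434, the "neural-chat" model already pulled, and jq available):

    # ask the local model to do the text wrangling; non-streaming response, then pull out the text
    curl -s http://localhost:11434/api/generate \
      -d '{"model":"neural-chat","prompt":"Rewrite as one sentence: /dev/sda1 is 93% full","stream":false}' \
      | jq -r '.response'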

_joel
1 replies
3d

not op, but this is handy https://lmstudio.ai/

mosselman
0 replies
1d22h

Thanks!

d3nj4l
0 replies
1d2h

I use ollama (https://ollama.ai/) which supports most of the big new local models you might've heard of: llama2, mistral, vicuna, etc. Since I have 16GB of RAM, I stick to the 7b models.

chongli
0 replies
3d

Yeah, it would be much better if you could send a sample of the input and desired output and have the LLM write a highly optimized shell script for you, which you could then run locally on your multi-gigabyte log files or whatever.
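
To make that concrete, the generated script could be as dumb as an awk one-liner (hypothetical example for `df -P`; note it breaks on mount points containing spaces, which is exactly the fragility being argued about in this thread):

    # emit one JSON object per filesystem from `df -P` output
    df -P | awk 'NR>1 {printf "{\"filesystem\":\"%s\",\"use_percent\":%d,\"mounted_on\":\"%s\"}\n", $1, $5, $6}'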

hnlmorg
9 replies
3d2h

As someone who maintains a solution that solves similar problems to jc, I can assure you that you don’t need an LLM to parse most human readable output.

verdverm
8 replies
3d2h

it's more about the maintenance cost, you don't have to write N parsers for M versions

Maybe the best middle ground is to have an LLM write the parser. That lowers the development cost without paying the runtime cost, in theory

hnlmorg
7 replies
3d1h

You don’t have to write dozens of parsers. I didn’t.

verdverm
6 replies
3d1h

Part of the appeal is that people who don't know how to program or write parsers can use an LLM to solve their unstructured -> structured problem

tovej
2 replies
3d1h

This is a terrible idea; I can't think of a less efficient method with worse correctness guarantees. What invariants does the LLM enforce? How do you make sure it always does the right thing? How do you debug it when it fails? What kind of error messages will you get? How will it react to bad inputs: will it detect them (unlikely), or will it hallucinate an interpretation (most likely)?

This is not a serious suggestion

verdverm
1 replies
3d1h

I used to focus on the potential pitfalls and be overly negative. I've come to see that these tradeoffs are situational. After using them myself, I can definitely see upsides that outweigh the downsides

Developers make mistakes too, so there are no guarantees either way. Each of your questions can be asked of handwritten code too

smrq
0 replies
2d23h

You can ask those questions, but you won't get the same answers.

It's not a question of "is the output always correct". Nothing is so binary in the real world. A well hand-maintained solution will trend further towards correctness as bugs are caught, reported, fixed, regression tested, etc.

Conversely, you could parse an IP address by rolling 4d256 and praying. It, too, will sometimes be correct and sometimes be incorrect. Does that make it an equally valid solution?

hnlmorg
2 replies
3d1h

Sure. But we weren’t talking about non-programmers maintaining software.

verdverm
1 replies
3d

people who don't know how to program OR write parsers

there are plenty of programmers who do not know how to write lexers, parsers, and grammars

hnlmorg
0 replies
3d

We are chatting about maintaining a software project written in a software programming language. Not some theoretical strawman argument you’ve just dreamt up because others have rightly pointed out that you don’t need an LLM to parse the output of a 20KB command line program.

As I said before, I maintain a project like this. I also happen to work for a company that specialises in the use of generative AI. So I’m well aware of the power of LLMs as well as the problems of this very specific domain. The ideas you’ve expressed here are, at best, optimistic.

By the time you’ve solved all the little quirks of ML, you’ll likely have invested far more time on your LLM than you would have if you’d just written a simple parser and, ironically, you’ll have needed someone far more specialised to build the LLM than your average developer.

This simply isn’t a problem that needs an LLM chucked at it.

You don’t even need to write lexers and grammars to parse 99% of application output. Again, I know this because I’ve written such software.

keithalewis
8 replies
3d2h

Give a kid a hammer and he'll find something to fix.

verdverm
7 replies
3d2h

What value does this comment add?

keithalewis
5 replies
3d

Approximately the same amount as the comment I replied to.

verdverm
4 replies
2d23h

One attempts to nudge a user towards the comment guidelines of HN (https://news.ycombinator.com/newsguidelines.html)

Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

Eschew flamebait. Avoid generic tangents. Omit internet tropes.
leptons
3 replies
2d23h

The old saying "If a hammer is your only tool then everything is a nail" is absolutely pertinent to this comment thread.

verdverm
2 replies
2d20h

how so? what assumptions are you making to reach that conclusion?

leptons
0 replies
2d18h

This all should be obvious to any human with knowledge of common colloquialisms. You aren't an AI, are you?

The latest "hammer" is AI.

Lots of commenters here are suggesting to use a complex AI to solve simple text parsing. Maybe you can't see the problem with that, but it's like using 1000 Watts of power to solve something that should take 1 microwatt, just because "new, shiny" AI is here to save us all from having to parse some text.

I'm not making assumptions about what people are commenting about in this thread. Your comment comes off like a subtle troll.

keithalewis
0 replies
2d18h

Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

What rule applies when the initial comment is not thoughtful and substantive?

otteromkram
0 replies
3d1h

I got a kick out of it.

¯\_(ツ)_/¯

himinlomax
5 replies
3d2h

Problem: some crusty old tty command has dodgy output.

Solution: throw a high end GPU with 24GB RAM and a million dollars of training at it.

Yeah, great solution.

verdverm
4 replies
3d2h

With fine-tuning, you can get really good results on specific tasks that can run on regular cpu/mem. I'd suggest looking into the distillation research, where large model expertise can be transferred to much smaller models.

Also, an LLM trained to be good at this task has many more applications than just turning command output into structured data. It's actually one of the most compelling business use cases for LLMs

anonymous_sorry
3 replies
3d1h

The complaint is less whether it would work, and more a question of taste. Obviously taste can be a personal thing. My opinions are my own and not those of the BBC, etc.

You have a small C program that processes this data in memory, and dumps it to stdout in tabular text format.

Rather than simplify by stripping out the problematic bit (the text output), you suggest adding a large, cutting-edge, hard to inspect and verify piece of technology that transforms that text through uncountable floating point operations back into differently-formatted UTF8.

It might even work consistently (without you ever having 100% confidence it won't hallucinate at precisely the wrong moment).

You can certainly see it being justified for one-off tasks that aren't worth automating.

But to shove such byzantine inefficiency and complexity into an engineered system (rather than just modify the original program to give the format you want) offends my engineering sensibilities.

Maybe I'm just getting old!

verdverm
2 replies
3d1h

If you can modify the original program, then that is by far the best way to go. More often than not, you cannot change the program, and in relation to the broader applicability, most unstructured content is not produced by programs.

anonymous_sorry
0 replies
3d1h

Yes, makes sense. Although this was originally a post about output of common command-line tools. Some of these are built on C libraries that you can just use directly. They are usually open source.

Too
0 replies
2d11h

> More often than not, you cannot change the program

I’d challenge that. Try working with your upstream. It’s easier than ever nowadays to submit issues and PRs on GitHub.

Building layers upon layers just to work around minor issues in a tool is not wise.

cproctor
3 replies
3d2h

Would it be fair to think about this as a shim whose scope of responsibility will (hopefully) shrink over time, as command line utilities increasingly support JSON output? Once a utility commits to handling JSON export on its own, this tool can delegate to that functionality going forward.

pydry
1 replies
3d

It would but I can still see somebody launching this with great enthusiasm and then losing the passion to fix Yet Another Parsing Bug introduced on a new version of dig

kbrazil
0 replies
3d

`jc` author here. I've been maintaining `jc` for nearly four years now. Most of the maintenance is choosing which new parsers to include. Old parsers don't seem to have too many problems (see the Github issues) and bugs are typically just corner cases that can be quickly addressed along with added tests. In fact there is a plugin architecture that allows users to get a quick fix so they don't need to wait for the next release for the fix. In practice it has worked out pretty well.

Most of the commands are pretty old and do not change anymore. Many parsers are not even commands but standard filetypes (YAML, CSV, XML, INI, X509 certs, JWT, etc.) and string types (IP addresses, URLs, email addresses, datetimes, etc.) which don't change or use standard libraries to parse.

Additionally, I get a lot of support from the community. Many new parsers are written and maintained by others, which spreads the load and accelerates development.

dan_quixote
0 replies
3d2h

I'd also assume that a CLI resisting JSON support is likely to have a very stable interface. Maybe wishful thinking...

zimmund
0 replies
2d4h

> Not to mention using `--<CMD>`

If you read further down in the documentation, you can just prefix your command with `jc` (e.g. `jc ls`). The `--cmd` param is actually a good idea, since it allows you to mangle the data before converting it (e.g. you want to grep a list before converting it).
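
For example, something like this (assuming jc's `--ls` parser and jq; check the parser docs for the exact field names):

    # filter the listing first, then convert only what's left to JSON
    ls -l /etc | grep conf | jc --ls | jq -r '.[].filename'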

Regarding maintenance, most of the basic unix commands' output shouldn't change too much (they'd be breaking not only this tool but a lot of scripts). I wouldn't expect it to break as often as you imagine, at least not because of other binaries being updated.

zackmorris
0 replies
2d23h

Keep in mind that the maintenance responsibility you're anxious about is currently a cost imposed on all developers.

<rant>

Since I started programming in the 80s, I've noticed a trend where most software has adopted the Unix philosophy of "write programs that do one thing and do it well". Which is cool and everything, but it has created an open source ecosystem of rugged individualism where the proceeds to the winners so vastly exceed the crumbs left over for workers that there is no ecosystem to speak of, just exploitation. Reflected now in the wider economy.

But curated open source solutions like jc approach problems at the systems level so that the contributions of an individual become available to society in a way that might be called "getting real work done". Because they prevent that unnecessary effort being repeated 1000 or 1 million times by others. Which feels alien in our current task-focussed reality where most developers never really escape maintenance minutia.

So I'm all in favor of this inversion from "me" to "we". I also feel that open source is the tech analog of socialism. We just have it exactly backwards right now, that everyone has the freedom to contribute, but only a select few reap the rewards of those contributions.

We can imagine what a better system might look like, as it would start with UBI. And we can start to think about delivering software resources by rail instead of the labor of countless individual developer "truck drivers".

Some low-hanging fruit might be: maybe we need distributions that provide everything and the kitchen sink, then we run our software and a compiler strips out the unused code, rather than relying on luck to decide what we need before we've started like with Arch or Nix. We could explore demand-side economics, where humans no longer install software, but dependencies are met internally on the fly, so no more include paths or headers to babysit (how early programming languages worked before C++ imposed headers onto us). We could use declarative programming more instead of brittle imperative (hard coded) techniques. We could filter data through stateless self-contained code modules communicating via FIFO streams like Unix executables.

We could use more #nocode approaches borrowed from FileMaker, MS Access and Airtable (or something like it). We could write software from least-privileges, asking the user for permission to access files or the networks outside the module's memory space, and then curate known-good permissions policies instead of reinventing the wheel for every single program. We could (will) write test-driven software where we design the spec as a series of tests and then AI writes the business logic until all of the tests pass.

There's a lot to unpack here and a wealth of experience available from older developers. But I sympathize with the cognitive dissonance, as that's how I feel every single day witnessing the frantic yak shaving of "modern" programming while having to suppress my desire to use these other proven techniques. Because there's simply no time to do so under the current status quo, where FAANG has quite literally all of the trillions of dollars as the winners and so decides best practices, while the open source community subsists on scraps in their parent's basement hoping to make rent someday.

majkinetor
0 replies
3d1h

This requires collaboration. People submitting parsing info for the tool they need, and people that use it to easily keep it up to date. That is the only way.

eichin
0 replies
3d

I'm sort of torn - yeah, one well-maintained "basket" beats having a bunch of ad-hoc output parsers all over the place, but I want direct json output because I'm doing something complicated and don't want parsing to add to the problem. (I suppose the right way to get comfortable with using this is to just make sure to submit PRs with additional test cases for everything I want to use it with, since I'd have to write those tests anyway...)

calvinmorrison
34 replies
3d3h

In a certain sense, files everywhere is great, that's the promise of unix, or plan9 to a further extent.

However, unstructured files, or files that all have their own formats, is also equally hampering. Trying to even parse an nginx log file can be annoying with just awk or some such.

One of the big disadvantages is that large system rewrites and design changes cannot be executed in the linux userland.

All to say, I'd love a smarter shell, I love files, I have my awk book sitting next to me, but I think it's high time to get some serious improvements on parsing data.

In the same way programs are smart enough to know to render colored output or not, I'd love it if it could dump structured output (or not)

da_chicken
10 replies
3d3h

There is always Powershell. The trouble there is that it's so rooted in .Net and objects that it's very difficult to integrate with existing native commands on any platform.

uxp8u61q
9 replies
3d3h

If these "native commands" had a sensible output format, integrating them with powershell would be as simple as putting ConvertFrom-Json or ConvertFrom-Csv in the middle of the pipeline. And let's be real, it's as poorly "integrated" with "native commands" as bash or zsh is poorly integrated with native commands.

da_chicken
8 replies
3d1h

You're not wrong, but the fix is then, "hey, let's extend the functionality of literally every *nix command". Which is hard to achieve. Building sensible serialized object-oriented output is the same problem as building sensible object-oriented output. That's why jc exists, I suppose.

There is always ConvertFrom-String. The problem is that it is one of the least intuitive and worst-performing commands I've used in Powershell. It's awful and I hate it. It's like writing sed and awk commands without the benefit of sed and awk's maturity and ubiquity. IMX, only Compare-Object has been worse.

uxp8u61q
7 replies
3d

"hey, let's extend the functionality of literally every *nix command". Which is hard to achieve.

It's pretty much what powershell did, though.

imtringued
3 replies
2d23h

It sounds like there needs to be a monolithic solution built from the ground up for interoperation.

Updating coreutils is a losing game at this point. Of course people will get angry when a well designed solution gradually takes over, simply because it is better.

zlg_codes
2 replies
2d18h

What would I use PowerShell to do, though? Most of Windows is a GUI.

That's a key problem with the developer story on Windows, especially coming from GNU/Linux. Where are my standard compilers and libraries? What use do I have for a terminal on Windows? In bash, I get powerful commands and a simple text-based pipeline hooked up to coreutils and literally anything else that runs on the command line, which is a metric ton of software in that ecosystem.

Back to Windows. What would I want PowerShell to do? What would I use PowerShell for that I wouldn't want to just use Bash instead? Windows doesn't have coreutils or nice command line software to do fun or powerful things with.

I see PowerShell as a "look we have a terminal!" from Windows, but nothing that I want to do in said terminal to motivate me to learn.

uxp8u61q
1 replies
2d7h

> Windows doesn't have coreutils or nice command line software to do fun or powerful things with.

Sounds like you've never actually used windows for real tech stuff...

zlg_codes
0 replies
1d21h

What 'real tech stuff'? Windows is an OS for consumers, not makers.

da_chicken
2 replies
2d23h

Eh, not really. Powershell itself is fairly limited in terms of functionality. It's basically one step removed from a .Net REPL, and the .Net classes aren't necessarily written to do *nix admin tasks. It gives you all of .Net to build tools with, but sometimes you still run into Microsoftisms that are total nonsense. There's a reason every C# project was using Newtonsoft's JSON.Net instead of the wonky custom data representations that MS was trying to push left over from the embrace/extend/extinguish era.

uxp8u61q
1 replies
2d21h

I see a lot of buzzwords and attacks in this comment but nothing actually concrete.

da_chicken
0 replies
2d2h

If you think .Net is the best way to interact with a *nix environment for administration, by all means go ahead. I think you'll have a lot of fun discovering the problems with it. Like, most tasks involve a pipeline of native commands. Well, if you want a Powershell command in the middle, that means you have to run a native command, marshal the output to an object, do the Powershell commands, then serialize that output back into characters, and then output to a native command. And you potentially have to deal with Powershell thinking in UTF-16 and *nix thinking in UTF-8.

However this is social media, not an academic paper, and you didn't put much effort into backing your claims, either.

dale_glass
7 replies
3d3h

Yup. It really grinds my gears that people came up with fairly decent ideas half a century ago, and a large amount of people decided to take that as gospel rather than as something to improve on.

And it's like pulling teeth to get any improvement, because the moment somebody like Lennart tries to get rid of decades of old cruft, drama erupts.

And even JSON is still not quite there. JSON is an okay-ish idea, but to do this properly what we need is a format that can expose things like datatypes. More like PowerShell. So that we can do amazing feats like treating a number like a number, and calculating differences between dates by doing $a - $b.

zlg_codes
5 replies
2d18h

Yeah dude I totally love it when journald eats my logging and power interruptions create unrecoverable log loss! You can forget just mounting a drive and checking its journald logs -- your machine has to be started by systemd in order to read journald logs!

I totally love it when liars frontmanning the project say it's a project and an init system and a system layer that will replace everything. But it won't! Pay no attention to the inconsistent messaging, the "gentle pushes" to get other distros to use it, etc. Also, let's not mention he basically hoodwinked the entire community by switching teams to Microsoft. And people like you eat his work up! Are you sure you like free software?

dale_glass
4 replies
2d8h

> Yeah dude I totally love it when journald eats my logging and power interruptions create unrecoverable log loss!

So does everything else, because of consumer drives. Fun fact: filesystems by default only guarantee the integrity of the filesystem's structure itself. The promise is that after power loss, the basic structures of the filesystem won't be corrupt, but makes no big claims of reliability about the data written. Blocks of random junk in your data, blocks of NULLs, even parts of other files (maybe deleted data) are all things I've seen happen.

Databases and the like take serious effort to ensure safety, but that greatly slows down performance. So it's kind of a hard sell for a log system that's not supposed to be a performance impact.

For this contingency, you can do log shipping, but in general, system logs shouldn't be expected to be reliable in the face of a crash, since to my knowledge all normal daemons (including rsyslogd) buffer data before flushing to disk.

> You can forget just mounting a drive and checking its journald logs -- your machine has to be started by systemd in order to read journald logs!

No, it doesn't. There are file arguments to journalctl. Read the manpage, sheesh.

> I totally love it when liars frontmanning the project say it's a project and an init system and a system layer that will replace everything. But it won't! Pay no attention to the inconsistent messaging, the "gentle pushes" to get other distros to use it, etc.

Meh. Paranoia.

> Also, let's not mention he basically hoodwinked the entire community by switching teams to Microsoft. And people like you eat his work up! Are you sure you like free software?

Free Software is a licensing/distribution concept completely unrelated to whether or not one likes Microsoft's technical decisions. Some I really hate, and some are actually pretty cool.

I'm for Free Software because I like the licensing philosophy, not because I believe Unix is the best thing since sliced bread. In fact I believe Unix started as a bunch of good ideas but that have not kept up and so needs a bit of work to remain a good system to use.

zlg_codes
3 replies
1d19h

> No, it doesn't. There are file arguments to journalctl. Read the manpage, sheesh.

... And how are you going to produce the journal file to read? ... with systemd. If you broke the system and it won't boot, you'll need to boot from systemd and check with journalctl because the journal can't be accessed otherwise. That usually requires a liveUSB running systemd to pull off. This is why you don't use binary logs.

Compared to `less /var/log/messages` from any Linux, I know which one I'm trusting.

I like controlling my system instead of having it controlled for me, thanks.

dale_glass
2 replies
1d19h

> And how are you going to produce the journal file to read?

What do you mean "produce"? They were produced while it was running, they can be found in /var/log/journal

> If you broke the system and it won't boot, you'll need to boot from systemd and check with journalctl because the journal can't be accessed otherwise.

Obviously? I'm not seeing the problem. It's not like you're getting anything from that system without having a way to mount your XFS/Ext4/BTRFS/LVM/luks/whatever setup. You need to boot a distro compatible with that to do it.

So of course you have to boot a Linux distro, which will easily have all the tooling available, including to deal with the journald stuff.

It's just a complete non-problem.

> I like controlling my system instead of having it controlled for me, thanks.

I'm not sure what that means exactly.

zlg_codes
1 replies
1d16h

This style of argumentation is annoying because you're not even participating, you're looking for reasons to dismiss.

A sane system doesn't need a whole lot just to check logs. LVM and LUKS are different due to cryptographic needs. systemd meanwhile has little reason to store logs in binary format. The promises that are alleged are not concerns to anyone except enterprise.

dale_glass
0 replies
1d9h

> This style of argumentation is annoying because you're not even participating, you're looking for reasons to dismiss.

I just don't buy the problem as legitimate. It's an aesthetic problem, not a real problem. Sysadmins clearly have no problem with the fact that XFS is not a human readable format, or I don't recall anybody making a stink about using Berkeley DB for a whole bunch of stuff.

> systemd meanwhile has little reason to store logs in binary format.

Quite a few actually. Indexing, transparent compression, clear storage of arbitrary amounts of data with well delimited fields, quick seeking. Makes for a compact and very well performing system.

You can't quickly seek a .gz text file, while journald will tell you what happened a week ago at 3 AM in a few ms.

> The promises that are alleged are not concerns to anyone except enterprise.

Or people who realize there's a bit more to logs than 'tail' and 'grep'.

Eg, journald trivially will give you a log from a given timeframe that interleaves the logs of a proxy, httpd, database and application server, actually producing a log in which a request can be logically followed through the different services it went through, with timestamps in microseconds.

In an application that's designed for it, you can actually ask for logs regarding to a given host, user, etc.

If you've ever done log parsing, well, now you don't need to ever write a regex to split a .log by fields, because that was already done for you, and you can have UNIX timestamps directly instead of doing date parsing.

CyberDildonics
0 replies
3d

If you want something more complex and restrictive you could easily make it out of JSON. JSON works because it is simple and isn't being constantly distorted into something more complicated to cover niche use cases.

mistercow
6 replies
3d3h

Part of the problem is that the output of commands is both a UI and an API, and because any text UI can be used as an API, the human readable text gets priority. Shell scripting is therefore kind of like building third party browser extensions. You look and you guess, and then you hack some parser up based on your guess, and hope for the best.

I actually wish there was just a third standard output for machine readable content, which your terminal doesn’t print by default. When you pipe, this output is what gets piped (unless you redirect), it’s expected to be jsonl, and the man page is expected to specify a contract. Then stdout can be for humans, and while you can parse it, you know what you’re doing is fragile.
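
You can sketch a rough version of that idea today with an extra file descriptor (purely illustrative, not a convention any tool actually follows):

    # stdout stays human-readable; fd 3 carries the machine-readable JSON
    report() { echo "3 files changed"; echo '{"files_changed":3}' >&3; }
    report 3>machine.jsonl        # humans read the terminal, scripts read machine.jsonl
    jq '.files_changed' machine.jsonl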

Of course, that’s totally backwards incompatible, and as long as we’re unrealistically reinventing CLIs from the foundations to modernize them, I have a long list of changes I’d make.

chongli
2 replies
3d

I really want to agree because it seems to make so much sense in theory. It gets rid of the need to parse by standardizing on syntax. But it doesn't solve everything, so we still don't get to live the dream. Namely, it does not solve the issue of schemas, versioning, and migration.

And this is a really big issue that threatens to derail the whole project. If my script runs a pipeline `a | b | c` and utility b gets updated with a breaking change to the schema, it breaks the entire script. Now I've got to go deep into the weeds to figure out what happened, and the breakage might not be visible in the human-readable output. So to debug I'll have to pass the flag to each tool to get it to print all the json to stdout, and then sit there eyeballing the json to figure out how the schema changed and what I need to do to fix my script.

Seems like a big mess to me. Unless there's something I'm missing?

mistercow
0 replies
1d12h

> So to debug I'll have to pass the flag to each tool to get it to print all the json to stdout

I'm not sure how that's a particular burden. If you have `a | b | c` and you want to debug the output of b, you already have to pull that out and debug it or `tee` it, because its stdout is being piped otherwise. This would be the same, except that you'd pipe (or tee) "machine out" to stdout.

Or, since we're making wild backwards incompatible changes anyway, add a directive to the shell that makes it dump all "machine out" to stdout, a la "set +x". Now you don't even have to change your code. Just wrap the line in `set -o dump-machine-out` and `set +o dump-machine-out`

kbrazil
0 replies
2d18h

There could be a major schema change that breaks the contract, but one of the nice things about JSON output is that it allows the creation of new fields without affecting downstream consumers.

That is, if I have a CLI program that spits out a list of IP addresses and one day I want to also output the corresponding dns names, I can simply add the "dns" field and existing pipelines will ignore the field and work just fine.
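
A toy illustration of that (made-up output shape, jq assumed):

    $ echo '{"ip":"93.184.216.34"}' | jq -r '.ip'
    93.184.216.34
    $ echo '{"ip":"93.184.216.34","dns":"example.com"}' | jq -r '.ip'   # new field is simply ignored
    93.184.216.34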

This is better than grep/awking/etc. unstructured text to STDOUT because, depending on how the author decides to add the new field, it can easily break existing pipelines that rely on the shape of the data to stay the same.

Izkata
1 replies
2d21h

> I actually wish there was just a third standard output for machine readable content, which your terminal doesn’t print by default. When you pipe, this output is what gets piped (unless you redirect), it’s expected to be jsonl, and the man page is expected to specify a contract.

Except for that last jsonl part, various commands already do something like this with stdout by detecting what's on the other end.

https://unix.stackexchange.com/questions/515778/how-does-a-p...

mistercow
0 replies
1d12h

Yeah, but it's kind of a mess. You have multiple options when implementing it without a clear standard, and in practice, it's a lot trickier to make sure that all of your code does this consistently than it is to simply use different file descriptors for different output types.

And then if the user wants to debug their script, they need to know to pipe your command to `cat` or whatever to see what's actually getting passed through.

theblazehen
0 replies
3d2h

Have you seen some of the existing projects currently working on it? Most well known is https://www.nushell.sh/, amongst some others

hoherd
3 replies
3d3h

> In the same way programs are smart enough to know to render colored output or not, I'd love it if it could dump structured output (or not)

The even lower hanging fruit is to implement json output as a command line argument in all cli tools. I would love to see this done for the gnu core utils.

teddyh
0 replies
3d3h

At least “du”, “env”, and “printenv” in Coreutils all support the “--null” option to separate output into being NUL-separated instead of being text lines.
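
Which matters because NUL-separated values survive embedded newlines, e.g. (assuming GNU coreutils):

    # sort environment variables safely even if a value contains a newline
    env --null | sort --zero-terminated | tr '\0' '\n'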

mr_mitm
0 replies
3d2h

AFAIK the idea is that if you need that kind of interoperability in unix/gnu, you're supposed to write your tool in C and include some libraries. Clearly not realistic in many use cases.

bryanlarsen
0 replies
3d3h

It was a really pleasant surprise to find the "-j" option to do this for the "ip" command from the iproute2 project.
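
Combined with jq it makes for nice one-liners, e.g. (assuming iproute2 with JSON support):

    # interface name and IPv4 address, no awk column-guessing required
    ip -j addr show | jq -r '.[] | .ifname as $i | .addr_info[]? | select(.family == "inet") | "\($i)\t\(.local)"'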

mikepurvis
1 replies
3d3h

When it comes to parsing server logs, it's too bad the functionality can't be extracted out of something like logstash, since that's already basically doing the same thing.

Though I guess the real endgame here is for upstream tools to eventually recognize the value and learn how to directly supply structured output.

dale_glass
0 replies
3d3h

You can get that out of journald.

    journalctl -o json
And applications using journald directly can provide their own custom fields.
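
It composes nicely with jq too, e.g. (assuming an sshd unit and jq installed):

    journalctl -u sshd -o json --since "1 hour ago" | jq -r '.MESSAGE'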

numbsafari
0 replies
3d3h

> Trying to even parse an nginx log file can be annoying with just awk or some such.

You probably already know this, but for those who do not, you can configure nginx to generate JSON log output.

Quite handy if you are aggregating structured logs across your stack.

imtringued
0 replies
2d23h

The problem is that nobody has built an actual ffi solution except maybe the GObject guys. C isn't an ffi, because you need a C compiler to make it work. By that I mean it is not an interface, but rather just C code whose calling part has been embedded into your application.

Mister_Snuggles
18 replies
3d3h

In FreeBSD, this problem was solved with libxo[0]:

    $ ps --libxo=json | jq
    {
      "process-information": {
        "process": [
          {
            "pid": "41389",
            "terminal-name": "0 ",
            "state": "Is",
            "cpu-time": "0:00.01",
            "command": "-bash (bash)"
          },
    [...]

It's not perfect though. ls had support, but it was removed for reasons[1]. It's not supported by all of the utilities, etc.

This seems to be a great stop-gap with parsers for a LOT of different commands, but it relies on parsing text output that's not necessarily designed to be parsed. It would be nice if utilities coalesced around a common flag to emit structured output.

In PowerShell, structured output is the default and it seems to work very well. This is probably too far for Unix/Linux, but a standard "--json" flag would go a long way to getting the same benefits.

[0] https://wiki.freebsd.org/LibXo

[1] https://reviews.freebsd.org/D13959

ekidd
3 replies
3d

> In PowerShell, structured output is the default and it seems to work very well.

PowerShell goes a step beyond JSON, by supporting actual mutable objects. So instead of just passing through structured data, you effectively pass around opaque objects that allow you to go back to earlier pipeline stages, and invoke methods, if I understand correctly: https://learn.microsoft.com/en-us/powershell/module/microsof....

I'm rather fond of wrappers like jc and libxo, and experimental shells like https://www.nushell.sh/. These still focus on passing data, not objects with executable methods. On some level, I find this comfortable: Structured data still feels pretty Unix-like, if that makes sense? If I want actual objects, then it's probably time to fire up Python or Ruby.

Knowing when to switch from a shell script to a full-fledged programming language is important, even if your shell is basically awesome and has good programming features.

lukeschlather
1 replies
3d

Are executable methods really that bad? I mean, they're bad in some abstract sense but that seems more like an objection if we were talking about a "safe" language like Rust than talking about shell scripting. For a shell executable methods seem fine. If you don't make the method executable people are just going to use eval() anyway, might as well do the more predictable thing.

ekidd
0 replies
2d23h

It might be possible to design a good Unix shell based on objects, with the ability to "call back into" programs. But I haven't seen one yet that I'd prefer over Ruby or Python.

I do think objects make plenty of sense in languages like AppleScript, which essentially allowed users to script running GUI applications. And similarly, Powershell's objects might be right for Windows.

But nushell shows how far you can push "dumb" structured data. And it still feels "Unix-like", or at least "alternate universe Unix-like."

The other reason I'm suspicious of objects in shells is that shell pipelines are technically async coroutines operating over streams! That's already much further into the world of Haskell or async Rust than many people realize. And so allowing "downstream" portions of a pipeline to call back into "upstream" running programs and to randomly change things introduces all kinds of potential async bugs.

If you're going to have a async coroutines operating on streams, then having immutable data is often a good choice. Traditional Unix shells do exactly this. Nushell does it, too, but it replaces plain text with structured data.

munchbunny
0 replies
2d23h

PowerShell is basically an interactive C# shell with language ergonomics targeting actual shell usage instead of "you can use it as a shell" the way Python, Ruby, etc. approach their interactive shells. However, the language and built-in utilities work best when you are passing around data as opposed to using PowerShell as if you were writing C#.

It's true, you are indeed passing around full-blown .NET runtime objects. In fact your whole shell is running inside an instance of the .NET runtime, including the ability to dynamically load .NET DLL's and even directly invoke native API's.

It feels a bit like JS in the sense that you're best off sticking to "the good parts", where you get the power of structured input/output but you don't end up trying to, for example, implement high performance async code, even though you technically could.

nijave
2 replies
3d1h

Libxo is neat, in theory, but it seems like applications are left to implement their own logic for a given output format rather than being able to pass a structure to libxo and let it do the formatting.

I can't remember the exact utility--I think it was iostat--but it would use string interpolation to format output lines as JSON and, combined with certain flags, produce completely mangled output. Not sure if things have improved, but I would have expected something like JSON lines when an interval is provided.

Powershell and kubectl are miles ahead of libxo in usability imo

simias
0 replies
3d

Well I suspect that eventually you just run into hard limitations with C's introspection facilities, or lack thereof.

I like C a lot but one of the reasons I like Rust more these days is the ability to trivially implement complex serialization schemes without a ton of ad-hoc code and boilerplate.

gigatexal
0 replies
3d

Far better for applications to be unaware of such a utility and allow something like jc to grow in support with plugins or something, so as to keep the utilities simple and move the logic and burden to the wrapping utility, in this case jc.

evnp
2 replies
3d

> In PowerShell, structured output is the default and it seems to work very well. This is probably too far for Unix/Linux, but a standard "--json" flag would go a long way to getting the same benefits.

OP has a blog post[0] which describes exactly this. `jc` is described as a tool to fill this role "in the meantime" -- my reading is that it's intended to serve as a stepping stone towards widespread `-j`/`--json` support across unix tools.

[0] https://blog.kellybrazil.com/2019/11/26/bringing-the-unix-ph...

im3w1l
1 replies
2d9h

If there is to be a push to add structured data to all the unix tools, I wish they would use a format that allows embedding binary data. Yes base64 is an option, but it suffers from the issue that base64(base64(base64(data))) leads to exponential overhead.
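
Rough numbers for that overhead (GNU base64, -w0 to disable line wrapping; sizes approximate): each pass costs about 4/3, so three nested passes are already ~2.4x.

    head -c 1000 /dev/urandom | base64 -w0 | wc -c                             # ~1337
    head -c 1000 /dev/urandom | base64 -w0 | base64 -w0 | base64 -w0 | wc -c   # ~2381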

pkkm
0 replies
2d4h

CBOR? It's based on JSON, so it should be pretty easy to add to a tool that already has JSON serialization.

supriyo-biswas
1 replies
3d2h

Similarly, in SerenityOS, stuff under /proc returns JSON data rather than unstructured text files.

A better, structural way in which this could be fixed is to allow data structures to be exported in ELFs and have those data structures serialized into terminal output, which can then be emitted in the user's preferred format, such as JSON or YAML, or processed accordingly.

crotchfire
0 replies
2d7h

I mean if you have a filesystem you've already got a way to tree-structure your data...

msla
1 replies
3d

> It's not perfect though. ls had support, but it was removed for reasons

In specific:

https://svnweb.freebsd.org/base?view=revision&revision=32810...

> libxo imposes a large burden on system utilities. In the case of ls, that burden is difficult to justify -- any language that can interact with json output can use readdir(3) and stat(2).

Which rather misses the point of being able to use JSON in shell scripts.

oh_sigh
0 replies
3d

I'd love to know what the burden was too. I hear comments like that in code reviews and commonly when you push for specifics about the burden, there is very little

throw0101b
0 replies
3d

> In FreeBSD, this problem was solved with libxo[0]:

Libxo happens to be in the base system, but it is generally available:

* https://github.com/Juniper/libxo

* https://libxo.readthedocs.io/en/latest/

rezonant
0 replies
2d23h

> This seems to be a great stop-gap with parsers for a LOT of different commands, but it relies on parsing text output that's not necessarily designed to be parsed

True, and yet it's extremely common to parse output in bash scripts and other automations, so in a sense it's just centralizing that effort. That being said at least when you do it yourself you can fix problems directly.

nerdponx
0 replies
2d21h

What I find weird about Powershell is that there's no "fixed-width column" parser, which is a widely used format for Unix-style CLI tools.

I don't know if NuShell has it, I haven't tried.

In any case, it's much better for tools to output more-parseable data in the first place. Whitespace-delimited columns are fine of course, but not so much when the data can contain whitespace, as in the output from `ps`.

I don't see much reason why JSONLines (https://jsonlines.org/) / NDJSON (https://ndjson.org/) can't be a standard output format from most tools, in addition to tables.

As for the reason of removal:

  any language that can interact with json output can use readdir(3) and stat(2).
Ugh. Any language of course can do it. But that's basically telling users that they need to reimplement ls(1) themselves if they want to use any of its output and features in scripts.

I understand if the maintenance burden is too high to put it in ls(1) itself, but it's a shame that no tool currently does this. The closest we have is a feature request in Eza: https://github.com/eza-community/eza/issues/472

imtringued
0 replies
2d23h

Now they only need to do the same thing for input and let the operating system or the shell handle the argument parsing so that it is consistent across the entire operating system.

pushedx
12 replies
3d3h

I salute whoever chooses to maintain this

alex_suzuki
7 replies
3d3h

I wonder how they will address versions…

`aws s3 ls | jc --aws=1.2.3`

What a nightmare.

dtech
3 replies
3d3h

Aws cli isn't the best example because it supports outputting json natively.

I'd expect this to not be a huge problem in practice because this is mostly for those well established unix cli tools of which the output has mostly ossified anyway. Many modern and frequently updated tools support native JSON output.

hnlmorg
2 replies
3d2h

> Aws cli isn't the best example because it supports outputting json natively.

The s3 sub command annoyingly doesn’t. Which I’m guessing is the reason the GP used that specifically.

cwilkes
1 replies
3d1h
hnlmorg
0 replies
3d1h

You can, but it’s not nearly as nice to use. For starters you have to manage pagination yourself.
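
For the curious, the JSON-native route looks roughly like this (bucket name made up):

    aws s3api list-objects-v2 --bucket my-bucket --output json | jq -r '.Contents[].Key'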

dj_mc_merlin
2 replies
3d3h

What about

jc 'aws sts get-caller-identity' | jq [..]

That way the aws process can be a subprocess of jc, which can read where the binary is and get its version automatically.

sesm
0 replies
3d1h

jc already has this, see ‘jc dig example.com’ in examples. They call it ‘alternative magic syntax’, but IMO it should be the primary syntax, while piping and second-guessing the previous commands parameters and version should be used only in exceptional cases.

jasonjayr
0 replies
3d2h

That can get thorny, because it adds another level of shell-quoting/escapes, and that is a notorious vector for security problems.

sesm
1 replies
2d20h

In theory, if it could load something like ‘plugins’ (for example as separate shell commands) some of the maintenance effort could be offloaded to ‘plugin’ authors.

kbrazil
0 replies
2d18h
amelius
0 replies
3d2h

I salute whoever chooses to use this and runs into the assumptions made by this tool that turn out to be wrong.

CoastalCoder
0 replies
3d3h

Good point. This reminds me of the Linux (Unix?) "file" program, and whichever hero(es) maintain it.

timetraveller26
6 replies
3d3h

Does anybody know of a list of modern unix command-line tools accepting a --json option?

It may even be useful to add that information to this repo.

user3939382
0 replies
3d3h

Probably not what you had in mind but, AWS CLI.

pkkm
0 replies
2d3h

lsblk accepts a --json flag and can give you a lot of information (try lsblk --json --output-all). Very useful if your script needs to check what disks and partitions there are in the system.
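
For example (jq assumed; partitions show up nested under "children"):

    lsblk --json | jq -r '.blockdevices[] | "\(.name)\t\(.size)\t\(.type)"'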

pirates
0 replies
3d2h

kubectl with “-o json”
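
e.g. (jq assumed):

    kubectl get pods -o json | jq -r '.items[].metadata.name'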

geraldcombs
0 replies
2d23h

TShark (the CLI companion to Wireshark) does with the `-T json` flag.

chungy
0 replies
3d3h

Basically everything on FreeBSD supports it via libxo.

bravetraveler
0 replies
2d16h

I don't have a list, but the modern replacement for "ifconfig" does JSON: "ip"

As does "lldpctl"

Ansible provides details about systems in JSON called 'facts'. The intention is to use these to inform automation

mejutoco
4 replies
3d1h

I wonder if a tool could parse any terminal output into json in a really dumb and deterministic way:

    {
      "lines": [
         "line1 bla bla",
         "line2 bla bla"
       ],
      "words": [
         "word1",
         "word2"
       ]
    }
With enough representations (maybe multiple of the same thing) to make it easy to process, without knowing sed or similar. It seems hacky but it would not require any maintenance for each command, and would only change if the actual output changes.
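
Something close to that already falls out of jq's raw/slurp modes (a rough stand-in for such a converter, not a real tool):

    ls -l | jq -Rs '{lines: (split("\n") | map(select(. != ""))), words: ([splits("[ \n]+")] | map(select(. != "")))}'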

hk__2
3 replies
3d

What’s the point of JSON, then?

mejutoco
2 replies
1d22h

> without knowing sed or similar

Not having to know sed or a similar tool. Most unix tools are structured in lines, columns, etc. anyway.

hk__2
1 replies
1d21h

> Not having to know sed or a similar tool.

But you get a JSON with a list of lines that you still have to process in some way. Instead of having a program that reads the input line by line, you read a JSON that contains a list of lines.

mejutoco
0 replies
1d19h

You could have multiple representations of it. For example, split by tabs to represent columns in a cronfile, so you could choose lines to get the second line or columns to get the 3rd column, etc. Anyway, just brainstorming here.

rplnt
2 replies
3d2h

Wish it would have automatic parser selection by default. Even if just for a (possible) selected subset. Typing `foo | jc | jq ...` would be more convenient than `foo | jc --foo | jq ...`.

mikecarlton
1 replies
3d2h

It supports `jc foo | jq` which is quite handy. E.g. `jc dig google.com txt | jq '.[]|.answer[]|.data'`

kbrazil
0 replies
3d1h

Also, `jc` automatically selects the correct /proc/file parser so you can just do `jc /proc/meminfo` or `cat /proc/meminfo | jc --proc` without specifying the actual proc parser (though you can do that if you want)

Disclaimer: I'm the author of `jc`.

freedomben
2 replies
3d1h

Really glad to see this is already packaged for most linux distributions. So many utilities nowadays seem to be written in Python, and python apps are such a PITA to install without package manager packages. There's so many different ways to do it and everything seems to be a little different. Some will require root and try to install on top of package manager owned locations, which is a nightmare.

Fedora Toolbox has been wonderful for this exact use case (installing Python tools), but for utilities like this that will be part of a bash pipe chain for me, toolbox won't cut it.

Spivak
1 replies
3d1h

Installing self-contained programs written in Python not packaged for your distro:

    PIPX_HOME=/usr/local/pipx PIPX_BIN_DIR=/usr/local/bin pipx install app==1.2.3
It sets up an isolated install for each app with only its deps and makes it transparent.

The distro installation tree of Python is for the exclusive use of your distro, because core apps like cloud-init, dnf, and firewalld are built against those versions.

freedomben
0 replies
3d

thank you! That's amazingly helpful. I had no idea pipx was a thing

For others: https://github.com/pypa/pipx

It's also in the Fedora repos: dnf install -y pipx

codedokode
2 replies
3d

They are doing it all wrong: instead of using separate human-readable and machine-readable formats, all CLI tools should use a single human-and-machine-readable format.

mathfailure
1 replies
2d23h

You mean YAML?

codedokode
0 replies
2d23h

YAML is over-complicated.

abound
2 replies
3d2h

Nushell [1] ends up at mostly the same place (structured data from shell commands) with a different approach, mostly just being a shell itself.

[1] http://www.nushell.sh/

saghm
0 replies
2d23h

I had glanced at nushell every now and then since it was initially announced, but it wasn't until a month or two ago that I finally really "got" the point of it. I was trying to write a script to look through all of the files in a directory matching a certain pattern and pruning them to get rid of ones with modified timestamps within 10 minutes of each other. I remembered that nushell was supposed to be good for things like this, and after playing around with it for a minute, it finally "clicked" and now I'm hooked. Even when dealing with unstructured data, there's a lot of power in being able to convert it into something as simple as a list of records (sort of like structs) and process it from there.

danyx23
0 replies
2d19h

Nushell actually pairs really well with JC, given that nushell has a "from json" operation. I recorded a video some time ago that shows a few nice features of Nushell and I bring up combining it with jc at around minute 19: https://www.youtube.com/watch?v=KF5dtxVsn1E

Pxtl
2 replies
3d2h

Honestly this is half the reason I use Powershell for everything. Bash-like experience but everything returns objects.

It's a messy, hairy, awful language. Consistently inconsistent, dynamically-typed in the worst ways, "two googles per line" complexity, etc.

But for the convenience of being able to combine shell-like access to various tools and platforms combined with the "everything is a stream of objects" model, it can't be beat in my experience.

And you can still do all the bash-like things for tools that don't have good Powershell wrappers that will convert their text-streams into objects. Which, sadly, is just about everything.

zlg_codes
1 replies
2d18h

I love this comment, measuring complexity via googles/line. That really sends home how difficult it is to grok PowerShell.

What are you building with it that would be harder in bash or another shell? I'm not seeing the value of passing around opaque objects instead of text.

Pxtl
0 replies
1d21h

The objects aren't opaque, you can catch them in a variable and inspect it with get-member. I find it very nice to work interactively when developing a script -- call a getter command and either let it print to the console or store it in a variable and then play with the variable.

JSON/XML/CSV files and API results all get turned into PS objects easily, as do things with config objects like file permissions or SQL servers or whatnot. There's a bunch of dumb ideas like non-file filesystems for things like SQL and the Windows Registry, but the basic concept of "call command-line tools and navigate the filesystem with the ease of Bash, but also work with objects like Python or JS" works well for me.

kazinator
1 replies
3d

  $ dig example.com | txr dig.txr
  [{"query_time":"1","rcvd":"56","answer_num":1,"status":"NOERROR",
    "when_epoch":1702030676,"opcode":"QUERY","udp":"65494","opt_pseudosection":{"edns":{"udp":65494,"flags":[],"version":"0"}},
    "query_num":1,"question":{"name":"example.com.","type":"A","class":"IN"},
    "server":"127.0.0.53#53(127.0.0.53)","id":"48295","authority_num":0,
    "answer":[{"name":"example.com.","type":"A","data":"93.184.216.34","ttl":"4441",
             "class":"IN"}],
    "additional_num":1,"when":"Fri Dec 08 10:17:56 PST 2023"}]

  $ cat dig.txr
  @(bind sep @#/[\s\t]+/)
  @(skip)
  ;; ->>HEADER<<- opcode: @opcode, status: @status, id: @id
  ;; flags: qr rd ra; QUERY: @query, ANSWER: @answer, AUTHORITY: @auth, ADDITIONAL: @additional
  @(skip)
  ;; OPT PSEUDOSECTION:
  ; EDNS: version: @edns_ver, flags:@flags; udp: @udp
  @(skip)
  ;; QUESTION SECTION:
  ;@qname@sep@qclass@sep@qtype
  @(skip)
  ;; ANSWER SECTION:
  @aname@sep@ttl@sep@aclass@sep@atype@sep@data
  
  ;; Query time: @qtime msec
  ;; SERVER: @server
  ;; WHEN: @when
  ;; MSG SIZE  rcvd: @rcvd
  @(do (put-jsonl #J^[{
                        "id" : ~id,
                        "opcode" : ~opcode,
                        "status" : ~status,
                        "udp" : ~udp,
                        "query_num" : ~(tofloat query),
                        "answer_num" : ~(tofloat answer),
                        "authority_num" : ~(tofloat auth),
                        "additional_num" : ~(tofloat additional),
                        "opt_pseudosection" :
                        {
                          "edns" :
                          {
                            "version" : ~edns_ver,
                            "flags" : [],
                            "udp" : ~(tofloat udp)
                          }
                        },
                        "question" :
                        {
                          "name" : ~qname,
                          "class" : ~qclass,
                          "type" : ~qtype
                        },
                        "answer" :
                        [
                          {
                            "name" : ~aname,
                            "class" : ~aclass,
                            "type" : ~atype,
                            "ttl" : ~ttl,
                            "data" : ~data
                          }
                        ],
                        "query_time" : ~qtime,
                        "server" : ~server,
                        "rcvd" : ~rcvd,
                        "when" : ~when,
                        "when_epoch" : ~(time-parse "%a %b %d %T %Z %Y" when).(time-utc)
                      }]))
The latest TXR (292 as of time of writing) allows integers in JSON data, so (toint query) could be used.

pkkm
0 replies
2d1h

So it's a mix of pattern matching and Lisp? That looks pretty useful, I'm going to give it a try.

crazysim
1 replies
3d2h

Jesus Christ, it's JSON for Bourne!

I wonder how well this could work/interact with Powershell.

majkinetor
0 replies
3d2h

Perfectly: <json string input> | ConvertFrom-Json

zubairq
0 replies
3d3h

Simple idea, really great to see this!

timetraveller26
0 replies
3d3h

The modern kids want it all easy, when I was learning Linux we used null delimiters, xargs, cut, sed & awk and that was enough! \s

sesm
0 replies
3d1h

IMO ‘jc dig example.com’ should be the primary syntax, because ‘dig example.com | jc --dig’ has to retroactively guess the flags and parameters of the previous command to parse the output.

nickster
0 replies
3d3h

All output being an object is one of my favorite things about powershell. I miss it when I have to write a bash script.

nailer
0 replies
3d

Nice.

Too many "lets fix the command line" (nushell, pwsh) have noble goals, but also start with "first let's boil the ocean".

We need to easily ingest old shitty text output for a little while to move to the new world of structured IO.

moss2
0 replies
3d1h

Excellent

js2
0 replies
3d2h

Previous discussion (linked to from the project's readme):

https://news.ycombinator.com/item?id=28266193

da39a3ee
0 replies
2d22h

Awesome, does it work for man pages? They're a huge elephant in the room -- people get really upset if you point out that man pages are an unsearchable abomination, locking away vast amounts of important information about unix systems in an unparseable mess. But, it's true.

bottled_poe
0 replies
3d1h

I’d bet money on people (here) using tools like this to process millions of records or more. It’s a sad truth those people won’t have jobs in a few years when AI, which will know better, takes hold :(

PreInternet01
0 replies
3d3h

Oh, this is cool. I'm a huge proponent of CLI tools supporting sensible JSON output, and things like https://github.com/WireGuard/wireguard-tools/blob/master/con... and PowerShell's |ConvertTo-Json are a huge part of my management/monitoring automation efforts.

But, unfortunately, sensible is doing some heavy lifting here and reality is... well, reality. While the output of things like the LSI/Broadcom StorCLI 'suffix the command with J' approach and some of PowerShell's COM-hiding wrappers (which are depressingly common) is technically JSON, the end result is so mindbogglingly complex-slash-useless, that you're quickly forced to revert to 'OK, just run some regexes on the plain-text output' kludges anyway.

Having said that, I'll definitely check this out. If the first example given, parsing dig output, is indeed representative of what this can reliably do, it should be interesting...

Cyph0n
0 replies
3d

Interesting project! But I expected them to be using textfsm (or something similar) as a first step parser. textfsm is heavily used to parse CLI outputs in networking devices.

https://github.com/google/textfsm

AtlasBarfed
0 replies
3d

My God, doesn't properly handle ls?

Animats
0 replies
2d19h

I always felt that was a design flaw of UNIX. Programs accept command line parameters and environment variables as input, but all they output for their calling program is an integer exit code. It's not like exit(II) has an argv and an argc. GUI programs that call command line programs thus tend to be nearly blind to what the called program did. You can't treat command line programs as subroutines.

I know why it worked that way in Research Unix for the PDP-11. It's a property of the hokey trick used to make fork(II) work on tiny machines. It didn't have to stay that way for four decades.