
Jaq – A jq clone focused on correctness, speed, and simplicity

gigatexal
39 replies
22h14m

It's so awesome when projects shout out other projects that they're similar to or inspired by or not replacements for. I learned about https://github.com/yamafaktory/jql from the readme of this project and it's what I've been looking for for a long time, thank you!

That's not to take away from jaq by any means; I just find the jq-style syntax uber hard to grok, so jql makes more sense for me.

jjeaff
29 replies
21h3m

Nice find. I think I'll try it out. Although I was hoping for a real SQL type experience. I don't understand why no one just copies SQL so I can write a query like "SELECT * FROM $json WHERE x>1".

Everyone seems to want to invent their own new esoteric symbolic query language, as if everything they do is a game of code golf. I really wish everyone would move away from this old Unix mentality of extremely concise, yet not-self-evident syntax and do things more like the PowerShell way.

pdntspa
13 replies
20h1m

Be the change you want to see.

I personally don't understand why people aren't willing to learn instead. It's not hard to sit down and pick up a new skill, and it's good to step out of one's comfort zone. I personally hate PowerShell syntax; brevity is the soul of wit, and PS could learn a thing or two from bash and "the Linux way".

We seem obsessed with molding the machine to our individual preferences. Perhaps we should obsess over the opposite: molding our mind to think more like the machine. This keeps a lot of things simple, uncomplicated, and flexible.

Does a painter wish for paints that were more like how he wanted them to be? Sure, but at the end of the day he buys the same paint everyone else does and learns to work with his medium.

stevage
3 replies
19h48m

In my case, my memory doesn't work that way. I have learnt jq several times but I don't use it frequently enough to retain the knowledge.

A better tool for me would be something that uses JS syntax but with some syntactic sugar and a great man page.

throwaway2037
1 replies
15h39m

What is "JS syntax"? And can you write a frontend for jq that converts "JS syntax" to jq syntax?

And is the jq man page poor? I'm sure they will accept patches for it.

andelink
0 replies
12h41m

The jq man page is pretty good IMO. It’s where/how I learned to use jq

ruuda
0 replies
9h41m

I have that same problem, the advanced features I use too little to remember. Then I started working on a configuration language that should have a non-surprising syntax (json superset, mostly inspired by Python, Rust, Nix). And it turns out, this works well as a query language for querying json documents. https://github.com/ruuda/rcl Here is an example use case: https://fosstodon.org/@ruuda/111120049523534027

unsui
1 replies
19h54m

While I appreciate the sentiment for bending your mind, rather than the spoon, the practical reality is that developer time is far costlier than compute time.

It is easier to map compute structures and syntax to existing mental models than to formulate new mental models. The latter is effortful and time-consuming.

So, given the tradeoffs, I could learn a new language, or leverage an existing language to get things done.

And yes, given sufficient resources (particularly time), developing new mental models is ideal, but reality often prohibits the ideal.

3np
0 replies
19h21m

If the crux is that you want something that maps closer to your personal mental model than what's available, I guess the other option is to build the missing tool yourself. That's the other side of "be the change you want to see".

So, given the tradeoffs, I could learn a new language, or leverage an existing language to get things done.

There is also the option to create a new language (jqsql or whatnot), optionally sharing it publically.

If you do this I think you'd find out why beyond very trivial stuff, sibling commenters have a point in that SQL isn't a good fit for nested data like JSON. Would still be a useful exercise!

vips7L
0 replies
18h54m

brevity is not clarity.

smabie
0 replies
13h30m

The machine is uncomplicated and simple? That is the last way I would describe modern CPUs and their peripherals.

The whole point of programming is to bend the machine towards humans, not the other way around.

reportgunner
0 replies
5h43m

Yeah I don't understand why people aren't willing to learn SQL too.

pdimitar
0 replies
19h16m

I personally don't understand why people aren't willing to learn instead

You misunderstand. As programmers we learn every day, obviously that's one of our strong points.

The real problem is that every single tool wants you to go deep and learn their particular dyslexic mini programming language syntax or advanced configuration options syntax. Why? We have TOML, we have SQL, we have a bunch of pretty proven syntaxes and languages that do the job very well.

A lot of these programmers authoring tools suffer from a severe protagonist syndrome, which, OK, is their own personal character development to grapple with, but in the meantime we, the working programmers, are burning out because everyone and their dog wants us to learn their own brain child.

imiric
0 replies
19h2m

We seem obsessed with molding the machine to our individual preferences. Perhaps we should obsess over the opposite: molding our mind to think more like the machine.

How so? Everything in "the machine" was created by other humans; from the latest CLI tool, to the CPU instruction set. As computer users, given that it's practically impossible for a single person to be familiar with all technologies, we must pick our battles and decide which technology to learn. Some of it is outdated, frustrating to use, poorly documented or maintained, and is just a waste of time and effort to learn.

Furthermore, as IT workers, it is part of our job to choose technologies worth our and our companies' time, and our literal livelihood depends on honing this skill.

So, yes, learning new tools is great, but there's only so much time in a day, and I'd rather spend it on things that matter. Even better, if no tool does what I want it to, I have the power to create a new one that does, and increase my development skills in the process.

cjaybo
0 replies
17h46m

“Brevity is the soul of wit”

Maybe we have different goals but I don’t get paid to write witty code and I don’t think anyone on my team would appreciate it if I did.

I don’t think the redeeming qualities of brevity in prose transfer to something like terse syntax.

Grimburger
0 replies
19h27m

I personally don't understand why people aren't willing to learn instead.

Mostly because if you don't use it that often then it ends up forgotten again. I can smash out plenty of trivial regexes, but anything even slightly complicated means I'm learning backreferences again for the 6th time in a decade.
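(For anyone else relearning them: a backreference just re-matches whatever an earlier group captured. A minimal Python sketch, with a made-up pattern and sample strings:)

```python
import re

# \1 re-matches the exact text captured by group 1,
# so this pattern finds a word that is immediately repeated.
repeated = re.compile(r"\b(\w+) \1\b")

print(bool(repeated.search("it was the the best of times")))  # -> True
print(bool(repeated.search("it was the best of times")))      # -> False
```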

soulbadguy
3 replies
20h50m

While I agree with the general sentiment of preferring well-defined and explicit standards as opposed to "cute" custom-made languages, in this case I am not convinced that SQL would be the best candidate for querying nested structures like JSON. Something like XPath, maybe.

jjeaff
1 replies
20h43m

I agree, it wouldn't be the best to handle all json edge cases, but it would be a super easy way to quickly get data from a big chunk of simple json and you could just use subqueries or query chaining for nested results.

For anyone who hasn't used PowerShell, this is the difference I'm talking about. I would not be able to write either of these without looking up the syntax. But knowing very little about PowerShell, I can tell exactly what that command means, while the bash command, not so much.

```powershell
$json | ConvertFrom-Json | Select-Object -ExpandProperty x
```

```bash
echo $json | jq '.x'
```

deredede
0 replies
10h18m

On the other hand, I find the bash one clear and concise. That PowerShell example is so verbose, it'd drive me crazy to do any sort of complex manipulation this way! To each their own, I guess.

tubthumper8
0 replies
13h23m
filmor
2 replies
20h32m

SQL is built for relational/tabular data, JSON is not relational and usually not tabular.

im3w1l
1 replies
20h8m

Well there is nothing saying you can't put relational data in json format.

stevage
0 replies
19h47m

But that wouldn't help query arbitrary JSON files which was the point.

throwaway2037
0 replies
15h41m

    do more like the power shell way
I just checked the GitHub page [1] for Microsoft PowerShell. It looks like it's written in C# and is available on Win32/macOS/Linux, where .NET is now supported. Do you use PowerShell only on Win32, or on other platforms also?

    Everyone seems to want to invent their own new esoteric symbolic query language
Can you give an example of something that PS can do for text processing that is built in, instead of using a proprietary symbolic query language?

[1] https://github.com/PowerShell/PowerShell

screature2
0 replies
19h54m

I think the closest I've seen to a SQL experience for JSON is how steampipe stores json columns as jsonb datatypes and allows you to query those columns w/postgres JSON functions etc.

- https://steampipe.io/docs/sql/querying-json#querying-json #example w/the AWS steampipe plugin (I think this is a wrapper around the AWS go SDK)

- https://hub.steampipe.io/plugins/turbot/config #I think this lets you query random json files.

(edited to try to fix the bulleting)

psd1
0 replies
4h41m

nushell and pwsh. I'm not familiar with nushell, but pwsh offers where, select, foreach, group, sort.

N.B. those aliases are not created by default on *nix

It's pipeline-based and procedural, but you can be very declarative in data processing

pgeorgi
0 replies
19h12m

Although I was hoping for a real SQL type experience. I don't understand why no one just copies SQL so I can write a query like "SELECT * FROM $json WHERE x>1".

With somewhat tabular data, you can use sqlite to read the data into tables and then work from there.

Example 10 from https://opensource.adobe.com/Spry/samples/data_region/JSONDa... (slightly fixed by removing the ellipsis) results in this interaction:

    sqlite> select json_extract(value, '$.id'), json_extract(value, '$.type') from json_each(readfile('test.json'), '$.items.item[0].batters.batter');
    1001|Regular
    1002|Chocolate
    1003|Blueberry
    1004|Devil's Food

    sqlite> select json_extract(value, '$.id'), json_extract(value, '$.type') from json_each(readfile('test.json'), '$.items.item[0].topping');
    5001|None
    5002|Glazed
    5005|Sugar
    5007|Powdered Sugar
    5006|Chocolate with Sprinkles
    5003|Chocolate
    5004|Maple
Instead of "select" this could also flow into freshly created tables using "insert into" for more complex scenarios.
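The same json_each/json_extract calls can also be driven from a script via Python's stdlib sqlite3 module, assuming the bundled SQLite was compiled with the JSON1 functions (modern builds are). The inline document below is a made-up stand-in, not the test.json from above:

```python
import json
import sqlite3

# Made-up stand-in document (the original test.json is not reproduced here).
doc = json.dumps({"items": [{"id": 1001, "type": "Regular"},
                            {"id": 1002, "type": "Chocolate"}]})

con = sqlite3.connect(":memory:")
rows = con.execute(
    "SELECT json_extract(value, '$.id'), json_extract(value, '$.type') "
    "FROM json_each(?, '$.items')",
    (doc,),
).fetchall()
print(rows)  # -> [(1001, 'Regular'), (1002, 'Chocolate')]
```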

nbk_2000
0 replies
3h11m

OctoSQL[1] does a pretty good job of allowing you to query JSON (and CSV) with SQL.

[1] https://github.com/cube2222/octosql

justinsaccount
0 replies
20h35m

The datafusion cli https://arrow.apache.org/datafusion/user-guide/cli.html can run SQL queries against existing json files.

chthonicdaemon
0 replies
11h27m

Have you looked at [duckdb's JSON support](https://duckdb.org/docs/extensions/json.html)? It's pretty transparent and you can do exactly what you say: `select * from 'file.json' where x > 1` will work with "simple" json files like {"x": 1, "y": 2} and [{"x": 1, "y":2}, {"x":2, "y":3}]

bobobar339
0 replies
19h36m
Valodim
2 replies
19h44m

Also very nice in this regard is gron. It simply flattens any JSON into lines of key-value format, making it compatible with grep and other simple stream operations.

https://github.com/tomnomnom/gron
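The flattening is simple enough to approximate; here is a rough Python sketch of the idea (my own toy version, not gron's actual implementation, and it skips gron's quoting of keys that aren't valid identifiers):

```python
import json

def gron(value, path="json"):
    """Yield gron-style assignment lines for a parsed JSON value."""
    if isinstance(value, dict):
        yield f"{path} = {{}};"
        for key, val in value.items():
            yield from gron(val, f"{path}.{key}")
    elif isinstance(value, list):
        yield f"{path} = [];"
        for i, val in enumerate(value):
            yield from gron(val, f"{path}[{i}]")
    else:
        yield f"{path} = {json.dumps(value)};"

doc = json.loads('{"commit": {"author": {"name": "Tom Hudson"}}}')
print("\n".join(gron(doc)))
# json = {};
# json.commit = {};
# json.commit.author = {};
# json.commit.author.name = "Tom Hudson";
```

Each emitted line then greps like any other text.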

mstade
0 replies
18h15m

This is brilliant, thank you for sharing!

lkuty
0 replies
15h5m

And also https://github.com/adamritter/fastgron that I've just discovered.

antonvs
1 replies
10h57m

I just find the jq-style syntax uber hard to grok

You're not alone. ChatGPT (3.5) is terrible at it also, for anything non-trivial.

I'm not sure if that's because of the nature of the jq syntax, but I do wonder.

never_inline
0 replies
7h50m

Well ChatGPT doesn't 'grok' anything, really..

PhilippGille
1 replies
21h3m

I can also recommend checking https://github.com/tidwall/jj

stevage
0 replies
19h41m

That looks excellent, thank you!

klausnrooster
0 replies
17h48m

jql homoiconicity looks rather ... Lispy. Like you could use it on itself, write "Macros", etc.

OJFord
0 replies
19h31m

I do sympathise with that a bit, but for me at least it does not look like jql is the solution:

    '|={"b""d"=2, "c"}'
this appears to be something like jq's:

    'select(."b"."d" == 2 or ."c" != null)'
which.. is obviously longer, but I think I prefer it, it's clearer?

(actually it would be `.[] | select(...)`, but I'm not sure something like that isn't true of jql too without trying it, I don't know if the example's intended to be complete - and I don't think it affects my verdict)

loudmax
19 replies
23h26m

I applaud this project's focus on correctness and efficiency, but I'd also really like a version of `jq` that's easy to understand without having to learn a whole new syntax.

`jq` is a really powerful tool and `jaq` promises to be even more powerful. But, as a system administrator, most of the time that I'm dealing with JSON files, something that behaved more like grep would be sufficient.

ishandotpage
9 replies
23h19m

Have you tried `gron`?

It converts your nested json into a line by line format which plays better with tools like `grep`

From the project's README:

    ▶ gron "https://api.github.com/repos/tomnomnom/gron/commits?per_page..." | fgrep "commit.author"
    json[0].commit.author = {};
    json[0].commit.author.date = "2016-07-02T10:51:21Z";
    json[0].commit.author.email = "mail@tomnomnom.com";
    json[0].commit.author.name = "Tom Hudson";

https://github.com/tomnomnom/gron

It was suggested to me in HN comments on an article I wrote about `jq`, and I have found myself using it a lot in my day to day workflow

hu3
3 replies
23h0m

Thank you so much. This seems like a saner approach for some simpler use cases.

It flattens the structure. And makes for easy diffing.

evntdrvn
2 replies
22h27m

There's also this awesome tool to make JSON interactively navigable in the terminal:

https://fx.wtf

llimllib
1 replies
21h21m

https://jless.io/ is similar, and will give you jq selectors so the two combine very well. (fx might have that feature too, I dunno)

evntdrvn
0 replies
1h57m

Ah thanks, jless is actually the one I was originally thinking of and trying to find! :D

stronglikedan
1 replies
22h4m

This is awesome, thanks! Not OP, but this will help me to write specifications for modifying existing JSON structures immensely. It's kind of a pain parsing JSON by (old man) eye to figure out which properties are arrays, and follow property names down a chain. This will definitely help eliminate mistakes!

pdimitar
0 replies
18h43m

Also try jless[0], it's amazingly convenient and it shows you a JSON path at the bottom of the screen as you navigate.

[0] https://jless.io/

sn0wf1re
1 replies
22h30m

You can also mimic gron, including support for yaml with

yq -o=props my-file.yaml

majewsky
0 replies
5h27m

Doesn't work in my terminal. When you recommend yq behavior, please specify which yq you're using. There are at least two incompatible implementations.

jbverschoor
0 replies
21h41m

This looks so much better as an ad-hoc tool. Would be cool if it supported more formats: plist, YAML, XML (how to handle the body, or conflicting attributes/elements?).

gchamonlive
1 replies
21h53m

It is a little early to say, but I have been learning how nushell deals with structured data and it seems like it is very usable for simple cases to produce readable one-liners, and if you need to bring out the big guns the shell is also a full fledged scripting language. Don't know about how efficient it is though.

It needs to justify moving to a completely different shell, but the way you deal with data in general is not restricted to manipulating JSON; it extends to the output of many commands, so you kinda have one unified piping interface for all these structured data manipulations, which I think is neat.

bobbylarrybobby
0 replies
21h47m

From the data side, nushell uses polars for querying tabular data so it should be pretty fast. Not sure about its scripting language.

zellyn
0 replies
22h17m

ChatGPT excels at producing `jq` incantations; I can actually use `jq` now…

notatoad
0 replies
21h2m

there's got to be some syntax though. jq serves a unique function that isn't covered by any other syntax. i'm with you, the jq syntax is weird and sometimes difficult to understand. but the replacement would just be some different syntax.

these little one-off unique syntaxes that i'm never going to properly learn are one of my favourite uses of chatGPT.

msluyter
0 replies
23h21m

Obligatory reference to "gron" ("make JSON greppable"), which I find to be quite useful for many common tasks:

https://github.com/tomnomnom/gron

jrockway
0 replies
22h39m

One of my coworkers really likes Miller: https://github.com/johnkerl/miller

The idea is that you get awk/grep like commands for operating on structured data.

hyperthesis
0 replies
21h34m

Maybe like SQL for relational algebra? Codd made two query languages that were "too difficult for mortals to use". (B-trees for performance was a separate issue)

But jq's strength is its syntax - the difficulty is the semantics.

frou_dh
0 replies
22h6m

I'd also really like a version of `jq` that's easy to understand without having to learn a whole new syntax.

Since JSON is JavaScript Object Notation, then an obvious non-special-snowflake language for such expressions on the CLI is JavaScript: https://fx.wtf/getting-started#json-processing

INTPenis
0 replies
22h21m

jq, and yq, are tools you spend an hour figuring out and then leave them in a CI pipeline for 3 years.

lopatin
9 replies
21h16m

Regarding correctness, will it display uint64 numbers without truncating them? That's my biggest pet peeve with jq currently.

necubi
4 replies
21h11m

Unfortunately JSON numbers are 64 bit floats, so if you're standards compliant you have to treat them as such, which gives you 53 bits of precision for integers.

Also hey, been a while ;)

Edit: I stand corrected, the latest spec (rfc8259) only formally specifies the textual format, but not the semantics of numbers.

However, it does have this to say:

This specification allows implementations to set limits on the range/and precision of numbers accepted. Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision.

In practice, most implementations treat JSON as a subset of Javascript, which implies that numbers are 64-bit floats.
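The 53-bit ceiling is easy to demonstrate; a small Python sketch (Python itself parses JSON integers exactly, so `parse_int=float` is used here to mimic a float64-only decoder):

```python
import json

big = 2**53 + 1  # 9007199254740993: one past float64's exact-integer range

# float64 has a 53-bit significand, so the +1 is silently lost.
assert float(big) == float(2**53)

# Mimic a JavaScript-style decoder that stores every number as float64:
as_float = json.loads(str(big), parse_int=float)
print(int(as_float))  # -> 9007199254740992
```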

rdtsc
0 replies
20h59m

Unfortunately JSON numbers are 64 bit floats, so if you're standards compliant you have to treat them as such,

Are you sure? Looking at https://www.json.org/json-en.html I don't see anything about 64 bit floats.

matt_kantor
0 replies
21h2m

I'm being pedantic here, but JSON numbers are sequences of digits and ./+/-/e/E. Whether to parse those sequences into 64-bit floats or something else is left up to the implementation.

However what you say is good practice anyway. The spec (RFC 8259) has this note on interoperability:

This specification allows implementations to set limits on the range and precision of numbers accepted. Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision. A JSON number such as 1E400 or 3.141592653589793238462643383279 may indicate potential interoperability problems, since it suggests that the software that created it expects receiving software to have greater capabilities for numeric magnitude and precision than is widely available.

lopatin
0 replies
20h56m

I thought the JSON spec says that numbers can have an arbitrary amount of digits.

Also, what!! Hey! Miss you man.

Groxx
0 replies
21h6m

JSON does not define a precision for numbers, so: it's often float64 (but note -0 is allowed, but NaN and +/-Inf are not), but it depends on your language, parser config, etc.

Many will produce higher precision but parse as float64 by default. But maximally-compatible JSON systems should always handle arbitrary precision.
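As a sketch of what "handle arbitrary precision" can look like in practice, Python's json module lets you swap the number hooks; here a Decimal hook keeps every digit, while the float hook mimics a float64-only parser:

```python
import json
from decimal import Decimal

text = "9007199254740993"  # does not fit exactly in a float64

lossy = json.loads(text, parse_int=float)                         # rounds
exact = json.loads(text, parse_int=Decimal, parse_float=Decimal)  # keeps all digits

print(lossy)  # -> 9007199254740992.0
print(exact)  # -> 9007199254740993
```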

re
2 replies
20h23m

I believe this has improved in jq 1.7: https://github.com/jqlang/jq/releases/tag/jq-1.7

    Use decimal number literals to preserve precision. Comparison operations respects precision but arithmetic operations might truncate.

anonymoushn
1 replies
15h38m

This is still broken in jq 1.7 for sufficiently long exponents

re
0 replies
15h27m

From a quick test it looks like it supports exponents up to 9 digits long (i.e. 1.0e999999999), which, frankly, seems pretty reasonable; it's hard for me to imagine a use case where you'd want to represent numbers larger than that.

wwader
0 replies
19h20m

jq 1.7 does preserve large integers but will truncate if any operation is done on them. Unfortunately it currently truncates to a decimal64, which is a bit confusing; this will be fixed in the next release, where it follows the suggestion from the JSON spec and truncates to binary64 (double) https://github.com/jqlang/jq/pull/2949

Yanael
8 replies
18h58m

How have you been using jq? Is it more ad hoc, for exploring JSON files during development/data analysis, or in programs that run in production?

wwader
2 replies
18h52m

Quite a lot! I use it to explore both JSON and text (parsed using jq functions). I also use it for exploring and debugging binary formats (https://github.com/wader/fq). Nowadays I also use it for some ad-hoc programming and as a calculator.

Yanael
1 replies
18h41m

Oh, sounds like a very neat way to explore binaries!

wwader
0 replies
17h32m

If you spend lots of time with certain binary formats then I can recommend adding a decoder; happy to help with it also!

brundolf
2 replies
17h46m

Yeah, I've always liked the idea of jq but personally I find it easier to open a REPL in the language I'm most familiar with (which happens to be JS, which does make a difference) and just paste in the JSON and work with it there

It may be more verbose, but I never have to google anything, which makes a bigger difference in my experience

wwader
1 replies
17h41m

https://github.com/wader/fq has a REPL and can read JSON. A tip is to use "paste | from_json | repl" in a REPL to paste JSON into a sub-REPL. You can also use `<text here>` with fq, which is a raw string literal.

brundolf
0 replies
17h24m

The important part wasn't having a REPL, it was using a language I already know off the top of my head

delecti
0 replies
17h26m

My most common usage is pretty-printing the output of curl, or getting a list of things from endpoint service/A and then calling endpoint service/B/<entry> to do things for each entry in the list.

Liskni_si
0 replies
50m

I use it as a "JSON library for bash". :-)

Not really in "production", but I have a lot of small-ish shell scripts all over the place, mostly in ~/bin, and some in CI (GitHub Actions) as well.

rad_gruchalski
5 replies
22h58m

I started using yq over jq. Any significant differences?

MrDrMcCoy
3 replies
22h2m
Yasuraka
1 replies
21h41m

I prefer the former: a single static binary which works great on workstations and CI alike. The latter requires Python as well as jq, since it's a wrapper.

bbkane
0 replies
20h39m

I've been using yq + git-xargs to automate config files in repos (CI/CD, linters, etc). The combo has been spectacular for me.

https://github.com/bbkane/git-xargs-tasks

rad_gruchalski
0 replies
21h12m
a-nikolaev
0 replies
20h37m

jq feels like a much more robust tool than yq. I understand that the task of processing YAML is much harder than JSON, but:

- yq changed its syntax between version 3 and 4 to be more like jq (but not quite the same for some reason)

- yq has no if-then-else https://github.com/mikefarah/yq/issues/95 which is a poor design (or omission) in my opinion

So yq works when you need to process YAML; it can even handle comments quite well. But for pure JSON processing, jq is a better tool.

pizza_pleb
5 replies
23h13m

Somewhat off-topic, but is there a tool which integrates something like this/jq/fx and API requests? I’d like to be able to do some ETL-like operations and join JSON responses declaratively, without having to write a script.

awayto
4 replies
22h28m

Is there anything out there like "SELECT * FROM 'http://...'"?

pizza_pleb
1 replies
21h40m

I think a query language would be great, with a way to subquery/chain data from previous requests (e.g. by jsonpath) to subsequent ones.

The closest I’ve gotten is to wrap the APIs with GraphQL. This achieves joining, but requires strict typing and coding the schema+relationships ahead of time which restricts query flexibility for unforeseen edge cases.

Another is a workflow automation tool like n8n which isn’t as strict and is more user-friendly, but still isn’t very dynamic either.

Postman supports chaining, but in a static way with getting/setting env variables in pre/post request JS scripts.

Bash piping is another option, and seems like a more natural fit, but isn’t super reusable for data sources (e.g. with complex client/auth setup) and I’m not sure how well it would support batch requests.

It would be an interesting tool/language to build, but I figure there has to be a solution out there already.

hnlmorg
0 replies
7h35m

This is exactly what Murex shell does. It has lots of builtin tools for querying structured data (of varying formats) but also supports POSIX pipes for using existing tools like `jq` et al seamlessly too.

https://murex.rocks

hnlmorg
0 replies
21h44m

My shell will do that

    open http://… | select * where …
    # FROM can be omitted because you’re loading a pipe

https://murex.rocks/optional/select.html

RyanHamilton
0 replies
21h18m

I'm working on a project I call babeldb. It allows "select * from query_rest('https://api1.binance.com/api/v3/exchangeInfo#.symbols')". The #.symbols at the end is actually a jq path expression; it's sometimes needed when the default JSON-to-table conversion is suboptimal. You can see it in action by selecting babeldb in the dropdown, then clicking "Run All" here: https://pulseui.net/sqleditor?qry=select%20*%20from%20query_...

mgaunard
5 replies
23h33m

While jq is a very powerful tool, I've also been using DuckDB a lot lately.

SQL is a much more natural language if the data is somewhat tabular.

MrDrMcCoy
2 replies
21h57m

I like textql [0] better for this use case, as it's simpler in my mind.

[0] https://github.com/dinedal/textql

bdcravens
1 replies
21h31m

textql doesn't seem to work with JSON. I think the grandparent comment meant that the data was in a table of sorts, represented in JSON.

MrDrMcCoy
0 replies
20h20m

Ah, you're right. TextQL combined with Miller would be closer, but DuckDB can do the same things all in one. Always good to have a variety of tools to choose from.

suchar
0 replies
22h23m

Some time ago I tried Retool and it does have "Query JSON with SQL": https://docs.retool.com/queries/guides/sql/query-json (it is somewhat relevant because it was extremely convenient)

It is somewhat similar to LINQ in C#, although SQL there is more standardised, so I like it more. Also, it would be fantastic to have in-language support for querying raw collections with SQL. Even better: to be able to transparently store collections in SQLite.

It is always sad to see code which takes some data from db/whatever and then does simple processing using loops/stream api. SQL is much higher level and more concise language for these use cases than Java/Kotlin/Python/JavaScript
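Lacking in-language support, an in-memory SQLite database already gets surprisingly close in Python; a minimal sketch with made-up sample data:

```python
import sqlite3

rows = [("apple", 1.2), ("pear", 1.6), ("fig", 0.9)]  # an in-memory collection

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE fruit (name TEXT, price REAL)")
con.executemany("INSERT INTO fruit VALUES (?, ?)", rows)

# Query the collection with plain SQL instead of loops.
cheap = con.execute(
    "SELECT name FROM fruit WHERE price < 1.5 ORDER BY name"
).fetchall()
print(cheap)  # -> [('apple',), ('fig',)]
```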

CBLT
0 replies
21h9m

I've found the same. I store all raw json output into a sqlite table, create virtual columns from it, then do a shell loop off of a select. Nested loops become unnested, and debuggability is leagues better because I have the exact record in the db to examine and replay.

I've noticed what I'm creating are DAGs, and that I'm constantly restarting it from the last-successfully-processed record. Is there a `Make`-like tool to represent this? Make doesn't have sql targets, but full-featured dag processors like Airflow are way too heavyweight to glue together shell snippets.

coldtea
5 replies
16h5m

nan > nan is false, while nan < nan is true.

Is this wrong behavior from jq, or some artifact consistent with how the floating-point spec is defined: surprising, but faithful to IEEE 754 nonetheless?

throw555chip
2 replies
15h24m

I used Bard after trying unsuccessfully to decipher the Wikipedia page, and Bard says that, according to IEEE 754, nan < nan should return false (0), while nan > nan should also return false (0).

ClassyJacket
1 replies
13h13m

I wish there was some version of Wikipedia for people who speak good English (not Simple English), but aren't assumed to already be experts on the topic. Technical articles are pretty much impenetrable.

coldtea
0 replies
6h23m

So you basically wish for Wikipedia to also feature simplified explanations of technical topics.

I don't think "good English vs simple english" plays into this.

It's not like the problem of technical articles being impenetrable on Wiki is that Wiki doesn't have an intermediate level between expert-talk and Simple English.

It's just that it doesn't have simple english explanations of some technical topics.

extraduder_ire
1 replies
15h51m

IIRC, any comparison using a nan must fail (return false) according to the IEEE spec.

kopecs
0 replies
14h37m

I think it is a bit more complex, since NaN is defined to be "unordered" with respect to all other values (including other NaNs), and so any relation for which unordered values result in true (e.g., compareQuietNotEqual) will return true. (See section 5.11)
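Python's floats follow the same IEEE 754 semantics, so the unordered behavior is easy to check directly:

```python
import math

nan = math.nan

# NaN is unordered: every ordered comparison involving it is false.
print(nan < nan, nan > nan, nan <= nan, nan == nan)  # -> False False False False

# Negated predicates (the compareQuietNotEqual case) come out true instead.
print(nan != nan)  # -> True
```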

stickfigure
4 replies
21h14m

Congratulations! We're almost back to the basic functionality we used to have with XSLT.

nurettin
1 replies
21h11m

To be fair, xslt is a lot more verbose than `map(.*2)`

lkuty
0 replies
8h49m

A bit more verbose, but you have the full power of XQuery with you. XSLT, however, is more verbose still, like you mentioned.

    for $price in json-to-xml(unparsed-text($file))/map/map/number[@key="price"]
    return $price+2
For the following JSON document:

    {
      "fruit1": {
        "name": "apple",
        "color": "green",
        "price": 1.2
      },
      "fruit2": {
        "name": "pear",
        "color": "green",
        "price": 1.6
      }
    }
The call to json-to-xml() produces this XML document:

    <?xml version="1.0" encoding="UTF-8"?>
    <map xmlns="http://www.w3.org/2005/xpath-functions">
       <map key="fruit1">
          <string key="name">apple</string>
          <string key="color">green</string>
          <number key="price">1.2</number>
       </map>
       <map key="fruit2">
          <string key="name">pear</string>
          <string key="color">green</string>
          <number key="price">1.6</number>
       </map>
    </map>

lkuty
1 replies
14h28m

You could use an elaborate filter with jq (see https://stackoverflow.com/a/73040814/452614) to transform JSON to XML and then use an XQuery implementation to process the document. It would be quite powerful, especially if the implementation supports XML Schema. I have not tested it.

Or https://github.com/AtomGraph/JSON2XML which is based on https://www.w3.org/TR/xslt-30/#json-to-xml-mapping

It even looks like we could use an XSLT 3 processor with the json-to-xml function (https://www.w3.org/TR/xslt-30/#func-json-to-xml) and then use XQuery or stay with XSLT 3.

Now I have to test it.

lkuty
0 replies
9h14m

In fact XQuery alone is enough, e.g. with Saxon HE 12.3.

    (: file json2xml.xq :)
    declare default element namespace "http://www.w3.org/2005/xpath-functions";
    declare option saxon:output "method=text";
    declare variable $file as xs:string external;
    json-to-xml(unparsed-text($file))/<your xpath goes here>

    java -cp ~/Java/SaxonHE12-3J/saxon-he-12.3.jar net.sf.saxon.Query -q:json2xml.xq file='/path/to/file.json'

WhereIsTheTruth
4 replies
19h51m
sgt
2 replies
10h5m

How does that usually play out in the Rust ecosystem? Lots of dependencies tells me there's a huge risk of the dependencies becoming inherently incompatible with each other over time, making maintenance a major task. How will this compile in, say, 2 years?

majewsky
1 replies
5h31m

Because of the lockfile, it will use the same library versions when compiling again in the future. The main question for "will this compile" is whether the Rust compiler is sufficiently backwards-compatible, which (at least from my experience) it certainly is.

Also re "lots of dependencies": This is kind of unavoidable in Rust because the stdlib is deliberately very lean, and focuses on basic data structures that are needed for interop (e.g. having common string types is important for different libraries to work together with each other) or not possible to implement without specific compiler support (e.g. marker traits or boxing). Contrast this with Go where the stdlib contains things like a full-fledged HTTP server and regex engine. It's easy to build things in Go with a rather short go.mod file, but only because the go.mod file does not show all the stdlib packages that you're using.
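As a sketch of what this looks like in practice (crate name and versions here are just illustrative): the `Cargo.toml` lists only direct dependencies with semver ranges, while the generated `Cargo.lock` records the exact resolved version of every crate, including transitive ones, so a rebuild years later resolves the same graph:

```toml
# Cargo.toml: direct dependencies only, with semver ranges
[dependencies]
serde_json = "1.0"

# Cargo.lock (generated, checked in for binaries): exact pins,
# including transitive dependencies, e.g.
#
# [[package]]
# name = "serde_json"
# version = "1.0.107"
# dependencies = ["itoa", "ryu", "serde"]
```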

sgt
0 replies
4h28m

I understand the concept of a lock file and they are a blessing, but inevitably one will need to upgrade at least one of the dependencies. Whether this is due to desired functionality or a bug, it is bound to happen.

Lock files won't solve that problem if one of the other libraries will be incompatible. Add more time and the problem compounds. Major problem in e.g. the npm ecosystem.

mozey
0 replies
11h48m
vjust
3 replies
19h54m

I find jq's syntax (and docs) kind of opaque, but I guess we have no other options. And I don't think this latest incarnation breaks any new ground there. But it'd be better if I just wrote it myself - "be the change ...."

stevage
1 replies
19h50m

Well, as pointed out in the jaq docs there is jql.

But I just looked at jql and I liked it even less. The pedantry about requiring all keys in selectors to be double quoted is, um, painful for a CLI tool.

stevage
0 replies
19h41m

Someone else above pointed out JJ which looks much easier to use.

wrsh07
0 replies
19h25m

ChatGPT or the warp chatbot is pretty good at jq syntax

visarga
3 replies
23h23m

This language must be the spiritual successor of Perl

TurboHaskal
2 replies
21h57m

I inherited some piece of code that made use of an extremely long and complicated jq script.

I simply gave up understanding the whole thing, and restored the balance in the universe by rewriting it in Perl.

hnlmorg
0 replies
21h48m

Now you just need to rewrite Perl in Rust and compile that to WebAssembly. And the circle of HN is complete.

LargeTomato
0 replies
20h10m

I know perl is useful. I know it's going to help me. It seems like you can get away with a quick perl script whereas a python script would attract scrutiny.

But it's such a painful language to look at.

sigmonsays
3 replies
21h1m

why not contribute to the existing jq project instead of starting a new one?

We have so many json query tools now it's insane.

sillysaurusx
0 replies
20h58m

Fun, of course. Existing projects are boring almost by definition. And this is volunteer work.

lilyball
0 replies
20h32m

The obvious reason here is jaq makes some changes to semantics, changes which would be rejected by jq.

Another likely reason is that it seems a motivation for jaq is improving the performance of jq. Any low-hanging fruit there in the jq implementation was likely handled a long time ago, so improving this in jq is likely to be hard. Writing a brand new implementation allows for trying out different ways of implementing the same functionality, and using a different language known for its performance helps too.

Using a language like Rust also helps with the goal of ensuring correctness and safety.

anonymoushn
0 replies
20h21m

One reason to do this is that often performance improvements involve architectural overhauls that maintainers are unlikely to approve of.

j1elo
3 replies
19h32m

[[]] | implode crashes jq, and this was not fixed at the time of writing despite being known for five years.

Well, taking into account that jq development has been halted for 5 years and only recently revived again, it's no wonder that bug reports have been sitting there for that time, both well known and new ones. I bet they'll get up to speed and slowly but surely clear the backlog that has built up all this time.

thekoma
1 replies
16h16m

Why was it halted?

slaymaker1907
0 replies
10h57m

I think the original devs just got burnt out for a while https://github.com/jqlang/jq/issues/2305#issuecomment-157263...

wwader
0 replies
19h25m
Osiris
3 replies
10h17m

I love the idea of jq but i use it infrequently enough that I have to search the manual for how to use their syntax to get what I want.

Sadly 99% of what I do with jq is “| jq .”

ruuda
0 replies
9h51m

I have the same problem. Then, unrelated, I started building a configuration language, and it turned out it's quite nice for querying json [1]. Here is an example use case that I couldn't solve in jq but I could in RCL: https://fosstodon.org/@ruuda/111120049523534027

[1]: https://docs.ruuda.nl/rcl/rcl_query/

mmorearty
0 replies
8h23m

Me too; but recently I used ChatGPT to quickly give me the jq syntax I needed: https://chat.openai.com/share/40b68d73-d2dd-412d-867f-9f375e...

dse1982
0 replies
9h51m

I had the same problem, keeping me from really exploiting the power of jq. But for this and similar cases I am really glad about copilot being available to help. I just tell it what I need, together with a reduced sample of the source-json, and it generates a correct jq-script for me. For more complex requirements I usually iterate a bit with Copilot because it is easier and more reliable to guide it to the solution gradually than to word everything out correctly in the question in the first go. Also I myself often get new and better ideas during the iterations than I had in the beginning. Probably works the same with ChatGPT and others.

sgt
2 replies
10h6m

The fact that jq takes almost a second to run on a Pi is crazy[0]. And the tool is written in C.

[0] https://github.com/jqlang/jq/issues/1411

eyegor
1 replies
9h56m

It was fixed in 2019 though? I don't understand your point.

https://github.com/jqlang/jq/issues/1380

sgt
0 replies
8h45m

You are right. I stand corrected.

jhatemyjob
2 replies
20h26m

I switched to jless and never looked back. The user interface is miles ahead of everything else

Snelius
1 replies
15h39m

It's not the same. jq is not just a viewer; it's a JSON query language processor.

jhatemyjob
0 replies
13h54m

You are correct, the user interface of jq is not the same as the user interface of jless.

jasonlhy
2 replies
11h41m

I think the best alternative to jq is DataWeave, but it is not open source. https://dataweave.mulesoft.com/

anonymoushn
1 replies
11h36m

The latest blog post, from last September, is about open sourcing it. So the process of open sourcing DataWeave has taken at least 15 months so far.

jasonlhy
0 replies
4h56m

It has a learning curve, but it actually makes sense once you get used to it, and it works for other formats too. It is much better than other transformation languages, and you can even call Java.

I think they are kind of stuck in development; even the Mule engine only has one active developer, judging from the GitHub commits.

dilsmatchanov
1 replies
22h39m

Haven't checked yet, but I am sure it's written in Rust

anitil
0 replies
16h23m

How could you tell?

Yanael
1 replies
20h37m

jq has been in my toolbox for a while; it's a great tool. But it's yet another query language to learn, and jaq seems identical on that front. I think that's where LLMs can help a lot to ease adoption. I started a project on that note, to manipulate data with just natural language: https://partial.sh

'cat' your JSON file and describe what you want; I think that should be the way to go.

LargeTomato
0 replies
20h9m

I usually avoid those types of tools. It looks way too fragile and the examples look a bit magical. Do you think it's stable and easy to use?

sesm
0 replies
9h22m

Is there a JS library that is similar to JQ but works on JS objects in memory?

phplovesong
0 replies
12h36m

Before I clicked on the link I had this gut feeling. It turned out my gut was right: it was written in Rust. Go figure.

jeffbee
0 replies
22h25m

I guess it's cute that there's some terminal line art library in Rust somewhere, but when I tried to invoke jaq it just pooped megabytes of escape codes into my iTerm and eventually iTerm tried to print to the printer. Too clever.

I tried to do `echo *json | rush -- jaq -rf ./this-program.jq {} | datamash ...` and in that context I don't think it's appropriate to try to get artistic with the tty.

The cause of the errors, for whatever it's worth, is that `jaq` lacks `strftime`.

icco
0 replies
19h1m

I use `yq` for this stuff and it handles most of this pretty well.

fyzix
0 replies
23h3m

I think my benchmark[1] would be a great test for this. The jq[2] version takes 50s on my machine.

[1] : https://github.com/jinyus/related_post_gen

[2]: https://github.com/jinyus/related_post_gen/blob/main/jq/rela...

bilekas
0 replies
9h9m

nan > nan is false, while nan < nan is true.

You learn something new every day. Does anyone have any idea why this might be happening? It seems like more than just a bug.

232kkk33kk
0 replies
16h37m

And in PowerShell you don't need to learn all those syntaxes for different tools and different formats like jq, xmlstarlet, etc. Just convert everything to an object and query the data using PowerShell syntax.

1vuio0pswjnm7
0 replies
18h15m

All else being equal, does the speed of jaq change with the size of the input?