
Google's new pipe syntax in SQL

samwillis
19 replies
20h19m

Richard Hipp, creator of SQLite, has implemented this in an experimental branch: https://sqlite.org/forum/forumpost/5f218012b6e1a9db

Worth reading the thread, there are some good insights. It looks like he will be waiting on Postgres to take the initiative on implementing this before it makes it into a release.

Blackthorn
8 replies
19h59m

FROM first would be nothing short of incredible. I can only hope that Postgres and others can find it within themselves to get together and standardize on such an extension!

willvarfar
5 replies
13h1m

Yep, I didn't know DuckDB supported it already!

Being able to write SELECT, FROM, and WHERE in any order, and allowing multiple WHEREs, AGGREGATEs, etc., combined with support for trailing commas, makes copy-pasting, templating, reusing, and code-generating SQL so much easier.

  FROM table  <-- at this point there is an implicit SELECT *
  SELECT whatever
  WHERE some_filter
  WHERE another_filter <-- this is like AND
  AGGREGATE something
  WHERE a_filter_that_is_after_grouping <-- is like HAVING
  ORDER BY ALL <-- group-by-all is great in engines that support it; want it for ordering too
...
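
For a concrete version, here's roughly that shape in the paper's pipe syntax (a sketch; table and column names invented):

  FROM orders
  |> WHERE order_date >= '2024-01-01'
  |> WHERE status = 'shipped'                        -- second WHERE, ANDed with the first
  |> AGGREGATE SUM(amount) AS total GROUP BY customer_id
  |> WHERE total > 1000                              -- after grouping, plays the role of HAVING
  |> ORDER BY total DESC;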

aidos
3 replies
12h17m

What’s group-by-all? Sounds like distinct?

willvarfar
1 replies
11h46m

Normally the SELECT has a bunch of columns to group by and a bunch of columns that are aggregates. Then, in the GROUP BY clause, you have to list all the columns to group by. The query compiler knows which they are, and polices you, making sure you got it right. All GROUP BY ALL does is say 'the compiler knows; there's no need to list them all'. Very convenient.

BigQuery supports GROUP BY ALL and it really cleans up lots of queries. E.g.

   SELECT foo, bar, SUM(baz)
   FROM x
   GROUP BY ALL <-- equiv to GROUP BY foo, bar
(eh, except MySQL; my memory is that MySQL would silently apply ANY_VALUE() to any column that wasn't wrapped in an aggregate function and wasn't grouped; argh, it was a long time ago)

Sesse__
0 replies
7h46m

MySQL doesn't do this anymore; the ONLY_FULL_GROUP_BY mode became default in 5.7 (I think). You can still turn it off and get the old behavior, though.
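
If memory serves, you can check the mode and strip the flag per session, something like:

    SELECT @@sql_mode;  -- shows whether ONLY_FULL_GROUP_BY is set
    SET SESSION sql_mode = (SELECT REPLACE(@@sql_mode, 'ONLY_FULL_GROUP_BY', ''));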

genezeta
0 replies
11h34m

It's different from distinct. Distinct just eliminates duplicates but does not group entries.

Suppose...

  SELECT brand, model, revision, SUM(quantity)
   FROM stock
   GROUP BY brand, model, revision
This is not solved by using DISTINCT, as you would not get the correct totals.

GROUP BY ALL allows you to write it a bit more compactly...

  SELECT brand, model, revision, SUM(quantity)
   FROM stock
   GROUP BY ALL

croes
0 replies
6h44m

A special keyword like HAVING prevents errors from typing a filter on the wrong line.

How is OR done with these WHEREs?

quartesixte
0 replies
2h2m

What exactly is the history of having FROM be the second item, and not the first? Because FROM first seems more intuitive and actually the way you write out queries.

Really hope this takes off and gets more widespread adoption because I really want to stop doing:

  SELECT *
  FROM all_the_joins
and then going back to turn it into

  SELECT {my statements here}
  FROM all_the_joins

simonw
7 replies
19h50m

That comment where he explains why he's not rushing to add new unproven SQL syntax to SQLite is fascinating:

My goal is to keep SQLite relevant and viable through the year 2050. That's a long time from now. If I knew that standard SQL was not going to change any between now and then, I'd go ahead and make non-standard extensions that allowed for FROM-clause-first queries, as that seems like a useful extension. The problem is that standard SQL will not remain static. Probably some future version of "standard SQL" will support some kind of FROM-clause-first query format. I need to ensure that whatever SQLite supports will be compatible with the standard, whenever it drops. And the only way to do that is to support nothing until after the standard appears.
anitil
6 replies
17h39m

It's so ambitious in an almost boring way, exactly the right steward for a project like this

maxbond
5 replies
16h44m

Dr. Hipp is one of my heroes. He seems to have labored quietly in semi-obscurity for decades, and at the end of it he's produced some amazing software. I was tickled by the kerfuffle over his use of a set of guidelines for living in a Christian monastery as SQLite's code of ethics for the purpose of checking a box on an RFQ (part of the fallout of the libsql fork), because he does seem like a sort of programmer monk. (For what it's worth, as an agnostic, I've read them several times and found them unobjectionable. While I think the drama was unnecessary, the libsql people are doing interesting work.)

I choose never to meet this man and be disabused of this notion. Shine on, doctor.

foldr
4 replies
9h6m

In fairness, I think the complaint over the tongue-in-cheek 'code of conduct' was that it was transparently unsuitable if considered as an actual code of conduct (i.e. a list of rules that SQLite contributors must obey in order to participate in the project). For example, it seems unlikely that Dr. Hipp would wish to exclude contributors who have committed adultery, or who do not pray with sufficient frequency.

(The erstwhile code of conduct is now labeled a 'code of ethics', and AFAIK SQLite has no official CoC currently.)

maxbond
3 replies
8h47m

To me it seemed like they had incompatible visions (SQLite wants to work in 2050 in the contexts it's been traditionally used in, libsql wants to modernize and lean into the more recent use cases) and so a fork was the appropriate and inevitable course of action.

Given that SQLite isn't really open to contribution (one of libsql's frustrations), it doesn't really worry me that they didn't & don't have a clear code of conduct. To me, digging through the repository [ETA: the website, rather] for what amounts to a cringey Easter egg and then linking to it as if it were a serious issue is uncalled for. To be honest, I think the complaints should have stayed out of their announcement entirely - they have a legitimately cool vision for what their fork could be, and the complaints were only a distraction.

foldr
1 replies
8h22m

Yes, it's an important point that SQLite is not a project with an open contribution model. However, they do presumably accept external contributions in the form of bug reports, suggested patches, etc. etc.

You didn't have to dig through the repository to find the CoC. It was right there on the website at /codeofconduct.html: https://web.archive.org/web/20180315125217/https://www.sqlit...

maxbond
0 replies
8h21m

Another cool project from Dr. Hipp is the Fossil SCM, which SQLite is developed in; one of its features is that it ships with a web view similar to GitHub's. The website is actually the web view of the repo. (Apologies for expressing that in a confusing way - I knew it was on the website, I was referring to the website as the repository.)

lupire
0 replies
5h30m

The blatant religious discrimination in the document is both not a problem at all if the author is the only contributor (I suppose there must be some arms-length way of consuming external support from less beholden entities; I don't know the details of Critical Code of Conduct Theory), and totally unacceptable otherwise.

Following the document itself, it should be rewritten if it ever intends to include other people, and should be explicitly clarified that the current form only applies to the author himself.

bvrmn
1 replies
10h1m

It's funny how he addresses the new syntax as "from-clause-first". Like a very minor change with a low value.

Cthulhu_
0 replies
9h56m

I think that's important, because a lot of concepts are presented as prohibitively complicated; for example, functional programming makes sense in my head, but if you present it as lambda calculus and write it in concise form with new operators, you lost me.

BeefWellington
17 replies
14h15m

Every time this FROM-first syntax style crops up, it's always demonstrated with the most basic simple query (one table, no projections / subselects / consideration of SPs/views).

Just for once I want to see complete examples of the syntax on an actual advanced query of any kind right away. Sure, toss out one simple case, but then show me how it looks when I have to join 4-5 reference tables to a fact table and then filter based on those things.

Once you do that, it becomes clear why SELECT first won out originally: legibility and troubleshooting.

As long as DBs continue to support standard SQL they can add whatever additional syntax support they want but based on history this'll wind up being a whole new generation of emacs vs vi style holy war.

dietr1ch
5 replies
13h52m

Sounds a bit like "new thing scary" unless you show why having SELECT in front actually avoids problems - and I don't think there's a clear problem it avoids. It does make autocomplete really hard (can you even do it properly?), while something along the lines of "just swap SELECT for FROM" is well defined.

garrettgarcia
2 replies
13h33m

Sounds a bit like "new thing scary" unless you show why having select in front actually avoids problems

This isn't really fair. BeefWellington gave a reason why SQL is how it is (and how it has been for ~50 years). It's reasonable to ask for a compelling reason to change the clause order. Simon's post says it "has always been confusing", but doesn't really explain why except by linking to a blog post that says that the SQL engine (sort of but not really) executes the clauses in a different order.

I think the onus of proof that SQL clauses are in the wrong order is on the people who claim they're in the wrong order.

Sankozi
1 replies
12h40m

But it has been explained many times from many angles.

* SELECT first makes autocomplete hard

* SELECT first is the only out-of-order clause in the SQL statement when you look at it from an execution perspective

* you cannot use aliases defined in SELECT in following clauses (see the sketch below)

* in some places SELECT is pointless but it is still required (to keep things consistent?)

Probably many more.
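
For the alias point, a small illustration (invented names; as the reply below notes, some engines relax this):

  SELECT price * quantity AS total
  FROM order_items
  WHERE total > 100;            -- rejected by most engines: WHERE is evaluated before SELECT
  -- you have to repeat the expression instead:
  --   WHERE price * quantity > 100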

bvrmn
0 replies
8h17m

you cannot use aliases defined in SELECT in following clauses

Some DBs allow it, or allow it partially. It's a constant source of friction for me, having to guess across different database systems.

mnsc
0 replies
13h23m

This is a case where stating your opinion and credentials will make you sound really old and conservative so it will be easy to take cheap shots like "you are just afraid of change".

At my previous gig I worked for a decade with an application that meant creating and maintaining large, hairy SQL that was written to offload application logic to the database (_very_ original). We used to talk about this "wrong order" often, but I never once actually missed it. At most it was a bit annoying when you jumped onto a server to troubleshoot and you knew the two columns you were interested in and could have saved two seconds. But when maintaining those massive queries it always felt good to have the projection up top, because that is the end result and what the query is all about. I would not have liked it if a method signature in e.g. Java were just the parameters, with the return type after the final brace. This analogy falls apart of course, since params are all over the place, but swapping things around wouldn't help.

So just go 'SELECT *...' and go back and expand later, I want my sql syntax "simple". /old developer

BeefWellington
0 replies
5h49m

It really isn't. I've been working in this field for ages and did a lot of those years as a DBA and data modeler. I've worked with other syntaxes too, mostly MDX but some others specific to Hadoop/Spark. I'm not afraid of new things. I just want them to improve on what we have. I want them to be honest about situations where their solution isn't great.

SQL has lots of warts, e.g. the fact that you can write SQL that joins tables without an explicit JOIN clause (comma joins), which leads to confusion. It's fragmented too -- the other example I posted shows two different syntaxes for TOP N / LIMIT N because different vendors went different ways. Some RDBMSes provide locking-hint mechanics and some don't (at least not reliably). There's no standard set of "library" functions defined anywhere, so porting between databases requires a lot of validation work. All of that makes portability hard, and some of those features are missing from the standards.

You'll note I also mentioned that if they want to add it that's fine but it's gonna wind up being a point of contention in a lot of places. That's because I've seen the same thing happen with the "Big Data" vs "what we have works" crowd.

Having select up front avoids problems in a couple key ways:

1. App devs who are working on their application can immediately see what fields they should expect in their resultset. For CRUD, it's probably usually just whatever fields they selected or `*` because everyone's in the habit of asking for every field they'll never use.

2. Troubleshooting problems is far easier because they almost always stem from a field in the projection. The projected field list (and thus the table aliases those fields come from) is literally the first piece of information you need (what is the field and where does it come from) to start troubleshooting. This is why SELECT ... FROM makes the most sense -- it puts the two most crucial pieces of information right up front.

3. Query planners already optimize and essentially compile the entire thing anyways, so legibility trumps other options IME.

Another point I'd make to you and everyone else bringing up autocomplete: If you need it, nothing is stopping you from writing your FROM clause first and then moving a line up to write your SELECT. Kinda like how you might stub out a function definition and later add arguments. This doesn't affect the final form for legibility.

mixedCase
4 replies
13h32m

https://prql-lang.org/ has a bunch of good examples on its home page.

If you engage the syntax with your System 2 thinking (prefrontal cortex, slow, the part of thinking we're naturally lazy to engage) rather than System 1 (automated, instinctual, optimized brain path to things we're used to) you'll most likely find that it is simpler, makes more logical sense so that you're filtering down things naturally like a sieve and composes far better than SQL as complexity grows.

After you've internalized that, imagine the kind of developer tooling we can build on top of that logical structure.

BeefWellington
2 replies
6h6m

Edit: In my pre-coffee rush this morning I completely missed the grouping by role (which is not that much harder FWIW). This unfortunately invalidates my entire post as it was posted and I don't want to spread misinfo.

fader
1 replies
5h20m

I don't think your alternatives actually solve the same problem. Your alternatives would give you the single most recently joined employee. The actual problem being solved is to find the most recently joined employee in each role.

You'd need to do some grouping in there to be able to get one employee per role instead of a single employee out of the whole data set.

BeefWellington
0 replies
5h17m

Yeah you're correct, I caught that and edited my reply right as you responded.

Time willing I will provide an updated reply with fixed SQL.

meepmorp
0 replies
8h12m

If you engage the syntax with your System 2 thinking (prefrontal cortex, slow, the part of thinking we're naturally lazy to engage) rather than System 1 (automated, instinctual, optimized brain path to things we're used to)

You might not have intended it this way, but your choice of phrasing is very condescending.

summerlight
0 replies
2h50m

As a test, I refactored a 500-line-ish analytical query that joins more than 20 tables with tens of complex CTEs, and I can say that this FROM-first syntax is superior to the legacy syntax on almost every single aspect.

otabdeveloper4
0 replies
11h45m

FROM order is, like, the least offensive and least wrong thing about SQL.

Bikeshedding par excellence.

nsonha
0 replies
13h23m

becomes clear why SELECT first won out originally: legibility and troubleshooting

nothing "becomes clear" just by you claiming so, better elaborate

bvrmn
0 replies
8h15m

SELECT first won out originally: legibility and troubleshooting.

It's quite interesting to dive into the history of SQL alternatives in the '70s/'80s.

WorldMaker
0 replies
4h21m

Once you do that, it becomes clear why SELECT first won out originally: legibility and troubleshooting.

Select-first was as much an accident of "it sounded better as an English sentence" to the early SQL designers as anything. Plus, they were working with early-era parsers with very limited lookahead, and putting the primary "verb" up front was important at the time.

But English is very flexible, especially in "command syntax", and from-first is surprisingly common: "From the middle cupboard, grab a plate". SQL trying to sound like English here only shows how inflexible it still is in comparison to actual English.

I've been using C#'s LINQ since it was added to the language in 2007 and the from/where/join/group by/select order feels great, is very legible especially because it gives you great autocomplete support, and troubleshooting is easier than people think.

WesolyKubeczek
0 replies
8h18m

Once you do that, it becomes clear why SELECT first won out originally: legibility and troubleshooting.

Also, tools can trivially tell DQL from DML by the first word they encounter, barring data-modifying functions (o great heavens, no!).

aragonite
14 replies
16h35m

This remains a long-standing pet peeve of mine. PDFs like this are horrible to read on mobile phones, hard to copy-and-paste from ...

I've never understood why copying text from digitally native PDFs (created directly from digital source files, rather than by OCR-ing scanned images) is so often such a poor experience. Even PDFs produced from LaTeX often contain undesirable ligatures in the copied text, like ﬁ and ﬂ. Text copied from some Springer journals sometimes lacks spaces between words or introduces unwanted spaces between letters in a word ... Is it due to something inherent in PDF technology?

crazygringo
4 replies
15h22m

Is it due to something inherent in PDF technology?

Exactly. PDF doesn't have instructions to say "render this paragraph of text in this box", it has instructions to say "render each of these glyphs at each of these x,y coordinates".

It was never designed to have text extracted from it. So trying to turn it back into text involves a lot of heuristics and guesswork, like where enough separation between characters should be considered a space.

A lot also depends on what software produced the PDF, which can make it easier or harder to extract the text.

vips7L
1 replies
14h3m

My favorite is when they do bold by duplicating and slightly shifting the letters. Bboolldd. PDFs are hell.

lupire
0 replies
5h28m

That's inherited from the original Portable Document Format for machines - the typewriter instructions.

spatulon
1 replies
8h52m

I've never looked into the PDF format, but does it not allow for annotations that say "the glyphs in the rectangle ((x0, y0), (x1, y1)) represent the text 'foobar'"? That's been my mental model for how they are text-searchable.

kccqzy
0 replies
2h57m

They do but such annotations are optional.

mjevans
2 replies
16h31m

Ligatures like ﬁ, ﬂ, ﬃ, ﬄ, etc. are font-specific substitutions for rendering correctly on a screen or printer. PDF is intended to be a _rendered_ format, rather than a parse-able one.

Well-formatted EPUB and HTML, by contrast, are intended to adapt to end-user needs and better fit the available layout space.

lupire
0 replies
5h27m

That's fine, but a good compiled format should also include a source map for accessibility.

WorldMaker
0 replies
4h39m

Though it's also a stuck legacy throwback. Modern advice would be to not send ligatures directly to the renderer and instead let the renderer poll OpenType features (and Unicode/ICU algorithms) to build them itself. PDF's baking of some ligatures into its files seems like a backwards-compatibility legacy mistake, kept to support ancient "dumb" PostScript fonts and pre-Unicode font encodings (or at least pre-Unicode normalization forms). It's also a bit of the fact that PDF has always been confused about whether it is the final renderer in a stack or not.

0cf8612b2e1e
1 replies
15h59m

It is a shame that CSS pagination is still a mess. Not that I like CSS, but it would go a long way towards unlocking some layouts from PDF.

jamesfinlayson
0 replies
12h44m

Agreed - I used CSS to lay out a book a couple of years ago and it wasn't too bad, but the things that have poor support/don't work at all (like page numbers) are a pain to hack around.

meindnoch
0 replies
8h20m

If a PDF doesn't support text extraction, it's the fault of the software that created it. Most likely the software didn't include the glyph → Unicode character mapping in the PDF.

jonathanyc
0 replies
12h16m

PDF natively supports selectable/extractable text. Section 9.10 of ISO 32000 is literally “Extraction of Text Content.” I’ve implemented it myself in production software.

There are many good reasons why PDF has a “render glyph” instruction instead of a “render string”. In particular your printer and your PDF viewer should not need to have the same text shaping and layout algorithms in order for the PDF to render the same. Oops, your printer runs a different version of Harfbuzz!

The sibling comment is right that a lot depends on the software that produced the PDF. It’s important to be accurate about where the blame lies. I don’t blame the x86 ISA or the C++ standards committee when an Electron app uses too much memory.

jahewson
0 replies
15h45m

It’s due to poor choices made in the implementation of pdfTeX. For example the TeX engine does not associate the original space characters with the inter-word “glue” that replaces them, so pdfTeX happily omits them. This was fixed a few years back, finally. But there’s millions(?) of papers out there with no spaces.

ericjmorey
0 replies
13h12m

XPS solved a lot of the problems with PDF, but Microsoft couldn't reach a critical level of adoption to let network effects take hold.

However, I don't know if XPS handles the copying of text better.

yarg
12 replies
19h52m

This reminds me of .NET's short-lived LINQ to SQL.

There was a talk at the time, but I can't find the video: http://jaoo.dk/aarhus2007/presentation/Using+LINQ+to+SQL+to+....

Basically, it was a way to cleanly plug SQL queries into C# code.

It used this sort of ordering (where the constraints come after the thing being constrained); it needed to do so for IntelliSense to work.

dragonwriter
5 replies
19h43m

This reminds me of .NET's short-lived LINQ to SQL.

"Short lived"? It's still alive, AFAIK, and the more popular newer thing for the same use case, LINQ to Entities, has the same salient features but (because it is tied to Entity Framework and not SQL Server specific) is more broadly usable.

plusplusungood
1 replies
19h7m

LINQ is not the same as LINQ-to-SQL. The former is a language feature, the latter a library (one of many) that uses that feature.

yarg
0 replies
18h48m

Did you reply to the wrong person? Because I'm not the guy that didn't know that.

LeonB
1 replies
19h6m

Yeh. LINQ to SQL was a much more lightweight extension than EF, and was killed due to internal warring at MS.

Database people were investing a lot of time and energy on doing things “properly” with EF, and this scrappy little useful tool, linq to sql, was seen as a competitor.

yarg
0 replies
18h43m

I quite liked it in the 5 minutes it existed - it was just really easy to use.

cyberax
3 replies
19h33m

"Short-lived"? LINQ is very much alive in the C# ecosystem.

And FROM-first syntax absolutely makes more sense, regardless of autocomplete. You should put the "what I need to select" after the "what I'm selecting from", in general.

yarg
2 replies
19h22m

LINQ yes, but they killed off the component not long after introducing it.

jiggawatts
0 replies
5h41m

It was replaced by Entity Framework.

BartjeD
0 replies
13h11m

Linq to sql still lives

WorldMaker
0 replies
4h13m

And NHibernate.Linq and Dapper.Extensions.Linq… Most ORMs in the ecosystem have at least one Linq support library, even if just a third-party extension.

Also, there are fun things that support Linq syntax for non-ORM uses, too, such as System.Reactive.Linq and LanguageExt: https://github.com/louthy/language-ext/wiki/How-to-deal-with...

andrewguy9
3 replies
19h40m

I’m a big Kusto user, and it’s wonderful to have pipes in a query language.

If you haven’t tried it, it’s great!

tehlike
1 replies
19h37m

I have not tried it, but I used to be a .NET developer and worked a lot with LINQ (and contributed a bit to NHibernate and its LINQ provider), and I am a big fan of the approach.

Kusto does seem interesting too, and I think some of the stuff I want to build will find a use for it!

Salgat
0 replies
15h5m

LINQ is so incredibly intuitive. I wonder if this will make creating C# LINQ providers for databases that support this syntax easier.

kbouck
0 replies
11h36m

Indeed. Elastic has also recently released a piped query language called ES|QL. Feels similar to Kusto.

I find piped queries both easier to write and easier to read.

numbsafari
2 replies
18h26m

The paper directly references PRQL and Kusto. The main goal here is to take lessons learned from earlier efforts and try and find a syntax that works inside and alongside the existing SQL grammar, rather than as a wholly separate language.

lupire
0 replies
5h24m

It's wild that the enterprise and connected world has moved on from forcing COBOL compatibility for modern projects, but still insists on SQL compatibility.

hn_throwaway_99
0 replies
14h33m

I've been following PRQL for some time now since it first got good traction on HN and I like it a lot, but I'm really hoping this pipe syntax from Google takes off for a couple of reasons:

1. Similar to what you mention, while I think PRQL is pretty easy to learn if you know SQL, it still "feels" like a brand new language. This piped SQL syntax immediately felt awesome to me - it mapped how my brain likes to think about queries (essentially putting data through a chain of sieves and transforms), but all my knowledge of SQL felt like it just transferred over as-is.

2. I feel like I'm old enough now to know that the most critical thing for adoption of a new technology that's an incremental improvement over an existing one is to make the upgrade path as easy as possible. I shouldn't have to overhaul everything at once; I just want to be able to adopt it in small pieces, a chunk at a time. While not 100% the same thing, if you look at the famously abysmal uptake of things like IPv6 and the pain it takes to use ES-module-only distributions from NPM, the biggest pain point was that these technologies made you do "all or nothing" migrations - they didn't have an easy, simple way to get from point A to point B. The thing I like about this piped SQL syntax is that in a large, existing code base I could easily just start using it in new queries, without really feeling the need to overhaul everything at once. With PRQL I'd feel a lot less enthusiastic about using it in existing projects where I'd have a mix of SQL and PRQL.

anonzzzies
1 replies
8h8m

Not having LINQ is a terrible inconvenience everywhere. Most languages have libs that try to hack together something similar, but it usually simply isn't the same.

mrits
0 replies
7h9m

It's a lot easier to design a good DSL when it doesn't have to be compatible with anything

oaiey
0 replies
10h20m

Is "from" keyword originating from .NET (Framework 3.5 in 2007) or is this pre-existing somewhere in research?

summerlight
10 replies
20h31m

Previous submissions on the paper itself:

https://news.ycombinator.com/item?id=41321876 (first)

https://news.ycombinator.com/item?id=41338877 (plenty of discussions)

I tried this new syntax and it seems a reasonable proposal for complex analytical queries; it probably does not change most simple transactional queries, though. The syntax matches the execution semantics more closely, which means you're less likely to need to formulate a query in a weird form to make the query planner behave as expected; usually you only need to move some pipe operators to more appropriate places.
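
For instance (a sketch with invented tables), pushing a filter ahead of a join is just a matter of moving the operator up the pipeline:

  FROM events
  |> WHERE event_date >= '2024-01-01'                -- filter applied before the join
  |> JOIN users ON events.user_id = users.id
  |> AGGREGATE COUNT(*) AS n GROUP BY users.country;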

FridgeSeal
7 replies
19h40m

Kinda looks like a half-assed version of what PRQL does. Like, if we’re going to have nonstandard sql, let’s just fix a whole bunch of things, not just one or two?

hn_throwaway_99
4 replies
14h28m

Kinda looks like a half-assed version of what PRQL does. Like, if we’re going to have nonstandard sql, let’s just fix a whole bunch of things, not just one or two?

To be honest, this feels exactly like the kind of mistake that IPv6 made. It wasn't just "let's extend the IPv4 address space and provide an upgrade path that's as incremental as possible"; it was "IPv4 has all these problems, let's solve the address space issue with a completely new address space, and while we're at it let's fix 20 other things!" Meanwhile, over a quarter century later, IPv4 shows no signs of going away any time soon.

I'd much rather have an incremental improvement that solves 90% of my pain points than to reach for some "Let's throw all the old stuff away for this new nirvana!" And I say this as someone that really likes PRQL.

andrewshadura
3 replies
12h21m

You can't "just" extend the IPv4 address space while keeping the compatibility.

bvrmn
2 replies
9h46m

Extending src/dst in current IPv4 protocol headers is much easier than adopting a completely new suite.

quectophoton
1 replies
8h52m

Extending src/dst in current IPv4 protocol headers is much easier than adopting a completely new suite.

And that's precisely why that was also one of the competing proposals back then, so that tells me that just being easier probably wasn't enough.

You can search for RFC 1475 ("IPv7") and its surrounding history.

bvrmn
0 replies
8h41m

Yes, I know. And IPv6 won because it's objectively a superior standard. No politics and all that committee garbage, of course.

summerlight
0 replies
19h25m

Like, if we’re going to have nonstandard sql, let’s just fix a whole bunch of things, not just one or two?

I think they intentionally kept themselves away from a massive redesign of the language, which has a good chance of becoming a multi-decade frustrating death march. I know a number of such cases from C++ standard proposals, and probably the team wanted to avoid that.

chubot
0 replies
18h24m

This is addressed in the paper -- it's nice to have something deployable in existing SQL languages, and it also doesn't rule out using PRQL

summerlight
0 replies
20h12m

Thank you, added it to my comment. I missed all the discussions!

0xbadcafebee
6 replies
15h31m

As to the writer's problem with PDFs on the web: they aren't for reactive web app viewing on mobile phones. Not everything has to be. If you reeeeeeeally need to read that research paper, find a screen that's bigger than 3" wide.

simonw
4 replies
14h15m

Why shouldn’t I read research papers on my phone? That’s where I read almost everything else.

adrian_b
3 replies
12h7m

Even when reading on the phone, I do not understand the complaint against the two-column format.

The one-column format is fine on a large monitor, but on a small phone I prefer narrower columns, because a wide column would either make the text too small or it would require horizontal panning while reading.

So I consider the two-column format as better for phones, not worse.

9dev
2 replies
10h2m

One of the most complex and battle-tested open source projects is essentially a rendering engine for semantic text that has supported reflowing text to fit the screen for decades. And now you’re seriously considering having to zoom in on a column, then scrolling all the way back up and right to the next column, then down to the footnotes at the bottom, then to a random figure, to be a solution?

kccqzy
0 replies
2h47m

I don't want reflowing text to fit the screen. Text has an optimal number of characters per line, and it's between 40 and 60 depending on who you ask. Lines longer than that hinder reading. Lines shorter than that are just inconvenient.

The usual two-column layout is because having 40 to 60 characters per line in a single column is wasteful of paper. That is a real issue. But the solution is to make the PDF page narrower. Almost nobody prints these documents anyways; there's no good reason they need to conform to legacy sizes like A4 or letter paper commonly found in office printers. Just choose A5 as the size. People who really need to print can fit two A5 pages on one A4 page, and people who view these documents on a phone screen will also find A5 more convenient.

adrian_b
0 replies
9h21m

Yes, I strongly prefer reading PDF documents with fixed layout instead of HTML or any other formats with reflowing text, including on small phone screens.

I frequently read documents with many thousands of pages, which also contain many figures and tables.

A variable layout, at least for me, makes the browsing and the search through such documents much more difficult.

I have never ever seen any advantage in having the text reflow to match whatever window happens to be temporarily used to display the text, except for ephemeral messages that I will never read again.

For anything that I will read multiple times, I want the text to retain the same layout, regardless of what device or window happens to display it. If necessary, I see no problem in adjusting the window to fit the text, instead of allowing changes in the text, which would interfere with my ability of remembering it from the previous readings.

I really hate those who fail to provide their technical documentation as PDF documents, being content to just have some Web pages with it.

jillesvangurp
0 replies
12h31m

I think his point is that Google is a web company. And a mobile phone company. And they publish a lot of stuff in a format that's basically optimized for print and kind of useless for anything else.

I did my PhD more than 20 years ago and it was annoying then to be working with all these postscript and pdf documents. It's still annoying. These days people publish content in PDF form on websites and mostly not in printed media. People might print these or not. Twenty years ago, I definitely did. But it's weird how we stick with this. And PDFs are of course very unstructured and hard to make sense of programmatically as well.

I bet a lot of modern day scientists don't actually print the articles they read anymore and instead read them on screen or maybe on some ipad or e-reader. Print has become an edge case. Reading a pdf on a small e-reader is not ideal. Anything with columns is kind of awkward to deal with. There's a reason why most websites don't use columns: it kind of sucks as a UX. The optimal form to deliver text is in a responsive form that can adapt to any screen size where you can change the font size as well. A lot of scientific paper layouts are optimized to conserve a resource that is no longer relevant: paper real estate. Tiny fonts, multiple columns, etc.

Anyway, I like Simon's solution and how it kind of works. It's kind of funny how some of these LLMs can be so lazy. The thing with the references being omitted is hilarious. I see the same with chat gpt where it goes out of its way to never do exactly as you asked and instead just give you bits and pieces of what you ask for until you beg it to just please FFing do as you're told?! I guess they are trying to save some tokens or GPU time.

rileymat2
4 replies
19h35m

Is there research on what is easier to read when you are sifting through many queries?

I like the syntax of reading what the statement expects to output first, even though I agree that I don't write them SELECT-first. I feel like this might be optimizing the wrong thing.

Although the example is nice, it does not show 20 tables joined first, which will really muddle it.

beart
3 replies
18h42m

The select list is meaningless without everything that follows. Knowing that a query selects "id", "date" tells you nothing without knowing the table, the search criteria, etc.

rileymat2
0 replies
6h40m

If you name fields that way, sure. But accountId and createDate may not be meaningless in the context you are looking at.

aragonite
0 replies
15h9m

I really wish SQL used "RETURN" instead of "SELECT" (like in XQuery):

1. Calling it "RETURN" makes the fact of its later order of execution (relative to FROM etc) less surprising.

2. "RETURN RAND()" just reads more naturally than "SELECT RAND()". After all, we're not really "selecting" anything here, are we?

3. Would also eliminate any confusion with the selection operation in relational algebra.

antonvs
0 replies
15h8m

That's one benefit of the SQL naming convention, which would use names like customer_id, invoice_date, etc. Also, when joining tables (depending on the SQL dialect), that allows a shortcut syntax, JOIN ... USING (field_name), if the field name in the two tables is the same.
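
E.g. (invented tables):

  SELECT customer_id, o.invoice_date
  FROM customers AS c
  JOIN orders AS o USING (customer_id);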

urbandw311er
3 replies
20h47m

Title should probably be changed, since the article is about using AI to convert a PDF to semantic HTML.

simonw
2 replies
20h2m

A surprising problem I'm seeing with maintaining a link blog is that articles from it occasionally get submitted to Hacker News, where people inevitably call them out as not being as appropriate as the source they are linking to - which is fair enough! That's why I don't tend to submit them myself.

This particular post quickly turned into a very thinly veiled excuse for me to complain about PDFs, then demonstrate a Gemini Pro trick.

In this case I converted to HTML - I've since tried converting a paper to Markdown and sharing in a Gist, which I think worked even better: https://gist.github.com/simonw/46a33d66e069efe5c10b63625fdab... - notes here https://simonwillison.net/2024/Aug/27/distro/

simonw
0 replies
15h34m

That's pretty neat! I like that it's run by a GitHub employee too (presumably as a side-project, but still) - makes me less nervous about the domain name blinking out of existence one day.

themerone
3 replies
17h49m

My big wish for SQL is for single-row inserts to have a {key: value} syntax.
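
i.e., instead of keeping two positional lists in sync, something like the second form below (which is hypothetical, not real SQL):

  -- today: column list and value list are paired only by position
  INSERT INTO users (id, name, email)
  VALUES (1, 'Ada', 'ada@example.com');

  -- the wish (hypothetical syntax):
  -- INSERT INTO users {id: 1, name: 'Ada', email: 'ada@example.com'};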

zX41ZdbW
0 replies
16h49m

In ClickHouse you can do

    INSERT INTO table FORMAT JSONEachRow {"key": 123}
It works with all other formats as well.

Plus, it is designed so that you can make an INSERT query and stream the data, e.g.:

    clickhouse-client --query "INSERT INTO table FORMAT Protobuf" < data.protobuf

    curl 'https://example.com/?query=INSERT...' --data-binary @- < data.bson

nickpeterson
0 replies
17h42m

This would condense lines of code by a lot and prevent a lot of dumb bugs.

BostonFern
0 replies
16h39m

MySQL has it without the braces.
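
i.e. the INSERT ... SET form (invented table):

  INSERT INTO users
  SET id = 1, name = 'Ada', email = 'ada@example.com';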

minkles
3 replies
11h1m

That is basically R with tidyverse.

  flights |>
    filter(
      carrier == "UA",
      dest %in% c("IAH", "HOU"),
      sched_dep_time > 0900,
      sched_arr_time < 2000
      ) |>
    group_by(flight) |>
    summarize(
      delay = mean(arr_delay, na.rm = TRUE),
      cancelled = sum(is.na(arr_delay)),
      n = n()
      ) |>
    filter(n > 10)
If you haven't used R, it has some serious data manipulation legs built into it.

dan-robertson
1 replies
10h20m

An interesting thing to me about all these dplyr-style syntaxes is that Wickham thinks the group_by operator was a design mistake. In modern dplyr you can often specify a .by on an operation instead. I found switching to this style a pretty easy adjustment, and I think it’s a bit better. Example:

  d |> filter(id==max(id),.by=orderId)
I think PRQL was thinking a bit about ways to avoid a group_by operation; what they have is a kind of ‘scoped’ or ‘higher-order’ group_by which takes your grouping keys and a pipeline and outputs a pipeline step that applies the inner pipeline to each group.

_Wintermute
0 replies
9h20m

Given 10 more years dplyr syntax might resemble data.table's

countrymile
0 replies
8h41m

My thoughts exactly; it even uses the same pipe syntax, though I do prefer `%>%`. I've been avoiding SQL for a while now as it feels so clunky next to the tidyverse.

Ericson2314
3 replies
15h49m

We should really standardize a core language for SQL. Rust has MIR, Clang is making a CIR for C/C++. Once we have that, we'll be able to communicate much better.

Right now it's everyone faffing around with different mental models and ugly single-pass compilers (my understanding is that parsing --> query planning is not nearly as well separated in most DBs as parsing --> optimize --> codegen is in most compilers).

anothername12
2 replies
15h15m

We should really standardize a core language for SQL

Do you mean something other than ISO/IEC 9075:2023 (the 9th edition of the SQL standard)?

roenxi
0 replies
13h5m

It costs 194 CHF to read. There is room for improvement.

Ericson2314
0 replies
2h15m

A core language is a minimal AST without surface syntax (and thus no bikeshedding of that) that distills the surface language to its essence.

slaymaker1907
2 replies
20h1m

I actually work on SQL Server, but I also write a lot of KQL queries, which work this way, and I totally agree that the sequential pipe stuff is easier to write. I haven't read through the whole paper, but one aspect I really like is that I think it's easier to guide the query optimizer in this sequential style.

beart
1 replies
18h24m

Is there any internal momentum for such changes in SQL Server?

WorldMaker
0 replies
4h10m

Given how Entity Framework is quite ubiquitous as "the ORM of choice" for SQL Server and its usage of C# Linq, there's certainly external momentum, whether or not SQL Server devs themselves are paying attention to how the majority of their users are writing queries today.

philippta
2 replies
11h49m

Why even add the pipe operator?

If the DB engine is executing the statement out of order anyway, why not allow the statement to be written in any order and let the engine figure it out?

self
0 replies
11h24m

Why even add the pipe operator?

To make it easier for humans to read/write the queries.

bvrmn
0 replies
10h12m

Aggregations can be non-commutative in the general case, so order is important. Filters before and after grouping are also tied to a particular place in the pipeline.
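
In other words (a sketch with invented tables), the same operator means different things depending on where it sits in the pipeline:

  FROM sales
  |> WHERE region = 'EU'                             -- before AGGREGATE: a plain row filter
  |> AGGREGATE SUM(amount) AS total GROUP BY product
  |> WHERE total > 1000;                             -- after AGGREGATE: plays the role of HAVING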

isoprophlex
2 replies
11h56m

I love the idea but something in my brain starts to itch when I see that pipe operator

     |>
What IS that thing? A unix pipe that got confused with a redirect? A weird smiley of a bird wearing sunglasses?

It'll take some getting used to, for me...

summerlight
0 replies
2h44m

They considered ditching `|>` or using `|` but unfortunately there's a bunch of syntactic ambiguity.

WorldMaker
0 replies
4h2m

It's like other "arrow" digraphs in common programming languages today, such as =>. You can picture it as a triangle pointing to the right.

Many programming-ligature fonts even draw it that way. For instance, it is shown under F# in the Fira Code README: https://github.com/tonsky/FiraCode

AdieuToLogic
2 replies
18h52m

If anyone is interested in the theoretical background to the thrush combinator, a.k.a. "|>", here is one using Ruby as the implementation language:

https://leanpub.com/combinators/read#leanpub-auto-the-thrush

Being a concept which transcends programming languages, a search for "thrush combinator" will yield examples in several languages.

AdieuToLogic
0 replies
15h9m

A key thing to keep in mind is that the thrush combinator is a fancy name for a simple construct. The semantics it provides is a declarative form of traditional function composition.

For example, given the expression:

  f (g (h (x)))
The same can be expressed in languages which support the "|>" infix operator as:

  h (x) |> g |> f
There are other, equivalent, constructs such as the Cats Arrow[0] type class available in Scala, the same Arrow[1] concept available in Haskell, and the `andThen` method commonly available in many modern programming languages.

0 - https://typelevel.org/cats/typeclasses/arrow.html

1 - https://wiki.haskell.org/Arrow_tutorial

wvenable
1 replies
12h59m

I didn't see this the first time:

    GROUP AND ORDER BY component_id DESC;
Is this kind of syntax, combining grouping and ordering, really necessary in addition to the pipe operator? My advice would be to add the pipe operator and not get fancy adding other syntax to SQL as well.

bvrmn
0 replies
10h9m

It could be a custom ZetaSQL extension that leaked into the paper.

victorbjorklund
1 replies
11h40m

Looks just like writing sql using Ecto in Elixir:

"users" |> where([u], u.age > 18) |> select([u], u.name)

https://hexdocs.pm/ecto/Ecto.Query.html

h0l0cube
0 replies
11h31m

Thought this too. The example queries look very much like Ecto statements. I miss the ergonomics and flexibility of Ecto when I use database wrappers on other platforms.

make3
1 replies
14h28m

This reads like an article written by someone with ADHD who started writing about a scientific paper but got distracted by some random thing instead of reading it.

fridental
1 replies
10h2m

For the sake of God, please fucking stop inventing new pipe languages.

LINQ: exists

Splunk query language: exists

KQL: exists

MongoDB query language: exists

PRQL: exists

bvrmn
0 replies
9h56m

SQL parsers: exists.

The paper clearly describes the goal: add a pipe syntax into existing systems with minor changes and be compatible with existing SQL queries.

BTW: LINQ is an AST transformer tied to a particular platform, not a language per se. None of the existing DBs allow you to use it directly.

eternauta3k
1 replies
13h22m

Do manually-generated SQL strings have a place outside of interactive use? I use them in my small projects but I wonder if a query builder isn't better for larger systems.

otabdeveloper4
0 replies
11h48m

Query building for an analytics database is impossible.

These queries are always hand-rolled because you pay the analysts to optimize them.

ahmed_ds
1 replies
9h19m

This is why I like tools like DataStation and hex.tech. You write the initial query using SQL, then process the results as a dataframe using Python/pandas. Sure, mixing pandas and SQL like that is not great for data pipelines, but for exploration and analytics I have found this approach to be enjoyable.

theodpHN
0 replies
5h5m

Yes, it's very convenient to be able to use SQL with your massively parallel commercial database (Oracle, Snowflake, etc.) and then again with the results sets (Pandas, etc.). Interestingly, it's a concept that was implemented 35 years ago in SAS (link below) but is just now gaining traction in today's "modern" software (e.g., via DuckDB).

USING THE NEW SQL PROCEDURE IN SAS PROGRAMS (1989) https://support.sas.com/resources/papers/proceedings-archive... The SQL procedure uses SQL to create, modify, and retrieve data from SAS data sets and views derived from those data sets. You can also use the SQL procedure to join data sets and views with those from other database management systems through the SAS/ACCESS software interfaces.

KronisLV
1 replies
11h54m

This feels like this should be in the official SQL standard and supported across a bunch of RDBMSes and understood by IDEs, libraries and frameworks.

riku_iki
0 replies
11h51m

Yeah, and then we will have two standards, given the popularity of the existing syntax.

stevefan1999
0 replies
12h15m

That's just LINQ from C#, except Google wants to make it a SQL standard...

sharpshadow
0 replies
4h50m

I have to honestly say that I like PDFs: they always work and don't fail without JS.

notfed
0 replies
17h20m

Is it just me, or does this seem anachronistic? Like, this is a conversation I expected to blow up 20 years ago. Better late than never.

nagisa
0 replies
10h23m

People here are describing many projects that already have something resembling this syntax and concept, so I'll add another query language to the pile too: Influx's now-mostly-abandoned Flux. Uses the same |> token and structures the query descriptions starting with an equivalent of "FROM".

middayc
0 replies
9h51m

Looking at the first example from PDF:

    FROM customer
    |> LEFT OUTER JOIN orders ON c_custkey = o_custkey
    AND o_comment NOT LIKE '%unusual%packages%'
    |> AGGREGATE COUNT(o_orderkey) c_count
    GROUP BY c_custkey
    |> AGGREGATE COUNT(*) AS custdist
    GROUP BY c_count
    |> ORDER BY custdist DESC, c_count DESC;
You could do something similar with Ryelang's spreadsheet datatype:

    customers: load\csv %customers.csv
    orders: load\csv %orders.csv

    orders .where-not-contains 'o_comment "unusual packages" 
    |left-join customers 'o_custkey 'c_custkey
    |group-by 'c_custkey { 'c_custkey count }
    |group-by 'c_custkey_count { 'c_custkey_count count }
    |order-by 'c_custkey_count_count 'descending
Looking at this, maybe we should add an option to name the new aggregate column in the group-by function (right now they get named automatically), because c_custkey_count_count, for example, is not that elegant.

metadat
0 replies
18h1m

Simon: Please keep pushing, and mute nothing.

mav3ri3k
0 replies
12h19m

The first piped query language I used was Nushell's implementation of wide-column tables. PRQL offers almost similar approach which I have loved dearly. It also maps to different SQL dialects. There is also proposal to work on type system: https://github.com/PRQL/prql/issues/381.

Google has now proposed a syntax inspired by these approaches. However, I worry about how well it will be adopted. As someone new to SQL, it seems nearly every DB provides its own SQL dialect, which becomes cumbersome very quickly.

Whereas PRQL feels something like Apache Arrow which can map to other dialects.

julien040
0 replies
11h16m

I haven't seen it mentioned yet, but it reminds me of PQL (not PRQL): https://pql.dev

It's inspired by Kusto and available as an open-source CLI. I've made it compatible with SQLite in one of my tools, and it's refreshing to use.

An example:

  StormEvents
  | where State startswith "W"
  | summarize Count=count() by State

jiggawatts
0 replies
11h43m

They’re a bit late to the game; there are at least a dozen such popular query languages. LINQ and KQL come to mind, but there are many others…

jappgar
0 replies
6h26m

Wait, is this post about SQL or PDF...

gopiandcode
0 replies
11h23m

I find this particular choice of syntax somewhat amusing, because pipe-notation-based query construction was something I ended up using a year ago when making an SQL library in OCaml:

https://github.com/kiranandcode/petrol

An example query being:

```
let insert_person ~name:n ~age:a db =
  Query.insert ~table:example_table
    ~values:Expr.[ name := s n; age := i a ]
  |> Request.make_zero
  |> Petrol.exec db
```

eezing
0 replies
11h29m

For autocomplete, FROM first makes a lot of sense. For readability, SELECT first makes more sense because the output is always at the top.

donatj
0 replies
13h42m

I've been writing SQL for something like 25 years and always thought the columns being SELECTed should come last, not first. Naming your sources before what you're trying to get from them makes, to me at least, much more logical sense. Referencing aliased table names before I have done the aliasing is weird.

Also it would make autocomplete in intelligent IDEs much more helpful when typing a query out from nothing.

delegate
0 replies
9h35m

There's the honeysql library in Clojure, where you define queries as maps, which are then rendered to SQL strings:

    {:select [:name :age]
     :from {:people :p}
     :where [:> :age 10]}
Since maps are unordered, this is equivalent to

    {:from {:people :p}
     :select [:name :age]
     :where [:> :age 10]}
and also

    {:where [:> :age 10]
     :select [:name :age]
     :from {:people :p}}


These can all be rendered to 'SELECT... FROM' or 'FROM .. SELECT'.

Queries as data structures are very versatile, since you can use the language constructs to compose them.

Queries as strings (FROM-first or not) are still strings which are hard to compose without breaking the syntax.

datadeft
0 replies
11h5m

It's been 50 years. It's time to clean up SQL.

Is it though?

Are we trying to solve the problem of humans parsing and generating SQL, or is there some underlying implementation detail that benefits from pipes?

chubot
0 replies
18h22m

The next thing I would like is to define a function / macro that has a bunch of |> terms.

I pointed out that you can do this with shell:

Pipelines Support Vectorized, Point-Free, and Imperative Style https://www.oilshell.org/blog/2017/01/15.html

e.g.

    hist() {
      sort | uniq -c | sort -n -r
    }

    $ { echo a; echo bb; echo a; } | hist
      1 bb
      2 a

    $ foo | hist
    ...
   
Something like that should be possible in SQL!
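
A purely hypothetical SQL rendering of the same idea (no engine supports this; the syntax is invented for illustration):

  -- hypothetical reusable pipe fragment, in the spirit of the shell hist()
  -- CREATE PIPE FUNCTION hist AS (
  --   |> AGGREGATE COUNT(*) AS n GROUP BY value
  --   |> ORDER BY n DESC
  -- );
  --
  -- FROM words |> CALL hist();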

carabiner
0 replies
16h59m

I like this. Reminds me of pandas.

OscarCunningham
0 replies
10h29m

Rationale: We used the same operator name for full-table and grouped aggregation to minimize edit distance between these operations. Unfortunately, this puts the grouping and aggregate columns in different orders in the syntax and output. Putting GROUP BY first would require adding a required keyword before the AGGREGATE list.

I think this is a bad rationale. Having the columns in the same order in the syntax and the output is much more important than having neat syntax for full-table aggregation.
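
Concretely (a sketch of the quoted behavior, invented table):

  FROM orders
  |> AGGREGATE COUNT(*) AS n GROUP BY customer_id;
  -- written order:  the aggregate n, then the grouping key customer_id
  -- output columns: customer_id first, then n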

1024core
0 replies
15h52m

Isn't this the same syntax as (or very similar to) Apache Beam's?