What a neat web development stack, just html & database! No back-end and no front-end required.
What a neat web development stack, just html & database! No back-end and no front-end required.
I'd like to write down a task with this app so I don't forget it! My task is this:
<script>alert("XSS is still a thing and building plain HTML responses without a proper templating engine is irresponsible");</script>
(In this case, I don't see how the "task" column is sanitized anywhere.)
This seems like a strawman argument against example docs. HTMX just serves HTML; you have to sanitize your inputs somehow. You can render the HTML with any templating language that does that for you.
HTMX is also at fault here.
If you construct your DOM imperatively on the client with createElement and textContent then there is no room for XSS to sneak in, because you're never even parsing (non-static) HTML. You're inherently correct by construction! React is just a declarative wrapper around that, but all dynamic data is still structurally separate from the shape of the DOM tree.
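For concreteness, a minimal sketch of that correct-by-construction style (the list ID is made up for the example):

```js
// The shape of the tree is static code; dynamic data only ever flows
// through textContent, so it is never parsed as HTML.
const userInput = '<script>alert("xss")</script>'; // hostile "task"

const li = document.createElement('li');
li.className = 'todo-item';
li.textContent = userInput; // rendered as literal text, not markup

document.getElementById('todo-list').appendChild(li);
```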
HTMX abandons all of that, going back to "lol hope your HTML escape function is correct" (and that you use it consistently!).
You could argue that you were just moving the goalpost to serializing JSON safely… but that's a much smaller (and static!) target than escaping HTML's bespoke flavour of SGML.
Ladies and gentlemen, I'll be brief.
The issue here is not whether we broke a few rules, or took a few liberties w/hypermedia — we did.
winks
But you can't hold a single library responsible for the behavior of a few developers unfamiliar w/ server-side template escaping. For if you do, then shouldn't we blame the whole web development ecosystem? And if the whole web development ecosystem is guilty, then isn't this an indictment of hypermedia's uniform interface in general?
I put it to you, Nullability: isn't this an indictment of our entire American society?
Well, you can say what you want about us, but I'm not going to sit here and listen to you badmouth the United States of America!
Gentlemen!
But you can't hold a single library responsible for the behavior of a few developers unfamiliar w/ server-side template escaping.
"Server-side template escaping" is a much thornier issue than it seems as first, and I can certainly blame it for trying to return to a paradigm where it's a problem.
For if you do, then shouldn't we blame the whole web development ecosystem? And if the whole web development ecosystem is guilty, then isn't this an indictment of hypermedia's uniform interface in general?
Yes! It's almost like SGML is a pile of garbage for safely embedding user content, and HTML doesn't exactly improve things.
Well, you can say what you want about us, but I'm not going to sit here and listen to you badmouth the United States of America!
Oh, don't get me started on the state of the US... :)
daily reminder that any API you can use to build a DOM on the front end you can also use to build a DOM on the back-end
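For example, a sketch assuming Node with the jsdom package (any server-side DOM implementation works the same way):

```js
const { JSDOM } = require('jsdom');

// Same DOM API as the browser, just running on the back-end.
const dom = new JSDOM('<ul id="todo-list"></ul>');
const document = dom.window.document;

const li = document.createElement('li');
li.textContent = '<script>alert("xss")</script>'; // stays inert text
document.getElementById('todo-list').appendChild(li);

// Serialization escapes the text node, so the markup ships pre-escaped.
console.log(dom.serialize());
```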
HTMX abandons all of that, going back to "lol hope your HTML escape function is correct" (and that you use it consistently!).
You make this sound like an unsolved problem. You can use any of the frameworks, including React, on your backend to render HTML and automatically handle all of that for you.
In terms of using it consistently? I guess you could say the same about prepared statements. Yes, you have to actually use the tools and not go "lol I'm gonna just build the HTML myself for no reason."
HTMX is not at fault here, the backend is.
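For instance, React's server renderer escapes text children automatically; a sketch, assuming react and react-dom are installed:

```js
const React = require('react');
const { renderToStaticMarkup } = require('react-dom/server');

const task = '<script>alert("xss")</script>'; // hostile user input

// No manual escape call: text children are escaped during rendering.
const html = renderToStaticMarkup(
  React.createElement('li', { className: 'todo-item' }, task)
);
// html contains &lt;script&gt;... rather than a live <script> tag.
```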
Of course "you have to sanitize your inputs somehow", but this example does that exactly nowhere, i.e. it is exploitable. Having the tutorial omit this problem altogether is dangerous, as not discussing it might create the impression that PostgREST somehow already handles this, or that the toy example given here does not have a glaring security vulnerability. This is particularly problematic because the reason that the docs for many other backend frameworks also don't discuss the problem is that they do in fact already handle it (usually via a built-in templating language).
Steve acknowledged the problem in a sibling comment, so hopefully the next iteration of the tutorial will address this. (Thanks!)
In the bad old days, XSS was rampant because our views talked directly with our databases. Then we put templating engines and models in the middle and the problem seemed to be solved. Now we have XSS again because the database is the view?! This is a preventable problem!
I don't even hate the idea of generating HTML responses in the database (why not, that's where all the data is!), but this is very clearly not the way to do it. Those functions read like plain PHP scripts, and are obviously just as vulnerable. Just as PHP's problems are largely solved these days by server-side frameworks like Laravel, so could an equivalent framework in the database solve the same problems here.
I have to say I always liked the PostgREST project, but this kind of dangerous and wrong information in an official tutorial makes me wary of using the platform. Who knows what other lessons from the bad old days have been forgotten by the developers?
This doc was meant to be a POC, just to show what's possible. We're working on migrating it to Mustache templates which do automatic escaping: https://github.com/PostgREST/plmustache?tab=readme-ov-file#e...
You're right though, we'll add a warning there. Thanks for the feedback.
yep, htmx will happily run that script. they even had this bug in their demos: https://github.com/bigskysoftware/htmx/pull/1995/files
the htmx examples are done w/ a mock server based on sinon.js so everything is client-side. this means we can avoid deploying server-side functionality (which would cost money) and it also means that cross-site scripting attacks aren't an issue (you can only xss yourself.)
the examples are to show how htmx works, the "server-side" (such as it is) is just there to support the front end demo.
While cool as a proof of concept and kudos for execution, this looks like a nightmare to maintain for any non-trivial webapp.
It's definitely for websites or light apps only. There's a lot of mileage in that, though.
The "light apps" that also require a database that don't have a more capable front-end framework seem to make HTMX look like a solution in search of a problem.
I'm thinking this is more interesting for admins who need common functions on top of their databases but don't want to deal with cumbersome CLI calls. Make a nice UI with the most-used views and functions and you can ship it directly with the database, regardless of the stack above it. Your functions change when your admin tasks change or when your schema changes, not when the application has new requirements.
Or even something like Django's admin interface could be implemented in htmx.
I would argue the “more capable” frameworks, while invented with the best of intentions, have become unmanageable, unmaintainable monstrosities that just keep rolling out, as you put it, solutions in search of a problem.
htmx is not meant to do anything fancy that you can’t do with Ember/Angular/React/Vue/etc.
The main motives for me deciding to go with htmx:
- No duplication of data models and routing, and all business logic stays on the server-side where it belongs.
- Locality of behavior: you can see exactly where a request will be triggered and what will be done with the response, so less jumping between files or scrolling up and down (see the snippet below).
- No build step, no dependency hell, and no outrageous churn; just include one JS file that browsers should be able to run indefinitely.
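To illustrate that locality-of-behavior point, here's a minimal sketch (the endpoint and IDs are hypothetical): everything about the interaction is visible on the element itself.

```html
<!-- What triggers the request, where it goes, and where the response
     is swapped in are all readable in one place. -->
<form hx-post="/todos" hx-target="#todo-list" hx-swap="beforeend">
  <input type="text" name="task" placeholder="New task">
  <button type="submit">Add</button>
</form>
<ul id="todo-list"></ul>
```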
Many web sites don't have or need frameworks
HTMX is stand-alone from PostgREST; it should be judged on its own interface and merits.
I’ve been using it to build a nontrivial site for a few months. It has some surprises coming from a background of experience with SPAs but it’s been pleasant to use overall, once I got over my own biases.
Not having to define types in JS is massive, and having a number of transitive frontend dependencies that I can count on one hand is huge.
It might not be the best tool for every job, but it’s a pretty great tool for small full stack shops.
Well, for now, sure. Now we just need a way to compile and deploy HTMX routes to the database server as part of a CI process...
I'm experimenting with it right now, using Sqitch [1] to make maintenance easier. It still feels like a hack, and I also still have my doubts about the viability of this for real-world use. It's fun though, and I'm learning about all kinds of advanced Postgres features.
I'm honestly wondering what in our main web application would not be 100x easier using this.
The only PostgREST app I ever worked on was awful. Why? Because, like most of these 'simple' frameworks, it's only simple until your requirements get complicated. Then the original authors had to resort to writing a ton of stored procedures on the database to get back the results they wanted, and that led to scalability problems. The solution, as always, is to go back to SQL.
What, huh? Aren’t stored procedures SQL in function form?
That's how everyone used to build apps before Rails came along and made everyone think putting biz logic into a slow server-side language was a good idea.
I'm sure you know this, but the reason for taking computation off the database is that it's much easier to horizontally scale a stateless middle tier than it is to scale a SQL database.
Some DBAs I’ve worked with even advocated for taking sorting off the database. I wasn’t entirely convinced by that one.
My server side language in this case was Scala, so it wasn’t slow, just memory hungry.
Is it really easier to scale a Rails or Node app than Postgres (Scala might be an exception)? And how many pieces of software actually ever reach the kind of scale where the database is the bottleneck? For many use cases, biz logic in the database will absolutely smoke doing it on the server side, thanks to query planner optimizations.
IMHO (and I might be entirely wrong), placing and coupling all that logic in the database seems like a bad idea, and it's not a question of speed; it's a question of separating responsibilities. For the case shown in the article, it seems all right for a "hello world" kind of thing. For something complex or prone to deep changes (like most software projects I've been involved with), this seems like a true nightmare.
I use PostgREST a lot, but with strong guardrails. IMO you should always have a real API layer, and use this just as a convenience to handle the base load. Like all these tools, once you get to real-world requirements, adapting the tool is worse than what it's trying to replace (SQL + some language and framework).
Already PostgREST is getting complicated; additions like this will make it less attractive to me.
Already PostgREST is getting complicated; additions like this will make it less attractive to me.
This feature[1] actually simplified a lot and removed a lot of magic assumptions in the PostgREST codebase. It goes in line with REST as well — SQL functions are REST resources and HTML is just another representation for them.
Most of the code you see here is pure SQL and plpgSQL. The only PostgREST-specific part is the CREATE DOMAIN.
Right now most users view PostgREST as a HTTP->JSON->SQL->JSON->HTTP service and we're trying to turn that into HTTP->SQL->HTTP. If that's not some true top level simplification, I don't know what is!
[1]: https://postgrest.org/en/stable/references/api/media_type_ha...
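For readers who haven't seen it, the handler mechanism in [1] boils down to something like this (a sketch based on that doc; the schema and function names are made up):

```sql
-- The domain name doubles as the media type PostgREST content-negotiates.
CREATE DOMAIN "text/html" AS text;

CREATE OR REPLACE FUNCTION api.index() RETURNS "text/html" AS $$
  SELECT '<h1>Hello from SQL</h1>';
$$ LANGUAGE sql;

-- GET /rpc/index with "Accept: text/html" now returns that markup.
```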
Sounds like the team picked the wrong tool for the job. If most of your endpoints contain complex backend logic, don't use PostgREST. It's made for CRUD apps, which covers the majority of applications I come across.
If your app has the occasional custom backend logic, you can spin up a separate server (or edge function) to handle those one-off endpoints.
I think it started as pretty much CRUD and then just grew features that were more complex backend logic. But I was the one brought in to fix it, not the original architect; he had left the company by that point, so I don't know the whole history.
I'm currently in the process of evaluating PostgREST via Supabase for replacing the CRUD aspects of our legacy PHP/Laravel code base.
It's a compelling option, but there are already a lot of sharp edges.
For example, PostgREST doesn't really highlight this, but for any non-trivial and sane application you have to create a separate schema ("api" or similar) to carefully pick what's exposed. PostgREST has a scary "allow by default" permission model which is nearly enough to turn me off of the whole project.
To help mitigate this, I'm evaluating only using PostgREST for reads in the "api" schema via access-restricted views, and having all writes go through supabase edge functions. This should simplify the RLS permissions (hopefully).
RLS has some pitfalls too, and it's the only mechanism you have to secure your data.
Serving assets from Postgres seems like a bad idea aside from some simple edge use cases. In general, you want to treat your DB as a precious resource and minimize the amount of work it has to do.
Nginx and similar are built and optimized for serving assets. Using your database to do this doesn't seem like a great idea if your application needs to scale.
PostgREST has a scary "allow by default" permission model which is nearly enough to turn me off of the whole project.
PostgREST follows Postgres' "deny by default": you have to explicitly grant permissions for tables and views to be used. This is noted in the first tutorial[0].
Supabase overrides this default via `ALTER DEFAULT PRIVILEGES .. GRANT`[1]. This is done for easier onboarding of new users but you can turn this off with `ALTER DEFAULT PRIVILEGES .. REVOKE`.
PostgREST also encourages you to create a dedicated schema for your api[2].
[0]: https://postgrest.org/en/stable/tutorials/tut0.html#step-4-c...
[1]: See an example of this on https://supabase.com/docs/guides/api/using-custom-schemas.
[2]: https://postgrest.org/en/stable/explanations/schema_isolatio...
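Concretely, the opt-out looks something like this (a sketch; anon and authenticated are Supabase's role names):

```sql
-- Stop future tables in public from being exposed automatically...
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  REVOKE ALL ON TABLES FROM anon, authenticated;

-- ...revoke what was already granted on existing tables...
REVOKE ALL ON ALL TABLES IN SCHEMA public FROM anon, authenticated;

-- ...then grant access back explicitly, object by object.
GRANT SELECT ON public.todos TO authenticated;
```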
Thanks for the additional info!
Yep, I'm aware of the ability to revoke permissions, it's the design choice of "allow by default" that worries me - but I agree this is a Supabase thing, not PostgREST.
On a new project, if you look at the dashboard (pointing at an existing db), access to all tables and data is on by default; you have to explicitly deny access to those tables. That's a concerning design choice for my industry (healthcare).
PostgREST also encourages you to create a dedicated schema for your api[2].
Yes, it does - but this isn't really called out strongly by Supabase (not PostgREST's fault).
As some other commenters mentioned, it's not a big deal to create views in the api schema, and I agree. The part I'm still evaluating is whether there will be actual time savings and efficiency vs just creating CRUD endpoints in NodeJS (for example).
The big "win" for PostgREST here is the ability to do deep querying and result shaping directly from the UI without needing to create a bunch of custom endpoints. If I have to create views anyway, it seems like I lose a lot of the benefits of using PostgREST in the first place, and add a layer of complication to the stack.
I like PostgREST a lot, and I've been hoping that Supabase would be an effective Directus replacement (now that Directus has gone closed source), but I'm still figuring out whether the juice is worth the squeeze on my project.
Would love any insight you or others have from using it in production with a medium-scale sized app.
Supabase overrides this default
Ah, thank you for this clarification. I was wondering what that comment was referring to.
(and keep up the great work <3)
Just to address some of your points:
PostgREST doesn't really highlight this, but for any non-trivial and sane application you have to create a separate schema ("api" or similar) to carefully pick what's exposed.
That recommendation is highlighted here: https://postgrest.org/en/stable/explanations/schema_isolatio...
PostgREST has a scary "allow by default" permission model which is nearly enough to turn me off of the whole project.
I assume you're talking about access via the db-anon role, without authenticating your users; I can't think of what else you could be referring to. If you rely on JWT-based authentication, as any non-trivial and sane application would, you have access to transaction-scoped impersonated roles. https://postgrest.org/en/stable/references/auth.html#overvie...
having all writes go through supabase edge functions
Why is this necessary, what are you unable to achieve with JWT + RLS/RBA? Again, I have to assume you don't have auth configured correctly.
RLS has some pitfalls too, and it's the only mechanism you have to secure your data.
You also have role-based access, and the JWT auth to start with. However I have found RLS to be flexible enough, especially as you can define policies using subqueries into which you can embed UDFs. What are you trying to do that you can't do with it?
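For example, a per-user policy keyed off the JWT claims PostgREST exposes could look like this (a sketch; the table and column are made up, and request.jwt.claims is the setting name in recent PostgREST versions):

```sql
ALTER TABLE todos ENABLE ROW LEVEL SECURITY;

-- Each user sees only their own rows; the expression could just as well
-- use a subquery or a UDF for more involved rules.
CREATE POLICY todos_owner ON todos
  USING (owner_email = current_setting('request.jwt.claims', true)::json ->> 'email');
```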
Serving assets from Postgres seems like a bad idea aside from some simple edge use cases.
I instinctively agree in terms of devex, although I'm reluctant to just dismiss it. I think the performance hit with a database is the data retrieval, not the templating. I can see a possibility of this evolving into something very usable with some packaging of templates, especially once you realise the approach eliminates the need for more services, simplifies auth, and reduces rendering latency. The memory footprint of PostgREST is trivial compared to most middleware you'd have to add to the mix just to do some SSR.
Supabase folks seem to agree regarding serving assets, since they ship an object store for that kind of stuff in their constellation of apps rather than attempting to serve blobs out of the db. Hearkening back to an age long ago of having to deal with stuff like images and pdfs stored in db rows, and the annoyances around that, I'm also inclined to agree, but less from the performance angle and more from the usability/compatibility angle of being forced to deal with the nuances of BLOB datatypes.
Using views is good practice anyway, as you do not tightly couple your API to your schema. It's been a few years since I used PostgREST (and to be fair I had no write scenarios to cover), but I do not remember it limiting what security capabilities you have on Postgres. It needed a bit of setup, but nothing terrible. If row-level security does not cover your use case, you are down the path of custom development anyway. My advice: be careful, this is a bottomless pit, with special combinations/overlapping rules etc. Go with a rules engine; prolog/datalog style would be my path.
you have to create a separate schema ("api" or similar) to carefully pick what's exposed.
Yep, but it's a CREATE VIEW instead of writing a route in Python or Ruby, which would also likely hit an ORM...
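In other words, the "route" is just a view in the exposed schema; a sketch with hypothetical names:

```sql
CREATE SCHEMA IF NOT EXISTS api;

-- Pick exactly which columns are exposed; the base table stays private.
CREATE VIEW api.todos AS
  SELECT id, task, done FROM private.todos;

GRANT SELECT ON api.todos TO web_anon;
-- PostgREST now serves GET /todos from this view, filters and all.
```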
Serving assets from Postgres seems like a bad idea aside from some simple edge use cases. In general, you want to treat your DB as a precious resource and minimize the amount of work it has to do.
The database should still be the source of truth (of generated assets). An easy solution is to configure NGINX to cache those asset requests, or write a script to unpack them to the file system.
PostgREST is one of my favorite open source projects. Supabase's success as a billion(?) dollar company is a direct result of the great design of PostgREST and, of course, Postgres. I don't know the details of the Supabase sponsorship of this project, but I hope it's VERY significant. Seeing this project have only 12 paying supporters [1], even though it's certainly a core dependency used by at least hundreds of revenue-producing companies, makes me very sad about the state of open source financial support.
We hired Steve [1], PostgREST's lead maintainer, primarily to work on PostgREST, since it's a core part of the Supabase stack as you mentioned.
In situations like that, how does one handle (or plan to handle, if you do) potential conflict between the company's product plans and what the maintainer believes is right for the OSS?
I just think that without some sort of pre-agreement, or way of avoiding it, this way of 'supporting' a project also (or even instead) buys control of it, whether intentionally or not, doesn't it?
(I've never used PostgREST, I'm not referring to anything that may or may not have actually happened, just musing.)
We segregate responsibilities so no one has complete control of the PostgREST project. For example, begriffs[1] is in charge of handling the funds (Patreon), wolfgangwalther[2] owns the .org and .com domains and both are owners of the PostgREST's GitHub org. For development, this ensures I don't go crazy and add some feature that is Supabase-specific, since I'm not the only one who has a say in the project's direction.
This has been working well until now and if you follow PostgREST's development, you'll notice that all enhancements are vendor-neutral and keep the original design.
We're a much smaller team, but we took some inspiration from PostgreSQL's distributed model (no single company owns development) for this.
[1]: PostgREST author https://github.com/begriffs
[2]: Also part of the PostgREST team and major contributor https://github.com/PostgREST/postgrest/graphs/contributors
i'll ping steve to drop a comment here from his POV. while we don't have anything formal in place, developers are self-managing and make their own choices about the software they are maintaining.
if you know a model that works where we could have clear boundaries, we'd be happy to explore! in the meantime, I hope you can look at the past few years of development to get an appreciation of whether we have maintained an arms-length relationship
some other non-obvious ones:
- most of the client libs are a result of supabase: https://postgrest.org/en/stable/ecosystem.html#client-side-l...
- we sponsor a contributor, managed by steve (I believe he mostly works on performance): https://opencollective.com/supabase-postgrest/expenses
steve joined supabase in June 2020 (before our seed round). iirc the project was earning something like ~$300/m in donations which he was splitting between the contributors. it's basically impossible for open source projects to sustain themselves (IMO) through donations
oh wow i didn't realize postgrest was part of supabase. i assumed supabase was a greenfield ripoff.
I can already hear the usual bullshit spouted when we see situations like this: "users have no obligation to support you financially, mate", "if you don't like it, don't release your project under a permissive license", and many other similar lines.
Now I only use AGPLv3 and similar licenses. Want to use it commercially? Pay me 1% of your gross revenue.
Now 13 (but still only $1529/mo)
first direct DB to API, now direct DB to HTML, when does this insanity end?
A new web server that has this all embedded as a single go binary?
Do you mean Fossil?
I mean, if you set your permissions very carefully, you could use database accounts as user accounts...
That's not the problem. The problem is that your back-end always has things to do other than CRUD against the DB. What about working with user tokens? In-memory caching? Calling other services? Sending emails? And don't tell me you have Postgres plugins and other roundabout ways to solve it; that's just more insanity.
Is this kind of functionality / coding pattern used in new or modern applications?
CouchDB, a (JSON) document database whose API is HTTP-based, had built-in list and show functions that let you respond with any type of format you could generate within its JavaScript interpreter. In other words, no need for a server, as the client can directly hit the database and get HTML and/or JSON back.
After v1 they stopped working on that front as it makes for nightmare maintenance work.
I think many here remember the good old days of PHP or ASP files having SQL statements mixed with HTML all in a single file. This doesn't look very different.
Oh man, those really were the good old days”; DROP TABLE users;
I think it's just a new generation relearning the same stuff, even if it was rejected in the past for good reasons.
I really liked the CouchDB web stuff around 13? years ago for personal projects, but it was really awkward for teams to deal with.
It's far from modern; Oracle had this over 25 years ago.
What additional tooling do you think would be needed to turn this concept into a maintainable stack with good UX for mid-to-large sized applications?
I've been using Astro (https://astro.build) for static site serving, plus PostgREST + htmx for simple data-centric components.
For anything mid-to-large size, I believe a separate abstraction layer is needed, which would be an API. I also tried to build something similar on top of SQL: the tech stack is Jinja-templated SQL with an OpenAPI layer defined in YAML, though I would still scope it to internal tooling. Here it is: https://jinj.at
PostgREST is a hack, and a bad one. Every time I see that tech come up, I wonder why anyone would use something like that: so many limitations, so much coupling.
Coupling isn't intrinsically bad. You could design a three tier architecture with all sorts of loose coupling, or with PostgREST you could have a one or two tier architecture with orders of magnitude less code but tight coupling. I'm not sure why the first choice is necessarily better in all cases.
PostgREST is a phenomenal project, and you get A LOT from its design. How many thousands of applications are simply wrappers around CRUD databases? PostgREST gives a great REST API implementation for exactly this use case. PostgREST probably has a more robust API interface than 70% of basic CRUD apps, with RLS built in.
Am I using PostgREST for a project with lots of complex backend logic, long running tasks, etc? Absolutely not. But that's not what it was built for.
Does this do input validation and sanitization? Where?
Answered that above: https://news.ycombinator.com/item?id=38692597
We've also updated the doc with some manual sanitization[1], but that's definitely not the final form of this POC.
[1]: https://postgrest.org/en/stable/how-tos/providing-html-conte...
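For reference, the kind of helper involved is tiny; a sketch (not the exact function from the doc), where the ampersand has to be replaced first:

```sql
CREATE OR REPLACE FUNCTION html_escape(t text) RETURNS text AS $$
  SELECT replace(replace(replace(replace(replace(t,
    '&', '&amp;'),
    '<', '&lt;'),
    '>', '&gt;'),
    '"', '&quot;'),
    '''', '&#39;');
$$ LANGUAGE sql IMMUTABLE;

-- SELECT html_escape('<script>alert(1)</script>');
-- => &lt;script&gt;alert(1)&lt;/script&gt;
```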
For comparison, MSSQL has been able to provide query results in XML for quite some time. I've found it useful in a couple of situations. See https://learn.microsoft.com/en-us/sql/relational-databases/x...
PostgreSQL has supported XML [1] for a while too. With that, even SOAP endpoints are possible: https://postgrest.org/en/stable/how-tos/create-soap-endpoint...
[1]: https://www.postgresql.org/docs/current/functions-xml.html
I was just scrolling through the article and saw that they are using Tailwind CSS for styling. Why? Why would you do that for a simple demo that doesn't demand complex styling?
Tailwind is great for simple stuff though, you just stick the classes in your html and don't have to worry about thinking up class names or organizing CSS files
Related: Show HN: Render HTML in SQL with pg_render https://news.ycombinator.com/item?id=38677852
Really cool! I'm working on something similar https://github.com/PostgREST/plmustache.
PostgREST is fun because when you have a Haskell perspective... it's such an obvious project. But that's why it's genius. It's such a Haskell idea. Love it.
For those who might be interested, I've recently published SmoothDB on GitHub [1], which, like PostgREST, provides a RESTful API to PostgreSQL databases.
It's a beta, aiming for compatibility with PostgREST, but it's not ready for production yet (so continue to use the very good PostgREST).
Written in Go, SmoothDB can be used both stand-alone and as a module for more complex server applications, which was my main motivation for writing this.
Your thoughts and contributions are greatly appreciated as they will help in the ongoing development and refinement of SmoothDB.
I'm learning Rust by developing a backend that templates out HTML from JSON data with Mustache. I'm testing everything with HTMX and PostgREST.
I used postgrest (without htmx) on an old project; it's impressive how far you can push it. HTMX seems like a perfect fit for it too, although I'm not sure how much I really want to maintain htmx templates inside SQL functions...
I really like this general approach of using HTML as a declarative language.
I've been working on a similar concept except as a serverless platform which updates all data in real time: https://saasufy.com/
Docs: https://github.com/saasufy/saasufy-components/#saasufy-compo...
I love Postgres. I've been using it for the last six years, and I like how easy it is to get started; there's also a ton of support and ecosystem available for PG.
Recently, I tried a new product called "Neurelo". It provides an API for PostgreSQL, and I love how easy it is to migrate my existing database onto their platform: they have an option called introspection which generates a JSON schema that you can also modify in visual mode. On top of that, I generated some SDKs for TS, and they work much better directly with any JS/TS framework. You can download the JSON spec from the auto-generated docs and run any generator of your choice if you want more from it. You can also write custom queries (AI-assisted), which give you custom endpoints to hit. They also provide a free data source if you want to check it out. Migrations included. Here is an example I built using HTMX and Neurelo: https://github.com/smitpatelx/neurelo-go-htmx-example
I was so impressed with their work that I recently joined the Neurelo team. If you have an existing solution you want to migrate to the cloud, have a look; you might be surprised how easy it is. Happy coding!
I created a full website a few years ago that allowed search and CRUD of art pieces. I enjoyed bringing the project to life and having a fully working website with no back-end. It was a side project to prove the concept, so I can't speak to scaling this kind of project; databases tend to cost more. What was harder was maintaining and evolving the logic, which is pushed down into the db and less easily readable. Htmx seems not trivial to maintain either, but a nice project still.
Reminds me of Derek Sivers' post on using postgres as a backend here
Cool. Now I have to redeploy a Node server to change the padding on a text input.
If you copy/paste this code, note that the Tailwind CSS reference (https://unpkg.com/tailwindcss@2.2.19/dist/tailwind.min.css) is a big download. You can reduce this by using the Tailwind build tool to include only what you need.
Instead of dealing with this torture, you can do what is explained in the article in 10 minutes with any library or framework. This is an effort beyond reinventing the wheel.
I feel obligated to add a shameless plug here. The idea is very close to a project I presented at pgconf.eu last week: SQLPage
SQLPage has the same goal as PostgREST + htmx, but is a little bit higher level. It lets you build your application using prepackaged components you can invoke directly from SQL, without having to write any HTML, CSS, or JS.
PostgREST is amazing software. I have written an article on how to use it as a generic data-checking pipeline: https://www.fabianzeindl.com/posts/business-information-serv...
And I already have the next article in the works, on how to use it as a CQRS/REST API layer.
Finally! Someone managed to build SQL on Rails. It only took 11 years.
PostgREST overall is neat, but essentially this article is pointing out that it's possible to use a 'sprintf' equivalent SQL function as a grotty way to template HTML.
That's one of the approaches of all time.
This reminds me of using xquery with MarkLogic circa 2010 to store the data, act as the middle tier, and generate the xhtml views, all in the same language. It had its quirks, but it was refreshing to do everything in one language.
Cool, but this reminds me of the "so preoccupied with whether they could that they didn't stop to think if they should" meme.
Am I trippin or is htmx not a JS library?
It's more like a polyfill for a proposed/nonstandard version of HTML.
Written in JavaScript, if you can believe it.
Yes but I'd argue that this is a case that shows the construction "JavaScript library" to be non-intersective :)
My colleague James is tall and he's a basketball player. But, in one pretty straightforward sense at least, he's not a tall basketball player.
Edit: I thought of a better analogy. Compare Lodash and uBlock. Lodash is clearly a "JavaScript library"; uBlock is clearly not. Why not, given that, like Lodash, it's just so much JavaScript? Because uBlock's intended users don't use it as a JavaScript library. They use it to modify the default behavior of the Chrome browser to something more to their liking, not as a tool for more easily expressing themselves when writing JavaScript code. I'd argue the relation of htmx to HTML is very much like that of a Chrome/VSCode extension to Chrome/VSCode: its purpose is to change the built-in behavior of some tool (HTML) used by the user, whereas a library at bottom serves an expressive purpose.
htmx's angle is that the developer doesn't interact with the JavaScript, so it's not a JavaScript library. Otherwise one could say that HTML is a C++ library because the markup is interpreted and run by the browser (written in C++).
It would be harder to believe something running dynamically in the browser is not written in JavaScript.
The other option being… WebAssembly? Convincing WHATWG members to change their browsers?
Not really. It just augments the hypermedia capabilities of HTML. Adds the missing semantics. You don't invoke library functions.
Why do HTMX advocates say it "augments the hypermedia capabilities of HTML" instead of "another implementation of the HTML-over-the-wire approach"? Placing hidden meaning in common words sounds very cult-ish.
we say this, because, more than other HTML-over-the-wire libraries, htmx focuses on generalizing the two core hypermedia controls, anchors and forms
i discuss this extensively in chapter 4 of our book:
https://hypermedia.systems/extending-html-as-hypermedia/
the tldr is that htmx generalizes:
- the event that triggers an HTTP request
- the element that triggers an HTTP request
- the type of the HTTP request
- the placement of the hypermedia response to that HTTP request
In this sense, htmx adds functionality to (or, more dramatically, "completes") HTML.
This is in contrast with, for example, https://unpoly.com, which also uses HTML-over-the-wire but provides higher-level concepts (e.g. layers) and isn't as focused on generalizing hypermedia controls. This has advantages (unpoly gives you more functionality out of the box), but it puts unpoly further away from being a pure conceptual HTML extension.
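Put together, the four generalizations look like this in practice (a sketch; the endpoint is hypothetical):

```html
<!-- Any element (not just an anchor or form) can issue any HTTP verb,
     on any event, and place the response anywhere in the document. -->
<li>
  Buy milk
  <button hx-delete="/todos/42"
          hx-trigger="click"
          hx-target="closest li"
          hx-swap="outerHTML">
    Done
  </button>
</li>
```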
It still needs JS to work though
Look up nontrivial examples on their own docs.
“Simple” form validation is a clusterfuck and that’s what they’re leading with.
Yes, it is. People often talk about how HTMX allows for interactive websites without any JS. Well, there is JS, it's just that it was written by the developer of HTMX, not the developer making their web app.
This is true, but htmx is targeting developer experience and simplicity, not solving the <noscript> use case. From this perspective, "interactive websites without any JS" is accurate.
I wouldn't say that's accurate because Astro is a better fit for that description.
presumably the implication of a "js library" is that your frontend is written by you using js, rather than declaratively via html(x).
It is a JavaScript library
We've been down this road before. It's neat, but not for long-term maintenance, support, or training; as soon as HTMX loses its luster, the by-then manager has to concede the whole thing has to be rewritten from scratch.
I've gone down this road before and that's not what I would criticize.
Where things went south for me:
- hard to hire developers who know how to work with this or who are willing to learn
- very hard to debug performance issues
- version control gets weird, since migrations are not really meant for functions and procedures (although Pyrseas[0] helped a lot)
[0] https://pyrseas.readthedocs.io/en/latest/
Is it possible to capture changes in git somehow? If not, what did you do?
I smell a startup opportunity.
I think using Postgres as a runtime would feel like a PaaS runtime not much different from other offerings in the market. But, for something self-hosted, it feels like super powers in the beginning.
I've used git as usual to evolve my SQL functions and procedures and my CI/CD pipeline used Pyrseas to sync code with database.
I am starting to believe that "system has to be rewritten from scratch in the future" is not nearly the dealbreaker people make it out to be. Especially if it didn't take that much work building the v0 version.
You should almost always rewrite the system in the future, that's just the nature of growth and systems evolution.
https://wiki.c2.com/?PlanToThrowOneAway
So like nearly every JavaScript framework that comes along?
Years ago, in the mid-'90s, I visited CompuServe in Ohio. The web was just getting going, and one of the engineers I met there showed off a music store he was building by returning HTML from SQL stored procedures :).
The quality of open source databases (with views, partitions, etc.), as well as improvements in server hardware, has made this approach much more appealing.
if you think about it, serverless lambdas are just stored procedures with bells and whistles.
if you think about it, php is just stored procedures with bells and whistles.
nah, you had to worry about deployments, where the .php files were placed, and mod_php/php-fpm stuff. The DB can scale, and you don't have to worry about where the stored procedures are executed.
Lambdas feel more like CGI scripts. Too bad they didn't just use the CGI standard instead of something proprietary.
Yes, we had that at Oracle as a product: SQL + PL/SQL = HTML. It also had built-in versioning: if two users changed the same record, it would moan.
I have a strong feeling there will be a period of consolidation after the sort of Cambrian explosion we've seen in the last decade... everybody does very similar things, spread across N layers, just to modify a bunch of rows in a db. Mind-boggling.
You still need to run PostgREST, so still 3 servers. (web/app/db)
Isn't this example just web + db ?
PostgREST is an app that turns the db into an API. This takes the place of a traditional app server, but it's still an app server.
Thanks for the clarification
I can't imagine I'd put the database on a separate host from postgREST - the overheads are so low as to not meaningfully exist.
I _might_ introduce a load balancer to support migrating between servers with only 1-2 seconds downtime, but probably only for the duration of the move.
How is authentication/authorization handled with this stack? Or sign up with email validation and password reset?
It's a little bit involved but doable.
https://postgrest.org/en/stable/references/auth.html
So where is the secret key stored for signing the JWT? In the front end as well?
Edit: Oh I found it here: https://postgrest.org/en/stable/how-tos/sql-user-management....
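The linked how-to signs the token inside the database, roughly like this (a heavily trimmed sketch of the doc's approach; it assumes the pgjwt extension and a basic_auth.user_role helper like the one the how-to defines, and the config name for the secret is an assumption):

```sql
CREATE OR REPLACE FUNCTION api.login(email text, pass text) RETURNS text AS $$
DECLARE
  _role name;
BEGIN
  -- check email/password against the users table (helper from the how-to)
  SELECT basic_auth.user_role(email, pass) INTO _role;
  IF _role IS NULL THEN
    RAISE invalid_password USING message = 'invalid user or password';
  END IF;

  -- the signing secret lives server-side, never in the browser
  RETURN sign(
    json_build_object('role', _role, 'email', email,
                      'exp', extract(epoch FROM now())::integer + 3600),
    current_setting('app.settings.jwt_secret')  -- assumed config name
  );
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
```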
That’s a pretty neat design. Also an interesting attack surface
RDBMS auth and roles are a thing. They even support mTLS.
The backend and frontend are still there... Just all tangled directly into database code
It's all a shell game.
Yeah, I exaggerated a bit. It's just that where it used to be postgres + node + react + html, the middle two have been cut.
And I'm actually not contesting the value of a regular back-end and a front-end framework: postgrest and htmx make it easy to achieve more with configuration, but this only works up to a degree. After that point, the benefits of a custom application start to outweigh those of its absence.
If you think that's cool, you might also want to check out Omnigres:
https://github.com/omnigres/omnigres
I don't really understand what this project is designed to do. I see a lot of reinvented square wheels that other Postgres extensions already do better. Another (synchronous) HTTP client library: why? What does the "completed" JSON module even do? What are its aims? Maybe I just don't see the forest for the trees, but I can't figure out the value-add here.
I think the main value proposition is its HTTP server, which AFAICT lets you serve web requests from inside the PostgreSQL DB process.
(I'm not sure whether PostgREST also serves requests from inside the process, but another comment mentioned that it's a separate server process. I can see advantages to both ways.)
Now deploy a DB per customer and you have scaling sorted.
Yep, and that's easier than ever now with Neon, Fly.io, and similar.
crud without the cruft
... we are back to mainframes, but this time with browsers as terminals