I looked into Supabase a while back but came away confused about how to build a basic REST API. They auto-generate an API to interact with the tables you create, which sounds neat, but where does the business logic live? I then checked out their edge functions, but it wasn't clear whether they're meant to be used that way, since the examples are more oriented toward one-off tasks. Seems like I'm not understanding something simple here.
One thing to keep in mind, which I found using the PostgREST interface: you will end up having to put logic into stored procedures. The REST APIs are actually very convenient for aggregating data, like joins, but I started to get stuck as soon as I wanted things like transactions. I also found that Row Level Security (RLS) for role-based access was a chore, and the developer experience of it left much to be desired.
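For what it's worth, the usual workaround for the transaction problem is to wrap the multi-step logic in a function, since a function body runs inside a single transaction and PostgREST exposes functions under /rpc/. A rough sketch (the table and function names here are made up):

```sql
-- Hypothetical accounts table; both updates commit or roll back together,
-- because a PL/pgSQL function body executes inside one transaction.
create or replace function transfer(from_id bigint, to_id bigint, amount numeric)
returns void
language plpgsql
as $$
begin
  update accounts set balance = balance - amount where id = from_id;
  update accounts set balance = balance + amount where id = to_id;
end;
$$;

-- PostgREST then exposes it as: POST /rpc/transfer
```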
If your DB needs are simple then the REST API is very convenient. But if you are planning anything of complexity then you'll have to bone up on your PL/pgSQL or go for a regular DB connection instead.
I'm really curious about more of your perspective on RLS: I spent most of my career on mobile and rely heavily on Supabase to give me server superpowers. RLS _seems_ really cool to me (just write a one-liner to define access rules, as simple or as complex as you need!), but I'm guessing I'm missing something. Especially because I don't actually have users yet ;)
Have a look at the supabase-community claims repo [1]. NOTE: this is for ACL type permissions where you want to provide granular access to a wide range of services.
This stuff is "really cool" but just keep in mind that it is pretty advanced. And exactly as another commenter noted in this thread, it is possible to destroy your performance if you need to join on other tables in an extended version of this kind of RLS policy.
In this repo, the logic is simply "if a claim exists on the JWT then grant access". But in a lot of cases you may want to do something like "if this user is an owner of <row in another table> then grant access". That can require a join to that other table. That logic can get even more complex, for example, you might want to say "allow the user access to this row if they are an owner of the project". So you have to do more work to join from a child table, to a project table, to the user table, etc.
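To make that concrete, here's roughly what the two flavors of policy look like (table and column names are hypothetical; auth.uid() and auth.jwt() are Supabase's helpers for reading the caller's identity off the JWT):

```sql
-- Simple case: the check only inspects the JWT, no extra table access.
create policy tasks_by_claim on tasks
  for select using (auth.jwt() ->> 'user_role' = 'admin');

-- Join case: grant access if the caller owns the parent project.
-- This subquery runs as part of every query that touches the table.
create policy tasks_by_ownership on tasks
  for select using (
    exists (
      select 1
      from projects p
      where p.id = tasks.project_id
        and p.owner_id = auth.uid()
    )
  );
```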
These operations are in addition to any work you might be doing in the actual query that is executed. I have no idea if the query planner can recognize you are doing the same joins in the RLS as the main query and optimize that away. But at any rate, every single policy invocation (on every single query) will be executing this logic.
These are all considerations if you are planning more advanced access policies for your data. If all you need is a binary "can access"/"cannot access", then basic RLS policies may be fine. But once you get into even moderately complex scenarios, your policies are likely to balloon in complexity, and you'll be writing a fair amount of PL/pgSQL and fighting with testing and validation.
[1] https://github.com/supabase-community/supabase-custom-claims...
Aren't these extended queries with joins simply a function of the granular access rules you are trying to enforce? In other words, the downside of RLS that you mention is that it can destroy performance if the rules get complex. But the upside is that those complex rules are available. So there is no "true" downside, since the downside that is mentioned is only a feature of using the upside. One can presumably use RLS in a way that does not make use of all the advanced features it offers, but also does not destroy performance.
Consider you have to do joins in the RLS for policy enforcement and you also have to do the same joins in your query for the business logic. One question is whether or not the query optimizer can recognize that the joins you do in the RLS are the same joins you are doing in your query, or if it will perform the subqueries multiple times.
The bigger downside, IMO, is the dev experience. Policies are hard to debug and hard to performance-test. Of course, everyone has a different bar for what they consider "hard", but if I start getting back result sets from the DB that don't match my expectations, or if the performance isn't what I expect, I have to track down whether the culprit is my query, the RLS, or some combination of the two. And while I am pretty confident with SQL, I am not so confident in PL/pgSQL - which was the point of my original comment. You will have to get confident in that if you go down this route. You'll have to learn what amounts to a complete language, but you won't have logging, a debugger, etc., just a rudimentary set of tools.
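One trick that helps a bit with debugging: you can impersonate the API role in a psql session, since PostgREST passes the JWT claims down as a session setting. Something along these lines (the claims payload and table name here are made up):

```sql
begin;
-- Switch to the role PostgREST would use for an authenticated request
set local role authenticated;
-- Simulate the JWT claims the API would have attached to the request
set local request.jwt.claims to '{"sub": "some-user-uuid", "role": "authenticated"}';
-- Queries now run under your RLS policies; prepend EXPLAIN ANALYZE to see their cost
select * from tasks;
rollback;
```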
I'm not telling people not to do it - just warning them that the path passes through some thorny territory and you may get scratched up. It isn't all roses.
The business logic has to live somewhere, and it makes sense to me to start thinking of everything in terms of PL/pgSQL if you're building with Supabase, which I'm thinking of doing.
I suppose I would like to have the problem of having so many users that my app begins to get bogged down, so I'll cross that bridge when I come to it.
The RLS stuff seems like a bonus that I can choose to use if I want to, and I was happy not having it before, so I'm planning to be careful about actually activating it in any projects going forward -- it would have to be a use-case that I thoroughly understand.
About the "as complex as you need": RLS can get slow very quickly for aggregate queries, and it is hard to debug (since the query planner doesn't work smoothly with RLS).
We have a dashboard that displays aggregated stats for our admin users, and we hit serious performance issues with ~600 users with our first implementation. This repo helped us: https://github.com/GaryAustin1/RLS-Performance
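For anyone else hitting this: the single biggest win documented in that repo is wrapping per-row function calls in a subselect, so the planner evaluates them once per statement instead of once per row. Roughly (table and policy names are just for illustration):

```sql
-- Re-evaluated for every candidate row:
create policy notes_owner on notes
  for select using (user_id = auth.uid());

-- Evaluated once per statement (the planner turns the
-- subselect into an InitPlan), often dramatically faster:
create policy notes_owner_fast on notes
  for select using (user_id = (select auth.uid()));
```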
Thank you, that was really helpful and actionable. For example, I had recently stopped adding filters to queries when my RLS had them "built in"; it's easy to see now that explicit filters are better for performance, and since they're better for safety anyway, why not add them?
What about plv8? Write JS in Postgres! I tried it out once, for maintainability reasons, on a project of relatively low complexity; nobody else knew PL/pgSQL. Worked great.
I'm sure it works fine; it's just another thing to add to your stack (in some sense). Just a few days ago I saw a comment where a business owner was bragging about how his entire business was run on SQL stored procedures. He had made the technical decision to move all business logic into the database using triggers and stored procedures. That is certainly an option. Otherwise, you end up with business logic split between your code and your database, which can cause confusion and lead to hard-to-debug systems.
In that sense, if you are like the business owner who swears by SQL and makes the database the core business-logic layer of the system, then you might even appreciate that PostgREST forces you to move that kind of logic into the database. It is just something to be aware of before you make the decision, so that you aren't surprised when it happens.
The core element is https://postgrest.org/en/stable/. I use this in production in large corporate projects on k8s. For a large number of use cases you can put logic into SQL stored procedures. PG can also do JS or Python stored procedures, but you get a better developer experience if your logic is deployed through regular CI/CD as containers or functions (we use both extensively together, depending on cost trade-offs).
Supabase suggests you use their Deno serverless functions, which is cool and all, but I think most people would rather deploy Node functions on Cloudflare for web projects.
That being said, the target customer group is people who want 99% of their logic in the JS frontend. The backend just does CRUD and auth.
It seems Supabase only supports JS and PL/pgSQL, not Python or the rest of PG languages. But still you could use compile-to-js languages like ClojureScript.
Unfortunately, Python for Postgres is only available as an untrusted language extension, which can provide avenues for things like privilege escalation [0].
We’ve decided to only bundle trusted language extensions so that there is a balance between flexibility when it comes to users writing their own procedures, all while maintaining security.
[0] https://www.postgresql.org/docs/current/plpython.html
Oh, interesting. Is it related to any inherent property of CPython? There's also trusted Perl, Tcl, Lua, etc.: https://wiki.postgresql.org/wiki/PL_Matrix
I've written a fair bit of PL/pgSQL for my startup. Not exactly my first choice of language, but I've turned to it for certain optimizations in our system. Definitely NOT the choice I'd make for most situations: the language is incredibly clumsy, and there isn't much material out there to learn it well. A lot of the PL/pgSQL I know comes from reading code and guessing how things should work.
AFAIK Supabase serverless is Cloudflare, or at least I thought so...
It previously ran on Deno, but now we run our own edge runtime!
https://github.com/supabase/edge-runtime
Interesting, thanks for the info. I thought they were targeting mobile developers since they claim to replace Firebase, but sounds like a mobile app API wouldn't fit their platform very well. That explains why I was very confused trying to use their mobile SDK for iOS lol.
You can access the database from anywhere (client and server, depending on your config). I use Next.js, so many of my database calls are in Next serverless functions. However, I'll probably explore moving some of that logic to Supabase functions to keep them as close as possible to the database, but I haven't wanted to move to Deno. When you access Supabase on the server you can either use their PostgREST features (basically an autogenerated REST API on top of your DB, which the Supabase clients use), or just access Postgres directly through a typical pg lib.
Got it, thanks. This actually fits one of my side projects really well, will have to try it out with NextJS.
Yeah I’m sticking with RDS and such
I don’t understand the generated API. It seems useless and doesn’t save much typing to begin with.
(supabase team)
you have a few options:
1. connect to Postgres like you do with any other Postgres database. Supabase is just Postgres
2. connect to PostgREST, the autogenerated REST API that you mention
3. connect using Edge Functions (Deno)
Most people are fine with 1. You can use 2 and 3 if you want to; they're just more tools in the shed
We do much of the business logic in Postgres (triggers, constraints, etc.). But then there's all the other stuff, like external integrations.
We handled that by having an event system built on the Postgres WAL that we use like a callback system.
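The Postgres side of a setup like that is just logical replication; something along these lines (the publication/slot names and tables are hypothetical):

```sql
-- Publish changes for the tables you care about
create publication events_pub for table orders, customers;

-- Create a slot the consumer streams decoded WAL changes from
select pg_create_logical_replication_slot('events_slot', 'pgoutput');

-- The consumer then reads inserts/updates/deletes off the slot
-- and dispatches them to your callbacks/webhooks.
```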
I put together a little library in Elixir (which originally started out as a fork of Supabase Realtime) for this:
https://github.com/cpursley/walex
Recently added the ability to configure WalEx to forward events to webhooks or EventRelay (so you don’t need to know Elixir).