
Supabase Storage: now supports the S3 protocol

kiwicopple
76 replies
12h21m

hey hn, supabase ceo here

For background: we have a storage product for large files (like photos, videos, etc). The storage paths are mapped into your Postgres database so that you can create per-user access rules (using Postgres RLS)

This update adds S3 compatibility, which means that you can use it with thousands of tools that already support the protocol.

I'm also pretty excited about the possibilities for data scientists/engineers. We can do neat things like dump Postgres tables into Storage (as Parquet) and you can connect DuckDB/ClickHouse directly to them. We have a few ideas that we'll experiment with to make this easy.
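For readers who want to try the S3 compatibility right away: a minimal sketch (TypeScript, AWS SDK v3) of pointing a stock S3 client at Supabase Storage. The endpoint shape and credential names are placeholders to verify against your own project's Storage settings.

```typescript
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

// Placeholder endpoint/credentials; copy the real values from your
// project's Storage settings.
const s3 = new S3Client({
  endpoint: "https://<project-ref>.supabase.co/storage/v1/s3",
  region: "us-east-1",
  forcePathStyle: true, // buckets are addressed by path, not subdomain
  credentials: {
    accessKeyId: process.env.SUPABASE_S3_ACCESS_KEY_ID!,
    secretAccessKey: process.env.SUPABASE_S3_SECRET_ACCESS_KEY!,
  },
});

// From here, any S3-compatible tooling works unchanged.
const { Contents } = await s3.send(
  new ListObjectsV2Command({ Bucket: "avatars", MaxKeys: 10 })
);
console.log(Contents?.map((o) => o.Key));
```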

Let us know if you have any questions - the engineers will also monitor the discussion

jonplackett
34 replies
11h36m

Dear supabase. Please don’t get bought out by anyone and ruined. I’ve built too many websites with a supabase backend now to go back.

gherkinnn
18 replies
9h19m

This is my biggest reservation towards Supabase. Google bought Firebase in 2014. I've seen Vercel run Next.js into the ground and fuck up their pricing for some short-term gains. And Figma almost got bought by Adobe. I have a hard time trusting products with heavy VC backing.

paradite
7 replies
7h53m

I think Vercel and Next.js are built by the same group of people: they made Now.sh, started the company (Zeit), and later renamed both the company and the product to Vercel.

cqqxo4zV46cp
5 replies
6h55m

Yes. That doesn't mean that they haven't run it into the ground.

paradite
4 replies
5h25m

Tell me about it.

My simple SSG Next.js static site loads much slower than my Jekyll site on GitHub pages.

And I can't figure out how to improve its speed or disable the "ISG" feature that I believe is to blame for the poor performance.

eropple
3 replies
5h17m

Not defending NextJS, I'm pretty out on it myself, but ISG requires a server to run. It pregenerates static content for defined pages and serves that until revalidating. If you've built a fully static bundle, nothing exists that would handle that incremental/revalidating logic.

paradite
2 replies
3h13m

I understand that ISG requires a server (Node.js runtime) to run it; that's why I want to disable it for my SSG Next.js static site, but I can't figure out how. I just want static hosting like S3+CloudFront, which is much faster.
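For reference, recent Next.js ships a fully static mode that bypasses ISR/ISG entirely: `output: 'export'`, the successor to the old `next export` command. A sketch, assuming a Next.js version with TypeScript config support (older versions take the same object in next.config.js):

```typescript
// next.config.ts - a fully static build: no server, no ISG, plain files
// suitable for S3+CloudFront or GitHub Pages.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  output: "export",
  // The default image optimizer needs a server; turn it off for static hosts.
  images: { unoptimized: true },
};

export default nextConfig;
```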

paradite
0 replies
2h47m

Cool, so that's what I was missing all along!

Unfortunately it looks like I can't use it now, since I am using `next-plausible`, which does require a Node.js proxy...

spxneo
0 replies
1h35m

A big reason why I decided on Flutter.

quest88
2 replies
4h4m

What's the actual complaint here, other than company A buying company B?

spxneo
0 replies
1h37m

The only reason I am using Supabase is that it's cheap and I can export it to Postgres, that's it.

I know that one day these founders will exit and it will be sold to AWS, Google, or Microsoft's cloud.

So it's a bit of a gamble, but that risk is offset by the super cheap cost and the ability to export the data and swap out the pieces (more work, but at that point you should be cash-flow positive enough to pay engineers to do it for you).

MatthiasPortzel
2 replies
6h40m

I'm not defending Vercel or VC-backed companies per se, but I don't understand your comments towards Vercel. They still offer a generous hobby plan, and Next.js is still actively maintained open-source software that supports self-hosting.

Heroku might be a better example of a company that was acquired and then shut down its free plan.

theturtletalks
1 replies
3h8m

Supabase and Next.js are not the same. Supabase can be self-hosted, but it's not easy to do and there are a lot of moving parts. Most of my Next.js apps are not even on Vercel. They can easily be deployed to Netlify, Railway, Render, etc.

ihateolives
0 replies
2h28m

Supabase self-hosting is not difficult, but it makes no sense since some important features are missing, like reports and multiple projects.

spxneo
1 replies
1h42m

You know the whole point of YC companies is to flip their equity on the public market and then move on to the next one, right?

superb_dev
0 replies
42m

We know, but it screws over your existing customers when a very helpful tool is turned over to a greedy investment firm that’s gonna gut the product seeking the highest return

zackmorris
0 replies
1h11m

Firebase was such a terrible loss. I had been following it quite closely on its mailing list until the Google takeover, then it seemed like progress slowed to a halt. Also having big brother watching a common bootstrap framework's data like that, used by countless MVP apps, doesn't exactly inspire confidence, but that's of course why they bought it.

At the time, the most requested feature was a push notification mechanism, because implementing that on iOS had a steep learning curve and was not cross-platform. Then probably some more advanced rules to be able to do more functional-style permissions, possibly with variables, although they had just rolled out an upgraded rules syntax. And also having a symlink metaphor for nodes might have been nice, so that subtrees could reflect changes to others like a spreadsheet, for auto-normalization without duplicate logic. And they hadn't yet implemented an incremental/diff mechanism to only download what's needed at app startup, so larger databases could be slow to load. I don't remember if writes were durable enough to survive driving through a tunnel and relaunching the app while disconnected from the internet either. I'm going from memory and am surely forgetting something.

Does anyone know if any/all of the issues have been implemented/fixed yet? I'd bet money that the more obvious ones from a first-principles approach have not, because ensh!ttification. Nobody's at the wheel to implement these things, and of course there's no budget for them anyway, because the trillions of dollars go to bowing to advertisers or training AI or whatnot.

IMHO the one true mature web database will:

- be distributed via something like Raft
- have rich access rules
- be log-based, with (at least) SQL/HTTP/JSON interfaces to the last-known state, and access to the underlying set selection/filtering/aggregation logic/language
- support nested transactions, or have all equivalent use-cases provided by atomic operations with examples
- be fully indexed by default, with no penalty for row- or column-based queries (to support both table- and document-oriented patterns, and even software transactional memories - STMs)
- have column and possibly even row views (not just table views)
- use a copy-on-write mechanism internally, like Clojure's STM, for mostly O(1) speed
- be evented, with smart merge conflicts to avoid excessive callbacks
- preferably order everything by a synchronized clock timestamp, sortable lexicographically:

https://firebase.blog/posts/2015/02/the-2120-ways-to-ensure-...

I'm not even sure that the newest UUID formats get that right:

https://uuid6.github.io/uuid6-ietf-draft/

Loosely this next-gen web database would be ACID enough for business and realtime enough for gaming, probably through an immediate event callback for dead reckoning, with an optional "final" argument to know when the data has reached consensus and was committed, with visibility based on the rules system. Basically as fast as Redis but durable.

A runner-up was the now (possibly) defunct RethinkDB. Also PouchDB/PouchBase, a web interface for CouchDB.

I haven't had time to play with Supabase yet, so any insights into whether it can do these things would be much appreciated!

hacker_88
0 replies
40m

Wasn't there Parse, from Firebase?

kiwicopple
11 replies
11h18m

we don't have any plans to get bought.

we only have plans to keep pushing open standards/tools - hopefully we have enough of a track record here that it doesn't feel like lip service

mort96
2 replies
6h45m

Is it even up to you? Isn't it up to your Board of Directors (i.e. investors) in the end?

kiwicopple
0 replies
6h15m

we (the founders) control the board

jacob_rezi
0 replies
2h55m

Who controls the board, controls the company

jonplackett
2 replies
9h25m

Absolutely. I am so impressed with SB. It's like you read my mind and make what I'll need before I realise it.

(This is not a paid promotion)

kiwicopple
1 replies
9h17m

> like you read my mind

we receive a lot of community feedback and ultimately there are only a few "primitives" developers need to solve 90% of their problems

I think we're inching closer to the complete set of primitives, which can be combined into second-order primitives (e.g. queues/search/maps can all be "wrappers" around the primitives we already provide)

jonplackett
0 replies
6h20m

That's a neat way of thinking about it.

Thanks for an awesome product. Please also never get bought or make plans to in the future, or if you really, really, really have to then please not by google.

jerrygenser
2 replies
7h43m

I can believe there are no plans, right now. But having raised over $100mm, the company's VCs will want to liquidate their holdings eventually. They have to answer to their LPs after all, and be able to raise their next funds.

The primary options I can think of that are not a full acquisition are:

- the company buys back stock
- a VC sells on the secondary market
- IPO

The much more common and more likely way for these VCs to make the multiple or home run on their return is to 10x+ their money by having a first- or second-tier cloud provider buy the company.

I think there's a 10% chance that a deal with Google is done in the future, so their portfolio has Firebase for NoSQL and Supabase as the Firebase for SQL.

spxneo
0 replies
1h32m

Well, before there was Supabase I would use Firebase.

So it would serve Google well to match what Supabase is doing, or to buy them out.

jchanimal
0 replies
5h28m

Having founded a database company that IPO'd (Couchbase) and seeing the kinds of customer relationships Supabase is making, an IPO seems a reasonable outcome.

tap-snap-or-nap
0 replies
5h30m

plans*

*Subject to change

robertlagrant
0 replies
10h28m

As long as you make it so that, if you do get bought, a team of you can always fork and move on; that's about the best anyone can hope for.

codextremist
1 replies
8h40m

Never used Supabase before, but I'm very comfortable with their underlying stack. I use a combination of Postgres, PostgREST, PLv8, and Auth0 to achieve nearly the same thing.

pbronez
0 replies
5h33m

I was unfamiliar with PLv8, found this:

“”” PLV8 is a trusted Javascript language extension for PostgreSQL. It can be used for stored procedures, triggers, etc.

PLV8 works with most versions of Postgres, but works best with 13 and above, including 14, [15], and 16. “””

https://plv8.github.io/

foerster
0 replies
5h50m

I'm a bit terrified of this as well. I have built a profitable product on the platform, and if it were to drastically change or go away, I'd be hosed.

nextaccountic
8 replies
11h10m

A question about the implementation: is the data really stored in a Postgres database? Do you support transactional updates, like atomically updating two files at once?

Is there a Postgres storage backend optimized for storing large files?

fenos
7 replies
10h59m

We do not store the files in Postgres; the files are stored in a managed S3 bucket.

We store the metadata of the objects and buckets in Postgres so that you can easily query it with SQL. You can also implement access control with RLS to allow access to certain resources.

It is not currently possible to guarantee atomicity across 2 different file uploads, since each file is uploaded in a single request; this seems like higher-level functionality that could be implemented at the application level.
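Since atomicity is left to the application, here is a sketch of the compensation pattern fenos alludes to, using supabase-js (bucket and paths are hypothetical). On failure, whichever upload landed is deleted best-effort; this is not a true transaction:

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

// Upload two files "all or nothing" at the app level: if either fails,
// best-effort delete whatever succeeded, then rethrow.
async function uploadPair(
  a: { path: string; body: Blob },
  b: { path: string; body: Blob }
) {
  const bucket = supabase.storage.from("documents"); // hypothetical bucket
  const [ra, rb] = await Promise.all([
    bucket.upload(a.path, a.body),
    bucket.upload(b.path, b.body),
  ]);
  if (ra.error || rb.error) {
    const landed = [ra.error ? null : a.path, rb.error ? null : b.path]
      .filter((p): p is string => p !== null);
    if (landed.length) await bucket.remove(landed); // compensate
    throw ra.error ?? rb.error;
  }
}
```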

nextaccountic
6 replies
9h42m

Oh.

So this is like, S3 on top of S3? That's interesting.

fenos
4 replies
9h5m

Yes indeed! I would call it S3 on steroids!

Currently it happens to be S3 on top of S3, but you could write an adapter, let's say GoogleCloudStorage, and it would become S3 -> GoogleCloudStorage, or any other type of underlying storage.

Additionally, we provide a special way of authenticating to Supabase S3 using the SessionToken, which allows you to scope S3 operations to your users' specific access control:

https://supabase.com/docs/guides/storage/s3/authentication#s...
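A sketch of what that session-token authentication looks like from a client, per the linked doc: the access key carries the project ref, the secret carries the anon key, and the session token carries the user's JWT, so S3 calls run under that user's RLS policies. All values here are placeholders to verify against the documentation:

```typescript
import { S3Client } from "@aws-sdk/client-s3";

// Returns an S3 client scoped to one user's access (field mapping per the
// linked doc; verify against your project's settings).
function userScopedClient(userJwt: string) {
  return new S3Client({
    endpoint: "https://<project-ref>.supabase.co/storage/v1/s3",
    region: "us-east-1",
    forcePathStyle: true,
    credentials: {
      accessKeyId: "<project-ref>",
      secretAccessKey: process.env.SUPABASE_ANON_KEY!,
      sessionToken: userJwt, // the user's Supabase auth JWT
    },
  });
}
```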

pelagicAustral
3 replies
8h42m

What about second-tier cloud providers like Linode, Vultr, or UpCloud? They all offer S3-compatible object storage. Will I need to write an adapter for these, or will it work just fine given their S3 compatibility?

fenos
2 replies
8h7m

Our S3 driver is compatible with any S3-compatible object storage, so you don't have to write one :)

pbronez
0 replies
5h23m

I'm confused about which direction this goes.

The announcement is that Supabase now supports (user) —s3 protocol—> (Supabase)

Above you say that (Supabase) —Supabase S3 Driver—> (AWS S3)

Are you further saying that (Supabase) —Supabase S3 Driver—> (any S3 compatible storage provider)? If so, how does the user configure that?

It seems more likely that you mean that for any application with the architecture (user) —s3 protocol—> (any S3 compatible storage provider), Supabase can now be swapped in as that storage target.

SOLAR_FIELDS
0 replies
4h29m

Gentle reminder here that S3 compatibility is a sliding scale, and without further couching of the term it's more of a marketing term than anything for vendors. What do I mean by this statement? I mean that you can go to cloud vendor Foo and they can tell you they offer S3-compatible APIs or clients, but then you find out they only support the most basic operations, like 30% of the API. Vendor Bar might support 50% of the API, and Baz 80%.

In a lot of cases, if your use case is simple, 30% is enough if you’re doing the most common GET and PUT operations etc. But all it takes is one unsupported call in your desired workflow to rule out that vendor as an option until such time that said API is supported. My main beef with this is that there’s no easy way to tell usually unless the vendor provides a support matrix that you have to map to the operations you need, like this: https://docs.storj.io/dcs/api/s3/s3-compatibility. If no such matrix is provided on both the client side and server side you have no easy way to tell if it will even work without wiring things in and attempting to actually execute the code.

One thing to note is that it's quite unrealistic for vendors to strive for 100% compatibility - there's some AWS-specific stuff in the API that will basically never be relevant for anyone other than AWS. But the current Wild West situation could stand some significant improvement.
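In the absence of a support matrix, the "wire it in and see" check can at least be automated. A sketch of probing one operation (multipart upload) against an arbitrary vendor; the bucket and key are placeholders:

```typescript
import {
  S3Client,
  CreateMultipartUploadCommand,
  AbortMultipartUploadCommand,
} from "@aws-sdk/client-s3";

// Returns true if the vendor accepts the multipart-upload handshake.
// Unsupported calls typically surface as NotImplemented / HTTP 501.
async function supportsMultipart(s3: S3Client, Bucket: string) {
  const Key = "__compat_probe__";
  try {
    const { UploadId } = await s3.send(
      new CreateMultipartUploadCommand({ Bucket, Key })
    );
    if (UploadId) {
      // Clean up so no orphaned multipart upload accrues storage.
      await s3.send(
        new AbortMultipartUploadCommand({ Bucket, Key, UploadId })
      );
    }
    return true;
  } catch {
    return false;
  }
}
```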

kiwicopple
0 replies
9h11m

in case it's not clear why this is required, some of the things the storage engine handles are:

image transformations, caching, automatic cache-busting, multiple protocols, metadata management, postgres compatibility, multipart uploads, compatibility across storage backends, etc

fenos
4 replies
9h37m

Thanks for asking this. We do not support signed URLs just yet, but they will be added in the next iteration.

avodonosov
2 replies
9h27m

Presigned URLs are useful because the client app can upload/download directly from S3, saving the app server from this traffic. Does Row-Level Security achieve the same benefit?
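To illustrate the traffic-offloading point: with a presigned URL the server only signs, and the file bytes flow between the client and object storage directly. A generic AWS SDK v3 sketch, against plain S3 since Supabase's S3 signing support is still pending per fenos above:

```typescript
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

// Server side: cheap signing, no file bytes pass through the app server.
const url = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: "my-bucket", Key: "reports/q1.pdf" }),
  { expiresIn: 900 } // seconds
);

// Client side: fetch(url) then downloads straight from storage.
```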

arxpoetica
0 replies
2h3m

Can you give an indication of what "next iteration" means in terms of timeline (even just ballpark)?

JoshTriplett
4 replies
10h19m

You specifically say "for large files". What's your bandwidth and latency like for small files (e.g. 20-20480 bytes), and how does it compare to raw S3's bandwidth and latency for small files?

fenos
2 replies
10h4m

You can think of the Storage product as an upload server that sits in front of S3.

Generally, you would want to place an upload server in front to accept uploads from your customers; that is because you want to do some sort of file validation, access control, or anything else once the file is uploaded. The nice thing is that we run Storage within the same AWS network, so the upload latency is as small as it can be.

In terms of serving files, we provide a CDN out-of-the-box for any files that you upload to Storage, minimising latencies geographically

Rapzid
1 replies
8h31m

> Generally, you would want to place an upload server to accept uploads from your customers

A common pattern on AWS is to not handle the upload on your own servers. Checks are made ahead of time, conditions baked into the signed URL, and processing is handled after the fact via bucket events.
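A sketch of that pattern with AWS SDK v3's presigned POST, where the checks are baked into the policy and the server never touches the upload bytes; bucket, key, and limits are hypothetical:

```typescript
import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

const s3 = new S3Client({ region: "us-east-1" });

// The browser POSTs the file with `fields` as form data; S3 itself enforces
// the conditions, and post-processing runs off bucket event notifications.
const { url, fields } = await createPresignedPost(s3, {
  Bucket: "user-uploads",
  Key: "avatars/new-avatar.png", // hypothetical key
  Conditions: [
    ["content-length-range", 0, 5 * 1024 * 1024], // reject files over 5 MB
    ["starts-with", "$Content-Type", "image/"],   // images only
  ],
  Expires: 300, // seconds
});
```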

fenos
0 replies
8h0m

That is also a common pattern I agree, both ways are fine if the upload server is optimised accordingly :)

egorr
0 replies
10h3m

Hey, Supabase engineer here; we didn't check that out with files that small, but thanks for the idea, I will try it out.

The only thing I can say on the topic is that S3 multipart significantly outperforms other methods for files larger than 50 MB, but tends to have similar or slightly slower speeds compared to regular S3 upload via Supabase, or the simplest Supabase Storage upload, for files around 50 MB and below.

S3 multipart is indeed the fastest way to upload a file to Supabase, with speeds up to 100 MB/s (115 even) for files > 500 MB. But for files of about 5 MB or less, you probably won't need to change anything in your upload logic just for performance, because you won't notice any difference.

Everything mentioned here is for upload only.
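For anyone wanting to try the multipart path egorr benchmarks: the AWS SDK's `Upload` helper from @aws-sdk/lib-storage handles the part splitting and parallelism, and can be pointed at any S3 endpoint. Bucket and file names here are placeholders:

```typescript
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { createReadStream } from "node:fs";

// Multipart upload with 8 MB parts, 4 in flight; per the comment above this
// pays off for files larger than ~50 MB and is a wash below that.
const upload = new Upload({
  client: new S3Client({ region: "us-east-1" }),
  params: {
    Bucket: "videos",
    Key: "renders/final.mp4",
    Body: createReadStream("./final.mp4"),
  },
  partSize: 8 * 1024 * 1024,
  queueSize: 4,
});

upload.on("httpUploadProgress", (p) => console.log(p.loaded, "/", p.total));
await upload.done();
```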

jarpineh
2 replies
7h16m

Hi, a question, but first some background. I've been looking at solutions to store columnar data with versioning, essentially Parquet. But I'd also like to store PDFs, CSVs, images, and such for our ML workflows. Now that Supabase is getting better for the data science/DuckDB crowd, could it be the one solution for all of this?

kiwicopple
1 replies
7h7m

> Parquet. But, I'd also like to store PDFs, CSVs, images

yes, you can store all of these in Supabase Storage and it will probably "just work" with the tools that you already use (since most tools are s3-compatible)

Here is an example of one of our Data Engineers querying parquet with DuckDB: https://www.youtube.com/watch?v=diL00ZZ-q50
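The pattern in the video boils down to pointing DuckDB's httpfs extension at the storage endpoint. A sketch using the `duckdb` Node package; the endpoint, keys, and Parquet path are placeholders:

```typescript
import duckdb from "duckdb";

const db = new duckdb.Database(":memory:");

// httpfs gives DuckDB an S3 client; point it at any S3-compatible endpoint.
db.exec(
  `INSTALL httpfs; LOAD httpfs;
   SET s3_endpoint='<project-ref>.supabase.co/storage/v1/s3';
   SET s3_url_style='path';
   SET s3_access_key_id='<access-key>';
   SET s3_secret_access_key='<secret-key>';`,
  (err) => {
    if (err) throw err;
    // Query Parquet in place; nothing is downloaded up front.
    db.all(
      "SELECT count(*) AS n FROM read_parquet('s3://exports/users.parquet')",
      (err, rows) => {
        if (err) throw err;
        console.log(rows);
      }
    );
  }
);
```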

We're very open to feedback here - if you find any rough edges let us know and we can work on it (github issues are easiest)

jarpineh
0 replies
6h52m

Well, this is great news. I'll take a "just works" guarantee any day ;)

We have yet to make a commitment to any one product. Having Postgres there is a big plus for me. I'll have to see about doing a test or two.

gime_tree_fiddy
2 replies
11h28m

Shouldn't it be API rather than protocol?

Also my sympathies for having to support the so-called "S3 standard/protocol".

preommr
0 replies
11h5m

I think "protocol" is appropriate here, since S3 resources are often represented by an s3:// URL, where the scheme part of the URL conventionally denotes the protocol.

fenos
0 replies
11h9m

Yes, both can be fine :) After all, a protocol can be interpreted as a standardised API through which the client and server interact; it can be low-level or high-level.

I hope you like the addition. The implementation is all open source in the Supabase Storage server.

filleokus
2 replies
9h39m

Do you think Supabase Storage (now or in the future) could be an attractive standalone S3 provider as an alternative to e.g MinIO?

kiwicopple
1 replies
9h28m

It's more of an "accessibility layer" on top of S3 or any other S3-compatible backend (which means that it also works with MinIO out-of-the-box [0]).

I don't think we'll ever build the underlying storage layer. I'm a big fan of what the Tigris[1] team have built if you're looking for other good s3 alternatives

[0] https://github.com/supabase/storage/blob/master/docker-compo...

[1] Tigris: https://tigrisdata.com

masfuerte
0 replies
2h50m

I'm still confused. Who is paying Amazon? Is it

1. You buy the storage from Amazon so I pay you and don't interact with Amazon at all or

2. I buy storage from Amazon and you provide managed access to it?

ntry01
1 replies
10h52m

This is great news, and I agree with everyone in the thread - Supabase is a great product.

Does this mean that Supabase (via S3 protocol) supports file download streaming using an API now?

As far as I know, it was not achievable before and the only solution was to create a signed URL and stream using HTTP.

fenos
0 replies
10h42m

Yes, absolutely! You can download files as streams and make use of Range requests too.

The good news is that the standard API also supports streaming!
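A sketch of both paths: a ranged GET over the S3 protocol, and a plain ranged HTTP fetch against the standard API (the public-object URL shape is an assumption to check against the docs):

```typescript
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ /* Supabase S3 endpoint + credentials */ });

// First KiB of an object via the S3 protocol, returned as a stream:
const { Body } = await s3.send(
  new GetObjectCommand({ Bucket: "videos", Key: "movie.mp4", Range: "bytes=0-1023" })
);

// Same idea against the standard Storage API with plain HTTP:
const res = await fetch(
  "https://<project-ref>.supabase.co/storage/v1/object/public/videos/movie.mp4",
  { headers: { Range: "bytes=0-1023" } }
);
```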

kaliqt
1 replies
11h33m

I tried to migrate from Firebase once, and it wasn't really straightforward, so I decided against doing it. I think you guys (if you haven't already) should make migration plugins that "just work" a first-class priority, as the number of real revenue-generating production projects on Firebase and similar platforms is much higher. It's a no-brainer that many of them would want to switch if it were safe and simple to do so.

cmollis
0 replies
4h28m

Yes. DuckDB works very well with Parquet scans on S3 right now.

foerster
1 replies
5h52m

no feedback on this in particular, but I love supabase. I use it for several projects and it's been great.

I was hesitant to use triggers and PG functions initially, but after I got my migrations sorted out, it's been pretty awesome.

SOLAR_FIELDS
0 replies
4h20m

Do you manage your functions and triggers through source code? What framework do you use to do that? I like Supabase, but its desire to default to native PG stuff for a lot of that has kind of steered me away from using it for more complex projects where you need to use sprocs to retrieve data and pgTAP to test them, because hiding away business logic in the DB like that is viewed as an anti-pattern in a lot of organizations. I love it for simple CRUD apps though, the kind where the default PostgREST functionality is mostly enough and having to drop into a sproc or build a view is rarely necessary.

I think if there were a tightly integrated framework for managing the state of all these various triggers, views, functions, and sprocs through source, and integrating them into the normal SDLC, it would be a more appealing sell for complex projects.

WhitneyLand
1 replies
5h38m

Is iOS support a priority for supabase?

mattgreenrocks
0 replies
4h11m

Just commenting to say I really appreciate your business model. Whereas most businesses actively seek to build moats to maintain competitive advantage and lock people in, actions like this paint a different picture of Supabase. I'll be swapping the API my app uses for Supabase Storage over to the S3 API this weekend, in case I ever need to switch.

My only real qualm at this point is that mapping JS entities using the JS DB API makes it hard to use camelCase field names, due to PG reasons I can't recall. I'm not sure what a fix for that would look like.

Keep up the good work.
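On the camelCase qualm: the likely "PG reason" is that Postgres folds unquoted identifiers to lowercase, so snake_case columns are the path of least resistance. One workaround sketch is per-query aliasing via PostgREST's rename syntax, which supabase-js passes through (column names here are hypothetical):

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

// "alias:column" renames snake_case columns to camelCase in the response.
const { data, error } = await supabase
  .from("posts")
  .select("id, createdAt:created_at, authorName:author_name");
// data rows arrive as [{ id, createdAt, authorName }, ...]
```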

brap
6 replies
6h2m

Always thought it’s kind of odd how the proprietary API of AWS S3 became sort of the de-facto industry standard

mousetree
1 replies
5h33m

Because that's where most of the industry store their data.

brap
0 replies
5h28m

Yeah I understand how it came to be, it’s just kind of an uncommon situation

bdcravens
1 replies
2h40m

S3 is one of the original AWS services (SQS predates it), and has been around for 18 years.

The idea of a proprietary API becoming the de facto industry standard isn't uncommon. The same thing happened with Microsoft's XMLHttpRequest.

crubier
0 replies
1h0m

Also, the S3 API is simple and makes sense, no need to reinvent something different just for the pleasure of it

moduspol
0 replies
5h20m

Yeah--though I guess kudos to AWS for not being litigious about it.

garbanz0
0 replies
3h59m

The same thing seems to be happening with the OpenAI API.

spacebanana7
5 replies
7h42m

I wish supabase had more default integrations with CDNs, transactional email services and domain registrars.

I'd happily pay a 50% markup for the sake of having everything in one place.

spacebanana7
2 replies
6h1m

Thanks for the link about integrations! I wasn't aware of the Resend one for transactional email.

It'd be nice to have an integration with a domain registrar, like Gandi.net or Namecheap. Ideally with the cost coming through as an item on my Supabase bill.

zenorocha
0 replies
3h46m

Resend founder here - feel free to try out the integration with Supabase and let me know if you have any questions

kiwicopple
0 replies
5h11m

Thanks for the recommendations - I’ll see what this would involve

tootie
0 replies
4h18m

At the same time, I worry about them drifting too far from their core mission. I think Vercel and Netlify kind of went that way, and when you look at their suite of features, you just have to ask why you would not just use AWS directly.

denysvitali
3 replies
11h13m

Friendly reminder that Supabase is really cool, and if you haven't tried it out you should do it (everything can be self hosted and they have generous free tiers!)

isoprophlex
1 replies
11h5m

Plus, with all the inflated VC-money-fueled hype around vector databases, they seem to have the only offering in this space that actually makes sense to me. With them you can store your embeddings close to all the rest of your data, in a single Postgres DB.

kiwicopple
0 replies
11h8m

thanks for taking time to make a comment. it's nice to get some kind words between the feedback (which we also like)

sgt
2 replies
8h5m

Can Supabase host static content yet (in a decent way)?

fenos
1 replies
7h56m

We don’t support static website hosting just yet - might happen in the future :)

kiwicopple
0 replies
6h47m

just a small elaboration:

You can serve HTML content if you have a custom domain enabled. We don't plan to do anything beyond that - there are already some great platforms for website hosting, and we have enough problems to solve with Postgres

devjume
2 replies
9h14m

This is great news. Now I can utilize any CDN provider that supports S3, like bunny.net [1], which has image optimization just like Supabase does, but with better pricing and features.

I have been developing with Supabase for the past two months. I would say there are still some rough corners in general and some basic features missing. For example, Supabase Storage has no direct support for metadata [2][3].

Overall I like the launch week and the development they are doing. But more attention to basic features and little details is needed, because implementing workarounds for basic stuff is not ideal.

[1] https://bunny.net/ [2] https://github.com/orgs/supabase/discussions/5479 [3] https://github.com/supabase/storage/issues/439

kiwicopple
1 replies
8h55m

> I can utilize any CDN provider that supports S3. Like bunny.net

Bunny is a great product. I'm glad this release makes that possible for you and I imagine this was one of the reasons the rest of the community wanted it too

> But more attention to basic features and little details

This is what we spend most of our time doing, but you won't hear about it because those things aren't HN-worthy.

> no direct support for metadata

Fabrizio tells me this is next on the list. I understand it's frustrating, but there is a workaround - store metadata in the postgres database (I know, not ideal but still usable). We're getting through requests as fast as we can.
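A sketch of that workaround: a companion Postgres row per object, written right after the upload, queryable with SQL and protectable with RLS like any other table. Table and column names are hypothetical:

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

async function uploadWithMetadata(
  path: string,
  body: Blob,
  meta: Record<string, string>
) {
  const { error: upErr } = await supabase.storage.from("assets").upload(path, body);
  if (upErr) throw upErr;
  // Companion metadata row keyed by bucket + path.
  const { error: metaErr } = await supabase
    .from("object_metadata")
    .insert({ bucket: "assets", path, meta });
  if (metaErr) throw metaErr;
}
```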

fenos
0 replies
7h49m

This is indeed at the very top of the list :)

Rapzid
2 replies
7h31m

I like to Lob my BLOBs into PG's storage. You need that 1-2TB of RDS storage for the IOPS anyway; might as well fill it up.

Large object crew, who's with me?!

kiwicopple
0 replies
7h16m

that's not how this works. files are stored in s3, metadata in postgres

dymk
0 replies
3h29m

38TB of large objects stored in Postgres right here

yoavm
1 replies
9h14m

This looks great! How easy is it to self host Supabase? Is it more like "we're open-source, but good luck getting this deployed!", or can someone really build on Supabase and if things get a little too expensive it's easy enough to self-host the whole thing and just switch over? I wonder if people are doing that.

kiwicopple
0 replies
9h4m

self-hosting docs are here: https://supabase.com/docs/guides/self-hosting/docker

And a 5-min demo video with Digital Ocean: https://www.youtube.com/watch?v=FqiQKRKsfZE&embeds_referring...

Anyone with basic server management skills will have no problem self-hosting. Every tool in the Supabase stack [0] is a Docker image and works in isolation. If you just want to use this Storage Engine, it's on Docker Hub (supabase/storage-api). Example with MinIO: https://github.com/supabase/storage/blob/master/docker-compo...

[0] architecture: https://supabase.com/docs/guides/getting-started/architectur...

withinboredom
1 replies
10h34m

Now we just need flutterflow to get off the firebase bandwagon.

kiwicopple
0 replies
9h57m

(Supabase team member) Firebase is an amazing tool for building fast. I want Supabase to be a "tool for everyone", but ultimately giving developers choices between various technologies is a good thing. I think it's great that FlutterFlow supports both Firebase & Supabase.

I know FlutterFlow's Firebase integration is a bit more polished, so hopefully we can work more closely with the FF team to make our integration more seamless.

stephen37
1 replies
2h50m

Cool to see that Supabase is adding S3 protocol! Nice to see more and more storage solutions available.

At Milvus, we've integrated S3, Parquet, and others to make it possible for developers to use their data no matter what they use.

For those who have used both, how do you find the performance and ease of integration compares between Supabase and other solutions like Milvus that have had these features for some time?

teaearlgraycold
0 replies
2h41m

Just make your own post if you’re selling your product

poxrud
1 replies
4h13m

Do you support S3 event notifications?

jimmySixDOF
1 replies
7h50m

Supabase also announced this week that Oriole (the team, not just the table storage plugin) is joining them, so I guess this is part of the same story. Anyway, it's nice timing: I was thinking about a hookup to Cloudflare R2 for something, and this may be the way.

kiwicopple
0 replies
6h58m

Oriole are joining to work on the OrioleDB postgres extension. That's slightly different to this release:

- This: for managing large files in s3 (videos, images, etc).

- Oriole: a postgres extension that's a "drop-in replacement" for the default storage engine

We also hope that the team can help develop Pluggable Storage in Postgres with the rest of the community. From the blog post[0]:

Pluggable Storage gives developers the ability to use different storage engines for different tables within the same database. This system is available in MySQL, which uses the InnoDB as the default storage engine since MySQL 5.5 (replacing MyISAM). Oriole aims to be a drop-in replacement for Postgres' default storage engine and supports similar use-cases with improved performance. Other storage engines, to name a few possibilities, could implement columnar storage for OLAP workloads, highly compressed timeseries storage for event data, or compressed storage for minimizing disk usage.

Tangentially: we have a working prototype for decoupled storage and compute using the Oriole extension (also in the blog post). This stores Postgres data in s3 and there could be some inter-play with this release in the future

[0] https://supabase.com/blog/supabase-aquires-oriole

simonbarker87
0 replies
8h42m

Supabase is great and I’ve used it for a number of projects over the years both with a backend alongside it or direct from the client with RLS.

There are some weird edges (well, really just faff) around auth with the JS library, but if nothing else they are by far the cheapest hosted SQL offering I can find. So for any faff you don't want to deal with, there's an excellent database right there that lets you roll your own (assuming you have a backend server alongside it).

madsbuch
0 replies
10h52m

This is really nice to see! I asked about that feature almost 2 years ago as we wanted to use Supabase for everything. Unfortunately there were no plans back then to support it, so we had to use another provider for object storage.

Congrats on the release!

animeshjain
0 replies
3h12m

Is there any request pricing? (I could not find a mention of it on the pricing page.) It could be quite compelling for some use cases if requests are free.