
Postgres.new: In-browser Postgres with an AI interface

codingwagie
21 replies
1d2h

I really do think that software engineering as we know it is ending. It will take 3-5 years for tools like this to mature. But it will happen, and hard fought skill sets like SQL database design, query design, and maybe even ORMs will become obsolete.

My biggest prediction is that ORMs will not be necessary when LLMs can generate SQL. Low level SQL is a better abstraction for LLMs than ORMs, and as people are removed from the equation, so too will abstractions built to help them craft code.

righthand
10 replies
1d1h

We already had SQL model and code generators well before LLMs. What does adding in random output do to improve that?

ldjkfkdsjnv
9 replies
1d

You haven't been using these tools if you think this.

righthand
7 replies
23h48m

Yes, I don't need to use these tools because I already have code generators. When wondering about config options, I use documentation or a search engine. It's cute to put them together in a single UI but it doesn't make these tools inherently more intelligent. It just saves me a few alt+tabs.

An LLM is just taking prefabbed templates and swapping the possibilities in the answer for a statistically relevant solution. My code generator outputs a prefabbed template with a deterministic solution, no statistical guessing required.

codingwagie
5 replies
23h30m

You don't understand how powerful LLMs are. Go use Claude Sonnet 3.5. Paste in 1000 lines of code, and describe a code change you want to make. Iterate on the solution it gives. Do this for a week.

righthand
4 replies
23h27m

If I did that I would waste so much time. I know what code is in my codebase. Maybe if I was a novice this would be effective to help me learn it. Is the point of the exercise to wow myself for a week that an LLM can spit out solutions?

codingwagie
3 replies
23h19m

No, it's that it will code a day's worth of work in a few minutes.

threecheese
1 replies
17h21m

10x LoC is going to require more automation to manage the sheer mass of it, which is more tools/money/layers of abstraction. AI coders need AI testers and AI peer reviewers, and need to iterate over and over to compensate for incorrectness to produce a working feature. That sounds hellishly inefficient. (But all it has to be is cheaper, I suppose.)

Kiro
0 replies
13h30m

You're speaking theoretically but we're already using it like this and it's not hellish or inefficient, or I wouldn't use it. Granted, it fits certain tasks better than others but when it does it's a massive relief and I can't imagine going back.

righthand
0 replies
20h24m

Wow!

Kiro
0 replies
20h33m

What kind of code generators are you talking about? The ones I have are just templates with macros for scaffolding boilerplate, but they are not even remotely comparable to how I use LLMs and definitely not a substitute.

claytongulick
0 replies
21h14m

I have, and I (mostly) agree with GP's point.

The utility of LLMs with code generation varies widely with the problem domain and the amount of experience the developer has.

simonw
3 replies
1d1h

Giving everyone a smartphone with a great video camera built in didn't obsolete the field of cinematography. I don't think giving everyone tools to help them build software will obsolete software engineering.

jedberg
2 replies
1d1h

Are you sure? Most of the popular videos today do not have what one would call great cinematography, but it doesn't seem to matter. No one cares. Sure, movies still use cinematographers, but movie watching time is getting eaten up with Instagram/TikTok, where cinematography doesn't matter.

I fear applications will suffer the same fate. "Good enough" will take over "well-architected".

astockwell
1 replies
23h38m

I think you are 100% spot-on. Good enough has always been fine for the vast majority of people and the vast majority of use-cases.

Couple this with decreasing costs of storage (and ideally compute), and it doesn't matter if the data model is garbage, people can still get something workable that's better than the awful Excel files they curate now. It will still make errors, but eventually fewer than their spreadsheets.

sgarland
0 replies
14h14m

it doesn't matter if the data model is garbage

There is no "good enough" for data modeling. There is correct, and there is "this works, but it has latent bugs that will eventually surface." You either have referential integrity, or you don't.
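To make that concrete, a minimal sketch (table and column names hypothetical): with the constraint, bad data is rejected up front; without it, the orphaned row is exactly the kind of latent bug that surfaces later.

```sql
CREATE TABLE customers (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY
);

CREATE TABLE orders (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    customer_id bigint NOT NULL REFERENCES customers (id)
);

-- With referential integrity this fails immediately
-- instead of silently creating an orphaned row:
INSERT INTO orders (customer_id) VALUES (9999);
-- ERROR: insert or update on table "orders" violates foreign key constraint
```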

from-nibly
2 replies
1d1h

LLMs don't have the context to make good decisions though. You need all of those hard-fought skills to make those decisions. And people are the only ones who can have enough context to actually make decisions.

Not only that, but AI is way more expensive than we think. We're currently in a hype bubble funded by last ditch effort VC money. When that money runs out, and it will eventually, AI is going to get WAY more expensive.

Kiro
1 replies
1d1h

I personally think LLMs make much better decisions than me. I often have a design in mind, and then when I prompt Claude it gives me a much better one, and also takes a lot of edge cases into account that I didn't even think of. Maybe I'm a useless programmer but I'm sure I'm not the only one.

ldjkfkdsjnv
0 replies
1d

Even when I write working code, I prompt Claude, and it adds a bunch of stuff I would never have added. It astonishes me how good it is.

imtringued
0 replies
1d1h

How does that make any of the skills obsolete? If anything, it makes them even more important.

In the 20th century you got away with knowing the syntax and hacking away. Now you really need to have a deep understanding of relational algebra, since the LLM is doing the typing for you.

adhamsalama
0 replies
14h33m

It will take 3-5 years for tools like this to mature. But it will happen, and hard fought skill sets like SQL database design, query design, and maybe even ORMs will become obsolete.

Until a query becomes a bottleneck and no one knows why because no one knows how databases work anymore.

Nijikokun
0 replies
21h46m

Reducing barrier to entry is not a bad thing

AlexErrant
12 replies
1d2h

Clicking "New database" doesn't do anything for me...? No changes in the UI, and no messages in the console. Admittedly I'm not signed in with GitHub, but isn't that only for the AI thing (which I really don't want to use)?

Edit: Okay, reading kiwicopple's comment makes it clearer that ChatGPT is non-optional. I'm... not enthused by this. Why would you take something that's local-first and gatekeep it with something as useless as an AI-for-your-db?

https://pglite.dev/repl/ is available as a more barebones browser-pglite playground.

kiwicopple
8 replies
1d1h

We tested several models, and GPT-4o was the most accurate and made the most sense for our initial launch.

That said: it's 100% our intention to add local models. This is just our v1.

vrmiguel
3 replies
21h41m

I think the point is that many of us would like this without any AI in it, just a simpler Postgres playground in the browser, much like the Rust or Go playgrounds.

cheema33
2 replies
21h33m

just a simpler Postgres playground in the browser...

Then you want pglite. This project uses it and provides a link to it.

vrmiguel
1 replies
21h12m

I obviously saw that since it's linked in the very first message in the chain of messages I responded to.

pglite.dev/repl does not have the same level of visualizations as postgres.new.

What I 'want' is exactly what I described in my previous message, postgres.new but without the required LLM integration, the same sentiment others had in this thread.

auspiv
1 replies
1d1h

The point is that the verbiage ("To prevent abuse we ask you to sign in before chatting with AI.") implies that only the AI won't work if you don't sign in, not the entire product/site.

kiwicopple
0 replies
1d1h

good feedback, thanks

we'll change the verbiage for now, and look at ways we can provide a 100% local experience without logins

AlexErrant
0 replies
1d1h

Yeah, perhaps I came in with the wrong expectations. The UI/URL made me expect a Postgres playground, when instead I got this. Perhaps Postgres.ai?

Also... the page title is "Postgres Sandbox". This is, at best, misleading.

jeanlucas
0 replies
1d

Clicking "New database" doesn't do anything for me...?

You need to start typing after clicking on the button. It needs some UX rethinking, but there's no bug; just start typing in the chat.

foragerdev
0 replies
23h37m

The design does not communicate this well; you need to sign in with your GitHub account to create a new database or even type into the input field. In fact it says "To prevent abuse we ask you to sign in before chatting with AI." But it never asks; you have to figure that out yourself. Why show the "New database" button and input field before the user signs in?

AlexErrant
0 replies
19h14m

(So uh, just to clarify for posterity, when I originally made this comment it was just a link to https://postgres.new/ without any description, and kiwicopple's comment was just a link to another comment of his on a related post. Obviously now there are more words, everywhere, explaining everything.)

(I still hate LLMs in my DB. I KNOW SQL LET ME WRITE IT.)

filleokus
7 replies
1d1h

Please connect from a laptop or desktop to use postgres.new.

There's nothing wrong using Webkit / Safari on your laptop or desktop. There are dozens of us, DOZENS!

kiwicopple
3 replies
1d1h

fwiw this will show for 2 reasons:

1/ you're using a small window (even on Chrome/FF). We don't support mobile yet, so if you're seeing this on a desktop just expand the window

2/ you're on safari, which doesn't have support for OPFS yet: https://github.com/electric-sql/pglite/pull/130

ezfe
1 replies
1d1h

The website works fine for me on Safari. Had it generate a 3-table schema no issues.

kiwicopple
0 replies
1d

my bad - it has been a crazy week with lots of messages flying around. You're right about safari

lolinder
0 replies
1d

It would be worth spending a few minutes improving this notice. Give more context for what the heck this is, and distinguish between "your screen is too small" and "you're using a browser that doesn't have the API we need".

mdasen
0 replies
1d

There are way more than dozens. In fact, it looks like Safari is (probably) ahead of Firefox on desktop: https://gs.statcounter.com/browser-market-share/desktop/worl... (9.1% Safari, 6.6% Firefox), https://www.similarweb.com/browsers/worldwide/desktop/ (7.6% Safari, 5.7% Firefox), https://radar.cloudflare.com/reports/browser-market-share-20... (scroll to "Market Share by OS" and select "Desktop"; Cloudflare is the one that shows Safari behind Firefox at 6.6% marketshare compared to Firefox at 7.2% - but also 39% of Mac users are on Safari, a highly valuable demographic given the higher-income skew of Mac users as well as the fact that a third of professional developers are on Mac (and only 47% on Windows according to Stack Overflow's survey)).

ezfe
0 replies
1d1h

It's working fine from Safari for me

andybak
0 replies
1d1h

Why not just display a warning and let me try anyway? How sure are they that they're filtering out only the devices that can't run it successfully? Why risk denying it unnecessarily?

gregnr
5 replies
1d1h

Supabase engineer here. This was a lot of fun to build with the Electric team. There were a lot of technical hurdles to jump over - I’m happy to cover any questions. We’ll continue shipping throughout the week. I see a lot of feedback in this thread already which we’ll get started with.

neoecos
3 replies
21h56m

Is there a way to query the local instance without using the AI input ?

kiwicopple
2 replies
21h28m

at this stage there is not, but you can just write SQL and the LLM is smart enough to run it directly on the database.

I can see some situations where you might want direct access to Postgres, so we'll jot that down as something to look at too.

neoecos
1 replies
20h56m

I'm using it, but keep getting

       POST https://postgres.new/api/chat 500 (Internal Server Error)
I've tried reloading, creating a new project, and going back; nothing works. db id oy6g6g6o5qbt78fo

kiwicopple
0 replies
20h40m

if your session is completely broken you might need to clear your localstorage (in the browser devtools)

I'll flag this with the team but we can't actually access your "db id oy6g6g6o5qbt78fo" as it's run directly in your browser and doesn't touch our servers at all

AlexErrant
0 replies
1d

Hey my guy, just letting you know that I'm pretty excited about the whole PGLite project (sorry if my comments above sounded overly negative >_>, managing expectations and all).

samwillis
2 replies
1d4h

Hey Sam from Electric here, I work on PGlite

What the team at Supabase have built with postgres.new is incredible; it has been a lot of fun to work with them on it over the last month or so. It's fed directly back into the development of PGlite and helped us iron out some bugs and finish the required feature list.

SQL databases and LLMs go together incredibly well: the structured data and schema enable an LLM to infer so much context about the database and the underlying data. This exploration into a UI over this pairing is going to be incredibly influential; it opens up what has traditionally been a complex technical problem to everyone. That's the true power of LLMs.

I'm not going to go on about PGlite here as postgres.new really deserves the limelight!

localfirst
1 replies
1d

Which mobile browsers can this run on? It seems like it's not working in Safari on iPhone. If it did, that would be a game changer. Any status on whether ElectricSQL can run on mobile devices in general?

also is the UI tool open source? id prefer to run it locally

edit: this is an incredible tool. its going to eat a lot of backend engineers billing hours.

suggestion: add a copy button for the generated SQLs

kiwicopple
0 replies
23h25m

Everything is open source with the exception of the LLM at the moment, which we'll get to. Here are all the tools:

* postgres-new (https://github.com/supabase-community/postgres-new): The frontend. Apache 2.0

* PGlite (https://github.com/electric-sql/pglite): A WASM build of Postgres. Apache 2.0

* pg-gateway (https://github.com/supabase-community/pg-gateway): Postgres wire protocol for the server-side. MIT

* transformers.js (https://github.com/xenova/transformers.js): Run Transformers directly in your browser

fsiefken
2 replies
22h56m

Nice! If it’s WASM it should also work on iOS, so why can’t I connect? Pg.dev works though. I wonder if postgresql wasm can be made to run on A-Shell, even though there is a working SQLite wasm port running on A-Shell, which should be enough

kiwicopple
0 replies
22h39m

we're working on this right now - I don't know if we'll get it out for today, but it will certainly be available this week

bearjaws
0 replies
21h27m

It can be annoying on iOS. For example, OPFS only works inside worker threads, so you may design your app to access OPFS from the app + OPFS from Postgres and realize OPFS isn't working in half your app... Happened to me when I built https://cluttr.ai, which uses SQLite WASM.

GordonS
2 replies
1d

This is so cool, "just because we could", but I am curious about possible use cases for Postgres in the browser?

kiwicopple
0 replies
21h26m

There are a lot of reasons that you might want Postgres in the browser, but one of my favorite possibilities is to use it as a local data store (similar to mobx or redux)

The Electric team have built this extension which will fit that use-case very well: https://pglite.dev/extensions/#live

CodesInChaos
0 replies
1d

Playgrounds like this which don't need server resources.

Adding offline support (PWA) to an application which was primarily designed for use on a server. Though I'm not sure how well that'll work in practice, since the rest of the application will be designed to run on a server as well.

zooq_ai
1 replies
1d1h

what kind of data load are we talking here? I know it gets stored locally. So, is it limited by my local disk size?

How will it perform if I have 1TB of data?

gregnr
0 replies
1d

1TB might be a struggle :) Yes, this is currently limited by both browser storage (IndexedDB) and memory. The reason it is also memory bound is that Emscripten's IndexedDB VFS is actually implemented as an in-memory VFS which syncs files to IndexedDB every once in a while (due to an async limitation).

PGlite is working on an OPFS backend, which will likely increase the max data size quite a bit.

xyst
1 replies
1d

Normally I'm not a fan of the "AI/LLM + {existing workflow}" headlines that companies have been pumping out, but honestly this might be a decent use case. In my experience, an LLM is pretty good at generating on-the-fly data for inserting into databases. So instead of hand rolling or building a query to insert data, it would be easier to query the LLM.

Overall, looks pretty good. I’m on mobile but stumbled upon the blog post in comments.

Ask: I understand why it won't run on mobile, but at least give mobile a synopsis of what it's supposed to do. I would have ignored this if it weren't for the luck of seeing your comment.

kiwicopple
0 replies
23h44m

at least give mobile a synopsis of what it's supposed to do

agreed - we rushed this one a bit. We're working on some updates now for mobile users.

edit: we've shipped some changes for mobile which embed the video and link to the blog post so that it's a bit clearer. Thanks for the feedback

solarkraft
1 replies
20h22m

It would be cool to have this without the AI stuff.

Also, does the WASM build perhaps enable using Postgres as an embedded DB, where typically SQLite would be used?

remus
1 replies
1d1h

From a quick play this is pretty cool! The design choices it made all seemed pretty sensible to me. For example I asked it to create a schema to support a chat application and it came up with something that works pretty well. I then asked it to modify the schema to support various different bits of functionality (e.g. adding support for images in messages, and soft deleting participants from chats) and it was able to handle all those. In addition it was suggesting sensible constraints (foreign keys, nullable, unique) where you'd expect them.

Good work
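For reference, a schema along the lines of what it produced might look like this; this is my reconstruction of the features described (images in messages, soft-deleted participants, sensible constraints), not the tool's actual output:

```sql
CREATE TABLE chats (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE chat_participants (
    chat_id    bigint NOT NULL REFERENCES chats (id),
    user_id    bigint NOT NULL,
    deleted_at timestamptz,      -- soft delete: NULL means still active
    PRIMARY KEY (chat_id, user_id)
);

CREATE TABLE messages (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    chat_id    bigint NOT NULL REFERENCES chats (id),
    sender_id  bigint NOT NULL,
    body       text,
    image_url  text,             -- support for images in messages
    created_at timestamptz NOT NULL DEFAULT now(),
    CHECK (body IS NOT NULL OR image_url IS NOT NULL)
);
```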

breadwinner
0 replies
21h15m

Pretty much everything you described is ChatGPT features.

netcraft
1 replies
1d

this looks awesome. Is it possible to create a database and load it with data and then share it with others? Would be amazing for teaching SQL, but also just many data collaboration tasks

jgoux
0 replies
23h30m

Yes, we are working on it. It should be available very soon!

kolbe
1 replies
1d1h

Basic question: is there a service out there where I can easily link my database to an LLM to do this exact same type of analysis, except on one of my own Postgres DBs instead of one backed by PGlite? My org has several non-technical people who would greatly benefit from being able to interact with our DB via an LLM rather than via SQL 101 queries. The PostgreSQL Explorer extension for VS Code helps some, but doesn't quite make it as seamless as this.

breadwinner
0 replies
23h34m

If you want to use an LLM to type queries in English, Visual DB can do it: https://visualdb.com/

ignoramous
1 replies
17h32m

From the blog's section on semantic search:

  Under the hood, we store the embeddings in a meta.embeddings table then pass back to AI the resulting IDs for each embedding. We do this because embedding vectors are big, and sending these back and forth to the model is not only expensive, but also error prone. Instead, the language model is aware of the meta.embeddings table and simply subqueries to it when it needs access to an embedding.
A couple Qs if anyone knows:

1. What does it mean to "pass back to AI the resulting IDs for each embedding" but not the table rows corresponding to the matched vectors?

2. Does "the language model is aware of the meta.embeddings table" mean Supabase has deployed a fine-tuned GPT-4o?

kiwicopple
0 replies
7h0m

1. You can see the relevant code block here[0]. To summarise, instead of storing the embeddings "next to" the data that you provide, we create a table that can be referenced. This is because often we need to send the LLM some data from the table (like creating a chart). If the LLM sees a reference to `meta.embeddings` then it knows it can "fetch" that data later if it's needed (for RAG etc)

2. We haven't fine-tuned, it's all just detailed prompts which you can find in the code. We might need to fine tune later for local-only models, but for now GPT-4o is solid.

[0] https://github.com/supabase-community/postgres-new/blob/4d6c...
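For illustration, the kind of subquery the model can then write instead of handling raw vectors, as a hypothetical sketch using pgvector's distance operator (table and column names assumed, not taken from the actual code):

```sql
-- Find the five documents closest to a stored embedding,
-- without the raw vectors ever leaving the database.
SELECT d.id, d.title
FROM documents d
JOIN meta.embeddings e ON e.id = d.embedding_id
ORDER BY e.embedding <-> (SELECT embedding FROM meta.embeddings WHERE id = 42)
LIMIT 5;
```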

dvasdekis
1 replies
17h56m

This tool is amazing for us. There are so many pieces to this; the capability is a big step forward for architecting databases.

Rudimentary compared to what you've done, but is it possible to take an existing database schema, developed either in the Supabase migrations style or another tool like Flyway, and draw the diagram? That alone is huge for us in an enterprise setting, and would allow us to communicate the structure of the database to business folks (especially large databases). How does the tool build that from migrations currently?

kiwicopple
0 replies
7h6m

take an existing database schema, developed either in the Supabase migrations ... and draw the diagram?

It might not be so obvious, but this diagram tool is already built into the Supabase Dashboard (under "Database")

I can see some value in providing this as a generic service for any PG database - I'll look into it - but I know that Dbeaver already has something like this built in (and I think pgadmin too)

TeeWEE
1 replies
14h9m

It is a neat tech demo but it clearly shows the limits of AI:

- I got it to generate invalid SQL, resulting in errors
- It merely generates reasonable SQL; in my case it generated two disjoint sets of tables
- In practice you have to review all the code
- It can point you in the wrong direction. Novel systems often have something smart/abstract in them; this system creates mostly straightforward, simple systems. That's not where the value is.

All in all, it’s not worth it to me. Writing code myself is easier than having to review LLM code

Within our organization we have forbidden fully LLM-generated merge requests, because more often than not the code was suboptimal and had sneaky bugs/mistakes.

I'm not saying these can't be overcome, but not with the current LLM design. They mostly generate stuff they have seen and are bad at truly new stuff.

whalesalad
0 replies
10h10m

I have had tremendous success with using LLMs to generate SQL. In my use, the majority of the time ChatGPT gets things spot-on, even for really sophisticated queries that inspect and aggregate multiple tables into one single output.

I do agree it is not perfect - and things need to be reviewed - but I rarely get the sort of gobbledygook that "resembles" valid SQL and is in fact meaningless.
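As a sketch of the kind of multi-table aggregate I mean (schema entirely hypothetical):

```sql
-- Revenue and order counts per customer across three joined tables.
SELECT c.name,
       count(DISTINCT o.id)             AS orders,
       sum(oi.quantity * oi.unit_price) AS revenue
FROM customers c
JOIN orders o       ON o.customer_id = c.id
JOIN order_items oi ON oi.order_id = o.id
GROUP BY c.name
ORDER BY revenue DESC;
```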

xilis
0 replies
23h12m

create a partitioned table for events with a unique constraint on the event id

infinite error loop
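For what it's worth, this is likely hitting a real Postgres restriction rather than pure model confusion: a unique constraint on a partitioned table must include every partition key column, so DDL with a unique constraint on the event id alone can never succeed, and the model keeps retrying. A version Postgres accepts, as a sketch assuming range partitioning by timestamp:

```sql
CREATE TABLE events (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    created_at timestamptz NOT NULL,
    payload    jsonb,
    -- The unique constraint must include the partition key,
    -- so UNIQUE (id) alone is rejected:
    UNIQUE (id, created_at)
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
```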

surfingdino
0 replies
10h37m

Nothing a decent SQL training course could not teach.

stkni
0 replies
1d1h

This is seriously impressive. I asked it to create 3 different databases:

- a customer orders database with products with a timeseries of prices, and multiple fulfilments per order
- an issue tracking system with a reflexive user/manager database
- a family relationship model

In each case I got it to put in sample data and then asked postgres.new to answer some questions about the data it had inserted. I thought the family model would trip it up, especially when I told it to put in cousins and uncles. But no, it was pretty bang on.

The only thing it didn't quite manage is that some of the relationships are reciprocal (i.e. my sibling also has me as a sibling).

I asked postgres.new to review the data and it fixed some, and I asked it to check again and it fixed the rest. This is a very useful tool that I can see myself using!
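One conventional fix for the reciprocal-relationship gap is to store each pair once in canonical order and expand it with a view, e.g. (table and column names hypothetical):

```sql
CREATE TABLE siblings (
    person_a bigint NOT NULL,
    person_b bigint NOT NULL,
    PRIMARY KEY (person_a, person_b),
    CHECK (person_a < person_b)   -- one canonical row per pair
);

-- Expand to both directions so "my sibling also has me as a sibling":
CREATE VIEW sibling_pairs AS
SELECT person_a AS person, person_b AS sibling FROM siblings
UNION ALL
SELECT person_b, person_a FROM siblings;
```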

saisrirampur
0 replies
18h41m

Super cool. Great work team! Love the deploy feature to deploy the entire playground to the cloud and get a connection string. Helps devs get started with Postgres projects very quickly.

oulipo
0 replies
20h0m

Very cool! What about projects/integrations with

- duckdb, which also supports WASM
- UWData's Mosaic (https://github.com/uwdata/mosaic), which supports real-time plots

would be really nice to have a kind of "drop-in" page that we could add to any intranet where people could just retrieve an export of some database, and plot it with your code

mrcwinn
0 replies
23h56m

It’s asking me to install Flash. Any ideas?

lolpanda
0 replies
1d

I like the UI. The chat interface is a good fit for this task. How do you prevent or should you prevent users from entering "write me a fib(n) in python" in the chat? To me the chat is solely designed for table creation directives.

Exuma
0 replies
22h24m

Immediately closed it when you require me to connect to GitHub just to run a query without the use of AI.