I’m familiar with this project - the creator is a friend. I’ll try to get him on here to answer questions.
He’s a seasoned database architect. With SQLSync he’s made a way for frontend developers to query and update a remote database as if it were located right in the browser. Because it basically is. The power of WASM makes it possible to ship a whole SQLite database to the browser. The magic is in how it syncs changes from multiple clients with a clever but simple reactive algorithm.
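To give a flavor of what “SQLite in the browser” means in practice, here is a minimal stand-in using the generic sql.js WASM build (SQLSync’s own API and sync layer are different; this only shows the “full SQL database living in the page” part, and the table and data are invented for the example):

```ts
// sql.js is SQLite compiled to WASM; the whole database lives in the tab's memory.
import initSqlJs from 'sql.js';

const SQL = await initSqlJs({
  // sql.js needs to know where to load its .wasm binary from
  locateFile: (file) => `https://sql.js.org/dist/${file}`,
});

const db = new SQL.Database(); // an in-memory SQLite database, right in the browser

db.run('CREATE TABLE todos (id INTEGER PRIMARY KEY, title TEXT, done INTEGER)');
db.run('INSERT INTO todos (title, done) VALUES (?, ?)', ['write a comment', 0]);

// Query it like any relational database, with no network round trip involved.
const [result] = db.exec('SELECT title FROM todos WHERE done = 0');
console.log(result.values); // [['write a comment']]
```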
It’s a radical attack on the whole problem. Much of our work as developers is about syncing data. When you start looking at React and REST APIs as a kind of sync procedure, this approach can open a lot of new possibilities. You don’t have to write a weird bespoke database of trees of objects fetched and cached from the API any more. You can just update and query it locally, with all the power of a relational database.
I find that moving the full query system into the frontend is where most frontend devs really want to be. They want a full-power query system for their data instead of continuous rounds of re-inventing the transport layer: REST, GraphQL, *RPC, etc.
It's hard to adopt such a system in most traditional web shops with their specialized backend and frontend teams. You're pulling out the database, backend, transport, and auth layers and replacing them with this single integrated system. Most system architects grew up on the backend, so they are generally pretty ignorant of this issue. As it touches both sides extensively, you're probably not fitting this into an existing system, which leaves only greenfield development. Finally, your backend is not an AWS or Azure service, nor is it Lambda-friendly. All of this means that most architect types I talk to will never touch it.
This style of system mostly already exists with older tech, CouchDB+PouchDB, which works pretty well for some things. The downsides are that the query system isn't really ideal and the auth and data-scoping system is pretty foreign to most people. The easiest model to work with is when the data is totally owned by a single user, and then you use the out-of-the-box database-per-user model. High data segmentation with CRDTs removes a lot of conflict issues.
It has scaling issues, though: CouchDB has really high CPU requirements once you're connecting 10k to 100k users. The tech is long in the tooth, though it is still maintained. On the system design side it gets really complicated when you start sharing data between users, which makes it rather unsuitable there, as you're just moving the complexity around rather than solving it.
This approach seems to hit the same target, though it will likely have similar scaling issues.
Looking forward to seeing the evolution of the system. Looks like a first step into the world.
Yay, we're moving back to fat clients! What has been is what will be, and what was done is what will be done, there is nothing new under the sun.
I'm on the fat client train with my company and I nudge my clients that way if they're open. It's just a great way to build a system.
Great until you have to support n versions on m platforms and half your customers are enterprisey and stay on a 6-year-old version from before your last LTS release, on a now-unsupported platform, because they built a core part of their business processes on a misfeature.
Yes but targeting WASM and SQLite minimizes that pain quite a bit.
Remember when targeting Macromedia Flash was going to solve the web compatibility and interactivity conundrum?
Yeah? It was targeted for destruction by Apple because it was buggy and insecure, not because it wasn't delivering.
Don't forget horrendous mobile performance (battery drain)!
Don't forget proprietary.
And the lack of accessibility. You generally couldn't even copy and paste text out of it.
This sounds like the set up for a "No? Me neither." punchline. Certainly one of the features of Flash is that it gave you fairly good consistency across computers, but honestly my perception of Flash wasn't that it was going to solve some grand web problem, but more "oooh look, shiny rollover buttons!" and "ooh look, super obnoxious advertisement that forces people to pay attention to it!"
I've worked on various forms of "legacy code" for most of my career. As long as the economics line up and the customer is willing to pay for the required support, then it's a fine business avenue.
If the economics don't line up, then you have to pull the plug and set them adrift, which is much easier and more secure with a fat client that runs without a server than with, say, a complex cloud system.
Ohh can't wait for the inevitable next step of dropping the "web" part of web assembly and doing, ya know, native fat clients again.
I mostly work on lean, speculative business software, which means native development that isn't cross-platform is simply not economical to do. I generally need to be able to hit Windows, iOS, Android, and macOS square on with one code base.
A "native" electron or capacitor distribution system is a fine extension of a local-first web client. And an advantage of building fat clients generally is they lend themselves to such distribution models much easier than say, htmx or hotwire.
Native fat clients have had their benefits and lots of people still prefer them, but they always had the drawback of manual data management and installs. Being able to leverage any device you own with a cloud-synced local-first client really gives you the best of both worlds.
But not all software fits readily within this model.
Why not Java?
Java fails on multiple points.
First, my list failed to include web because of the context. Web is, by far, the largest and most important platform. Even if I'm building only native installers for a project, I need to be able to build web projects with the same tools to make this work.
Java also fails "one code base" requirement as desktop and mobile are very different. The poor iOS support is going to cause it to fail the "square on" requirement as well.
No on Java.
Excel is a great fat client. Writing a sync in VBA is not, but some of the pieces are already there.
Those pesky backends are so annoying, so why don't we just put a backend on every client?
Schema and data migrations are too tricky, so why not have every client do it.
How is this approach meant to handle data visibility and access control? Often a large part of a backend is materializing raw data into a form that the active user is allowed to view.
So if the user owns all their own data, their "data view" is their data set. A To-Do system, a personal finance app, any kind of note-taking or personal record keeping fits this model.
You create a database per user, and the auth and sync are all self-contained within that database (rough sketch below). This system is multi-master, which means that any change on a client or on the server will be replicated to every other replica. There is no "authority" which trumps the others. The server is simply a central hub that requires the right authentication to allow the sync process to happen.
When you want to create a set of data that crosses user boundaries, it gets complicated. It's possible to do, but you're not on the easy train anymore.
Creating a system that's both easy to use and scopes the right data view out of the system-wide tables and rows we usually think of as a database is neither the CouchDB nor the SQLSync model.
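For concreteness, the database-per-user model looks roughly like this with PouchDB syncing against a central CouchDB-style hub (the server URL, naming convention, and document are placeholders; auth setup will vary):

```ts
import PouchDB from 'pouchdb';

const userId = 'alice';                 // established by whatever auth you use
const local = new PouchDB('todos');     // lives in the browser (IndexedDB under the hood)
const remote = new PouchDB(`https://couch.example.com/userdb-${userId}`);

// Multi-master, live, bidirectional replication: any client or the server can write,
// and changes flow to every other replica whenever it comes online.
local
  .sync(remote, { live: true, retry: true })
  .on('change', (info) => console.log('replicated', info.direction))
  .on('error', (err) => console.error('sync error', err));

// Reads and writes hit the local database; the sync above keeps the hub up to date.
await local.put({ _id: 'todo:1', title: 'write a comment', done: false });
```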
Correct me if I'm wrong: we can avoid the idea of a master for this use case because we suppose that only a single client (also server, I guess) will write at a time?
You’re wrong if clients can be used offline and sync when they come back online.
One user can have multiple clients; this is frequently the case, since many if not most users have both a PC and a phone. Also, once you allow reasonable sharing of the account with family, 5+ connected clients is common.
So it sounds like this excludes most enterprise use cases?
If I'm generalizing: B2C products frequently fit, but not always. B2B products generally don't, but can in some circumstances.
Reminds me of Meteorjs. It would let you sync a subset of your data to the client and then the client could query it any which way it wanted. They called this “Minimongo”.
I've used Meteor. I thought it was a good system. It didn't have offline capability, at least not back when I used it. It really needed to be connected to work. But conceptually, yes, it had a very similar system.
It's not quite shipping the DB to the client, but I like the Supabase/PostgREST approach for replacing everything between the client and SQL server with a single very thin proxy that maps tables and functions to REST calls. Even the auth mechanism is fundamentally just Postgres RLS with signed tokens that you can query against in your RLS policies.
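Concretely, the client side looks something like this with supabase-js (the project URL, key, and table are placeholders; the row filtering lives in Postgres RLS policies keyed off the user's signed JWT, not in application code):

```ts
import { createClient } from '@supabase/supabase-js';

const supabase = createClient('https://your-project.supabase.co', 'public-anon-key');

// Sign in; subsequent requests carry a JWT that RLS policies can inspect
// (e.g. a policy along the lines of: USING (user_id = auth.uid())).
await supabase.auth.signInWithPassword({ email: 'me@example.com', password: '...' });

// This goes straight through PostgREST; only rows the policy allows come back.
const { data, error } = await supabase
  .from('todos')
  .select('id, title, done')
  .eq('done', false);

if (error) throw error;
console.log(data);
```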
Genuinely curious: why not just cache the relevant bits in LocalStorage / SessionStorage? I seem to remember Chrome trying to add a literal SQL database to the browser, but it never panned out and localStorage became king. I don't mean to downplay the usefulness, I just usually opt for what the browser gives me. I'm huge on WASM and what it will do for the browser as it matures more (or grows in features).
FWIW, Web SQL was always fine, but it could never be standardized, because no one was ever going to redo all the work SQLite has done (when every browser already uses SQLite).
https://en.wikipedia.org/wiki/Web_SQL_Database
Firefox fought against Web SQL. Firefox then implemented IndexedDB on top of SQLite in their own browser. Firefox has now largely faded into obscurity.
The issue was that a specific library would be pinned at a specific version for the rest of the history of the web. As good as SQLite is, I hope to hell we're not still stuck using it to handle mining operations in the Oort cloud in 200 years.
This is why WASM is great. Webpages can just ship whatever version of SQLite they want, and/or eventually migrate to something better.
Tbf, the Web SQL standard was not well written, from how I've heard the story told. It was standardized, bug for bug, to a particular version of SQLite, which is not a good way to write a standard.
The important thing is - Firefox has been slowly dying for a decade and SQLite has taken over the world.
There is a literal SQL store in the browser: it's the SQLite WASM port. It's just panning out a little differently.
Which works only on Chrome, IIRC.
Because if this works, it's amazing. Realtime sync with offline support out of the box, without having to develop state management separately on the client and the API; it lives in one place. Those are very hard problems, solved with less development. Will definitely give it a shot.
IndexedDB is even better: it supports a wider variety of data serialization, and it can be queried and versioned.
Good question.
First, to address the main point: why not cache the relevant bits in some kind of local storage? SQLSync plans on doing this, specifically using OPFS for performance (but will have fallbacks to localStorage if needed).
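For anyone unfamiliar with OPFS, worker-side access looks roughly like this (file names and buffer sizes here are just placeholders for illustration, not SQLSync's actual storage code):

```ts
// The synchronous access handle is only available inside a Web Worker.
const root = await navigator.storage.getDirectory();
const handle = await root.getFileHandle('main.db', { create: true });
const file = await handle.createSyncAccessHandle();

const page = new Uint8Array(4096);   // a 4 KiB "page" of database data
file.write(page, { at: 0 });         // write at byte offset 0
file.flush();                        // make sure it reaches disk

const buf = new Uint8Array(4096);
file.read(buf, { at: 0 });           // read it back
file.close();
```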
Second, to address the question of why not use built-in KV stores or browser-side databases: one answer is another question, how do you solve sync?
One approach is using a data model that encodes conflict handling directly, like CRDTs. This approach is easier to put into general kv stores, as syncing requires simply exchanging messages in any order. I find this solution is well suited to unstructured collaboration like text editing, but makes it harder to coordinate centralised changes to the data. Centralised changes are nice when you start introducing authentication, compaction, and upgrades.
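To make the "messages in any order" property concrete, here is a toy CRDT, a grow-only counter, where merge is an element-wise max. This is purely illustrative and not how SQLSync stores data:

```ts
// G-Counter CRDT: one slot per replica, merge = element-wise max.
// Because merge is commutative, associative, and idempotent, replicas can exchange
// state in any order (or repeatedly) and still converge on the same value.
type GCounter = Record<string, number>;

function increment(counter: GCounter, replicaId: string): GCounter {
  return { ...counter, [replicaId]: (counter[replicaId] ?? 0) + 1 };
}

function merge(a: GCounter, b: GCounter): GCounter {
  const merged: GCounter = { ...a };
  for (const [id, count] of Object.entries(b)) {
    merged[id] = Math.max(merged[id] ?? 0, count);
  }
  return merged;
}

function value(counter: GCounter): number {
  return Object.values(counter).reduce((sum, n) => sum + n, 0);
}

// Two replicas increment independently while offline, then sync in either order.
let phone: GCounter = {};
let laptop: GCounter = {};
phone = increment(phone, 'phone');
laptop = increment(increment(laptop, 'laptop'), 'laptop');
console.log(value(merge(phone, laptop))); // 3
console.log(value(merge(laptop, phone))); // 3, same result regardless of order
```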
Another approach is doing something similar to how Git Rebase works. The idea is to let the application state and server state diverge, and then provide an efficient means for the app to periodically reset to the latest server state and replay any unacked mutations. This approach requires the ability to re-run mutations efficiently as well as efficiently track multiple diverging versions of the database state. It's certainly possible to build this model on top of local storage.
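A toy, in-memory version of that rebase loop might look like the sketch below. All the names and shapes are invented for illustration; the real thing operates against SQLite and its storage layer, not an array:

```ts
// Apply mutations optimistically, and when a new server snapshot arrives,
// reset to it and replay whatever the server has not acked yet.
interface Todo { id: string; title: string; done: boolean }
interface Mutation { id: string; apply: (state: Todo[]) => Todo[] }

class RebaseClient {
  private pending: Mutation[] = [];
  private serverState: Todo[] = [];
  private localState: Todo[] = [];

  // Optimistically apply locally and queue the mutation for the server.
  mutate(mutation: Mutation): void {
    this.pending.push(mutation);
    this.localState = mutation.apply(this.localState);
  }

  // The server sends an authoritative snapshot plus the ids of mutations it applied.
  onServerUpdate(snapshot: Todo[], ackedIds: Set<string>): void {
    this.serverState = snapshot;
    this.pending = this.pending.filter((m) => !ackedIds.has(m.id));
    // The "rebase": start from the server state and replay the still-unacked mutations.
    this.localState = this.pending.reduce((state, m) => m.apply(state), this.serverState);
  }

  get state(): Todo[] {
    return this.localState;
  }
}
```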
For SQLSync, I found that by controlling the entirety of SQLite and the underlying storage layer I was able to create a solution that works across platforms and offers a fairly consistent performance profile. The same solution runs in native apps, browser sessions (main thread or workers), and on serverless platforms. One of my goals is to follow the lead of SQLite and keep my solution fairly agnostic to the platform (while providing the requisite hooks for things like durable storage).
That sounds awfully like Couchbase, which allows you to query/update databases that will sync to remote and then back to peers. And you can control the process (auth/business logic) with a server-side JavaScript plugin with ease.
Creator of Couchbase Mobile here — I’m doing a new web-based thing[1] with a similar reactive API. I’m hoping that my encrypted block replication makes it more of a “data anywhere” solution than a “local first” database. But the paradigm of powerful databases in the browser is definitely one I’m glad to see becoming popular.
[1] https://fireproof.storage/
Very exciting! I shall check it out, as I was a fan of your prior project!