Fat chance I'll read any of that with five mouse cursors flying around for no obvious reason.
I love local first, but I struggle with how to monetize truly local-first applications. I know that's not everyone's favorite topic, but I've got bills to pay and payroll to make. Our product is about 80% local-first, with live collaboration and image hosting needing a server. I plan to change that eventually, but I worry that jailbreaking the app in this way will lead to financial troubles.
Obsidian's model seems nice: base app is free, and then payment for the networked portions like sync+publish. However, there's little data available on how well this works and how big of a TAM you need to make it sustainable. Or if it's even possible without an enterprise revenue channel.
For those interested in building robust local-first + collaborative apps, I've been using Yjs for a few years now and have overall really enjoyed it. Multi-master collaboration also poses some stimulating technical and design challenges if you're looking for new frontiers beyond the traditional client-server model.
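For anyone curious what that looks like in practice, here's a minimal sketch in TypeScript (using Yjs's standard v13-style API; the direct function-call "transport" is a stand-in for a real provider like y-websocket or y-webrtc):

    import * as Y from 'yjs'

    // Two independent replicas, e.g. two devices or two browser tabs.
    const docA = new Y.Doc()
    const docB = new Y.Doc()

    // Relay updates both ways. In a real app this goes over a provider
    // (y-websocket, y-webrtc, ...) instead of direct function calls.
    docA.on('update', (update: Uint8Array) => Y.applyUpdate(docB, update))
    docB.on('update', (update: Uint8Array) => Y.applyUpdate(docA, update))

    // Edits made on either replica propagate to the other.
    docA.getText('note').insert(0, 'Hello ')
    docB.getText('note').insert(docB.getText('note').length, 'world')

    // Both replicas converge to the same string, no central server required.
    console.log(docA.getText('note').toString())
    console.log(docB.getText('note').toString())

The interesting design work starts once you layer persistence, awareness (those cursors), and permissions on top of that core.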
Local-first isn’t a good model for projects that need to monetize. I’ve given this a lot of thought as well. I think it only makes practical sense for apps that exist to serve a public benefit.
Local-first isn’t a good model for projects that need to monetize.
It worked all the way through the 80s, 90s, and most of the 00s. It brought Adobe, Ableton and many others to their dominant market positions, despite rampant piracy. It works for a lot of games, still.
What’s changed?
Did it work? The companies were generating far less revenue.
They're probably generating far less revenue now than if they started to kidnap and enslave people for profit. At what revenue does it start "working"?
Humoring your ridiculous comparison, no that’s not true.
Revenue maximization given mission constraints is usually the goal. SaaS is simpler, more economic and generally aligns incentives between parties better.
There are many things where a one time cost makes sense, though.
So you're saying they've reached peak revenue? The best they can do is stabilize? And that's why we've reached a point that it wasn't working but now it is?
When did I say they’ve reached peak revenue?
"need to monetize" != Need to enshittify and squeeze every drop of blood from the stone.
There's a large spectrum of extreme profits before ever getting to that point.
Your comparison is disingenuous.
In any case, folks are free to try local apps for their business and post results.
A bunch of things changed fundamentally in ways that will never go back. As a consequence shrink-wrap software licensing is dead and will never return, sorry :( That happened because:
1. Users expect apps to keep working in the face of platform regressions and policy changes, for "free" (no unexpected extra charges). In the 90s Microsoft went through heroics to make buggy apps work on Windows 95 because if someone bought Windows 95 and an app broke, the user returned Windows to the store and got a refund. In 2024 if Microsoft or Apple ship a buggy OS update and it breaks your app (which they do), the user returns your app to the store and gets a refund. They don't see the platforms changing around them, they just know that your thing worked yesterday, today it doesn't, it's your fault, fix it.
This lack of 100% solid backwards compatibility means it's not feasible to sell a program once and let users keep it forever. They won't keep up their end of the bargain and, by the way, governments will always take their side and not yours so don't think you can just wash your hands of such problems via a EULA. Open source apps can get away with this because there was no purchase to begin with, commercial apps cannot.
2. App Stores happened. That means users can leave reviews and star ratings. In the 90s if you sold an app that happened to crash regularly if you pressed a certain combination of buttons, well, maybe three months later a magazine reviewer would notice or maybe they wouldn't. At any rate it hardly mattered unless you had some kind of unusually high level of bugginess and that fact spread through word of mouth.
In 2024 if you ship a version with even very obscure bugs users affected by it will immediately start leaving negative reviews. You have to fix this or else your app will become unpopular and you'll lose money, but again, this means a continuous stream of updates which everyone gets, which requires ongoing revenue to pay for it.
3. Collaboration happened. In the 90s networks were weak and most computer users worked alone. If Fred bought Office 2000 and you were still using Word 95, then it was expected that Fred would just know this somehow and also know how to use the "Save as" dialog box to save in Word 95 format. It was also expected that Fred's employer would provide a synchronous LAN fileshare using SMB where Fred could store a file. This was OK because remote work wasn't really a thing, and people tended to work in the same physical building. Because apps did releases so rarely and file writes were all synchronous/SMB supported file locks, this complexity was just about tractable albeit only as long as users had a high tolerance for computer nonsense. In the modern era collaboration is expected to Just Work without users having to phone up their mates to find out what app version they're using. That means everyone has to be on the same version, which means everyone has to get all the updates all the time, which means they all benefit from ongoing work by the company, which means a subscription. Plus we expect to be able to work from anywhere in the world, and from devices that don't support synchronous networked filesystems like iPads which means a custom per-app server farm. Those costs also matter here, see my other comment for why the Dropbox trick doesn't really work that well.
You ask why games don't seem to be affected by this. They are affected by this and they adopt a few strategies:
• Primarily target games consoles. Way more stable platforms with way fewer regressions.
• Adopt subscription payments too. See: MMORPGs.
• Keep costs low and hope they can cover them from the sales revenue as the player base expands.
• Just don't support games for more than a few months after they launch. That works because games are short-lived things usually. If the game is broken a year after release a few hard-core fans will care but most people have completed the game or moved on, and won't notice.
So then it sounds like we have to resensitize people to the upgrade treadmill and make people intolerant of it, so we'll have an incentive to get our software right the first time. If hardware engineers can get their products right once and for all, then why can't we? Are we lesser engineers? I suspect so.
Well, hardware engineers make mistakes too but they're covered by us software guys :) Drivers are full of workarounds for buggy hardware. And when things can't be hacked around with a firmware update hardware sometimes needs entire product recalls .... something we fortunately don't have to worry about.
A lot of game developers can afford to "ship it and forget it" because game consoles don't break APIs, and Windows has generally best-in-class backwards compatibility. MacOS and mobile operating systems, less so.
Developers shot themselves in the foot by giving applications away for free or for $4.99 to chase user growth. Now it feels more like B2B at scale than B2C.
It didn't help that Apple provided the $0.99 price point for iPhone applications when the expectation for programs before was at least $20.
Thank you for articulating this so well. This is the core of why local-first applications can’t be effectively monetized. Users expect the best things for free or very cheap, and that’s mostly incompatible with the local-first concept.
>It worked all the way through the 80s, 90s, and most of the 00s. It brought Adobe, Ableton and many others to their dominant market positions, [...] What’s changed?
To clarify the conversation, "local-first" vs "cloud-based" is inadvertently getting muddied up with revenue models such as single-payment vs ongoing subscriptions. Those are 2 separate concepts.
Adobe/Ableton/Quicken desktop software still run locally on the users computer. Adobe Photoshop can save ".psd" files locally on the computer. But they have ongoing subscription fees and if users don't keep up the payments, they lose functionality or stop working.
Instead, what people often mean by local-first is "local _data_ stored on my computer as a 1st-class concept". The Single-Source-Of-Truth of the data is preferably on my home computer, and the cloud is just a convenient replica for backup or sharing. Ideally, the local-first data is also an open format instead of a proprietary opaque blob. Examples would be local-first Obsidian (markdown files stored locally on the computer) -- vs -- Notion (notes stored in the cloud). Or local-first NextCloud instead of cloud-based Dropbox.
As your examples showed, local-first software can be monetized via subscriptions.
OTOH, if I need a network connection to verify my subscription status for the software to work in the first place it eliminates one of the main benefits.
For Adobe: they would like your subscription money every month rather than occasional upgrades.
Yeah, it sucks that monetisation and long-term user interest are so often diametrically opposed.
I would like to make open core software for a living, but local first would take away the last reasonable avenue for monetisation in those cases.
Depends on who the target customer is. As an enterprise I'd love to pay for a local application wherever possible. Oracle and MS still make billions on what are effectively local database applications.
For consumer projects - especially desktop - it's more difficult. Many would rather pirate software than pay a few hundred dollars a year for a licence. However, the advent of highly controlled app stores has mitigated this somewhat.
Yeah, at the end of the day, we'll do what lets us keep doing it. It does suck that local-first seemingly does not have monetary alignment with public benefit, though. A robust local-first app (if we're including synchronization too) is very expensive to make. It's become a bit easier in the past couple of years, though.
Yep. Local-first is further commoditizing software.
Users will refrain from paying a subscription if their app version works and they don't need updates. The good old buy-a-license-with-one-year-of-updates model will likely make a big comeback. Local-first software, at least in the lix universe [0], does not have fixed recurring maintenance costs like running servers. The elimination of those costs will lead to a proliferation of "hobby apps" that undercut on price or are free altogether.
[0] https://github.com/opral/monorepo/blob/main/lix/README.md
Which is wonderful for users that want that and are savvy enough to understand the limitations! However, hobby apps will not be able to afford designers or support or long-lived servers. As long as the social contract is clear, it's a net win. The issue comes when end-users expect the same level of support and continued updates from hobbyists as they got from commercialized apps. Spoiler alert: They absolutely do, at least in B2C. (Or in this case, H2C?)
The issue comes when end-users expect the same level of support and continued updates from hobbyists as they got from commercialized apps.
Maybe. Doesn't change the fact that software will be more commoditized.
If software switching costs get close to 0 [because data is owned and can be opened by any other app], pricing pressure will get intense. Investors love cloud SaaS because they can rent-seek and extract margins of >80%. It's unlikely that apps that do not own and lock-in customer data can rent-seek to such a degree. If they do that, users will switch to the next best solution.
The product is not just the program. People don't just care about function, they care about how the app makes them feel, how it affects how they think others perceive them, etc. -- the companies that can afford to market and make their experiences sexy will run laps around purely functional apps that can't, no matter how well they solve the problem (unfortunately). Good marketers and designers do not have the same proclivities towards working for free as developers do.
The hope is that the markets are large enough that niche apps can carve out a healthy revenue with a more personal, "homegrown" experience. This is where my product lives, and so far it's worked well enough.
Also, switching cost is never zero or even close to it. The user still has to click a button. I know it sounds like I'm being facetious, but coming from a large-scale B2C industry, a single button click is an enormous obstacle.
For most use cases the solution should be evident further down the road. The overwhelming majority of cases where a server is needed are for data sync between clients. Self-hosting should have its moment, where a simple user-maintainable server solution emerges. Then you could just offer a plugin for that platform - a bit like how Synology works already.
I've mostly been cloud free this way. I do send (sometimes encrypted) backups/copies elsewhere, but those are exceptions. And it works well across many client devices (and remote via DNS/forwarding).
But the reality is that there are plenty of hobby app businesses that use the one-time-sale revenue model.
The internet is rife with stories of hobbyists who would love to give Adobe $500 to just buy their software outright. Many Adobe competitors offer exactly this!
What exactly do you mean by “hobby apps?” I’m slightly confused because it is pretty common to describe open source projects and the like as hobbyist programming, with all the expectations of customer service (specifically, none) that that entails.
If somebody is taking money for something, it isn’t really a hobby, it is a job. It might be a poorly paid one, but they still have an ethical obligation to their customers, to meet some standards of merchantability and provide some level of customer service.
But if you mean, “apps that hobbyists use” then that makes more sense. There’s nothing wrong with producing a toolkit that a hobbyist can use to write their own apps of course. It is like producing hand tools—you have an ethical obligation to make sure the tools themselves are reasonably free of defects but of course you don’t have to provide customer support for the treehouse!
I refer to open source projects, indie hackers, and one man shows with "hobby apps."
Distribution (specifically maintenance) becoming cheaper will enable those people to compete and provide enough value to customers that larger companies face downward pricing pressure.
Remember: it's local-first, not offline.
When users subscribe they should be purchasing a license that renews periodically. Local-first apps should validate the license (based on its expiry) by connecting to the internet to ensure users are subscribed. If they're not subscribed or they don't have internet access, they cannot use pay-walled features. You should offer a free, fair, and robust export tool for those not subscribed, though unfortunately not many companies do this.
What am I missing here? The drawback is of course when customers don't have internet access but are still valid subscribers, so they lose access to pay-walled features. One option would be to only offer annual licenses (infrequent license checking), another would be to offer a complementary 7 day license extension for the short-term until the user reconnects to the internet (grace period).
Stop giving away your value proposition for free. Get paid fairly.
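A rough sketch of what that check could look like (all names here - the endpoint, the grace window, the persist helper - are hypothetical, not any particular vendor's API):

    // Hypothetical license record cached on disk between launches.
    interface License {
      expiresAt: number      // subscription end, unix ms
      lastVerifiedAt: number // last successful online check, unix ms
    }

    const GRACE_MS = 7 * 24 * 60 * 60 * 1000 // 7-day offline grace period

    async function canUsePaidFeatures(cached: License, key: string): Promise<boolean> {
      const now = Date.now()
      try {
        // Hypothetical endpoint; in practice the response should be signed
        // so the client can't trivially spoof it.
        const res = await fetch(`https://api.example.com/licenses/${key}`)
        if (res.ok) {
          const fresh: License = { ...(await res.json()), lastVerifiedAt: now }
          persist(fresh) // cache for the next (possibly offline) launch
          return fresh.expiresAt > now
        }
      } catch {
        // No connectivity: fall back to the cached license below.
      }
      return now - cached.lastVerifiedAt < GRACE_MS && cached.expiresAt > now
    }

    function persist(license: License): void {
      // e.g. write JSON next to the app's other local data
    }

Annual licenses just make the same check cheaper: you verify rarely, so the grace window matters even less.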
Part of the ethos of local-first is permanent ownership of software over subscription services. Sure, it's not technically in the name, but it's very much part of the background for TFA.
I see, I didn't know that.
Why would one sell permanent ownership of a networked product? I've been seeing the rise of "lifetime" subscription levels (pay once) for SaaS products, but I'm curious as to what the actual long-term economics of it are.
If your cost of operations is not a one-time cost per user, I don't see how you can avoid a subscription model.
Why would one sell permanent ownership of a networked product?
You wouldn't. Maybe TFA has an extreme interpretation of local-first, but the entire point of the article is eliminating the last vendor-controlled server from certain kinds of apps so that the user can own the software forever.
Obviously this doesn't work for all apps, and it sounds like OP may have one that needs some form of network component. In that case what you propose would work very well.
You can charge the user once upfront, possibly after a free trial. 37signals is trying it out with once.com. As a developer, you are in control of how local-first your application should be.
The way we handle it is: subscribing = full access, non-subscribing = read-only access. Philosophically we'd probably not be viewed as "true" local-first software given this constraint, but like you said: we can't give away our value prop for free.
Linear[0] is local-first and monetizes just fine.
how does linear keep working if the company disappears?
Sorry, to clarify in my case: Monetizing consumer products. Selling to companies is indeed much easier.
Sure, and there are also developers who live off pure donations "just fine", but pointing this out is not useful.
It's easier to build a cloud based subscription business.
Or if it's even possible without an enterprise revenue channel.
Is any company making money in software without an enterprise revenue channel, aside from games? Do regular people “buy software” anymore?
Yes. I have bought several non-DRM games from gog.com.
Edit: I also recently bought Sublime text, several Jetbrains products and some other utilities.
Lots of photo, art, music and other hobby software is for purchase rather than subscription.
Off the top of my head Topaz Labs is a company that offers a suite of photo tools for purchase. They are aimed at both professionals and high end hobbyists.
> but I struggle with how to monetize truly local-first applications.
How did companies like Adobe and Microsoft become giants with a lot of money BEFORE subscription services? They got big in a time when subscription services weren't the norm.
That’s a cop out answer and hardly the whole truth.
Their business model was give away and capture. Give away much of the possible revenue stream in order to lock in students, learning centers, governments, offices, and hobbyists.
That method of success has been well documented and replicated world wide.
The above method is how Google’s horrible products are currently winning.
Google is a serial murderer of its own products and services yet governments, schools, and companies fall over themselves to use Google products.
Capture is a valid, and proven, revenue strategy.
Ha. I just realized it has been 10 years since Alexei, Oleg, and I launched our CRDT demo on HN. Funnily enough, it featured this mouse-cursor trick as well. It was a demo of the lib. It was August 2014, if I remember correctly. The post was submitted by Dan Abramov. Nostalgia.
That repo is long abandoned https://github.com/gritzko/swarm
Have CRDTs and local-first advanced much since 2014? There is a bunch of research and way more CRDT libs on GitHub, but the (somewhat) successful businesses you can count on one hand.
I guess that is the actual challenge - figuring out the business model.
It seems there’s a lot more availability of collaborative text editing features in SaaS apps I think due to the availability of convergent rich text data structures. Those seem to have democratized that specific feature. But I don’t see “local-first” with user-owned data or true decentralized apps being successful as businesses yet.
Why not have it phone home to verify a subscription or a license key? Plenty of apps used to do this back in the day.
Certainly! It's what we do currently-- it's just not viewed as "true local-first" if you do this, as there's a server-auth-dependency by definition. Local-first has a bit of FOSS-esque philosophy mixed in. (Which I love! But... also gotta make money to keep the lights on.)
I build a self-hosted application (so I guess that qualifies as local-first). How I monetize it: a one-time payment to get the product, then yearly (optional) paid support/updates.
I am not too worried about piracy, as that is a battle you'll never win (games have been fighting this battle for a long time). Create a good product that solves a real issue, get it in front of people, and provide a good service, so that they are happy to support you in the future (either for updates on this product or for building new products).
At the $800 - $2500 price range, that sounds awesome, but I'm in a market where $50 is considered outrageous. It's self-inflicted pain, I know, but we make it work.
Them being not monetizable by you is exactly the reason why I should use them. You could charge me for a copy of the software, but that won't last forever like subscription cloud crapware does.
You're not the first person that I hear having the exact same issue, and it's a very real one.
IMHO, the problem isn't with local-first applications themselves. The problem is that our economic system rewards players responsible for vendor lock-in (and any sort of similar practices), while providing no support (or even livelihood) for folks focusing on delivering a great product free of artificial restrictions.
First there were mainframes with thin dumb clients. Classic client-server.
Then came thick clients, where some processing happened on the local terminal and some on the server.
Technology advanced far enough to install programs locally and they ran only locally. The 80's and 90's were quite the decades.
Someone had a bright idea to build server-based applications that could be accessed via a dumb web browser.
That went on for a while, and people realized that things would be better if we did some processing in JavaScript.
Now the push continues to do more and more computation locally.
Can't wait for the next cycle.
All true, but I would say the motivations are not the same for each cycle.
First there were mainframes with thin dumb clients. Classic client-server.
Computers were very expensive, so it would have been too expensive to have smart clients. The driving force is cost.
Then came thick clients, where some processing happened on the local terminal and some on the server.
Computers became cheaper and we had to deliver more things to a growing number of users. The driving force is dual combo of declining cost and increasing demand to do more.
Technology advanced far enough to install programs locally that ran only locally. The 80's and 90's were quite the decades.
Costs kept going down, computation power kept going up and things became more decentralized so you could do much more at the client side without having to deal with a big server. The driving force was still declining cost and democratization of computers.
Someone had a bright idea to build server-based applications that could be accessed via a dumb web browser.
That was in order to deal with the issue of managing at scale. The driving force was software management and focus on velocity of change.
That went on for a while, and people realized that things would be better if we did some processing in JavaScript.
To provide the richness of experience they missed from the 80's/90's locally installed programs.
Now the push continues to do more and more computation locally.
I would say that in addition to the usual drivers, now there is an aspect of graceful handling of issues introduced by network deficiency (brought about by over-reliance on server-heavy, browser-light applications and the generally growing network hungriness) + increased awareness of privacy concerns.
Can't wait for the next cycle.
Maybe server-side democratization/aggregation and continued emphasis on strengthening local experience.
The driving force was software management and focus on velocity of change
That might have been the motivation for some at the beginning, but the reason cloud-only is now dominant is the SaaS + lock-in model that guarantees recurring revenue for the software company.
Also impossible to pirate.
but so much better to 'pirate': from Azure - https://apnews.com/article/microsoft-cybersecurity-hack-raim...
to Snowflake - https://archive.ph/WLmYA
Specifically because investors cared more about MRR and ARR than any specific numbers - this conversion is really what turned everything into a SaaS company.
Let's not forget the hassle when you don't have one source of truth. Life before Dropbox and the like wasn't super fun.
I think this is an underestimated factor at play. It's super nice to not have to deal with that issue, and to be able to switch between computers seamlessly without worrying about whether your data is accurate/up to date.
Lock-in and recurring revenue are not something new with SaaS. Most ERP systems and similar business systems were sold like this long before, but with the added hassle that you had to manage a server locally yourself.
Sometimes I think people just go straight to "corporations must be evil" because of ignorance/it feels good to blame the "Big guy" one can't relate to.
I wouldn't attribute it to lock-in so much as to dropping the fiction that software is a thing with a one-time cost; having people manage license keys is more expensive than it's worth from support tickets alone. It's always been ARR; it's just that the only way to make the numbers work was to sell huge amounts of units of $Software 2023/2024/2025 to smooth over the churn.
People get mad at you, the software vendor, if you don't provide indefinite support for bits-in-a-box they paid $60 for 5 years ago (or 10 years ago if it's government), or if you make them buy a support contract; but if you just charge it as an all-in-one subscription for the same price, they're suddenly happy to pay it.
Fully agree regarding the motivations. The thing many young devs don't realize is that there's a reason for doing things a certain way. It's not "just because this is the best way to build everything."
The thing many young devs don't realize is that there's a reason for doing things a certain way. It's not "just because this is the best way to build everything."
90% of the modern tech industry is monkey-see-monkey-do.
Plenty of devs on HN who are vocally proud that they don't think about how the things they build work, they just build.
That's not the tech industry, that's just everything.
And that's generally fine. We're good at imitation. It's not that most people can't have their own thoughts, but they save that for things that are more important than work -- or at least I hope that's the case.
Sure but the one not fine thing: There's still just about ZERO accountability for harm.
We need regulation or liability, yesterday.
I teach IT, and the thing I try to drill home: the entirety of IT is NOT intelligent design, it's evolution (for BUSINESS, not for tech) -- which includes a whole lot of vestigial stupidity and awfulness.
As for the next cycle, the Internet Computer protocol is an interesting punt - a distributed network of servers, with replication and a degree of anonymity, where people host and pay for compute
a distributed network of servers, with replication and a degree of anonymity, where people host and pay for compute
Sounds a bit like Bill Gates' vision from the 90's of people subscribing to Windows, and paying a bit each time they turn the computer on.
To provide the richness of experience they missed from the 80's/90's locally installed programs.
I agree with GP: the 80s and 90s were quite the decades. Back then it was normal to have, say, a 3D modelling / rendering software on your computer (like 3DS Max). And something to create music yourself. And a software allowing you to do DTP (with the 'D' even standing for Desktop). We already had IDEs and some of us had Emacs / vi(m).
Computers were used both for work and leisure and they were certainly used in creative ways, to produce "stuff", to experiment.
They were more than simply passive devices made to purely consume AI-generated crap.
And we already had BBSes, then the Internet. We had Usenet newsgroups. By the mid to late 90s we had Netscape and "alltheweb", then Google.
It was local first, but with access to the world's knowledge, basically.
And we had games and networked games: I was playing Warcraft II over the Internet in the mid-nineties and Counter-Strike in beta in 1999.
At that point it was kinda peak civilization, before the gamification of search engines / enshittification of everything. Lootboxes weren't a thing. Seeing ten zombies mindlessly consuming poor content at a bus stop wasn't a thing yet.
And I don't think the move to web-first / browsers-first / ultra-slow mono-threaded JavaScript-first (JS was really pathetic at the beginning) empowered users: it was a step back, going years and years back in capabilities compared to what we had locally on our computers.
And, today, if you want to create something in 3D, you'll be using Blender. For music, you'll use something local too. For DTP: same thing. Heck, you'll be creating websites / webapps using a mostly local-first solution.
I do think this last quarter of a century the Web has destroyed the experience, not enriched it. It has turned users into dumb consumers of content.
Sure, I can order stuff on Amazon in two clicks (which I do) and it's more convenient than using eBay to order that video tape of "Le Mans" in the mid nineties (which I did too): but Amazon is just more consumerism.
And it's not just dumb consumerism: it's accompanied with total surveillance and insecurity.
To me in the 80s and 90s computers had, compared to their capabilities, way more richness of experience than most experience today.
What did I not have back then? Hmmm. Access to my broker in real-time: that's one I use a lot. Oh, wait, I'm using my broker's desktop app.
As a fun sidenote, HN would have worked fine on Netscape in the nineties. I could have typed this comment on a 486.
The "richness" of the enshitified Web: I just don't see it. We had the Web. And it was a better Web.
I was a kid at the time so my glasses may be quite rosy. But, it was cool how much space had no computers in it. I was a nerdy kid, and always looking forward to getting back to my computer. But, now there’s no escaping the things. Or going back to them.
It seems almost too obvious to say that the best model, if cost, size, performance, etc. are no longer limiting factors, is to run code in both client and server. Advantages of server: no need to distribute code, and data is naturally more consistent. Advantages of client: offline accessibility, and no speed-of-light latency limiting responsiveness. Of course, in the real world you have additional tradeoffs, but those I described are probably timeless and universal rules.
Can't wait for the next cycle.
Edge compute :D Have data distributed "wherever" (client, server) to perform computations "wherever".
Using a video editor as an example: The video editor running locally means videos can be rendered for free with the user's hardware. Great for hobbyists (or a free-tier!), bad for large projects that would take too long to render. Have "edge compute" and hobbyists can use the free-tier while large projects can render on the cloud.
The new Freenet[1]. Edge compute on steroids, fully decentralized applications and data. No distinction between server/client; everything is a network node.
And as a network admin I am pushing to have all of our office computers replaced with thin clients, and centralizing all of our compute resources on a single machine that is probably running Windows Server 2022. The current year is 2024. This architecture was cutting edge in the 1990s. I'm over 30 years behind the curve on this.
Local/mainframe is a well known “cycle of reincarnation” in software. We are currently in what I call the "mainframe 2.0" era, or maybe even "mainframe 3.0" depending on how far back you go. (The cloud is just the mainframe.)
Technology advanced far enough to install programs locally and they ran only locally
Commodity PCs are a wonderful thing
A critical motivation for the dumb web was BlackBerrys and early iPhones. They had extremely limited computational power.
Remember Flash? It was just too power-hungry to run on phones.
Today's phones are stronger than yesterday's computers.
AI?
Tonsky blog post HN comment boilerplates:
- He writes about UX but there's no dark mode!
- The cursors are really distracting!
- The yellow background color makes it unreadable!
- Some comment about the actual content of the post
Moving pointers mean most people (including me) will never read the actual content, so we might as well discuss why someone would choose to alienate their potential audience in this manner.
Sometimes alienating can be intentional.
If he's targeting a niche, I'd love to know the customer profile. Maybe he can find a group of people who want to read books in Figma
Some people also just want to have fun on the internet, remember Myspace, Tumblr etc? These sites were hardly functional, inclusive or accessible but they sure were fun.
Sure, but isn't that called trolling? Shouldn't trolls be blocked by default?
Appealing to different tastes is far from trolling.
I think it’s designed to shoo away HN middlebrow dismissalists such as yourself.
That seems a stretch to say most people won’t read the content because of this. Some for sure. But most?
Do you know Nikita? He likes to provoke and manipulate the audience.
Check out today's twin post on the same subject: https://news.ycombinator.com/item?id=40772955
The comments there are 100% on point, and both bland and boring.
Only in the sense that we've been in the same website business for a while: https://grumpy.website and https://annoying.technology
Hi :)
That dark mode is hilarious
It's the UX incarnation of a dad joke.
Well to be fair the cursors _really_ are distracting, what do you expect
I think an important requirement for making the "forever" aspect of local-first possible is to make the backend sync server available for local self-hosting.
For example, we're building a local-first multiplayer "IDE for tasks and notes" [1] where simply syncing flat files won't work well for certain features we want to offer like real-time collaboration, permission controls and so on.
In our case we'll simply allow users to "eject" at any time by saving their "workspace.zip" (which contains all state serialized into flat files) and downloading a "server.exe/.bin" and switch to self-hosting the backend if they want (or vice versa).
If there's an app-specific backend sync server it's not going to be "forever."
Why can't syncing data be commoditized via a dumb cloud that just syncs blobs? If you want privacy, encrypt everything locally.
We already have that in the form of S3 and friends, but they aren't consumer-accessible. We also have some options like Apple iCloud, but those are platform-specific.
In the end the problem is that there's no financial motive to do this.
I am working on exactly this in a music player I am writing. Specifically, you can use any S3 compatible storage to sync your music and metadata between devices, and it's end-to-end encrypted.
I plan to offer single click resale of storage directly in the app for users who don't want to deal with access keys, secret keys, ACLs, and the like.
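The gist of the approach, very roughly (not my actual code; the bucket, endpoint and keys below are placeholders): encrypt locally, then treat the bucket as a dumb blob store that never sees plaintext.

    import { createCipheriv, randomBytes } from 'node:crypto'
    import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'

    // AES-256-GCM: prepend IV and auth tag so any device holding the key can decrypt.
    function encrypt(plaintext: Buffer, key: Buffer): Buffer {
      const iv = randomBytes(12)
      const cipher = createCipheriv('aes-256-gcm', key, iv)
      const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()])
      return Buffer.concat([iv, cipher.getAuthTag(), ciphertext])
    }

    // Push the ciphertext to any S3-compatible bucket.
    async function syncBlob(key: Buffer, name: string, data: Buffer): Promise<void> {
      const s3 = new S3Client({
        endpoint: 'https://s3.example.com', // placeholder: any S3-compatible provider
        region: 'auto',
        credentials: { accessKeyId: 'ACCESS_KEY', secretAccessKey: 'SECRET_KEY' },
      })
      await s3.send(new PutObjectCommand({
        Bucket: 'music-library',
        Key: name,
        Body: encrypt(data, key),
      }))
    }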
Considering people are still running BBS Doors and other software from very long ago, I'm not sure that I completely agree.
The previous post's point is that they make their app's sync service as available to users as the app itself. "Forever" is the wrong word for any of this, but the sync service will live as long as the app does.
(BTW, your idea supposes the ongoing existence of storage services with a common, stable API an app can be written against. You can barely say that exists now, much less expect it to into the indefinite future. The previous poster only assumes general servers are available, including BYO if you want.)
It would be pretty slick if the export also included a versioned text spec defining the schema for the data.
So even if the software and all supporting software are no longer available, you can still make something to read the data.
This is kind of getting into r/Preppers territory, but still... There's something nice about that.
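Something as small as this would go a long way (purely illustrative, not any real app's format): a versioned spec exported next to the data files, so future tools know what they're parsing.

    // schema-spec.ts, shipped (or serialized to JSON) alongside the exported data files.
    export const exportSpec = {
      specVersion: '1.2.0',
      container: 'jsonl', // one record per line
      entities: {
        task: {
          fields: {
            id: 'uuid',
            title: 'string',
            done: 'boolean',
            updatedAt: 'iso8601-datetime',
          },
        },
      },
    } as const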
Sure it is. You don't see people complaining that a 20- or 25-year-old game no longer has its original servers running if you can easily host one at home or use a community-run one. Take Quake for example.
I saw the link in your unminified html to [0], and wanted to boost it here in case others had also not seen it before.
We're doing it all completely bootstrapped. Day 1 starts with no social media presence, no followers, no press contacts, no road map, a shoestring budget and no revenue. We want to do it the hard way to demonstrate that the barriers to entry are lower than they've ever been. All you need is a couch and a laptop (couch optional).
Bootstrapping is not very much in vogue in these orange parts, but it's very much my jam.
Yo, that's sick! More things should do exactly this!
Control, privacy and reliability
Dropbox! Well, not necessarily Dropbox, but any cloud-based file-syncing solution. iCloud Drive, OneDrive, Google Drive, Syncthing, etc.
It’s perfect — many people already have it. There are multiple implementations, so if Microsoft or Apple go out of business, people can always switch to alternatives. File syncing is a commodity.
This doesn’t work for collaborative software. It’s also highly questionable for realtime software like chat. That’s a solution looking for a problem.
There is exciting movement in the space but imo people focus too much on CRDTs, seemingly in the hopes of optimal solutions to narrow problems.
What we need is easy-to-use identity management, collaborative features without vendor lock in and most importantly, a model that supports small-medium sized businesses that want to build apps while making a living.
This doesn’t work for collaborative software. It’s also highly questionable for realtime software like chat. That’s a solution looking for a problem.
Yea, a lot of good foundations won't work for specific requirements. Live collaboration is a very different experience than what most other applications need, though. Why pay the cost for something you aren't using?
Why pay the cost for something you aren't using?
If you don't have multiple users, vanilla Dropbox with old-school file-based apps is fine because you don't need concurrency for one person. So I agree with you: why pay for CRDTs if you aren't using them? This has been solved for 15 years at least.
For local first to be a meaningful term, it has to provide something that can compete with modern apps while providing some meaningful change in terms of portability, data ownership, privacy, hackability, self sovereignty, etc. Otherwise we’re just over-engineering ourselves into spurious idealism.
This doesn’t work for collaborative software.
Is the issue the lack of real-time updates? In principle, you could work around that using a separate WebRTC channel for live updates, with the slower Dropbox sync serving as the source-of-truth. (It does indeed take >=5 seconds for Dropbox to sync a collaborative app in the way the author describes, in my experience.)
I think concurrent editing is where Dropbox collaboration falls down.
E.g.: you edit the intro paragraph, and your buddy adds a conclusion, both while offline (or simultaneously). Dropbox can't/doesn't resolve the conflict in your files, and instead picks a winner.
This is still solvable by your app, but requires additional logic/storage above what the "shared folder" model provides.
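One shape that extra logic can take (a hedged sketch, nothing Dropbox-specific): give each device its own append-only change file in the shared folder so the sync service never has two writers on one file, then fold the logs together deterministically in the app.

    import { readdirSync, readFileSync } from 'node:fs'
    import { join } from 'node:path'

    // Each device appends only to its own "changes-<deviceId>.jsonl", so the
    // sync service never has to merge concurrent writes to a single file.
    interface Change {
      field: string     // e.g. "intro", "conclusion"
      value: string
      timestamp: number // wall-clock ms; a real app might prefer logical clocks
    }

    function loadState(sharedDir: string): Record<string, string> {
      const changes: Change[] = []
      for (const file of readdirSync(sharedDir)) {
        if (!file.startsWith('changes-') || !file.endsWith('.jsonl')) continue
        for (const line of readFileSync(join(sharedDir, file), 'utf8').split('\n')) {
          if (line.trim()) changes.push(JSON.parse(line))
        }
      }
      // Deterministic order: later writes win per field (last-write-wins).
      changes.sort((a, b) => a.timestamp - b.timestamp)
      const state: Record<string, string> = {}
      for (const c of changes) state[c.field] = c.value
      return state
    }

Last-write-wins per field is crude (concurrent edits to the same paragraph still lose), which is exactly where CRDTs or a manual merge UI come in.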
If somehow Dropbox ran SQLite and sync'd it across devices for me, I'd pay for that.
Some of that will sort itself out naturally. Is there any normal person who goes, “Man, I love going through the toil of coordinating file structures with other people!”
I have always liked the idea of local first. The problem with it, though, is that it almost always sucks or isn't that important, or both.
At least for myself, I barely use any local-first software. The software that I do use that is local in any important sense of the word is basically local-only software. I realize this every time I lose connection on my phone. It becomes little more than a pretty bad camera compared to my Sony.
I live in a country where I have good 3G speed pretty much everywhere, so internet connectivity is never an issue, not even on moving things like trains or boats. The very few times I have been flying or whatever, I simply don't do any work because it's usually uncomfortable anyway.
This is the main reason I don't really care about local first and have been diving into Phoenix LiveView the last couple of weeks. The productivity boost I get and the cool realtime apps it empowers me to build are more important to me than the dream of making local-first web apps. A realtime demo of things updating with multiplayer functionality is a far easier sell than "look, the app works even when I turn on flight mode". And honestly, like 99% of the time, it is also more useful.
I have done local-first web apps before and it is always such a pain because syncing is a near-impossible problem to solve. What happens when you and someone else have changed the same things, two or more hours apart? Who even remembers which value is correct? How do you display the diffs?
No matter what you do, you probably need to implement some kind of diffing functionality, and you need to implement revisions because the other guy will complain that his changes were overwritten, and so on. There are just so many issues that are very hard to solve and require so much work that it isn't worth it unless you are a large team with a lot of resources. You end up with a complicated mess of code that is like git but worse in every way.
It's easier to simply say the app doesn't work offline because we rarely are offline and no one will pay for the effort required. Unfortunately.
Fascinating how far technology use and philosophy has diverged between generations.
I'm old school and local first. I don't own a smartphone, I host my own music and video, and I read real books and ebooks that I download and read on an offline eReader.
Then I compare that with the modern man who streams his music and video, buys audiobooks as a service, uses only web browser applications, and then bitterly complains when the internet goes down, or when the prices of his apps go up, or when his favourite music or movie disappears from the service provider, and so on and so on.
So not only a complete loss of control and computing autonomy, but also a financial drain as well when counting the cost of all the apps and services.
Meanwhile, I'll happily chug along, local first, and if the internet doesn't work, at most my web browsing and email suffer, but since I do have my reference libraries, documentation and interpreters available locally, I can still get work done until the internet comes back up.
Convenience is king. As life piles up (errands, parenting, everything), cognitive capacity goes down and the more the user wants a "push button do everything" solution. They'll trade autonomy for ultra-convenience + a dropped internet connection now and then. I'm somewhere in the middle, but I see why people do either.
People just don't care (anymore) unless it affects them personally. I have no idea if it used to be like that or if it is a function of modern society, where you get accustomed to it because your lifestyle is good the majority of the time.
Some people, like me, care too much about too much and think a lot about stuff like this. I wish more people would care about more things, but it just ain't so.
Yeah, I kind of agree with what you're saying; the main issue for me is that I work in web dev and very few people are like you and would pay for local-first software. I have a hard time even convincing people who I know hate cloud spyware shit to use local-first alternatives that require just a little more setup.
I would love to make all my work local first, but when the users are requesting other things I have to oblige. The feature list is long and the time is short. I simply can't spend a lot of time on making a sucky git alternative for when conflicts happen. When working on my spare-time projects I want to make as much as possible in the little time that I do have, so local first is pretty much at the bottom of the list. Personally, I tried to build a music library but it's very hard today since you can't get music legally in a good, cheap way. Getting it illegally by torrenting is also hard because no one torrents music anymore.
I do keep a library of movies but I rarely visit it and stream new movies with torrent software. Basically everything else, like work, is mostly online-based today, so there is not really much to do locally anymore except photo editing and watching movies for me. Of course, I do still develop on my machine, but as soon as I get stuck on something I need the internet to look up some reference or whatever, and if I don't have the internet I can't get emails or chat messages from colleagues or push my code anyway, so then I can only work for a short while.
I completely hate most cloud software and prefer local first, but my issue is that no one except nerds on Hacker News really cares about it and it is too much work for little gain. I'm actually in the process of rewriting a side project that I hope will one day be a business from a local-first React project to Phoenix LiveView. Why? Because I think the realtime aspects of the project will bring in more customers and it's far easier to maintain the state. So again, less work and more features that I will actually get paid for.
If I ever make it to a stable business, I will probably make a small portion of the app work offline, because I know some customers will like that when they use it underground.
I expect that one mild catastrophe, one that is bound to occur every now and then, one just large enough to disrupt networks for maybe a few weeks or months on a continental scale, would make everyone realize how foolish the whole cloud-first idea was, and it would be left in the dust along with perhaps a decade of its work and proponents.
"Disrupt networks" how? It’s very hard to break the internet - it’s designed to be extremely resilient to such catastrophes. The odds of a country suffering from a complete internet blackout for more than a few hours without being induced are incredibly slim, and drop further when the country doesn’t have a single point of failure but instead a huge range of connections geographically distributed.
I’d sooner bet on a regional/widespread power grid failure, which at least has way more history of occurring.
A couple of years ago, several provinces in the south of Argentina were connected over a single fibre optic cable that stretched for kilometres next to the road (I doubt that this has changed since).
One day, some construction work dug right into the fibre optic cable and left several cities without internet for a couple of days.
The Internet is quite resilient in densely populated or wealthy regions, but it's definitely not for a large part of the human population.
True - but that’s still not on the scale of months. There’s also alternate connection methods like starlink, other mobile providers, etc.
https://remotestorage.io/ was a protocol intended for this.
IIRC the vision was that all applications could implement this and you could provide the application with your remotestorage URL, which you could self-host.
I looked into this some time ago as I was fed up with WebDAV being the only viable open protocol for file shares/synchronization (especially after hosting my own NextCloud instance, which OOMed because the XML blobs for a large folder it wanted to create as a response used too much memory) and found it through this gist [0] which was a statement about Flock [1] shutting down.
It looks like a cool and not that complex protocol, but all the implementations seem to be unmaintained.
And the official JavaScript client [2] ironically seems to be used mostly to access Google Drive or Dropbox.
Remotestorage also has an internet draft https://datatracker.ietf.org/doc/draft-dejong-remotestorage/ which is relatively easy to understand and not very long.
[0] https://gist.github.com/rhodey/873ae9d527d8d2a38213
I used remoteStorage.js for a while for an app, but the JS SDK was too limited. I'm surprised to see the last commit was just last week. Is it still maintained?
I never understood why it barely took off, the benefits of local-first are great and a competitive advantage to capture those privacy-sensitive users seeking to own their data. Hoping for a revival...
Looking at the last commits it looks like the following:
- updating dependencies
- fixing new linter recommendations
- fixing typos
- fixing dead links
Really good idea. No reason not to commoditise this and have companies just compete on price.
Loved the realtime cursors on a post talking CRDTs.
cursor sync has 0% to do with CRDTs
Still a state sync though
It's somewhat amusing every time this blog comes up on the front page and 50% of the comments are about the pointers. I guess it's a good way to generate activity around the post haha
We're reaching meta territory now, because 25% of comments are about the fact that every post is about the pointers (yours and mine included).
Soon every post from this blog will collapse into a black hole of meta^n comments
Local first is the first software trend in a long time that has gotten me really excited. The aspect of co-located data and logic is what's most interesting for me for two reasons:
1. Easier to develop - The sync layer handles all the tricky stuff; no need for a translation layer between server and client.
2. Better user experience - Immediate UI feedback, no network dependency etc.
I suspect there will be a major tide shift within the next year or two when a local first framework with the developer experience similar to Nuxt or Next comes about. The Rails of local first.
I can't recommend enough the localfirst.fm podcast which has been a great introduction to the people and projects in the space: https://www.localfirst.fm/
Podcast host here. Thanks so much for your kind words! Glad to hear you're enjoying the conversations and find them helpful!
I'm working on that.
There are also some interesting projects out there like https://github.com/a-type/verdant
If you set out to build a local-first application that users have complete control and ownership over, you need something to solve data sync.
Dropbox and other file-sync services, while very basic, offer enough to implement it in a simple but working way.
That's how I use KeePassXC. I put the .kdbx file in Seafile, and have it on all my devices. Works like a charm.
I think that is because KeePassXC has the logic to deal with the database file changing while it has that file open. Yes, I can confirm this works nicely.
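The pattern, very roughly (not KeePassXC's actual implementation), is to watch the synced file and reload/merge whenever someone else's sync client rewrites it:

    import { watch } from 'node:fs'

    // Watch the synced database file; when a sync client (Seafile, Dropbox, ...)
    // rewrites it, reload and merge with any unsaved in-memory changes.
    function openSyncedFile(path: string, onExternalChange: () => void) {
      let savingOurselves = false // ignore events caused by our own writes

      // Note: real implementations watch more robustly (sync clients often
      // replace the file via temp-file-and-rename, which shows up as 'rename').
      watch(path, () => {
        if (!savingOurselves) onExternalChange()
      })

      return {
        async save(write: () => Promise<void>) {
          savingOurselves = true
          try { await write() } finally { savingOurselves = false }
        },
      }
    }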
Yeah, but does it solve conflicts for you?
Glad to see more discussion about local-first, but there hasn't been a good business model for local-first products, which might make this part of the tech ecosystem unsustainable.
You can just sell apps for a one-time fee. It used to work fine in the past.
A vast portion of the software industry works "local first", not to mention that it worked almost exclusively like this for 30+ years. "Not local first" is maybe a 15-year-old phenomenon, and people act like there is no alternative.
Just use RDF/knowledge graphs. Yes, easier said than done. But you own your claims on your facts. It's interoperable. You then need a tool chain for trust/provenance when mixing local and remote data.
RDF is open-world and entirely additive; it doesn't solve the synchronization problem. You will end up with an ever larger set of triples.
Also, the author has created the most popular client-side database in the Clojure community (Datascript) which happens to be for making knowledge graphs, so I'm sure he's familiar with it.
To get rid of pointers, you can add
tonsky.me##div.pointers
to your uBlock Origin's custom filters list. Or just click the 'Reader view' button at the top of Firefox.
people are always hating on the cursors, i think they're fun
I only just noticed that they are (appear to be?) different depending on the OS of each reader.
I'm pretty sure I'm one of the only people in the world who has actually built an app that works in this exact way, and shipped it, and supported it. It was called Lighthouse and it was for organizing crowdfunds using Bitcoin smart contracts, so it didn't only have the fun of syncing state via Dropbox+friends but also via a P2P network.
Here's what I learned by doing that:
1. Firstly - and this is kinda obvious but often left unarticulated - this pattern more or less requires a desktop app. Most developers no longer have any experience of making these. In particular, distribution is harder than on the web. That experience is what eventually inspired me to make Conveyor, my current product, which makes deploying desktop apps waaaaay easier (see my bio for a link) and in particular lets you have web style updates (we call them "aggressive updates"), where the app updates synchronously on launch if possible.
2. Why do you need aggressive updates? Because otherwise you have to support the entire version matrix of every version you ever released interacting with every other version. That's very hard to test and keep working. If you can keep your users roughly up to date, it gets a lot simpler and tech debt grows less fast. There are no update engines except the one in Conveyor that offers synchronous updates, and Lighthouse predated Conveyor, so I had to roll my own update engine. Really a PITA.
3. Users didn't understand/like the file sharing pattern. Users don't like anything non-standard that they aren't used to, but they especially didn't like this particular pattern. Top feature request: please make a server. All that server was doing was acting as a little DropBox like thing specialized for this app, but users much preferred it even in the cryptocurrency/blockchain world where everyone pretends to want decentralized apps.
4. It splits your userbase (one reason they don't like it). If some users use DropBox and others use Google Drive and others use OneDrive, well, now everyone needs to have three different drive accounts and apps installed.
5. Users expect to be able to make state changes that are reflected immediately on other people's screens e.g. when working together on the phone. Drive apps aren't optimized for this and often buffer writes for many minutes.
You don't really need this pattern anyway. If you want to make an app that works well then programmer time is your biggest cost, so you need a business model to fund that and at that point you may as well throw in a server too. Lighthouse was funded by a grant so didn't have that issue.
Re: business models. You can't actually just sell people software once and let them use it forever anymore, that's a completely dead business model. It worked OK in a world where people bought software on a CD and upgraded their OS at most every five years. In that world you could sell one version with time-limited support for one year, because the support costs would tend to come right at the start when users were getting set up and learning the app. Plus, the expectation was that if you encountered a bug you just had to suck it up and work around it for a couple of years until you bought the app again.
In a world where everything is constantly changing and regressing in weird ways, and where people will get upset if something breaks and you tell them to purchase the product again, you cannot charge once for a program and let people keep it forever. They won't just shrug and say, oh OK, I upgraded my OS and now my $500 app is broken, guess I need to spend another $300 to upgrade to the latest version. They will demand you maintain a stream of free backported bugfixes forever, which you cannot afford to do. So you have to give them always the latest versions of things, which means a subscription.
Sorry, I know people don't want to hear that, but it's the nature of a world where people know software can be updated instantly and at will. Expectations changed, business models changed to meet them.
In a world where everything is constantly changing and regressing in weird ways
Rather than accept that this is just the way it is, we should try to make a more stable software world for our users. I don't exactly know how to do that, but I know that we definitely won't get there if we don't try.
The thing about local-first syncing options like this is that they mostly do not work on mobile. For example, iPhones cannot handle Dropbox syncing random text files in the background as a regular filesystem for an app to deal with.
Not saying that's not iPhone's fault, but I doubt any of this works on that platform
I've been a happy user of a PWA doing local sync. That said, the data it needs to sync can fit in localStorage.
Not affiliated in any way, but the app is http://projectionlab.com/ and it allows you to choose between JSON import/export, localStorage sync, and server-based sync as desired. Since it has an easy-to-use import/export, syncing with some other cloud provider on iOS is basically just a matter of "saving the file," since iOS lets you do background sync of authorized Files providers.
Even though it's a web app, being able to download the page and then run it entirely offline in a fresh browser window each time built a lot of trust with me, to the point where I mostly run it with localStorage enabled and only occasionally check its online/offline behavior anymore.
Dropbox, OneDrive and others are dangerous because they default to cloud-first. To "save disk space", they upload your files and provide a proxy/placeholder for your actual content.
If something happens to the provider, or they decide they don't like you or your files, your data is gone. Worse than gone, because you still have the empty proxies -- the husks of your files.
I personally know of more than one instance where seemingly innocuous data triggered some automated system at Dropbox and the user was locked out of their files without recourse.
If you're using cloud storage, make *absolutely certain* you have it set to download all files. If your cloud storage exceeds the drive space of a laptop (small businesses, etc), get a cheap dedicated PC and a big drive, then set up at least one dedicated cloud mirror.
Local-first cloud storage is great, but the potential for catastrophic data-loss is not even remotely close to zero.
Some years ago I lost my device, and I experienced firsthand how dependent I was on those cloud services for almost everything, from email to my photographs.
Fast forward, and I did exactly that: I got another cloud provider and started syncing to two additional physical devices, one with two-way sync (remote to device and device to remote) and another local-to-remote only, on top of a local NAS and hard drive.
A bit of an aside, but CRDTs are not always the best approach to solving the local-first distributed consistency problem. For the specific given example of syncing files it might make sense, but I'm starting to see CRDTs used in places they don't need to be.
Where is your ground truth? How collaborative is a given resource? How are merge conflicts (or any overlapping interactions) handled? Depending on your answers, CRDTs might be the wrong tool.
Please don't forget about straightforward replicated state machines. They can be very easy to reason about and scale, although they require bespoke implementations. A centralized server can validate and enforce business logic, solve merge conflicts, etc. Figma uses a centralized server because their ground truth may not be local.[1]
If you try a decentralized state machine approach the implementation is undoubtedly going to be more complex and difficult to maintain. However, depending on your data and interaction patterns, they still might be the better choice over CRDTs.
It could be argued that even for this example, two local-first clients editing the same file should not be automatically merged with a CRDT. One could make the case that the slower client should rename their file (fork it), merge any conflicts, or overwrite the file altogether. A centralized server could enforce these rules and further propagate state changes after resolution.
[1] https://www.figma.com/blog/how-figmas-multiplayer-technology...
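To make the state-machine alternative concrete, here's a minimal sketch (hypothetical types, and definitely not Figma's actual implementation): clients send commands, the server validates them against business rules, assigns a single global order, and clients replay the event log.

    // Minimal server-reconciled replicated state machine (hypothetical types).
    type Command =
      | { kind: "create"; fileId: string; name: string }
      | { kind: "rename"; fileId: string; newName: string }
      | { kind: "delete"; fileId: string };

    interface Event { seq: number; cmd: Command }
    interface FileRecord { id: string; name: string; deleted: boolean }

    class AuthoritativeServer {
      private seq = 0;
      private files = new Map<string, FileRecord>();
      private log: Event[] = [];

      // Validate, order, and apply a command; returns the event or null if rejected.
      submit(cmd: Command): Event | null {
        const existing = this.files.get(cmd.fileId);
        if (cmd.kind === "create") {
          if (existing) return null;                              // no duplicate ids
        } else {
          if (!existing || existing.deleted) return null;         // must target a live file
          if (cmd.kind === "rename" && cmd.newName.trim() === "") return null;
        }
        const event: Event = { seq: ++this.seq, cmd };
        this.apply(event);
        this.log.push(event);                                     // broadcast to clients
        return event;
      }

      private apply(e: Event): void {
        const c = e.cmd;
        if (c.kind === "create") this.files.set(c.fileId, { id: c.fileId, name: c.name, deleted: false });
        else if (c.kind === "rename") this.files.get(c.fileId)!.name = c.newName;
        else this.files.get(c.fileId)!.deleted = true;
      }

      // Clients that were offline catch up by replaying everything after their last seq.
      eventsSince(lastSeq: number): Event[] {
        return this.log.filter(e => e.seq > lastSeq);
      }
    }

The point is that conflict resolution and validation live in one ordinary, testable place, instead of being baked into the data structure itself.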
Matthew Weidner's blog post has been enlightening [1]. When a centralised server is involved, CRDTs and OT are merely optimisations over server reconciliation, so yes, CRDTs aren't always the best approach. They are a compelling approach for text editing, though.
[1] https://mattweidner.com/2024/06/04/server-architectures.html
In 2016, I built a PWA that can synchronize using two different backends: AWS, if the user doesn't care where the data is saved, or WebDAV (in my case, a Nextcloud instance). Sadly, I built it in prototype style and didn't take the time to fix/rebuild things properly.
But I have used this app every week since, and one of the lessons is that operations-based files grow pretty quickly. If you want to keep sync times short and bandwidth usage to a minimum, you have to consider how you keep read and write times to a minimum. I use localStorage for the client-side copy, and reaching the 5 MB quota isn't that hard either. These things can be solved, but you have to consider them during the design phase.
So yes, it's cool stuff, but the story isn't over with using automerge and op-based files.
localStorage is pretty bad: we saw lost writes under contention from different tabs in all browsers and stopped using it for write-path user data in 2019. IndexedDB is annoying but more trustworthy.
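For reference, a minimal promise wrapper over IndexedDB, roughly the shape of what we moved to (database and store names are made up):

    // Promise-wrapped IndexedDB put/get for write-path user data (error handling trimmed).
    function openDb(): Promise<IDBDatabase> {
      return new Promise((resolve, reject) => {
        const req = indexedDB.open("app-data", 1);
        req.onupgradeneeded = () => req.result.createObjectStore("docs");
        req.onsuccess = () => resolve(req.result);
        req.onerror = () => reject(req.error);
      });
    }

    async function putDoc(key: string, value: unknown): Promise<void> {
      const db = await openDb();
      return new Promise((resolve, reject) => {
        const tx = db.transaction("docs", "readwrite");
        tx.objectStore("docs").put(value, key);
        tx.oncomplete = () => resolve();     // writes go through a transaction,
        tx.onerror = () => reject(tx.error); // not a racy read-modify-write on a shared string
      });
    }

    async function getDoc<T>(key: string): Promise<T | undefined> {
      const db = await openDb();
      return new Promise((resolve, reject) => {
        const req = db.transaction("docs").objectStore("docs").get(key);
        req.onsuccess = () => resolve(req.result as T | undefined);
        req.onerror = () => reject(req.error);
      });
    }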
With op sync you can compact & GC old ops past some temporal or data size threshold. Say, if the file reaches $LIMIT mb, compact and drop old ops past $OLDEST_OP_DATE when the limit is hit.
If you receive edits from before the cutoff, fork the file since you can’t merge them without conflict.
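Something like this, where the op shape and thresholds are made up and the real thing depends on your op format:

    // Sketch of the compact-and-fork policy described above.
    interface Op { timestamp: number; path: string; value: unknown }
    interface DocFile { snapshot: Record<string, unknown>; snapshotCutoff: number; ops: Op[] }

    const LIMIT_BYTES = 5 * 1024 * 1024;          // compact when the file gets this big
    const KEEP_WINDOW_MS = 30 * 24 * 3600 * 1000; // keep roughly a month of ops

    function applyOp(state: Record<string, unknown>, op: Op): Record<string, unknown> {
      return { ...state, [op.path]: op.value };   // trivial per-key last-write-wins register
    }

    function maybeCompact(file: DocFile, now: number): DocFile {
      if (JSON.stringify(file).length < LIMIT_BYTES) return file;
      const cutoff = now - KEEP_WINDOW_MS;
      const old = file.ops.filter(op => op.timestamp < cutoff);
      const recent = file.ops.filter(op => op.timestamp >= cutoff);
      return {
        snapshot: old.reduce(applyOp, file.snapshot),  // fold old ops into the snapshot
        snapshotCutoff: cutoff,
        ops: recent,
      };
    }

    // Incoming edits from before the cutoff can't be merged losslessly anymore: fork.
    function integrate(file: DocFile, incoming: Op[]): { file: DocFile; fork?: Op[] } {
      const stale = incoming.filter(op => op.timestamp < file.snapshotCutoff);
      const mergeable = incoming.filter(op => op.timestamp >= file.snapshotCutoff);
      const merged = { ...file, ops: [...file.ops, ...mergeable] };
      return stale.length > 0 ? { file: merged, fork: stale } : { file: merged };
    }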
Absolutely love the cursors.
100% on board with this.
It's vexing how many tools just assume always-on connectivity. I don't want a tasks-and-notes tool that I need to run in a browser. I want that data local, and I may want to sync it, but it should work fine (other than sync) without the Internet.
This is also true for virtually every other data tool I use.
PouchDB is a great local-first DB with optional sync for JavaScript: https://pouchdb.com/
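A minimal sketch of the usual pattern (remote URL and doc shape are placeholders): write to the local database, and live-sync to any CouchDB-compatible endpoint when one is configured.

    import PouchDB from "pouchdb";

    // Local-first: everything is written to the in-browser (or on-disk) database...
    const local = new PouchDB("notes");

    // ...and optionally replicated to a CouchDB-compatible endpoint (placeholder URL).
    const remote = new PouchDB("https://couch.example.com/notes");

    // Writes work offline; the _id doubles as a natural sort key.
    local.put({ _id: new Date().toISOString(), text: "works without a network" })
      .catch(err => console.error("local write failed", err));

    // Two-way, continuous sync that retries when connectivity comes back.
    local
      .sync(remote, { live: true, retry: true })
      .on("change", info => console.log("synced", info.direction))
      .on("error", err => console.error("sync error", err));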
A universal sync engine could be "files as zipped repositories."
A repository as a file is self-contained, tracks changes by itself, and is, therefore, free from vendor lock-in. Here is my draft RFC https://docs.google.com/document/d/1sma0kYRlmr4TavZGa4EFiNZA...
Along similar lines of "just use your preferred cloud-based file-syncing solution", see: https://github.com/filipesilva/fdb - the author spoke about it recently [0]. The neat thing about this general approach is that it pushes all multi-user permissions problems to the file-syncing service, using the regular directory-level ACLs and UX.
[0] "FDB - a reactive database environment for your files" https://www.youtube.com/watch?v=EvAFEC6n7NI
I’ve been pondering doing something like this with SQLite. The primary db is local/embedded on the user’s machine and use something like https://github.com/rqlite/rqlite to sync on the backend.
It also means it would be fairly trivial to allow users/orgs to host their own “backend” as well.
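The push side might look roughly like this, going by rqlite's HTTP API with parameterized statements (the table, row shape, and conflict strategy are placeholders, and the local SQLite handling is elided):

    // Replay locally committed rows to an rqlite backend over HTTP.
    interface NoteRow { id: string; body: string; updated_at: number }

    async function pushToBackend(baseUrl: string, rows: NoteRow[]): Promise<void> {
      // rqlite accepts a JSON array of statements; each parameterized statement
      // is itself an array of [sql, param1, param2, ...].
      const statements = rows.map(r => [
        "INSERT OR REPLACE INTO notes(id, body, updated_at) VALUES(?, ?, ?)",
        r.id,
        r.body,
        r.updated_at,
      ]);

      const res = await fetch(`${baseUrl}/db/execute`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(statements),
      });
      if (!res.ok) throw new Error(`rqlite push failed: ${res.status}`);
    }

    // Self-hosting then just means pointing baseUrl at your own node, e.g.
    // await pushToBackend("http://localhost:4001", pendingRows);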
With this topic I think there should be a bigger thought framework at play. What about file formats? User data and settings import/export? Telemetry (including the useful kind)? How should monetization/pro features be added? There are good answers to these, but the views are scattered. The rallying cries are too narrowly scoped: local-first, self-hosted, open source, "fair software". The software industry is in need of a new GNU Manifesto.
What is going on with the multiple stray mouse cursors? The site scrolls with a considerable lag and the mouse cursors are outright annoying.
For someone who always complains about design choices, it's quite ironic that he ended up putting mouse cursors flying around, obstructing your view and constantly distracting. At the Blacksmith's House...
People are always hating on the cursors; I think it's fun.
I've been dreaming of Apple Notes and Obsidian doing what the author suggests. The approach seems similar to Delta Lake's consistency model, which uses object storage like S3 yet allows concurrent writers and readers: https://jack-vanlightly.com/analyses/2024/4/29/understanding....
But file syncing is a “dumb” protocol. You can’t “hook” into sync events, or update notifications, or conflict resolution. There isn’t much API; you just save files and they get synced. In case of conflict, best case, you get two files. Worst — you get only one :)
Sync services haven't evolved much. I guess, a service that would provide lower APIs and different data structures (CRDTs, etc.) would be a hacker's dream. Also, E2EE would be nice.
And if they closed the shop, I would have all the files on my devices.
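In practice you end up writing the "hook" yourself: scan the synced folder for the provider's conflict siblings (e.g. Dropbox-style "conflicted copy" files) and fold them back in. A rough sketch, where the naming pattern and the merge function are assumptions, not a stable contract:

    import { readdirSync, readFileSync, unlinkSync, writeFileSync } from "node:fs";
    import { join } from "node:path";

    const CONFLICT_MARKER = /\(.*conflicted copy.*\)/i;

    function mergeStates(a: Record<string, unknown>, b: Record<string, unknown>) {
      return { ...a, ...b };   // placeholder: a real app would use a proper merge or CRDT
    }

    export function resolveConflicts(dir: string, mainFile: string): void {
      const mainPath = join(dir, mainFile);
      let state = JSON.parse(readFileSync(mainPath, "utf8"));

      for (const name of readdirSync(dir)) {
        if (!CONFLICT_MARKER.test(name)) continue;
        const conflictPath = join(dir, name);
        state = mergeStates(state, JSON.parse(readFileSync(conflictPath, "utf8")));
        unlinkSync(conflictPath);            // fold the copy back in and delete it
      }
      writeFileSync(mainPath, JSON.stringify(state, null, 2));
    }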
Using the theory of patches would better complement the current approach. Integrating an SCM such as https://pijul.org, or at least the underlying tech, would allow for better conflict resolution. Transferring patches should also allow for more efficient use of I/O.
One improvement of the first "super-naive" approach is to break down the state into a whole hierarchy of files, rather than a single file. This helps reduce (but not eliminate) conflicts when multiple clients are making changes to different parts of the state.
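For example (paths and shapes are made up): one small file per item instead of one big blob, so concurrent edits to different items never touch the same file.

    import { mkdirSync, writeFileSync, readdirSync, readFileSync } from "node:fs";
    import { join } from "node:path";

    // Instead of one big state.json, write one small file per item.
    interface Item { id: string; title: string; done: boolean }

    export function saveItem(root: string, item: Item): void {
      mkdirSync(join(root, "items"), { recursive: true });
      writeFileSync(join(root, "items", `${item.id}.json`), JSON.stringify(item, null, 2));
    }

    export function loadAll(root: string): Item[] {
      return readdirSync(join(root, "items"))
        .filter(f => f.endsWith(".json"))
        .map(f => JSON.parse(readFileSync(join(root, "items", f), "utf8")) as Item);
    }

Two clients editing different items then produce changes the sync service never has to reconcile; only edits to the same item can still conflict.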
There is another important and too often ignored issue: software availability.
Let's say that one day Beancount (my preferred personal finance software) disappears. So far I could switch to Ledger or hledger; the switch would demand a bit of rg/sed work, but it's doable. If, say, Firefly disappeared, I would still have my data, but migrating it to something else would be a nightmare. Of course such events are slow: if the upstream suddenly disappears, the local software still works, but after some time it will break due to environmental changes around it.
With classic FLOSS tools that's a limited problem: the tools are simple, without many dependencies, and they are normally developed by a large, spread-out community. Modern tools tend to be the opposite: a gazillion deps, often in https://xkcd.com/2347/ mode.
My digital life is almost entirely in Emacs. The chances of Emacs disappearing are objectively low, and even though it has a very big codebase, there aren't many easy-to-break deps. BUT if I go the modern path, say, instead of org-attaching most of my files I put them in Paperless and use that instead of org-mode note links, with Dokuwiki or something else, I get many more chances of something breaking, and even though I own everything, my workflow ceases to exist quickly. Recovery would be VERY hard. Yes, Paperless in the end stores files on a filesystem and I can browse them manually, and Zim uses essentially the same markup as Dokuwiki so I could import the wiki, but all the links would be broken and there is no quick text-tweaking I can apply to reconstruct links to the filesystem. With org-attach I can, even though it uses a cache-like tree that isn't really human-readable.
Anyway, to have personal guarantees of ownership of our digital life, local-first and sync are not the only main points. The corollary is that we need the old desktop model, "an OS like a single program, indefinitely extensible", to be safe: it's more fragile at the small-potatoes level, but much more resilient in the long run.
Both "everything in the cloud" and "everything local" have their obvious technical advantages, and I think they are mostly well understood. What really drives the swing of the pendulum are the business incentives.
Is the goal to sell mainframes? Then tell customers than thin clients powered by a mainframe allow for easy collaboration, centralized backups and administration, and lower total cost of ownership.
Do you want recurring SaaS revenue? Then tell customers that they don't want the hassle of maintaining a complicated server architecture, that security updates mean servers need constant maintenance, and that integrating with many 3rd party SaaS apps makes cloud hosting the logical choice.
We're currently working on a Local First (and E2EE) app that syncs with CRDTs. The server has been reduced to a single Go executable that more or less broadcasts the mutation messages to the different clients when they come online. The tech is very cool, and it's what we think makes the most sense for the user. But what we've also realized is that by architecting our software like this, we have torpedoed our business model. Nobody is going to pay $25 per seat per month when it's obvious that the app runs locally and not that much is happening on the server side.
Local First, Forever is good for the user. Open data formats are good for the user. Being able to self-host is good for the user. But I suspect it will be very difficult to make software like this profitably. Adobe's stock went 20x after they adopted a per seat subscription model. This Local First trend, if it is here to stay (and I hope it will be) might destroy a lot of SaaS business models.
We are going to reinvent an ad hoc, informally-specified, bug-ridden, slow implementation of half of Usenet.
How well does Obsidian Sync's conflict resolution work compared to Dropbox? Dropbox now supports folder selections for 3rd party apps via the iOS Files app [1], and I wonder how well that stacks up against Obsidian's native sync.
I’ll go against the grain and say that local first isn’t really necessary for most apps and spending time on it is a distraction from presumably more fundamental product problems.
Talking about such things is like catnip on here though.
(this should probably be a post)
CRDTs and local-first are ideas that have been perpetually in the hype cycle for the last decade or so, starting around the time Riak CRDTs became a thing and continuing all the way to today.
Niki's post is a perfect illustration: CRDTs offer this "magical" experience that seems perfectly good until you try building a product with them. Then it becomes a nightmare of tradeoffs:
- state-based or operation-based? do you trim state? how?
- is it truly a CRDT (no conflicts possible), a structure with explicit conflict detection, or last-writer-wins/arbitrary choice in a trench coat that will always lose data? Case in point: Automerge uses pseudo-random conflict resolution, so Niki's solution will drop data if it's added without a direct causal link between the edits (see the sketch after this list). To learn this, you have to go to the "under the hood" section in the Automerge docs and read about the merge rules very attentively. It might be acceptable for a particular use case, but very few people would even read that far!
- what is the worst case complexity? Case in point: yjs offers an interface that looks very much like JSON, but "arrays" are actually linked lists underneath, which makes it easy to accidentally become quadratic.
- how do you surface conflicts and/or lost data to the user in the interface? What are the user expectations?
- how do you add/remove replication nodes? What if they're offline at the time of removal? What if they come online after getting removed?
- what's user experience like for nodes with spotty connection and relatively constrained resources, like mobile phones? Do they have to sync everything after coming online before being able to commit changes?
- what's the authoritative node for side effects like email or push notifications?
- how do you handle data migrations as software gets updated? What about two nodes having wildly different software versions?
- how should search work on constrained devices, unless every device has the full copy of the entire state?
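To make the second bullet concrete, here's roughly what that Automerge behaviour looks like, going from the documented merge rules (so treat the specifics as approximate): concurrent writes to the same key merge without error, one value wins deterministically, and the losers are only visible if you explicitly ask for conflicts.

    import * as Automerge from "@automerge/automerge";

    type Doc = { title?: string };

    let a = Automerge.change(Automerge.init<Doc>(), d => { d.title = "draft"; });
    let b = Automerge.merge(Automerge.init<Doc>(), a);   // second replica, same state

    a = Automerge.change(a, d => { d.title = "Alice's title"; });
    b = Automerge.change(b, d => { d.title = "Bob's title"; });

    const merged = Automerge.merge(a, b);
    console.log(merged.title);                            // one of the two, picked for you

    // The losing write is not surfaced unless you go looking for it:
    console.log(Automerge.getConflicts(merged, "title")); // candidate values, keyed by op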
Those tradeoffs infect the entire system, top to bottom, from basic data structures (a CRDT "array" can be many different things with different behaviour) to storage to auth to networking to UI. Because of that, they can't be abstracted away — or more precisely, they can be pretend-abstracted for marketing purposes, until the reality of the problem becomes apparent in production.
From Muse [1] to Linear [2], everyone eventually hits the problems above and has to either abandon features (no need to have an authoritative email log if there are no email notifications), subset data and gradually move from local first to very elaborate caching, or introduce federation of some sort that gravitates towards centralisation anyway (very few people want to run their own persistent nodes).
I think this complexity, essential for local-first in practice, is important to contextualise both Niki's post and the original talk (which mostly brushed over it).
[1]: https://museapp.com/podcast/78-local-first-one-year-later/
I love the local-first design, but you have to understand that conflicts are inevitable. With local-first, you choose Availability and Partition tolerance over Consistency, and slapping a CRDT on it does not solve every consistency problem. Think Git merge conflicts: is there an algorithm to resolve them every time?
That said, I like the abstractions of CRDTs, and libs like Automerge can solve most of the problems. If you must handle all types of files, just be prepared to ask the user to resolve some conflicts by hand.
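A sketch of that "by hand" fallback (the types are hypothetical): try a three-way merge and only involve the user when both sides changed the same thing, Git-style.

    interface Version { base: string; mine: string; theirs: string }

    type MergeResult =
      | { kind: "merged"; value: string }
      | { kind: "conflict"; mine: string; theirs: string };

    function threeWayMerge(v: Version): MergeResult {
      if (v.mine === v.theirs) return { kind: "merged", value: v.mine };
      if (v.mine === v.base) return { kind: "merged", value: v.theirs };   // only they changed it
      if (v.theirs === v.base) return { kind: "merged", value: v.mine };   // only we changed it
      return { kind: "conflict", mine: v.mine, theirs: v.theirs };         // ask the user
    }

    // The UI layer decides what "ask the user" means: a dialog, a forked file, etc.
    const result = threeWayMerge({ base: "todo", mine: "todo!", theirs: "TODO" });
    if (result.kind === "conflict") {
      console.log(`Keep yours ("${result.mine}") or theirs ("${result.theirs}")?`);
    }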
Reading this on phone. Disappointed not to see a dozen other fingers scrolling the page :)
"syncing doesn’t work without a server"
I don't think this is true. Granted, there are some big challenges to transferring data between devices without a central server, but there are several projects like https://dxos.org/ which use p2p, and there's also https://ditto.live/ which uses Bluetooth/WiFi Direct for cases where all users will be in the same room or on the same local network (imagine wanting to play chess with a friend sitting in a different row on a plane without WiFi - I was in this situation recently and was pretty surprised that I couldn't find anything on the App Store that could do this!)
Of course, most of the time it's better to have a server, because p2p still has a lot of difficulties and often having a central 'source of truth' is worth the costs that come with a server-based architecture. So IMO things like https://electric-sql.com/ or https://www.triplit.dev/ or the upcoming https://zerosync.dev/ will be far better choices for anyone wanting to build a local-first app used by many users.
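For the p2p case, a minimal Yjs + y-webrtc sketch (room name is a placeholder; as far as I know you still need a reachable signaling server to introduce peers, so this covers the same-network case but not the truly offline on-a-plane scenario, which is where something like Ditto comes in):

    import * as Y from "yjs";
    import { WebrtcProvider } from "y-webrtc";

    const doc = new Y.Doc();
    // Peers joining the same room sync directly over WebRTC; data doesn't go through a server.
    const provider = new WebrtcProvider("chess-with-a-friend", doc);

    const moves = doc.getArray<string>("moves");
    moves.observe(() => {
      console.log("current moves:", moves.toArray());  // re-render the board here
    });

    // Local edits apply immediately and replicate whenever a peer is reachable.
    moves.push(["e4"]);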
Isn't git the best example of a local-first approach? So why reinvent the wheel?
sync services are sources of data corruption. local OR remote, not both.
I like what LobeChat does: everything in IndexedDB, with WebRTC for P2P sync.
I think each cursor is the cursor of someone else currently browsing the page.
I've seen this before on an article that hit the HN front page; was it this same blog?
And more importantly, why would anyone in their right mind think this "feature" is a good idea?
It's a personal blog made for fun. Both features made me smile. What's the problem? The kind of attitude you're displaying here just depresses me. Makes me lose faith in humanity.
At the very least it should be toggleable. It's distracting and impedes readability.
Ostensibly, the goal of writing a blog post is for others to read it.
Literally ALL major browsers support reader mode nowadays.
Chrome's "reader mode" sucks.
Advertising company’s browser makes it hard to remove advertising: news at 11.
Don’t use Chrome then.
Reader mode is much much more than just "turn off other people's pointers".
Makes it even better, yes.
I personally don't think it should be toggleable; it's their choice how to set up their site. You either like it or you don't.
But you can easily remove the DOM element that displays the cursors if the content is interesting to you and the cursors are the only thing preventing you from reading it.
In a perfect world it would respect the user's motion preference (maybe it does?)
It does not.
It is.
Use reader mode in your browser.
It's just distracting. I didn't read to the end because it got so annoying. Now I don't give you any shit if you want to do fun shit on your private blog (hell, mine was only accessible via telnet for quite some time), but don't expect people not to be annoyed. :)
Everything is annoying to somebody. Software doesn't have to always be optimized to minimize annoyances. Sometimes it's just fun to optimize it for your own amusement, and a personal blog (which this is) gets a pass from me.
This is just not true, but in any case, I don't get your point. The author has every right to add anything to his blog; it's just that a lot of people will find it annoying and will stop reading, like some people mentioned here.
If the author thinks the cursors are worth the people not reading the actual content, then good for him.
Which part? The second sentence is just my opinion, and you're free to disagree. :)
My point is that somebody will always find reasons for not wanting to read an article. Whether that's because they don't like the color scheme, font, or any other style choice. You can't please everyone with design. I think we can agree that that statement is objectively true.
So given that, a personal blog is the right place to do whatever pleases the author the most, instead of optimizing for metrics that impact engagement, or whatever nonsense KPIs developers are asked to optimize for in their day job. People interested in the content can always choose to consume it however they prefer instead (thankfully, the web is still open in this sense). In this case you can disable JavaScript, use reader mode, a custom stylesheet, or any other barebones browser instead. Considering this article is directed at a technical audience, any of these should work for you.
And by the same token many will find it the signature of a distinct and memorable personality and be more likely to recall it over the long term. If they like adding quirks that create a lasting impression to a subset of people, good for them!
Hard agree. Always prioritize efficiency and standardization over everything else. The era of "personal touch" and "fun" is over.
No it's not. This is a personal blog and you are just being a software Karen in someone's back yard.
The problem for me (not parent) is that it is distracting to the point where I gave up reading the article. I have ADD, and it just made it impossible for me to focus on the content, which is unrelated to whether or not I may find it amusing as an idea.
Edit: and yes, I could've used Reader Mode of course, but I didn't think of it at the time.
For me it's simply that I don't care about the opinions of anyone who thinks this is a fun feature to overlay their text content. On a front-page or experimental sub-page? Sure. On an article? I'm done.
Might make me a killjoy but there's a time and place for everything.
Yeah, it's fun. I chased other people's cursors.
This is the same HN that complains about the demise of the "simple internet". It was filled with things like this!
The problem is that it's annoying. I agree with the first post, I am not going to read this while flies are moving in my screen. Besides, if those are other people's cursors, that's kind of creepy.
If you lose faith in humanity because of this, you should look into it.
The blog looked like something discussing an important topic for the owner and not primarily for entertainment.
People have lost their bearings about if and when stupid visuals ruin their message and derail attention. And they have stopped caring as well. They just want to make something funny-looking; that is the primary motivation: coming up with stupid ideas and pushing them anywhere and everywhere. I believe this is a problematic attitude in the age of pretension. Which made me lose my faith in humanity.
It's fun if I want to have fun on the site. Not when I'm trying to read a technical article. I left half way through.
I think this is the goofiest and most lovely feature. It gave me a weird sense of community presence while reading (and most people keep their cursors off the text anyway).
It gave me the feeling I was being observed, and I didn't like that, so I just skimmed the article and closed it as soon as possible.
First thought: Mouse is dying. Came to read the comments to see what's up when I spotted all the other cursors. Interesting mind fuck.
This being HN, I doubt people are reading this on an iPad during working hours on a Tuesday across half the world.
I kept drawing "hello" hoping for a reply.
I'm sorry we let you down :(
I first saw it on https://dimden.dev, the creator of https://nekoweb.org
The feature is really fun and fitting on dimden's page; it's really distracting on this blog
I thought it was cute. You can pull up inspector and delete the pointers div if it bothers you.
Should have a button to disable it. Maybe next to the... useless dark mode button? Funny gimmicks à lá 2004.
"à la", there is no diacritic on "la". "à" means (roughtly) "to" and there is a diacritical mark to distinguish it from "a" as in "il a", "he has". But "la" means "the", like "le" but female, and there is no need to distinguish it from any other homograph.
s/female/feminine
You’re in luck! Your browser has a button to disable them and it will also activate Dark Mode!
I mean the dark mode button was genuinely funny
It's cute for 2-3 seconds. Not worth the time it takes to delete the pointers manually.
It takes less than a second to press reader mode button.
Easier to hit the back or close button actually.
I once thought Clippy was cute too...
English is an amazing language. Chances are slim == Fat Chance.
Isn’t this an example of sarcasm, rather than an example of two contradictory colloquialisms?
Is that a rhetorical question?
Well, isn't it by definition that the likelihood of a Fat Chance is slim?
This is insanely annoying. It seems the site is infested with bugs.
Bug? It's intentional behaviour
I mean bugs in the literal sense of insects crawling on the screen.
Yeah, I realized after I wrote my comment
I really dislike this. Sure...it's fun, but where did I "sign up" to allow others to see where my mouse pointer is? How do I disable it? What else, from my machine, is being captured and broadcast?
Unless you’ve selectively disabled features, then everything you do and every metric your browser captures.
Did you make the window full screen? Tattled.
Did you change default font or size? Tattled.
Did you visit any site with tracking beacons? Tattled.
Do you use a laptop or phone? Then sensor data and eye tracking are on the table for tattling.
Does your machine have WiFi on? Did you expressly turn off device discovery for connected networks? No? Tattled.
The only way to remain private is not to have family, friends, coworkers, or neighbors.
The weakest security links are those not under your control.
Good to be reminded once in a while that literally all your moves are tracked.
Yeah this is the most distracting reading experience I've ever encountered. Ironically it's very antithetical to the local-first philosophy.
I think you're reff'ing this article: https://blog.partykit.io/posts/using-vectorize-to-build-sear...
Yeah, at the time I thought "who the hell thought this would be a good idea?"
I delete the div.pointers element from the DOM to read Tonsky's site when it hits HN:
document.querySelector('.pointers').remove();
For a second I thought someone had somehow remote-desktopped into my computer and I could see their pointer.
I'm not sure why but it made me feel nauseous. Did not like. I couldn't continue to read either.