
LiveView Is Best with Svelte

bogwog
26 replies
4d4h

So instead of managing state on the client, you manage state on the client and the server? That doesn't seem like an improvement, even if it saves you from having to build yet another API.

pjmlp
14 replies
4d3h

It is just a new generation rediscovering ColdFusion, Web Forms, JSF, PHP, Spring, Rails...

dns_snek
10 replies
4d3h

Could you elaborate? I don't see many similarities between those and LiveView.

The difference between traditional technologies that render HTML server-side and LiveView is a persistent connection to the server, which allows it to re-render templates in response to server-side events and patch client-side HTML without writing any JavaScript.

pjmlp
9 replies
4d2h

Mostly based on WebSockets and Server Push.

As one example of such approaches, .NET has had SignalR since 2013.

And WebForms has had designer tooling since 2001, and then there was the ASP.NET AJAX Control Toolkit.

pdimitar
7 replies
4d2h

But it doesn't have the BEAM VM. The runtime matters a lot, and LiveView wouldn't work as well if it wasn't running under the BEAM.

jddj
3 replies
3d21h

This isn't particularly convincing, as the pattern -- loosely: keep a websocket open and keep the formerly clientside app state serverside while passing events and pushing diffs -- seems to appear successfully outside of the BEAM vm. For example, blazor serverside (.net) and laravel livewire (php).

I haven't checked but I'm sure there will be a python one too

pdimitar
2 replies
3d21h

You are not convinced because you don't know the details. But I am not paid to advocate or to even try to convince. You've been informed now -- from here on it's on you as to whether to remain biased or to expand your horizons.

jddj
1 replies
3d20h

I've written elixir apps, also read Armstrong's thesis, etc etc. Beam is undoubtedly an excellent piece of engineering.

I don't want to engage more than that, as the horizons comment seems bizarre in the context of the thread

pdimitar
0 replies
3d19h

No need to get defensive. Your comment came across as curmudgeon-y and biased, I called you out for it, you could have agreed to disagree which I would respect. But no, you had to go out of your way to try and strike.

Well, OK. But HN is not the place for that and I refuse to engage further.

pjmlp
2 replies
4d2h

Tomato, Tomato, it has another VM.

pdimitar
0 replies
4d1h

It's your right to delude yourself that the runtime does not matter. That's not a discussion I am willing to have though, especially when exactly 100% of my 22+ years of programming experience have demonstrated, time and again, that the runtime inevitably ends up making a lot of difference (sometimes all the difference even).

Or, when you don't have a runtime -- as is the case with Rust, kinda sorta I mean because technically `tokio` can be classified as a runtime -- then you rely on a stricter compiler.

Both strategies work pretty well.

I am not shitting on C# / .NET, they are solid as hell. But some things the BEAM VM just does better and everyone who worked with old-school VMs (in my case the JVM) and the BEAM VM can tell you that.

But again, you do you, think what you will. ¯\_(ツ)_/¯

OJFord
0 replies
3d22h

This saying really doesn't work in writing, I just read ..err.. 'tomato, tomato'

dns_snek
0 replies
4d1h

Not quite, I'd say that SignalR is comparable to Phoenix Channels. It's a communication protocol that can use different transport protocols, Websockets being one of them.

LiveView builds on top of Phoenix Sockets. When the page is loaded for the first time, it starts a lightweight server-side process on the BEAM VM. This long-lived process renders the HTML template (which can consist of multiple SPA-style components) and keeps all of the relevant "props" (called assigns) in-memory.

Every time a new event is received, either client-side (e.g. button click) or server-side (typically via a PubSub subscription), that long-running process will update its assigns (props) to reflect the new state. The framework then intelligently re-renders parts of the HTML template which depend on those modified assigns, and sends a minimal HTML diff to the client.

All of this can be done without writing any custom JS. Basic client-side events are usually set up with special HTML attributes like `phx-click="my_custom_event"` and automatically wired up by the LiveView JS bundle to be received by that long-running process.
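To make that concrete, here's a minimal sketch of the server side (module, event, and assign names are made up; the `~H` block is a HEEx template):

    defmodule MyAppWeb.CounterLive do
      use Phoenix.LiveView

      # Runs in the long-lived process; assigns are the server-held "props".
      def mount(_params, _session, socket) do
        {:ok, assign(socket, count: 0)}
      end

      # Called when the client sends the phx-click event over the socket.
      def handle_event("my_custom_event", _params, socket) do
        {:noreply, update(socket, :count, &(&1 + 1))}
      end

      # Only the parts of the template that depend on @count get re-rendered,
      # and a minimal diff is pushed down to the client.
      def render(assigns) do
        ~H"""
        <button phx-click="my_custom_event">Clicked <%= @count %> times</button>
        """
      end
    end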

ramon156
1 replies
4d3h

What do PHP, Spring, and Rails have to do with this? I'm confused haha

pjmlp
0 replies
4d2h

Keeping sessions on both sides.

ahallock
0 replies
3d22h

Sounds pretty reductive and dismissive. Have you actually used LiveView and compared the DX? I've used a few of the technologies you listed, and LiveView is a different animal.

riskable
2 replies
4d2h

It's never that simple. In web applications there are always these types of state:

    * State that the client needs to keep track of
    * State that the server needs to keep track of

Then on top of those there are two more kinds of state that overlap but aren't quite the same thing:

    * State that only needs to exist in memory (i.e. transient)
    * State that needs to persist between sessions

There's a seemingly infinite number of ways to manage these things, and because "there's no correct way to do anything in JavaScript" you either use a framework's chosen way to deal with them or you do it on an ad-hoc basis (aka "chaos" haha).

In the last sophisticated SPA I wrote, I had it perform a sync whenever the client loaded the page. Every local state or asset had a datetime-based hash associated with it, and if it didn't match what was on the server, the server would send down an updated version (of whatever that thing was, whether simple variables, a huge JSON object, or whole images/audio blobs).

Whenever the client did something that required a change in state on the server, it would send an update of that state over a WebSocket (99% of the app was WebSocket stuff). I didn't use any sort of sophisticated framework or pattern: if I was writing the code and thought "the server needs to keep track of this," I'd have it send a message to the server with the new state, and it would be up to the server whether or not that state should be synchronized on page load.

IMHO, that's about as simple a mechanism as you can get for managing this sort of thing. WebSockets are a godsend for managing state.
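Since this thread is Phoenix-flavoured, here's the same idea sketched as a hypothetical Phoenix channel handler (my app wasn't Elixir, and `MyApp.State` is made up); the hash comparison on load is the whole trick:

    defmodule MyAppWeb.SyncChannel do
      use Phoenix.Channel

      def join("sync:" <> _rest, _params, socket), do: {:ok, socket}

      # On page load the client sends the hash it has for each piece of state.
      def handle_in("sync", %{"key" => key, "hash" => client_hash}, socket) do
        case MyApp.State.fetch(key) do
          # Hashes match: the client's copy is current, nothing to push.
          {^client_hash, _value} ->
            {:reply, {:ok, %{up_to_date: true}}, socket}

          # Hashes differ: send the fresh value (and its new hash) down.
          {server_hash, value} ->
            {:reply, {:ok, %{up_to_date: false, hash: server_hash, value: value}}, socket}
        end
      end
    end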

uxcolumbo
1 replies
3d12h

Interesting. What kind of app was it? How did you decide whether the server needed to keep track of it? I’m assuming that was predetermined? What about new requirements that would need to be tracked by the server? Could the app deal with this dynamically without code changes?

riskable
0 replies
2d1h

It was Gate One: https://github.com/liftoff/GateOne

(Note: Company no longer exists so it is unmaintained, fork at will =)

It's been so long since I've looked at the code but I did write excellent documentation, most of which is generated from docstrings. Example:

https://github.com/liftoff/GateOne/blob/master/gateone/core/...

You can read the documentation here:

https://liftoff.github.io/GateOne/

If you look at the bookmarks plugin you can see an example of how I performed this style of client<->server synchronization using an Update Sequence Number (USN):

https://github.com/liftoff/GateOne/blob/6ae1d01f7fe21e2703bd...

After the web client connects to the websocket all the plugin's `init()` methods are called and the bookmarks plugin's javascript file calls `userLoginSync()`:

https://github.com/liftoff/GateOne/blob/6ae1d01f7fe21e2703bd...

It sends the current USN (which is retrieved from `localStorage`) to the server (the routing is quite sophisticated... just know that the message ends up going to the correct function =), and if it's different from the USN on the server, the server will send an updated list of bookmarks to the client, at which point the client will take care of it via its own handler:

https://github.com/liftoff/GateOne/blob/6ae1d01f7fe21e2703bd...

If you examine the bookmarks.js from top to bottom (it's not THAT long) you should get the gist of where and how various states are stored. That plugin mostly deals with stuff in `localStorage` but if you poke around in Gate One you'll see vastly more sophisticated state synchronization and state update routines at work.

matlin
1 replies
4d1h

Yeah that is the crux of most modern three-tiered architectures (client<->server<->database) these days. We're working on simplifying this with a new concept: a full-stack database.

If you run an identical query engine on client and server and then sync data between them, the client can just write a normal query locally and get an optimistic response instantly from a local cache, then later a response from the authoritative server. Frankly, this is where most highly interactive apps (like WhatsApp, Linear, Figma, etc.) end up anyway with bespoke solutions; we're just making it general purpose.

bogwog
0 replies
3d20h

That sounds a lot like CouchDB and PouchDB. I haven't used the latter, but CouchDB provides a standard REST API (with authentication) out of the box. You'll probably need to add a custom API for more complex stuff, integrating with other services, etc. But all the boring CRUD stuff is already built into the database.

But more stuff like this is always welcome!

realusername
0 replies
4d2h

You already manage two kinds of state when you are building a React app:

- State coming from the server as a result of user actions

- Local state which isn't intended to be shared with the server, usually UI stuff.

open592
0 replies
4d1h

This is no different than any other optimization performed in any other software ecosystem. There is no reason to do this unless you have to. And the reason you would have to is because of performance and user experience.

When your web application requires the following:

- Large amount of user interaction (requires client side JavaScript)

- Large amount of experimentation (bundle is not static, changes per request)

You are going to want to split up the logic on the server and client to reduce the amount of JavaScript you're sending to the client. Otherwise you have a <N>MB JavaScript bundle of all the permutations your application could encounter. This may be fine for something like a web version of Photoshop, where the user understands that an initial load time is required. But for something like Stripe, Gmail, etc., where the user expects the application to load instantly, you want to reduce the initial latency of the page.

You can move everything to the server, but as GitHub experienced, you then encounter a problem where user interaction on the page suffers because actions that are expected to be instant instead require a round trip to the server (speed of light and server distribution then become an issue).

You can lazy load bundles with something like async import, but then you run into the waterfall problem, where the lazy-loaded JavaScript then needs to make a request to fetch its own data, and you're left making two requests for everything.

If you encounter all these problems then you end up reaching for solutions which make it easier to manage the complex problem of sharing logic/state between the client and the server.

klabb3
0 replies
4d3h

Are you comparing with PWAs? Or what’s the baseline? Because with PWAs you have both too. State is accessed from the client with requests, and you have to manage caching and stale state as well, to ensure performance and avoid drifting out of sync across components, no? Last time I checked out a modern “performant” GraphQL stack, it was horrifyingly complex and full of knobs.

kevinak
0 replies
4d3h

Some things just end up being a better experience fully client-side. Don't go all in on it - just do it when it makes sense.

Another thing I like about this is being able to use Svelte as a templating language rather than HEEx.

akira2501
0 replies
3d23h

> build yet another API.

It appears that's one of their major problems. They're not "building" APIs. They're just slapping "one shots" onto the side of a router whenever the need arises. This speaks to a complete lack of a planning and design phase.

I guess if you want to build something without any plan whatsoever, this might be a way to "improve" that process, but there's a much simpler one that doesn't require your team to become polarized over a framework.

MatthiasPortzel
0 replies
4d3h

Eventually, some form of this paradigm is going to win.

In practice, applications need state on both the client and the server. The server needs the authoritative state information (since the client is untrusted), but the client needs to be able to re-render in response to user interaction without a round-trip.

furyofantares
22 replies
4d2h

A pattern sometimes used in multiplayer video games is there's a bunch of code that is by default run on both the client and the server. The client code runs as a prediction of the server state. When the server state is received it slams the client state.

For games "prediction" is an apt description of this, because the client can make a good guess as to the result of their input, but can't know for sure since they don't know the other players' inputs. But this paradigm can also be used to simply respond immediately to client input while waiting for the official server state - say by enabling/disabling a dropdown, or showing a loading spinner.

There's also plenty of client state that's not run on the server at all. Particle systems, ragdolls - stuff that doesn't need to be exactly the same on all clients and doesn't interact with other player inputs / physics.

If we're gonna have a persistent server connection I don't see a reason this wouldn't work in a reactive paradigm.

POiNTx
9 replies
4d2h

That's what I did with: https://territoriez.io/

It's a clone of https://generals.io/

It's built with LiveSvelte. It doesn't need any predictive features as it's basically a board game and runs at 2 ticks per second. It does use optimistic updates to show game state before it actually updates on the server. The server overrides the game state if they're not in sync.

All game logic is done inside Elixir. To do predictive correctly, you'd need to share logic between the server and the client. Otherwise you're writing your logic twice and that's just a recipe for disaster.

One possible solution which I didn't investigate, but should work, is to write all game logic in gleam (https://gleam.run/). Gleam is compatible with Elixir, AND it also can compile to js, so you could in theory run the same code on the server and the client.

Now this is a big mess to understand; you could say "why not write it all in js and be done with it" and you'd make a very good point, and I'd probably agree. The main advantage you get is that you can use the BEAM and all its very nice features, especially when it comes to real-time distributed systems. It's perfect for multiplayer games, as long as you don't take into account the frontend :)

djbberko42
4 replies
4d1h

As someone who started using Elixir this year (for work) this is really cool. I have had some ideas for some SPAs and games just like these io ones and it'd be great to use Elixir for them. Do you think a top down fps io game would be plausible with this setup? There would need to be at least 60 ticks per second I'd think.

POiNTx
3 replies
4d1h

Definitely possible. Tick rate isn't the problem, Elixir is very performant. Just the predictive elements are an issue if you're working with 2 different languages. I'd go with Gleam from the start and look into compiling to js.

spiderice
2 replies
3d23h

Don't you lose a lot of the niceties of Elixir when switching to Gleam, just because Gleam is a younger project? LiveView would be the big one I'm thinking of. Do you see that as a worthy trade?

POiNTx
1 replies
3d23h

You can still use LiveView/Phoenix/Elixir, but have your game logic be in Gleam. I haven't used it though so I could be wrong here.

A little bit more about it here: https://katafrakt.me/2021/10/13/app-with-elixir-business-log...

You'd call Gleam code like this inside Elixir:

`:game.move(game, player_1, :left)`

And you'd receive an Elixir map `%Game{}` which you can then use in LiveView. If that makes sense.

buzzerbetrayed
0 replies
3d21h

Cool! I'm going to have to look more into Gleam. I saw it hit v1.0.0 on ElixirForum last month, but I figured it was an alternative to Elixir rather than something that played so nicely with it.

rytill
1 replies
4d1h

Hey, about your clone of https://generals.io - why not make it a better designed version of that game instead of just a clone? Perhaps you have plans to. There are things about the original that kind of suck.

1) Spawning closer to the center is just strictly significantly worse; corners are best. It's essentially an auto-lose to spawn anywhere but an edge or corner in an FFA.

2) Getting literally all of someone's armies when you kill them is so good that it pushes players to just try to all-in the first person they meet every single time. There is no way to "fast expand" once you've met another player because of the first factor, and also because 40-50 armies on a neutral castle is very high. I think the "early game" can be much better designed.

3) Perhaps you should be able to change your capital, which could solve problem 1.

4) There are many ways you could keep the simplicity of play, but boost the richness and depth of the actual gameplay. For instance, more chokepoints that would allow actual strategic use of territory and army positioning. Different tiles which have different advantages to owning. Borrowing from something like civ - forests or hills which have a defensive boost when you're inside. Rivers which attacking across is disadvantageous.

Just some feedback from someone who enjoys games like Chess and Starcraft, and thinks the core gameplay loop of generals.io is really fun, but believes that it is seriously lacking in strategic depth.

POiNTx
0 replies
4d

It's a little bit different from generals.io but the points you mention are very valid. I originally started working on it as a showcase of LiveSvelte + I like generals.io a lot and thought it could be improved visually and UX-wise. Then afterwards I started thinking a bit about the game design and what I could improve, but eventually burned out on the project. That was also at a time when I wasn't working, but I am working again now.

Maybe I'll pick it up at some point, but I'm also open to other people working on it if they're interested. I'd be open to partnering up with someone to eventually make it monetized. It's not open source as I wanted to add paid features (cosmetic features, not pay to win).

A solution for most of these issues is a modifier system. I have a basic one in place and wanted to experiment with different modifiers. A modifier can be anything, like prevent spawning in the center, to increasing rewards for capturing cities, to even allow crossing the border of the map into the other side, like pacman (which could address point 1). Then the goal would be to see which modifiers stick with the player base, and make them the default, while still allowing for custom games with different modifiers. Generals has a similar system in place and they sometimes make the modifier the default for the day, which I like a lot.

ElFitz
1 replies
4d1h

> To do predictive correctly, you'd need to share logic between the server and the client.

> One possible solution […] is to write all game logic in gleam […]

Rust, with Uniffi, can also be a good candidate. You’d be targeting WebAssembly.

shirogane86x
0 replies
3d6h

Is there a way to emit WebAssembly with Uniffi? I looked for it before, but I couldn't find it. (I'm using it in a project to share logic between the backend and Kotlin/Swift apps. I wanted to share it with JS for a web frontend, but didn't find any docs mentioning it.)

dkersten
3 replies
4d1h

Isn’t that just optimistic updates? This has been common in client side logic for a long time, no?

furyofantares
2 replies
3d23h

I was more talking about the method to achieve optimistic updates, rather than the concept of optimistic updates in general. That is, having the client and server code be identical where applicable, with client code being run in a prediction context.

I'm not a web expert but the optimistic updates I've seen in web stuff is more like, I'm gonna fetch this url and here's the data I expect back. Nothing wrong with that, but it's achieved in a different way, where the server is all about providing data and the client is all about managing state.

The OP is talking about maintaining a persistent connection to a server which is doing most of the state management. They detail things this does well (makes the server easier to write) and things it does poorly (makes optimistic updates harder) and a solution to the things it does poorly. So I'm drawing a parallel to other systems where state is managed on the server and must be predicted on the client.

mb7733
0 replies
3d22h

Haven't used it in years, but Meteor.js worked like this. Even to the point where it had a client-side database implementation that mirrored (parts of) the server-side database.

It would apply the updates optimistically to the client-side DB, and much of the code that sat between the database could be shared between client and server. Neat stuff, even if overall Meteor wasn't my favourite thing to work with.

dkersten
0 replies
3d12h

The way I’ve always done it is that I have client side event handlers that perform the UI-visible side of the logic client side, but also asynchronously request the server to do it.

In simple cases, that logic simply sets a toggle or whatever. In slightly more advanced cases it might modify something, such as appending to a list. But in some cases it could be performing more complex logic, such as applying a filter to a list. Sometimes this logic is only done client side (if the filtering is view-only, for example) and sometimes both client and server. Basically, if you remove all of the network requests, it would still look like it’s working, more or less.

Of course only logic that is needed for data to be correctly displayed in the UI is needed client side. Performing logic that is only used server side (even if it has later UI effects — only effects that the user expects immediately need local representation) is unnecessary client side.

E.g. even in a game, you might not want to run bot AI speculatively on every client, but rather just the movement prediction, like for other players, since only what's needed for rendering has to run locally.

OJFord
3 replies
3d23h

You need to be smart about this though. I don't want it to look like a live/interactive form has worked, for example by updating derived data on the page, if it's then going to turn out that actually the server had an error or there was network trouble and it all gets undone (or worse, I close the page/navigate away not realising at all).

willsmith72
2 replies
3d21h

you would be in the minority then, most people these days want (and expect) live feedback if the action's result is predictable enough

e.g. throttle your network and upvote a hn comment. you're not sitting there waiting with a spinner while the server responds, it's all in the background.

the hn implementation isn't great though - if the upvote request fails, the optimistic update isn't rolled back, and you have no knowledge that it failed. for hn, who cares, it's just a lost upvote, but for most modern web apps you would show the user that the action failed

OJFord
1 replies
3d21h

If you're (would be) 'sitting there waiting with a spinner' then that endpoint is too slow, regardless of what the frontend does in the meantime.

willsmith72
0 replies
3d21h

depends on the definition of "slow". unless you have servers and databases everywhere (overkill for 99% of apps), your endpoint will probably be >400ms for some people somewhere, enough to feel it as a user.

that's without accounting for patchy reception (in a tunnel?), network blips, server blips (overworked?), etc.

i'm not saying everything should be optimistic, but for something like a hn upvote, i dont care if my public wifi freaked out and took 3 seconds for that 1 request, and i think more people are like that than not

pests
1 replies
3d21h

As a player of a game (Overwatch) that simulates ragdoll physics locally and non-deterministically, I wish they did it correctly.

What ends up happening is someone dies and their body flies off or gets stuck in a hilarious pose... and no one else saw it, nor can you rewatch it in the replays as every client renders it differently.

Unless you catch it with a live recording, it's lost forever. With a sometimes goofy game like Overwatch it's sad knowing no one else is seeing it.

wldcordeiro
0 replies
3d19h

In Counter-Strike I've seen it be an actual issue too, where a player's view is blocked by a body and their teammates don't have the same issue while spectating them.

chrisweekly
0 replies
3d16h

I've always heard of and referred to this pattern as "optimistic updates" - tho my frame of reference is 25y in webdev, w/ almost no exposure to game dev.

btown
0 replies
3d12h

Fighting games dial up these low-latency conflict resolution considerations to 11, and there’s an entire subfield of techniques for writing “netcode” to reconcile real-time event streams.

https://arstechnica.com/gaming/2019/10/explaining-how-fighti... is an incredible walkthrough of this! Discussion: https://news.ycombinator.com/item?id=34399790 and https://news.ycombinator.com/item?id=26289933

When it comes to server persistence, as in an MMO setting, you add I/O bottlenecks into the mix. https://prdeving.wordpress.com/2023/09/29/mmo-architecture-s... is a fascinating read for that end of things. Discussion: https://news.ycombinator.com/item?id=37702632

fizx
14 replies
4d3h

There is a (literal) speed of light limitation with this approach: your server can only be so close to your users.

The next step is to compile your server to WebAssembly and ship it to your clients. You can then use it to optimistically render responses while waiting for the real server to return.

Sounds a little crazy, but we've actually pulled it off for a project, and it's magic.

postalrat
3 replies
4d2h

It sounds over-engineered. Lots of over-engineered stuff gets pulled off for no good reason.

thibaut_barrere
0 replies
3d22h

It's over-engineered unless done properly, in which case it becomes a detail. I wouldn't be surprised to see some automatic conversion of server-side code to front-end by LiveView in the future for events where this behaviour is applicable, actually!

paulgb
0 replies
4d2h

There is a good reason: it becomes much easier to think about state transitions, because optimistic and “verified” updates both follow the same code path.

I've built a turn-based game that worked this way, where essentially every player and the server contains a copy of the same state machine, and the server determines the order of state updates. Like OP said, once you have the framework in place, it's magic.
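The heart of it is just a pure, deterministic reducer that every copy runs; the server's only extra job is fixing the order of events. A rough sketch (Elixir for illustration, made-up names; in practice the client copy is whatever runs in the browser):

    defmodule Game.Rules do
      # Pure and deterministic: the same state folded over the same ordered
      # events produces the same result on the server and on every client.
      def apply_event(state, {:play_card, player, card}) do
        %{state | last_move: {player, card}}
      end
    end

    defmodule Game.Server do
      # The authoritative side only decides the order; then everyone folds
      # the same event list through the same reducer.
      def advance(state, pending_events) do
        pending_events
        |> Enum.sort_by(& &1.received_at)
        |> Enum.reduce(state, fn %{event: event}, acc ->
          Game.Rules.apply_event(acc, event)
        end)
      end
    end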

cosmojg
0 replies
3d4h

Well, in this case, most online multiplayer video games wouldn't work without it!

kevinak
3 replies
4d2h

Why wouldn't you just use a Service Worker for this?

AirMax98
2 replies
4d2h

There's a couple different ways to skin this cat, but WebAssembly is definitely the "coolest". I'd imagine even a Service Worker is overkill compared to just inlining an optimistic response in whatever rest client you're using.

Kind of similar to the content of this article, I wish people were more upfront with their reasoning. Doing something because "it's rad" is definitely fine by me!

kevinak
0 replies
3d22h

I just mutate the data object when using SvelteKit. Blasphemy, I know. But it works, and it works well. :P

ffsm8
0 replies
4d

> WebAssembly is definitely the "coolest"

As always: when in doubt, it was likely because of CV-driven development.

knallfrosch
1 replies
4d2h

Wait, what's the difference to a fat client then?

riskable
0 replies
4d2h

Ease of distribution.

willsmith72
0 replies
3d21h

local-first with a client-side sqlite db sounds way simpler for what seems like the same purpose

tyre
0 replies
4d2h

What about persistence? To me that’s the main purpose of the backend and running a server on the client doesn’t fix that. You still have the network latency of persistence, which is the ultimate server state that should win.

bogwog
0 replies
3d19h

That sounds a lot like the "client side prediction" that modern multiplayer games do.

CSSer
0 replies
4d3h

Wow, that sounds cool! Can you share any more details?

klabb3
13 replies
4d3h

Generally speaking this is the model I’ve always wanted to build apps in. Event-oriented, bidirectional realtime updates with the server, ordered events, local and remote state...

I didn’t know about LiveView and have never used Erlang-family languages, but they’re definitely onto something. The traditional request-response model often causes a lot of subtle problems with consistency and staleness.

A wishful (probably also controversial) thought: if the last decade was about integrating FP concepts into mainstream languages, then I hope the next decade will be oriented around integrating stateful message-oriented (reactive?) programming into the mainstream full stack.

cess11
7 replies
4d2h

You can use asdf to install erlang/BEAM/OTP and elixir, takes a few minutes if you have some previous experience with the tool.

Either way it'll probably take about two hours to have your first rudimentary Phoenix chat application loaded in a browser if you follow some guides and tinker around a bit.

square_usual
6 replies
3d23h

I would strongly recommend using mise instead of asdf. It's a drop-in replacement that is just flat-out better for most people.

karmajunkie
5 replies
3d16h

curious, what makes it better? asdf has always just worked for me, but this is the first time i’ve heard of mise, so i’m wondering what i’m potentially missing out on

square_usual
3 replies
3d2h

The mise docs have a great comparison page: https://mise.jdx.dev/dev-tools/comparison-to-asdf.html

In personal experience, all of these have held true. Mise is just easier to use than asdf and I don't have to `asdf -h` every time I have to use it. The performance is significantly better and I can use mise exec in shebangs without sacrificing too much startup time. And it's easier to install because it's a SLSE. I wouldn't say it's worth deliberately switching from asdf (though I would, because asdf's CLI is an endless annoyance to me), but if you're on a new machine I don't think there is a reason to use asdf over mise.

jeremyjh
2 replies
2d4h

Anyone who wants "nodejs latest:20" in their .tool-versions file is probably confused. The point of .tool-versions is to pin a particular version in your source control so you know your whole team is on the same version in that branch. Getting the latest revision whenever you install means pointless drift.

square_usual
1 replies
2d2h

Are you really sensitive to the minor/patch versions of node? I don't think it should make a difference whether you're using 20.9 or 20.12, and to me that would be a bigger red flag.

Either way, I don't think many people are checking in tool-versions files. Most JS templates specify a range of node versions they work on and leave it to you to specify which one. I don't think this is unusual for most other languages I use, like ruby.

jeremyjh
0 replies
1d2h

No, I’m not generally sensitive to a point release but it’s pointless to allow drift anyway. I have had point revisions of docker containers break CI pipelines before and it’s unpleasant.

cess11
0 replies
3d12h

Maybe it works better with tmux?

nmk
3 replies
4d3h

> I hope the next decade will be oriented around integrating stateful message-oriented (reactive?) programming into the mainstream full stack

Also known as MVC before Rails decided to redefine the term.

klabb3
0 replies
4d2h

Maybe? Most good ideas have been out there for decades. Getting the execution right is what’s hard.

andrewflnr
0 replies
4d1h

No, MVC is at best orthogonal to message-passing.

DonHopkins
0 replies
4d3h

Java redefined the term MVC a long time after Smalltalk originally defined the term in the 70's. Rails didn't redefine the term; it much later copied the (poorly) redefined term from Java and tweaked it a little. Smalltalk MVC and Java/Rails MVC are EXTREMELY different. Java's and Ruby's MVC are quite similar (and loosely based on a superficial misunderstanding of Smalltalk's MVC).

submain
9 replies
4d3h

I am not familiar with LiveView, so I'm curious. Looks like it processes UI actions server side.

So, are all client interactions sent through the websocket? I remember years/decades ago we used to do that with ASP.NET, where every single component interaction was handled by the server. How is this different / better?

bcardarella
5 replies
4d3h

I never used ASP.NET so I can't offer a comparison about what is "better", but your assumption is correct that diffs go over a WS and are merged client-side. What this has resulted in is an actual order-of-magnitude reduction in implementation time compared to the crazier SPA complexity available today. Less time to build, less cost to the company, fewer bugs in the long run, and a single place to manage and reason about state. It's a win.

bornfreddy
4 replies
4d3h

But isn't there a delay in UI responses because of latency then?

cess11
2 replies
4d2h

Sure, it's not for a smooth user experience over a 2G connection. If that's your audience, you'd use ordinary template rendering with Phoenix, or use it for a JSON API and build a JS client that talks to it.

bornfreddy
1 replies
3d13h

I can see the next big thing coming: no latency! Render in client! :)

cess11
0 replies
3d12h

Seems maybe messy to have templating in the client but I'd take a look if someone has done it.

cmoski
0 replies
3d10h

You keep frontend UI changes that don't affect state in the frontend. If you don't need the server then you don't waste your user's time with trips to the server. If you need the server, you can do things on the frontend at the same time you send the request, e.g. hide something, add/remove a class.

victorbjorklund
0 replies
4d2h

You could do that, but in general no, not in production. If you just have a UI update that the server doesn't need to know about (for example opening a menu) then you do it with JavaScript.
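LiveView even ships client-side JS commands for this kind of thing, so e.g. a menu toggle never hits the server. A minimal sketch (the `#menu` id is made up):

    <button phx-click={Phoenix.LiveView.JS.toggle(to: "#menu")}>Menu</button>

    <ul id="menu" style="display: none">
      <li>Item one</li>
      <li>Item two</li>
    </ul>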

cess11
0 replies
4d3h

This is run on the BEAM VM, so a LiveView is a lightweight process hooking up the view to the bells and whistles of the BEAM, including relatively easy Pub/Sub, soft-realtime monitoring and so on.

Some people think it's kind of nice to have one programming language for everything, including queues, cache, database queries, client layout, business logic.

Muromec
0 replies
4d1h

Vanilla LiveView does exactly that, but I think the point of throwing Svelte (or any other frontend) into the mix is to keep some updates frontend-only.

It seems to go against the wisdom of the last few decades, but network latency seems to permit it now. Throw some edge computing into the mix and maybe it's all a good idea.

debussyman
5 replies
4d3h

I love LiveView + Svelte!

(I gave the talk at ElixirConf 2022 on how to combine them, but the live_svelte contributors have done the work to make it a reality)

IMO there is always a need for client side state, especially for apps with rich UX. I also live in NYC where network connectivity is not a given, especially in transit.

One super powerful feature that the authors don't cover is being able to use Phoenix's pubsub, so that server-side state changes that occur on other servers also get pushed reactively to any client. It's pretty typical to have multiple web servers for handling mid/high levels of traffic.
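The wiring is tiny (a sketch with made-up topic and module names): subscribe in `mount`, broadcast from any node in the cluster, handle the message in the LiveView process.

    def mount(_params, _session, socket) do
      # Subscribe only once the websocket is connected, not on the static render.
      if connected?(socket), do: Phoenix.PubSub.subscribe(MyApp.PubSub, "orders")
      {:ok, assign(socket, orders: MyApp.Orders.list())}
    end

    # Any server in the cluster can do:
    #   Phoenix.PubSub.broadcast(MyApp.PubSub, "orders", {:order_created, order})

    # The LiveView process receives it like any other message and re-renders.
    def handle_info({:order_created, order}, socket) do
      {:noreply, update(socket, :orders, &[order | &1])}
    end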

logicallee
3 replies
4d2h

How usable are LiveView pages while offline? (Due to intermittent lack of network connectivity.)

victorbjorklund
0 replies
4d2h

LiveView does not work at all while offline. If you just experience a short temporary drop, it can reconnect you automatically.

realusername
0 replies
4d1h

It's the same as any other web page: it doesn't work when you need to receive or send data from the server.

LiveView isn't that special; the LiveView paradigm works best for what would already be online actions in a normal page.

matlin
0 replies
4d1h

Mobile networks in general also produce a lot of latency and often warrant a local-first caching system on the client. If you want a LiveView-like user experience but resilience to flaky network conditions, we just added Svelte 5 bindings to Triplit[1] which lets you write queries client-side and have them sync with a server over WebSockets. Probably not attractive to Elixir devs, but if Svelte is your central focus, Triplit is a good option.

[1] https://www.triplit.dev/docs/frameworks/svelte

atonse
4 replies
4d3h

I've wanted something like this for Vue/React too with LiveView, because then you get access to a massive ecosystem of great components to put in your Phoenix apps while still utilizing LiveView (the LiveView component ecosystem is tiny).

So maybe this can be the start of a general bridge between LV and React or Vue components too? And make it easy to put these in a page and interact with LV events, etc.

bezieio
1 replies
3d22h

I've recently developed a LV/React bridge and am using it on a production app, but nothing open sourced at the moment. I'll open it up at some point soon and try and post back here!

atonse
0 replies
2d21h

Even putting it in a gist would be a great start! Looking forward to it.

dugmartin
0 replies
4d2h

I created a useLiveView() hook a couple of years ago for a couple of personal side projects. It was pretty straightforward and let me have bi-directional state.

The only big downside is you also need to setup server side React rendering for the initial load if you care about SEO.

Maybe I should dig it up and post it in a gist somewhere (I don't want to maintain an open source library for it).

POiNTx
4 replies
4d4h

I created LiveSvelte, let me know if you have any questions :)

wturner
3 replies
4d3h

Is there any reason you can think of to still use JS Hooks if using LiveSvelte?

victorbjorklund
1 replies
4d2h

If you just need one simple thing in your app that uses JS it might be overkill to bring in LiveSvelte only for that.

POiNTx
0 replies
4d

That's true. And another downside is that once you're in the Svelte environment you can't use your Phoenix components inside those Svelte components.

I've thought about adding a library of default Svelte components which mirror the core components you get from Phoenix out of the box. But then again you lose forms and changesets etc, it's just annoying.

Where I see LiveSvelte fit is where you really need a lot of complex client side state. Sprinkle it in, but keep using phoenix components as your default, even with hooks

POiNTx
0 replies
4d3h

With hooks you are able to render your html server side while still having some js functionality. Technically you can also render server side with LiveSvelte as it supports SSR out of the box, but this SSR uses Node and that introduces a slight performance decrease (around 3ms I believe).

I've been looking at Bun to see if it would help but it's still unclear to me.

Heex rendering is just way faster.

If you don't care about SSR for certain components (for example modals, they don't need SSR generally), then I don't see a clear advantage for hooks.

nvegater
3 replies
4d3h

I honestly don’t understand how this is different from React Server Components with Next.js, just with fewer features. It seems like the same thing but with an even clearer border between server/client components, and therefore missing all the optimizations (like streaming). Like islands architecture but made out of different technologies. Is my understanding correct? I would appreciate some feedback :)

bcardarella
1 replies
4d3h

LiveView has streaming. I would argue that if you're stuck in the React way of thinking about application design then it's not worth trying to sell you on what LiveView is doing. But there are multiple case studies out there showing that it results in far less build time than React with no compromise on user experience.

nvegater
0 replies
15h33m

The build times for react are not a problem at all.

terandle
0 replies
4d2h

Yeah, I wish this article had covered how this solution compares to React Server Components, as it kind of looks like a spaghetti of different techs compared to how Next.js streamlines the same problem space into one consistent mental model.

baskind
3 replies
4d3h

Nice solution!

In my app, I use reusable Stimulus controllers alongside LiveView, and it works seamlessly as well.

On a general note, while it's a pleasure to build with LiveView, the more I use it in real-life scenarios, the more I realize the benefits of stateless HTTP frameworks like Hotwire, which feel more performant and resilient to reconnections, and avoid the need to place more servers close to users for stability.

sph
1 replies
4d3h

Yeah I use Stimulus and Live View together as well. It is the right level of complexity, while I feel Svelte deals with a lot of stuff which is not even an issue when paired with LV. All you need is vanilla JS or a thin layer on top of it, not an entire framework. You will not have to write a lot of JS after all.

My production app has no more than 200 lines of JS, and I could probably get rid of a couple Stimulus controllers. Live View is that good. I also made a very hacky Stimulus-Live View hook adapter, so my Stimulus controller can send events directly to the LV process.

EDIT: Live View does not require you to run geo-distributed servers at all, unless you have bought into the fly.io kool aid a little too much. And it deals with disconnections beautifully. I didn't even have to do anything to support zero-downtime updates. The client loses connection to the WebSocket and reconnects to the new version, restores the state, all that out of the box automatically. What more do you need?

baskind
0 replies
3d23h

I have the same view of an app implemented with Rails+Turbo and a duplicate in Elixir+LiveView.

Performance is comparable when I am close to the server (elixir is slightly faster), but when on another continent any content changes/navigation over websockets suddenly feel very laggy, while navigation over HTTP in the supposedly slower Ruby+Rails is actually consistently fast.

I’ve only recently discovered this as I went travelling to another continent, so will do more perf testing.

But the nature of the always-connected websockets hasn’t been a pleasurable one for me: for instance, a LiveView process crashes and you lose all the data in a big form, unless you write glue code for it to restore. And the experience of seeing the topbar getting stuck in the loading state the second your internet connection is spotty or if you are offline just gives me anxiety as a user.

MatthiasPortzel
0 replies
4d3h

When Stimulus/Turbo was first announced, I was really hoping it would help with the problems that the author describes.

Unfortunately, Stimulus doesn’t actually provide an elegant way to keep state on the client. “A stimulus application’s state lives as attributes in the DOM.” This means that it’s not better than vanilla JS or jQuery.

Edit: I haven’t used Stimulus for a real project; it’s possible their values and change callbacks are a better experience than I originally imagined.

sebastianconcpt
2 replies
4d

Have you guys considered htmx before going that way? If you can share pro/cons I'd love to hear.

_acco
1 replies
3d16h

Author here. We did not. We briefly tried Alpine, which I think is comparable?

I think Alpine is cool, but it didn't really stick with the team. I think that's because we were still writing our components in LiveView and sprinkling in Alpine. A big unlock with LiveSvelte was getting to move so much into `.svelte` files, but not converting the whole thing to a SPA. Working in a `.svelte` file gives you a lot of niceties that an Alpine-decorated LiveView component won't (prettier, intellisense, etc).

This approach could totally work with other paradigms, like Alpine and HTMX. I think the key is using LiveView as a backend-for-frontend, so writing all your components in `.htmx` files or whatever.

brushfoot
0 replies
3d4h

Alpine isn't really comparable to HTMX, though their names often come up together.

Alpine is something like "AngularJS lite," a lightweight JavaScript framework with some similarities to the original version of Angular.

HTMX is a collection of HTML attributes that simplify replacing HTML on your page with HTML partials from your server.

In the article's example of interdependent dropdowns, you'd have HTMX attributes for catching the change event of the first select, specifying the server URL to fetch updated HTML from, and specifying the target to put the HTML into (the second select).
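Roughly, the attributes involved look like this (a sketch; the URL and ids are made up):

    <select name="category"
            hx-get="/subcategories"
            hx-trigger="change"
            hx-target="#subcategory">
      <option value="books">Books</option>
      <option value="music">Music</option>
    </select>

    <!-- The server responds with <option> tags for the chosen category,
         and HTMX swaps them into this select. -->
    <select id="subcategory" name="subcategory"></select>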

Some have recommended using HTMX for less complicated client-side interactivity and adding AlpineJS to pages that need more. That said, people have built impressive apps just by leveraging HTMX.

inopinatus
2 replies
3d20h

> If a new row comes in, you just need to push it to your table, and LiveView will update the client for you.

Don’t do this in line-of-business apps where those rows are interactive. The cognitive latency readily induces users into clicking the wrong thing, emailing the wrong customer, refunding the wrong transaction etc. My preferred UX instead is a sticky banner conveying “the data has changed, click here to refresh”. Or, in a pinch, ensure that new rows are rendered append-only with no change in scroll position.

EmilStenstrom
1 replies
3d20h

This CAN work, if you animate in the new lines slowly and (maybe) disable clicks while new data is coming in.

cosmojg
0 replies
3d4h

I definitely prefer this over click-to-refresh banners. I want to make as few decisions as possible when completing a well-defined task on a computer.

gardenhedge
2 replies
3d19h

The article misses a comparison to newer frameworks like Remix, which are server-side driven.

chimen
1 replies
3d6h

Server side, Remix doesn't hold a candle to Elixir.

gardenhedge
0 replies
2d19h

Why not?

digdugdirk
2 replies
4d3h

For someone not in the web dev space, this looks like an interesting way to get started.

If one has Python and Rust experience, what would be a recommended "first principles" path to get started in understanding web development with LiveView and Svelte?

kevinak
0 replies
4d2h

Start with Phoenix and add live_svelte when you need to. No need to juggle both at the same time when you're starting out.

dinkleberg
0 replies
4d2h

I would advise against that path (unless you’re looking at a long time horizon). Phoenix is great, and it makes a lot of really hard things simple, but it also introduces you to a lot of things that don’t exist anywhere else (it relies heavily on OTP).

But then that approach has almost nothing to do with svelte or any other SPA tool so that is something else you’d have to learn.

Personally, I’d either start with Phoenix and avoid any SPA tools until you get really comfortable and have hit the boundaries of what is possible with Phoenix, or alternatively start with SvelteKit and not think about Phoenix, saving that exploration for another time.

Phoenix is super cool, but I’d suggest starting with SvelteKit, as you can build full-stack apps and the same principles apply if you want to move to React or Vue or anything else.

udkl
1 replies
3d18h

The cleanest way I've seen so far to handle the backend-and-frontend charade is https://github.com/hyperfiddle/electric, which is a Clojure DSL on top of React.

rc_mob
1 replies
4d4h

I'm a noob but it looks like this overlaps with nextjs a lot?

PKop
0 replies
4d2h

LiveView and Elixir/Erlang have stateful processes on the server unlike pure stateless HTTP request/response. This fundamental difference is what makes LiveView unique. Next.js isn't passing messages over a websocket nor holding stateful processes on the server.

peterisdirsa
1 replies
4d3h

Is LiveView the same as Vaadin in the Java world?

cess11
0 replies
4d3h

Not really, Vaadin is mainly a UI component lib and some convenience annotations, while LiveView is designed to allow soft-realtime UI views over WebSocket.

krainboltgreene
1 replies
4d2h

My own company has found significant advantage by using LiveView with Alpine, specifically in CSP mode since we can't allow `eval` to happen on the client. When I originally looked at LiveSvelte it seemed very new and untested. It also had some unsolved implications in strict CSP environments. Glad to see that it's become very useful!

Our own pattern (LiveAlpine?) is as follows:

- Does the component need HTML? Then use an HTML Component.
- Does the component have server-side state? Then use a Live Component.
- Does the component need client-side behavior and/or state? Then also define an Alpine component.
- Does the component need to receive client-side events from the server or make HTTP requests? Then also define a Phoenix Hook (see the sketch below).
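The hook glue for that last case is small: the LiveView pushes an event over the existing socket and the hook listens for it with `this.handleEvent("notify", ...)` on the client. A sketch of the server side (the event name and payload are made up):

    # In the LiveView: push an event down the existing socket to the client.
    def handle_info({:notify, message}, socket) do
      {:noreply, push_event(socket, "notify", %{message: message})}
    end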

scop
0 replies
3d21h

Thank you for providing your pattern.

xrd
0 replies
4d4h

I'm so excited about this; I've never been able to reconcile LiveView and Svelte. I love hearing about the philosophies of Svelte builders, so this will be an extra special episode.

jonnycat
1 replies
4d2h

This is great. LiveView is truly amazing and greatly speeds up development, but as the post describes, there are a couple of rough edges. They're all solvable, but sometimes there aren't clear or well-established patterns for how, so it can feel a bit ad hoc. While I'm not currently using Svelte for this kind of thing, I'm really glad to see people formalizing some of the issues + solutions.

atonse
0 replies
3d19h

My main complaint with LiveView is communication between components in a tree (like callbacks and sending data back). It’s still janky at a core syntax level between `send`, `send_update`, etc. I feel this is something only the core team can fix because it probably requires some kind of use of macros etc.
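For anyone who hasn't hit this, the two mechanisms in question look roughly like this (made-up names; a component's `handle_event` runs in the parent LiveView's process, which is why a plain `send/2` works):

    # In the child LiveComponent: message the parent LiveView.
    def handle_event("selected", %{"id" => id}, socket) do
      send(self(), {:item_selected, id})
      {:noreply, socket}
    end

    # In the parent LiveView: push assigns back down to a specific component.
    def handle_info({:item_selected, id}, socket) do
      send_update(MyAppWeb.DetailComponent, id: "detail", item_id: id)
      {:noreply, socket}
    end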

jmull
1 replies
4d3h

I guess it's fun to build things, but this mashup is pretty messy.

If you like Svelte (I do) you're probably going to find sveltekit to be a lot simpler and more useful.

amsterdorn
0 replies
4d3h

+1. An SPA should only be used as a last resort; not sure why the article compares only to that.

SvelteKit is much simpler and well-documented and addresses the issues mentioned.

enraged_camel
1 replies
4d2h

The biggest benefit to this would be gaining access to all the UI component libraries in the JavaScript ecosystem. The lack of such components in LV is one of the things that slows us down the most.

chimen
0 replies
3d6h

90% of those components are in/for React tho?

todotask
0 replies
4d2h

I wonder what happened when attempting to open this website, as it caused my Firefox browser to hang. I've never experienced such a problem before.

stsrki
0 replies
4d3h

Essentially, it works the same as Blazor Server. Or am I wrong?

mrcwinn
0 replies
3d16h

If you find the need to use Svelte with LiveView, I assure you, you’re doing it wrong.

moomoo11
0 replies
4d

Anyone else use this in production with a significant user base or business usage? How’s ur experience?

matt_s
0 replies
4d

LiveView makes a lot of things easy. But it also makes some easy things hard.

Having been involved with web applications for a couple of decades, there were some weird things LiveView did that weren't easy for me to pick up. Maybe it's changed in the last 12-18 months, but the initial versions were built oddly for some seemingly easy types of XHR things one would want to do.

lacoolj
0 replies
3d21h

When I watch people using Nextjs, it makes me cringe. "This is just PHP but less clear what happens where"

And yet for some reason the thought of Phoenix + LiveView + Svelte makes me want so badly to try it. Just the thought of playing with it has me giddy. This must be a mental disorder I'm experiencing.

Dissociative Framework Disorder.

kimi
0 replies
4d1h

> What's most game-changing, though, is that you have a backend, stateful process that is collaborating with a frontend, stateful process.

...given that managing state is the thorniest of issues, what could go wrong with this approach?

gr4vityWall
0 replies
3d15h

I have the impression Meteor solves this problem really well. You get optimistic UI by default without any extra work.

fearthetelomere
0 replies
4d3h

This looks super promising...

I very much relate to the dropdown example, and I've found that complicated UX patterns can be extremely awkward to implement and maintain in LV.

One example from my experience that was prickly to implement in LV was graying out a chat message you sent if it didn't get acked or persisted by the channel/server for any reason.

Can't wait to try this out in my next project!

bcardarella
0 replies
4d4h

We use Svelte along with LiveView in BeaconCMS. There are certainly good use cases for wanting something that has more granular control of the UI on the client but I would caution teams from just going all-in on Svelte + LV for all things. Even with Phoenix using LiveView isn't always the answer as sometimes dead render pages are perfectly fine. Don't all-or-nothing everything.

As the article points out there are some good use cases for deviating from the 'LiveView Way'. I would argue that if you have 1,000ms round trips then there is something else to consider but geographically located servers could be unavailable to your team for a number of reasons (i.e. cost) so adding some client-side state management could be your solution.

aabbcc1241
0 replies
3d5h

You could give ts-liveview a try. It is easy to inline client-side scripts, and it's possible to share the exact same function between client and server.

I suppose that's also possible with Remix.