
Liquid Layers

hopfog
11 replies
1d2h

The author is also working on a really impressive voxel physics engine: https://youtu.be/3259aycYcek

pornel
10 replies
1d1h

I'm surprised the simulation is run on the CPU! This problem has solutions that fit the GPU well, even for grid<>particle conversions.

troglobite
9 replies
23h29m

This kind of sim is better suited for the CPU ;-). GPUs are good at working on meshes, not really on pure particles. But GPUs are super good for grid-based hydro.

lukan
8 replies
13h33m

"gpu are good to work on meshes, not really on pure particles"

Why?

Having thousands of particles that all need the same operations applied to them in parallel screams GPU to me. It is just way harder to program a GPU than a CPU.

thegeomaster
7 replies
8h4m

Collision detection is usually a tree search, and this is a very branching workload. Meaning that by the time you reach the lowest nodes of the tree, your lanes will have diverged significantly and your parallelism will be reduced quite a bit. It would still be faster than CPU, but not enough to justify the added complexity. And the fact remains that you usually want the GPU free for your nice graphics. This is why in most AAA games, physics is CPU-only.
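
To make the divergence concrete, a narrow-phase query against a BVH looks roughly like this (a minimal hypothetical sketch, not taken from any particular engine). Which children get visited and how deep the stack grows depend entirely on each particle's position, so neighbouring GPU lanes quickly stop doing the same work:

    #include <cstdint>
    #include <vector>

    struct AABB { float min[3], max[3]; };

    // Hypothetical BVH node: leaves index into a triangle list.
    struct BvhNode {
        AABB bounds;
        int32_t left, right;      // child indices, -1 for leaves
        int32_t firstTri, triCount;
    };

    static bool overlaps(const AABB& a, const AABB& b) {
        for (int i = 0; i < 3; ++i)
            if (a.max[i] < b.min[i] || b.max[i] < a.min[i]) return false;
        return true;
    }

    // Collect candidate triangles for one particle's AABB.
    void query(const std::vector<BvhNode>& nodes, const AABB& particleBox,
               std::vector<int32_t>& outTris) {
        int32_t stack[64];
        int top = 0;
        stack[top++] = 0;                                 // root
        while (top > 0) {
            const BvhNode& n = nodes[stack[--top]];
            if (!overlaps(n.bounds, particleBox)) continue;  // data-dependent branch
            if (n.left < 0) {                             // leaf
                for (int32_t i = 0; i < n.triCount; ++i)
                    outTris.push_back(n.firstTri + i);
            } else {
                stack[top++] = n.left;
                stack[top++] = n.right;
            }
        }
    }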

lukan
4 replies
6h14m

"Collision detection is usually a tree search"

Yes, because of the very limited number of CPU cores. With a GPU you can just assign one core to one particle.

Here is a simple approach to do it with WebGPU:

https://surma.dev/things/webgpu/

It uses the very simple approach of testing every particle against EVERY other particle. Still very performant (the simulation, that is; the chosen rendering with canvas is very slow).

I am currently trying to do something like this, but optimised. With the naive approach here and Pixi instead of canvas, I get to 20000 particles at 120 fps on an old laptop. I am curious how far I get with an optimized version. But yes, the danger is that the calculation and the rendering block each other. So I have to use the CPU in a smart way to limit the data being pushed to the GPU. And while I prepare the data on the CPU, the GPU can do the graphics rendering. Like I said, it is way harder to do it right this way. When the simulation behaves weirdly, debugging is a pain.

thegeomaster
1 replies
6h7m

This is a 2D simulation with only self-collisions, and not collisions against external geometry. The author suggests a simulation time of 16ms for 14000 particles. State of the art physics engines can do several times more, on the CPU, in 3D, while colliding with complex geometry with hundreds of thousands of triangles. I understand this code is not optimized, but I'd say the workload is not really comparable enough to talk about the benefits of CPU vs GPU for this task.

The O(n^2) approach, I fear, cannot really scale to much beyond this number, and as soon as you introduce optimizations that make it less than O(n^2), you've introduced tree search or spatial caching that makes your single "core" (WG) per particle diverge.

lukan
0 replies
5h45m

"that make it less than O(n^2), you've introduced tree search or spatial caching that makes your single "core" (WG) per particle diverge"

Well, like I said, I try to use the CPU side to help with all that. So every particle on the GPU checks maybe the 20 particles around it for collisions (and other reactions) and not all 14000, like it does currently.
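
A minimal sketch of the CPU-side binning I have in mind (names and layout made up, just to illustrate the idea): sort particle indices by grid cell once per frame, upload the cell ranges, and let the GPU check only the 3x3 block of cells around each particle.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Grid {
        float cellSize;
        int   nx, ny;
        std::vector<uint32_t> cellStart, cellEnd;  // index range per cell
        std::vector<uint32_t> sorted;              // particle indices grouped by cell
    };

    void buildGrid(Grid& g, const std::vector<float>& px, const std::vector<float>& py) {
        const size_t n = px.size();
        const size_t cells = size_t(g.nx) * size_t(g.ny);
        std::vector<uint32_t> count(cells, 0), cellOf(n);

        for (size_t i = 0; i < n; ++i) {            // 1. count particles per cell
            int cx = std::clamp(int(px[i] / g.cellSize), 0, g.nx - 1);
            int cy = std::clamp(int(py[i] / g.cellSize), 0, g.ny - 1);
            cellOf[i] = uint32_t(cy * g.nx + cx);
            ++count[cellOf[i]];
        }

        g.cellStart.assign(cells, 0);               // 2. prefix sum -> cell ranges
        g.cellEnd.assign(cells, 0);
        uint32_t running = 0;
        for (size_t c = 0; c < cells; ++c) {
            g.cellStart[c] = running;
            running += count[c];
            g.cellEnd[c] = running;
        }

        g.sorted.resize(n);                         // 3. scatter indices into their cells
        std::vector<uint32_t> cursor(g.cellStart);
        for (size_t i = 0; i < n; ++i)
            g.sorted[cursor[cellOf[i]]++] = uint32_t(i);
    }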

That should give a different result.

Once done with this side project, I will post my results here. Maybe you are right and it will not work out, but I think I found a working compromise.

kotsoft
1 replies
5h35m

If you use WebGPU, for your acceleration structure try the algorithm presented here in the Diligent Engine repo. It will allow you to avoid transferring data back and forth between CPU and GPU: https://github.com/DiligentGraphics/DiligentSamples/tree/mas...

Another reason I did it on the CPU was that with WebGL you lack certain things like atomics and groupshared memory, which you now have with WebGPU. For the Diligent Engine spatial hashing, atomics are required. I'm mainly using WebGL because of compatibility: iOS Safari still doesn't enable WebGPU without special feature flags that the user has to enable.
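
To illustrate why atomics come up (a rough CPU-side sketch, not the Diligent Engine code): every particle inserts itself into its hash cell in parallel, so reserving a slot in the cell has to be an atomic add. std::atomic stands in here for what would be an atomicAdd on a storage buffer in a shader.

    #include <atomic>
    #include <cstdint>

    constexpr uint32_t kMaxPerCell = 64;

    void insertParticle(std::atomic<uint32_t>* cellCounts,  // one counter per cell
                        uint32_t* cellEntries,              // kMaxPerCell slots per cell
                        uint32_t cell, uint32_t particleIndex) {
        // fetch_add returns the old count, which is this particle's unique slot
        uint32_t slot = cellCounts[cell].fetch_add(1, std::memory_order_relaxed);
        if (slot < kMaxPerCell)
            cellEntries[cell * kMaxPerCell + slot] = particleIndex;
    }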

lukan
0 replies
5h2m

Thanks a lot, that is very interesting! I will check it out in detail.

But currently I will likely proceed with my approach where I do transfer data back and forth between CPU and GPU, so I can make use of the CPU for all kinds of things. My initial idea was also to keep it all on the GPU; I will see what works best.

And yes, I also would not recommend WebGPU currently for anything that needs to deploy soon to a wide audience. My project is intended as a long term experiment, so I can live with the limitations for now.

kotsoft
1 replies
6h19m

Yeah, pretty much this. I've experimented with putting it on the GPU a bit, and I would say the particle-based sim is about 3x faster there than a multithreaded & SIMD CPU implementation. Not 100x like you will see in Nvidia marketing materials, and on mobile, which this demo does run on, the GPU often ends up weaker than the CPU. Wasm SIMD is only 4-wide, while 8- or 16-wide is standard on most CPUs today.

But yeah, once you need to do graphics on top, that 3x pretty much goes away and is just additional frametime. I think they should work together. On my desktop stuff, I also have things like adaptive resolution and sparse grids to more fully take advantage of things that the CPU can do that are harder on GPU.

The Wasm demo is still in its early stages. The particles are just simple points. I could definitely use the GPU a bit more to do lighting and shading a smooth liquid surface.

thegeomaster
0 replies
6h3m

Agree with most of the comment, just to point out (I could be misremembering) that 4-wide SIMD ops that are close together often get pipelined "perfectly" onto the same vector unit that would otherwise be doing 8- or 16-wide SIMD, so the difference is often not as big as one would expect. (Still a speedup, though!)

sBqQu3U0wH
3 replies
23h57m

Cool, the performance is significantly better than the previous time. Or is it a different kind of simulation?

kotsoft
2 replies
22h12m

This is still the same kind of simulation, based on the Particle-based Viscoelastic Fluid Simulation paper. I updated it to use Wasm SIMD more fully with the help of Clang Vector Extensions, Compiler Explorer and Wasm Analyzer: Compiler Explorer to play around with patterns, and Wasm Analyzer to double-check the final compilation.
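
For anyone curious what that looks like, the pattern is roughly this (a simplified sketch, not the actual engine code; assumes a structure-of-arrays layout and 16-byte-aligned buffers). With -msimd128 the 4-wide float type maps onto Wasm's 128-bit v128, and plain arithmetic compiles down to SIMD instructions.

    #include <cstddef>

    typedef float f32x4 __attribute__((vector_size(16)));

    // Advance 4 particles' positions at a time.
    void integrate(float* x, float* y, const float* vx, const float* vy,
                   std::size_t n, float dt) {
        const f32x4 dt4 = {dt, dt, dt, dt};
        for (std::size_t i = 0; i + 4 <= n; i += 4) {
            // load 4 positions and velocities, step them, store back
            *(f32x4*)(x + i) = *(f32x4*)(x + i) + *(const f32x4*)(vx + i) * dt4;
            *(f32x4*)(y + i) = *(f32x4*)(y + i) + *(const f32x4*)(vy + i) * dt4;
        }
        // scalar tail for the last n % 4 particles omitted
    }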

eigenvalue
1 replies
20h52m

Really awesome work. WASM lets you show this off to a massively larger audience than a regular binary would. I'd encourage you to build your voxel project with WASM as well; then you can easily turn it into a mobile/tablet game.

kotsoft
0 replies
20h0m

Thanks! I am definitely working on bringing more to the WASM world. I'd begun experimenting with 3D and multithreading, and then last week decided to circle back to the 2D demo and polish it up a bit more.

d--b
3 replies
21h59m

I really wish these were 3D simulations. The 2D ones are always a bit off to me...

kotsoft
0 replies
4h6m

Thanks! With 3D the main challenge might be visualizing the layers separately. Ideally each of the phases would have some kind of metaball effect and also be transparent and even refract. It will be pretty tough to do, and I will have to fight the uncanny valley effect.

Sometimes for sandbox I feel like 2D can be more fun because people can target particles they interact with better.

Maybe WebXR will be good for the 3D version.

vanderZwan
2 replies
20h49m

Wasn't this shared a week/few weeks ago? Not that I mind it being posted again, just that if it was already posted it might be worth linking the old thread, in case there was some interesting discussion there.

kotsoft
1 replies
20h17m

Yes, the previous one is https://news.ycombinator.com/item?id=40429878. There are some new features since then and some major speed improvements from using SIMD. I do still see complaints about the compressibility, so I still need to work on some improvements for that. (dev)

vanderZwan
0 replies
20h14m

Thank you for digging up the link! And the bonus "release notes" ;)

owork138
2 replies
19h2m

I think the author is also working on an amazing voxel physics engine: https://libgenis.net

kotsoft
0 replies
4h20m

Hi, I've updated the home page on my site (https://grantkot.com) with links to my other socials, like the YouTube and itchio pages. Twitter for more casual frequent updates, and YouTube for longer summary updates. The itchio demos need to be optimized for a wider variety of machines.

anotheryou
2 replies
9h14m

Would be fun to add an emulsifier :) (particles that bond with 2 colors)

wiz21c
1 replies
8h36m

let's do a mayonnaise!!!

kotsoft
0 replies
7h4m

You actually can adjust the settings for this. In Settings > Simulation, instead of sameRestDensity being 8, make it 0, and make diffRestDensity higher. I recommend doing it with low gravity as well (you can get zero g by clicking "enable accelerometer" on computers without an accelerometer).

Nzen
2 replies
1d2h

tl;dr: this is a liquid simulation toy. It starts with four colors of different densities (so they self-organize into layers). Clicking allows scooping up a ball of the liquid. I consider this pretty to watch, like a lava lamp that I can also throw globs of.

TeMPOraL
1 replies
22h13m

You also have a settings UI, where you can change (among many other things) what clicking does. I find it most interesting to switch to "repel" - the repulsion forcefield turns out to be a universal tool for mixing and separating the different liquids, depending on how fast and precisely you operate it. Fascinating.

kotsoft
0 replies
22h7m

With the keyboard there are also these shortcuts: 1-4 to emit the different materials, a/r to attract/repel from the mouse, x/c to rotate a bit.

BugsJustFindMe
2 replies
1d1h

Are there compressible liquids? I'm seeing the whole thing bounce like the layers are compressing.

pornel
1 replies
1d1h

This is a common and difficult problem when simulating fluids using particles.

It's possible to simulate using a grid instead, which computes pressure very precisely, but it has the downside of adding inaccurate viscosity.

And there are tricks that combine both approaches to balance out the errors.

kotsoft
0 replies
5h20m

Yeah, agree. The objective was more to make a physics toy that would run on a single core on a phone than something for actual scientific or industrial use. I could add additional iterations or do pressure projection, but then there would be complaints about it being slow & choppy.

There are also some large density ratios between the materials, which further increased the difficulty and would also increase the number of pressure projection iterations on a grid. I tried to simulate buoyancy without cheating (e.g. giving different materials different accelerations under gravity).

yayitswei
1 replies
1d2h

Super fun to resize the window and watch the liquid slosh into the new space.

ssl-3
0 replies
1d1h

Super fun on mobile to dig into the settings and turn on the accelerometer and gyro, and give the phone a twist and a shake.

butterNaN
1 replies
9h5m

I know this is asking a lot from what is already a bundle of fun, but I wish it also made noises, in fact my brain is already simulating some

It's like a visual stim toy

whisk
0 replies
1d1h

Super fun!

troglobite
0 replies
23h30m

Awesome!

tomcam
0 replies
1h6m

Delightful. Runs like a bat out of hell on my older iPad mini 5th gen.

sphars
0 replies
22h15m

FYI, in the Interaction settings you can enable accelerometer and use gyro. Lots of fun on mobile devices! Works well for me on Android/Firefox

shinze
0 replies
22h14m

Impressive

miohtama
0 replies
20h54m

iPad accelerometer is 90 degrees off

kotsoft
0 replies
1h54m

UPDATE: I've added a "hacker mode" for you all! You can now specify a userUpdate function and it will run it each frame. See my twitter post for a demo of it. https://x.com/kotsoft/status/1806362956294189299

imzadi
0 replies
23h33m

This is interesting. I wish the other interactions were as dramatic as the drag one. You have to really crank up the brush size to see anything interesting.

hermitcrab
0 replies
22h42m

This is really impressive. Especially the performance in a browser.

dctaflin
0 replies
14h54m

It also takes into account surface tension. Bubbles of denser/lighter materials cling to the interfaces, rather than sinking or rising.

amelius
0 replies
1d

Was hoping for accurate simulation of laminar flow.

Anotheroneagain
0 replies
14h30m

When you set all materials to the same density, you get a cold surface, with hot matter below it.