
Simulating fluids, fire, and smoke in real-time

cherryteastain
57 replies
23h42m

As a person who did a PhD in CFD, I must admit I never encountered the vorticity confinement method and curl-noise turbulence. I guess you learn something new every day!

Also, in industrial CFD, where the Reynolds numbers are higher, you'd never want to counteract the artificial dissipation of the numerical method by applying noise. In fact, quite often people want artificial dissipation to stabilize high-Re simulations! Guess the requirements in computer graphics are more inclined towards making something that looks right instead of getting the physics right.
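For reference, the formulation used in graphics (Fedkiw, Stam and Jensen's 2001 smoke paper, building on Steinhoff's work linked further down this thread) adds a small body force that re-energizes vortex cores smeared out by numerical dissipation, roughly

    \omega = \nabla \times \mathbf{u}, \qquad
    \mathbf{N} = \frac{\nabla |\omega|}{\| \nabla |\omega| \|}, \qquad
    \mathbf{f}_{conf} = \varepsilon \, h \, (\mathbf{N} \times \omega)

where h is the grid spacing and \varepsilon controls how much swirl gets put back in.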

Sharlin
33 replies
21h16m

Guess the requirements in computer graphics are more inclined towards making something that looks right instead of getting the physics right.

The first rule of real-time computer graphics has essentially always been "Cheat as much as you can get away with (and usually, even if you can't)." Also, it doesn't even have to look right, it just has to look cool! =)

Philpax
24 replies
18h55m

“Everything is smoke and mirrors in computer graphics - especially the smoke and mirrors!”

doctorhandshake
23 replies
17h38m

This was the big realization for me when I got into graphics - everything on the screen is a lie, and the gestalt is an even bigger lie. It feels similar to how I would imagine it feels to be a well-informed illusionist - the fun isn’t spoiled for me when seeing how the sausage is made - I just appreciate it on more levels.

chaboud
16 replies
17h1m

At some point every render-engine builder goes through the exercise of imagining purely physically-modeled photonic simulation. How soon one gives up on this computationally intractable task with limited marginal return on investment is a signifier of wisdom/exhaustion.

And, yes, I've gone way too far down this road in the past.

solardev
14 replies
16h22m

Not being a graphics person, is this what hardware ray tracing is, or is that something different?

bufferoverflow
6 replies
11h7m

Ray tracing doesn't simulate light; it simulates a very primitive idea of light. There's no diffraction, no interference patterns. You can't simulate the double-slit experiment in a game engine unless you explicitly program it.

Our universe has a surprising amount of detail. We can't even simulate the simplest molecular interactions fully. Even a collision of two hydrogen atoms is too hard - the required time and space resolution is insanely high, if not infinite.

zokier
1 replies
3h40m

Btw there is some movement towards including wave-optics into graphics instead of traditional ray-optics: https://ssteinberg.xyz/2023/03/27/rtplt/

carterschonwald
0 replies
2h11m

This is very cool. Thx for sharing

smolder
1 replies
7h56m

I wanted to make a dry joke about renderers supporting the double slit experiment, but in a sense you beat me to it.

hoseja
0 replies
6h17m

There sure has been a lot of slits simulated in Blender.

zokier
0 replies
7h8m

You can't simulate the double-slit experiment in a game engine, unless you explicitly program it.

Afaik you can't even simulate double-slit in high-end offline VFX renderers without custom programming

Sharlin
0 replies
10h16m

And what’s more, it simulates a very primitive idea of matter! All geometry is mathematically precise, still made of flat triangles that only approximate curved surfaces, every edge and corner is infinitely sharp, and everything related to materials and their interaction with light (diffuse shading, specular highlights, surface roughness, bumpiness, transparency/translucency, and so on) is still the simplest possible model that gives a somewhat plausible look, raytraced or not.

ReactiveJelly
6 replies
15h54m

Even the ray tracing / path tracing is half-fake these days, because it's faster to upscale and interpolate frames with neural nets. But yeah, in theory you can simulate light realistically.

dexwiz
5 replies
14h46m

It’s still a model at the end of the day. Material properties like roughness are approximated with numerical values instead of being physical features.

Also light is REALLY complicated when you get close to a surface. A light simulation that properly handles refraction, diffraction, elastic and inelastic scattering, and anisotropic material properties would be very difficult to build and run. It’s much easier to use material values found from experimental results.

dahart
4 replies
10h37m

If I understood Feynman’s QED at all, light gets quite simple once you get close enough to the surface. ;) Isn’t the idea that everything’s a mirror? It sounds like all the complexity comes entirely from the surface variation - a cracked or ground-up mirror is still a mirror at a smaller scale but has complex aggregate behavior at a larger scale. Brian Greene’s string theory talks also send the same message, more or less.

Sharlin
2 replies
10h25m

Sure, light gets quite simple as long as you can evaluate path integrals that integrate over the literally infinite possible paths that each contributing photon could possibly take!

Also, light may be simple but light interaction with electrons (ie. matter) is a very different story!

arbitrandomuser
1 replies
3h30m

Don't the different phases from all the random paths more or less cancel out, with significant additions of phase only coming from paths near the "classical" path? I wonder if this reduction would still be tractable on GPUs for simulating diffraction.

dahart
0 replies
3h9m

That’s what I remember from QED: the integrals all collapse to something that looks like a small finite-width Dirac impulse around the mirror direction. So the derivation is interesting and would be hard to simulate, but we can approximate the outcome computationally with extremely simple shortcuts. (Of course, with a long list of simplifying assumptions… some materials, some of the known effects, visible light, reasonable image resolutions, usually 3-channel colors, etc. etc.)
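Roughly the cancellation in question, sketched rather than derived: each path contributes a unit phasor to the amplitude,

    A \;\approx\; \sum_{\text{paths } p} e^{\, i S[p] / \hbar},

and wherever the action S varies quickly between neighbouring paths the phasors point every which way and cancel; only paths near a stationary point of the action (\delta S = 0, the classical mirror path) add up coherently, which is why everything collapses to a narrow lobe around the mirror direction.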

I just googled and there’s a ton of different papers on doing diffraction from the last 20 years, more than I expected. I watched a talk on this particular one last year: https://ssteinberg.xyz/2023/03/27/rtplt/

dexwiz
0 replies
2h11m

Orbital shapes and energies influence how light interacts with a material. Yes, once you get to QED, it’s simple. Before that is a whole bunch of layers of QFT to determine where the electrons are, and their energy. From that, there are many emergent behaviors.

Also QED is still a model. If you want to simulate every photon action, might as well build a universe.

hutzlibu
0 replies
8h8m

I have heard the hope expressed that quantum computers might solve that one day, but I'll believe it once I see it.

Till then, I have some hope that native support for raytracing on the GPU will allow for more possibilities.

jameshart
3 replies
14h14m

My favorite example of the lie-concealed-within-the-lie is the ‘Half-Life Alyx Bottles’ thing.

Like, watch this video: https://www.youtube.com/watch?v=9XWxsJKpYYI

This is a whole story about how the liquid in the bottles ‘isn’t really there’ and how it’s not a ‘real physics simulation’ - all while completely ignoring that none of this is real.

There is a sense in which the bottles in half-life Alyx are ‘fake’ - that they sort of have a magic painting on the outside of them that makes them look like they’re full of liquid and that they’re transparent. But there’s also a sense in which the bottles are real and the world outside them is fake. And another sense in which it’s all just tricks to decide what pixels should be what color 90 times a second.

rhn_mk1
1 replies
9h8m

I want to see that shader. How is sloshing implemented? Is the volume of the bottle computed on every frame?

Clearly, there's some sort of a physics simulation going on there, preserving the volume, some momentum, and taking gravity into account. That the result is being rendered over the shader pipeline rather than the triangle one doesn't make it any more or less "real" than the rest of the game. It's a lie only if the entire game is a lie.

worldsayshi
0 replies
4h18m

Is it really doing any sloshing though? Isn't it "just" using a plane as the surface of the liquid? And then adding a bunch of other effects, like bubbles, to give the impression of sloshing?

doctorhandshake
0 replies
2h47m

This is the perfect example of what I meant. So many great quotes in this about the tricks being stupid and also Math. Also the acknowledgment that it’s not about getting it right it’s about getting you to believe it.

hyperthesis
0 replies
10h59m

A VP at Nvidia has a cool (marketing) line about this, regarding frame generation of "fake" frames. Because DLSS is trained on "real" raytraced images, they are more real than conventional game graphics.

So the non-generated frames are the "fake frames".

dexwiz
0 replies
14h56m

Had the same realization that game engines were closer to theater stages than reality after a year of study. The term “scene” should have tipped me off to that fact sooner.

FirmwareBurner
6 replies
20h20m

Why not cheat? I'm not looking for realism in games, I'm looking for escapism and to have fun.

digging
2 replies
20h4m

I don't think this is a great argument because everybody is looking for some level of realism in games, but you may want less than many others. Without any, you'd have no intuitive behaviors and the controls would make no sense.

I'm not saying this just to be pedantic - my point is that some people do want some games to have very high levels of realism.

someletters
1 replies
19h5m

Some of the best games I've played in my life were the text based and pixel art games in MS DOS. Your imagination then had to render or enhance the visuals, and it was pretty cool to come up with the pictures in your head.

mewpmewp2
0 replies
18h16m

I mean yes, but ultimately we want to be able to simulate ourselves as faithfully as possible in order to understand our origins, which are likely simulated as well, right?

solardev
0 replies
16h20m

It's kinda cool when it feels real enough to be used as a gameplay mechanic, such as diverting rivers and building hydroelectric dams in the beaver engineering simulator Timberborn: https://www.gamedeveloper.com/design/deep-dive-timberborn-s-...

kevincox
0 replies
5h30m

The reason not to cheat is that visual artifacts and bugs can snap you out of immersion. Think of realizing that you don't appear in a mirror or that throwing a torch into a dark corner doesn't light it up. Even without "bugs" people tend to find more beautiful and accurate things more immersive. So if you want escapism, graphics that match the physics that you are used to day-to-day can help you forget that you are in a simulation.

The reason to cheat is that we currently don't have the hardware or software techniques to physically simulate a virtual world in real-time.

Sharlin
0 replies
19h7m

I didn't mean to imply that cheating is bad. Indeed, it's mandatory if you want remotely real-time performance.

corysama
0 replies
19h0m

Also, it doesn't even have to look right, it just has to look cool! =)

The refrain is: Plausible. Not realistic.

Doesn't have to be correct. Just needs to lead the audience to accept it.

Yeah. And, cheat as much as you can. And, then just a little more ;)

jwoq9118
8 replies
23h9m

Was the PhD worth it in your opinion?

jgeada
3 replies
19h48m

Hard to say whether economically a PhD always makes sense, but it certainly can open doors that are otherwise firmly closed.

randomdata
2 replies
10h27m

Well, sure. Everything you end up doing in life will open doors that would have otherwise remained firmly closed if you did something else instead.

hutzlibu
1 replies
9h7m

"Everything you end up doing in life will open doors that would have otherwise remained firmly closed"

Oh no. It is also quite possible to do things that will very firmly close doors that were open before, or make sure some doors never open ..

(quite a few doors are also not worth going through)

randomdata
0 replies
3h32m

Of course. In fact, it is widely recognized that in a poor economy PhDs can be in an especially bad economic place as any jobs that are available deem them "overqualified".

cherryteastain
2 replies
22h48m

From a purely economic standpoint, difficult to say. I was able to build skills that are very in demand in certain technical areas, and breaking into these areas is notoriously difficult otherwise. On the other hand, I earned peanuts for many years. It'll probably take some time for it to pay off.

That said, never do a PhD for economic reasons alone. It's a period in your life where you are given an opportunity to build up any idea you would like. I enjoyed that aspect very much and hence do not regret it one bit.

On the other hand, I also found out that academia sucks so I now work in a completely different field. Unfortunately it's very difficult to find out whether you'll like academia short of doing a PhD, so you should always go into it with a plan B in the back of your head.

mckn1ght
0 replies
21h47m

I feel this comment. I did a masters with a thesis option because I was not hurting for money with TA and side business income, so figured I could take the extra year (it was an accelerated masters). Loved being able to work in that heady material, but disliked some parts of the academic environment. Was glad I could see it with less time and stress than a PhD. Even so, I still never say never for a PhD, but it’d have to be a perfect confluence of conditions.

godelski
0 replies
17h51m

On the other hand, I also found out that academia sucks

I also found that I don't want to be a good academic, but a good researcher. Most importantly, that these are two very different things.

dahart
0 replies
2h41m

What does worth it mean? FWIW PhDs are really more or less required if you want to teach at a university, or do academic research, or do private/industrial research. Most companies that hire researchers look primarily for PhDs. The other reason would be to get to have 5 years to explore and study some subject in depth and become an expert, but that tends to go hand-in-hand with wanting to do research as a career. If you don’t want to teach college or do research, then PhD might be a questionable choice, and if you do it’s not really a choice, it’s just a prerequisite.

The two sibling comments both mention the economics isn’t guaranteed - and of course nothing is guaranteed on a case by case basis. However, you should be aware that people with PhDs in the US earn an average of around 1.5x more than people with bachelor’s degrees, and the income average for advanced degrees is around 3x higher than people without a 4-year degree. Patents are pretty well dominated by people with PhDs. There’s lots of stats on all this, but I learned it specifically from a report by the St Louis Fed that examined both the incomes as well as the savings rates for people with different educational attainment levels, and also looked at the changing trends over the years, and the differences by race. Statistically speaking, the economics absolutely favor getting an advanced degree of some kind, whether it’s a PhD or MD or JD or whatever.

kqr
5 replies
9h32m

Maybe you're not the right person to ask, but I'll go anyway: I would like to learn the basics of CFD not because I expect to do much CFD in life, but because I believe the stuff I would have to learn in order to be able to understand CFD are very useful in other domains.

The problem is my analysis is very weak. My knowledge of linear algebra, differential equations, numerical methods, and so on is limited to roughly the level of an introductory university course. What would you suggest as a good start?

I like reading, but I also like practical exercises. The books I tried to read before to get into CFD were lacking in practical exercises, and when I tried to invent my own exercises, I didn't manage to adapt them to my current level of knowledge, so they were either too hard or too easy. (Consequently, they didn't advance my understanding much.)

wiz21c
2 replies
9h1m

You'll be able to understand the equations, I guess. The hard part is the numerical analysis: how do you prove your computations will 1) reach a solution at all (badly managed computations will diverge and never reach any solution), and 2) reach a solution that is close to reality?

For me that's the hard part (which I still don't get).

You could start with the Saint-Venant equations; although they look complicated, they're actually within reach. But you'll have to understand the physics behind them first (conservation of mass, conservation of momentum, etc.).
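For concreteness, the 1D Saint-Venant (shallow water) equations in their simplest conservative form, ignoring bed slope and friction, with h the water depth, u the velocity and g gravity:

    \partial_t h + \partial_x (h u) = 0
    \partial_t (h u) + \partial_x \left( h u^2 + \tfrac{1}{2} g h^2 \right) = 0

The first line is conservation of mass, the second conservation of momentum.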

kqr
0 replies
7h2m

Understand the equations, yes. However, I'm sufficiently out of practice that it takes me a lot of effort to do so. So I guess you could say I'm not fluent enough to grasp the meaning of these things as fast as I think I would need to in order to properly understand them.

cherryteastain
0 replies
3h57m

Regarding your two questions, some terms you could look up are: Courant number, von Neumann stability analysis, and Kolmogorov length and time scales.

With respect to 2, the standard industry practice is a mesh convergence study and comparing your solver's output to experimental data. Sadly, especially with Reynolds-averaged Navier-Stokes, there is no guarantee you'll get a physically correct solution.
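As a taste of the stability side: for 1D advection on a grid, the Courant number is

    C = \frac{u \, \Delta t}{\Delta x},

and explicit schemes typically require C \le C_{\max} (with C_{\max} on the order of 1), which ties the largest stable time step \Delta t to the grid spacing \Delta x and the velocity u.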

fransje26
1 replies
8h28m
kqr
0 replies
7h3m

This does look really good at a first glance. It seems like it uses mathematics that I'm not fully comfortable with yet -- but also takes a slow and intuitive enough approach that I may be able to infer the missing knowledge with some effort. I'll give it a shot! Big thanks.

smcameron
3 replies
23h2m

I think the curl noise paper is from 2007: https://www.cs.ubc.ca/~rbridson/docs/bridson-siggraph2007-cu...

I've used the basic idea from that paper to make a surprisingly decent program to create gas-giant planet textures: https://github.com/smcameron/gaseous-giganticus

dahart
1 replies
11h12m

Hey, that paper references me. ;) I published basic curl noise a few years before that in a Siggraph course with Joe Kniss. Bridson’s very cool paper makes curl noise much more controllable by adding the ability to insert and design boundary conditions; in other words, you can “paint” the noise field, put objects into it, and have particles advect around them. Joe’s and my version was a turbulence field based on simply taking the curl of a noise field, because the curl has the property of being incompressible. I thought about it after watching some effects breakdown of X-Men’s Nightcrawler teleport effect, where they talked about using a fluid simulation IIRC to get the nice subtleties that incompressible “divergence-free” flows give you. BTW I don’t remember exactly how they calculated their turbulence - I have a memory of it being more complicated than curl noise - but maybe they came up with the idea, or maybe it predates X-Men too; it’s a very simple idea based on known math, fun and effective for fake fluid simulation.

We liked to call it “curly noise” when we first did it, and I used it on some shots and shared it with other effects animators at DreamWorks at the time. Actually the very first name we used was “curl of noise” because the implementation was literally curl(noise(x)), but curly noise sounded better/cuter. Curly noise is neat because it’s static and analytic, so you can do a fluid-looking animation sequence with every frame independently. You don’t need to simulate frame 99 in order to render frame 100; you can send all your frames to the farm to render independently. On the other hand, one thing that’s funny about curly noise is that it’s way more expensive to evaluate at a point in space than a voxel grid fluid update step, at least when using Perlin noise, which is what I started with. (Curly noise was cheaper than using PDI’s (Nick Foster’s) production fluid solver at the time, but I think the Stam semi-Lagrangian advection thing started making its way around and generally changed things soon after that.)
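A minimal 2D sketch of the curl(noise(x)) idea, with a throwaway sum-of-sines potential standing in for Perlin/simplex noise (the constants and function names are purely illustrative, not from any of the papers mentioned): because v = (dpsi/dy, -dpsi/dx) is the 2D curl of a scalar potential, its divergence is zero by construction, so advected particles swirl without bunching up.

    import numpy as np

    def psi(x, y):
        # Stand-in scalar potential; real curl noise would sample Perlin or
        # simplex noise here. This analytic blend just gives something wavy.
        return (np.sin(1.3 * x + 0.7 * y)
                + 0.5 * np.sin(2.1 * y - 0.4 * x)
                + 0.25 * np.sin(0.9 * x - 1.7 * y))

    def curl_noise_velocity(x, y, eps=1e-4):
        # 2D "curl of noise": v = (d psi / dy, -d psi / dx), estimated with
        # central differences; the divergence of this field is zero.
        dpsi_dx = (psi(x + eps, y) - psi(x - eps, y)) / (2 * eps)
        dpsi_dy = (psi(x, y + eps) - psi(x, y - eps)) / (2 * eps)
        return dpsi_dy, -dpsi_dx

    # Advect some particles through the static, analytic field; the field
    # itself never needs to be simulated or stored on a grid.
    rng = np.random.default_rng(42)
    pts = rng.random((1000, 2)) * 10.0
    dt = 0.05
    for _ in range(100):
        vx, vy = curl_noise_velocity(pts[:, 0], pts[:, 1])
        pts[:, 0] += vx * dt
        pts[:, 1] += vy * dt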

BTW gaseous giganticus looks really neat! I browsed the simplex noise code for a minute and it looks gnarly, maybe more expensive than Perlin even?

smcameron
0 replies
6h2m

the simplex noise code for a minute and it looks gnarly, maybe more expensive than Perlin even?

In 2014 when I wrote it, 3D Perlin noise was still patent encumbered. Luckily at the same time I was working on it, reddit user KdotJPG posted a Java implementation of his Open Simplex noise algorithm on r/proceduralgeneration (different than Ken Perlin's Simplex noise), and I ported that to C. But yeah, I think Perlin is a little faster to compute. I think the patent expired just last year.

Also Jan Wedekind recently implemented something pretty similar to gaseous-giganticus, except instead of doing it on the CPU like I did, managed to get it onto the GPU, described here: https://www.wedesoft.de/software/2023/03/20/procedural-globa...

shahar2k
0 replies
21h32m

reminds me of this - https://www.taron.de/forum/viewtopic.php?f=4&t=4

it's a painting program where the paint can be moved around with a similar fluid simulation

physicsguy
0 replies
8h43m

I went to a talk by a guy who worked in this space in the film industry, and he said one of the funniest questions he ever got asked was "Can you make the water look more splashy?"

nautilius
0 replies
1h56m

Too bad they don't reference the actual inventor of vorticity confinement, Dr. John Steinhoff of the University of Tennessee Space Institute:

https://en.wikipedia.org/wiki/John_Steinhoff

https://en.wikipedia.org/wiki/Vorticity_confinement

And some papers:

https://www.researchgate.net/publication/239547604_Modificat...

https://www.researchgate.net/publication/265066926_Computati...

dagw
0 replies
8h49m

This is unfortunately a discussion I've had to have many times. "Why does your CFD software take hours to complete a simulation when Unreal Engine can do it in seconds" has been asked more than once.

ChuckMcM
0 replies
23h10m

... the requirements in computer graphics are more inclined towards making something that looks right instead of getting the physics right.

That is exactly correct. That said, as something of a physics nerd (it was a big part of the EE curriculum in school), people often chuckled at me in the arcade as I pointed out things that were violating various principles of physics :-). And one of the fun things was listening to a translated interview with the physicist who worked on Mario World at Nintendo, who stressed that while the physics of Mario's world were not the same as "real" physics, they were consistent and had rules just like real physics does, and that this was important for players to understand what they could and could not do in the game (and how they might solve a puzzle in the game).

bee_rider
30 replies
23h16m

They mention simulating fire and smoke for games, and doing fluid simulations on the GPU. Something I’ve never understood, if these effects are to run in a game, isn’t the GPU already busy? It seems like running a CFD problem and rendering at the same time is a lot.

Can this stuff run on an iGPU while the dGPU is doing more rendering-related tasks? Or are iGPUs just too weak, so it's better to fall all the way back to the CPU?

softfalcon
17 replies
22h28m

Something I’ve never understood, if these effects are to run in a game, isn’t the GPU already busy?

Short answer: No, it's not "already busy". GPUs are so powerful now that you can do physics, fancy render passes, fluid sims, "Game AI" unit pathing, and more, at 100+ FPS.

Long answer: You have a "frame budget", which is the amount of time between frames of the super-fast "slide show" running at 60+ FPS. That gives you roughly 16 ms at 60 FPS (and less at higher refresh rates) to do a bunch of computation, update state, and render the next frame.

That could be moving units around a map, calculating fire physics, blitting terrain textures, or rendering verts with materials. In many game engines, you will see a GPU doing dozens of these separate computations per frame.

GPUs are basically a secondary computer attached to your main computer. You give it a bunch of jobs to do every frame and it outputs the results. You combine the results into something that looks like a game.
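A toy, CPU-side illustration of the frame-budget idea (the job and the numbers are made up; in a real engine most of the per-frame work would be dispatched to the GPU as command buffers rather than run as Python):

    import time

    FRAME_BUDGET = 1.0 / 60.0  # ~16.7 ms per frame at 60 FPS

    def update_and_render():
        # Placeholder for the per-frame jobs: pathing, physics, fluid sim,
        # culling, draw-call submission, ...
        time.sleep(0.004)  # pretend the work took 4 ms

    for _ in range(600):  # ~10 seconds of "gameplay"
        start = time.perf_counter()
        update_and_render()
        elapsed = time.perf_counter() - start
        # Whatever is left of the budget is slack; overrunning it means a
        # dropped or late frame.
        time.sleep(max(0.0, FRAME_BUDGET - elapsed))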

Can this stuff run on an iGPU while the dGPU is doing more rendering-related tasks?

Almost no one is using the iGPU for anything. It's completely ignored because it's usually completely useless compared to your main discrete GPU.

bee_rider
9 replies
22h22m

It looks like the GPU is doing most of the work… from that point of view when do we start to wonder if the GPU can “offload” anything to the whole computer that is hanging off of it, haha.

softfalcon
6 replies
22h13m

It looks like the GPU is doing most of the work

Yes. The GPU is doing most of the work in a lot of modern games.

It isn't great at everything though, and there are limitations due to its architecture being structured almost solely for the purpose of computing massively parallel instructions.

when do we start to wonder if the GPU can “offload” anything to the whole computer that is hanging off of it

The main bottleneck for speed on most teams is not having enough "GPU devs" to move stuff off the CPU and onto the GPU. Many games suffer in performance due to folks not knowing how to use the GPU properly.

Because of this, nVidia/AMD invest heavily in making general purpose compute easier and easier on the GPU. The successes they have had in doing this over the last decade are nothing less than staggering.

Ultimately, the way it's looking, GPUs are trying to become good at everything the CPU does and then some. We already have modern cloud server architectures that are 90% GPU and 10% CPU as a complete SoC.

Eventually, the CPU may cease to exist entirely as its fundamental design becomes obsolete. This is usually called a GPGPU in modern server infrastructure.

vlovich123
2 replies
22h1m

I’m pretty sure CPUs destroy GPUs at sequential programming and most programs are written in a sequential style. Not sure where the 90/10 claim comes from but there’s plenty of cloud servers with no GPU installed whatsoever and 0 servers without a CPU.

softfalcon
1 replies
21h42m

Yup, and until we get a truly general purpose compute GPU that can handle both styles of instruction with automated multi-threading and state management, this will continue.

What I've seen shows me that nVidia is working very hard to eliminate this gap though. General purpose computing on the GPU has never been easier, and it gets better every year.

In my opinion, it's only a matter of time before we can run anything we want on the GPU and realize various speed gains.

As for where the 90/10 comes from, it's from the emerging architectures for advanced AI/graphics compute like the DGX H100 [0].

[0] https://www.nvidia.com/en-us/data-center/dgx-h100/

vlovich123
0 replies
17h13m

AI is different. Those servers are set up to run AI jobs & nothing else. That’s still a small fraction of overall cloud machines at the moment. Even if in volume they overtake, that’s just because of the huge surge in demand for AI * the compute requirements associated with it eclipsing the compute requirements for “traditional” cloud compute that is used to keep businesses running. I don’t think you’ll see GPUs running things like databases or the Linux kernel. GPUs may even come with embedded ARM CPUs to run the kernel & only run AI tasks as part of the package as a cost reduction, but I think that’ll take a very long time because you have to figure out how to do cotenancy. It’ll depend on if the CPU remains a huge unnecessary cost for AI servers. I doubt that GPUs will get much better at sequential tasks because it’s an essential programming tradeoff (e.g. it’s the same reason you don’t see everything written in SIMD as SIMD is much closer to GPU-style programming than the more general sequential style)

kqr
1 replies
21h58m

This is somewhat reassuring. A decade ago when clock frequencies had stopped increasing and core count started to increase I predicted that the future was massively multicore.

Then the core count stopped increasing too -- except only if you look in the wrong place! It has in CPUs, but they moved to GPUs.

Sharlin
0 replies
7h23m

SIMD parallelism has been improving on the CPU too – although the number of lanes hasn’t increased that much since the MMX days (from 64 to 512 bits), the spectrum of available vector instructions has grown a lot. And being able to do eight or sixteen operations at the price of one is certainly nothing to scoff at. Autovectorization is a really hard problem, though, and manual vectorization is still something of a dark art, especially due to the scarcity of good abstractions. Programming with cryptically named, architecture-specific intrinsics and doing manual feature detection is not fun.

dahart
0 replies
2h27m

Eventually, the CPU may cease to exist as its fundamental design becomes obsolete. This is usually called a GPGPU in modern server infrastructure.

There’s no reason yet to think CPU designs are becoming obsolete. SISD (Single Instruction, Single Data) is the CPU core model; it’s easier to program and does lots of things that you don’t want to use SIMD for. SISD is good for heterogeneous workloads, and SIMD is good for homogeneous workloads.

I thought the term GPGPU was waning these days. That term was used a lot during the period when people were ‘hacking’ GPUs to do general compute when APIs like OpenGL didn’t offer general programmable computation. Today, with CUDA and compute shaders in every major API, it’s a given that GPUs are for general-purpose computation, and it’s even becoming an anachronism that the G in GPU stands for graphics. My soft prediction is that GPUs might get a new name & acronym soon that doesn’t have “graphics” in it.

TillE
1 replies
22h0m

A game is more than just rendering, and modern games will absolutely get bottlenecked on lower-end CPUs well before you reach say 144 fps. GamersNexus has done a bunch of videos on the topic.

softfalcon
0 replies
21h49m

You are not wrong that there are many games that are bottle-necked on lower end CPUs.

I would argue that for many CPU bound games, they could find better ways to utilize the GPU for computation and it is likely they just didn't have the knowledge, time, or budget to do so.

It's easier to write CPU code, every programmer can do it, so it's the most often reached for tool.

Also, at high frame rates, the bottleneck is frequently the CPU due to it not feeding the GPU fast enough, so you lose frames. There is definitely a real world requirement of having a fast enough CPU to properly utilize a high end video card, even if it's just for shoving command buffers and nothing else.

vlovich123
3 replies
22h4m

No one is using the iGPU for anything. It's completely ignored because it's usually completely useless compared to your main discrete GPU.

Modern iGPUs are actually quite powerful is my understanding. I think the reason no one does this is that the software model isn’t actually there/standardized/able to work cross vendor since the iGPU and the discrete card are going to be different vendors typically. There’s also little motivation to do this because not everyone has an iGPU which dilutes the economy of scale of using it.

It would be a neat idea to try to run lighter weight things on the iGPU to free up rendering time on the dGPU and make frame rates more consistent, but the incentives aren’t there.

softfalcon
1 replies
21h54m

I agree the incentives aren't there. Also agree that it is possible to use the integrated GPU for light tasks, but only light tasks.

In high-performance scenarios where all three are present (discrete GPU, integrated GPU, and CPU) and we try to use the integrated GPU alongside the CPU, it often causes thermal throttling on the die shared between the iGPU and CPU.

This keeps the CPU from executing well, keeping up with state changes, and sending the data needed to keep the discrete GPU utilized. In short: don't warm up the CPU, we want it to stay cool; if that means not doing iGPU stuff, don't do it.

When we have multiple discrete GPUs available (a render farm), this on-die thermal bottleneck goes away, and there are many render pipelines made to handle hundreds, even thousands, of simultaneous GPUs working on a shared problem set of diverse tasks - similar to trying to utilize both the iGPU and dGPU on the same machine, but bigger.

Whether or not to use the iGPU is less about scheduling and more about thermal throttling.

vlovich123
0 replies
15h9m

That’s probably a better point as to why it’s not invested in although most games are not CPU bound so thermal throttling wouldn’t apply then as much. I think it’s multiple factors combined.

The render pipelines you refer to are all offline non-realtime rendering though for movies/animation/etc right? Somewhat different UX and problem space than realtime gaming.

zokier
0 replies
6h59m

It still annoys me how badly AMD botched their Fusion/HSA concept, leading igpus to be completely ignored in most cases.

johnnyanmac
1 replies
20h32m

In theory it's perfectly possible to do all you describe in 8ms (i.e. VR render times). In reality we're

1. still far from properly utilizing modern graphics APIs as is. Some of the largest studios are close but knowledge is tight lipped in the industry.

2. even when those top studios can/do, they choose to focus more of the budget on higher render resolution over adding more logic or simulation. Makes for superficially better looking games to help sell.

3. and of course there are other expensive factors right now with more attention like Ray traced lighting which can only be optimized so much on current hardware.

I'd really love to see what the AA or maybe even indie market can do with such techniques one day. I don't have much faith that AAA studios will ever prioritize simulation.

mrcode007
0 replies
16h5m

Simulation is often already done. Except it’s done offline and then replayed in the game over and over.

smolder
0 replies
6h9m

I actually have my desktop AMD iGPU enabled despite using a discrete Nvidia card. I use the iGPU to do AI noise reduction for my mic in voice and video calls. I'm not sure if this is really an ideal setup, having both GPUs enabled with both driver packages installed (as compared to just running Nvidia's noise reduction on the dGPU, I guess), but it seems to all work with no issue. The onboard HDMI can even be used at the same time as the discrete card's monitor ports.

Valgrim
6 replies
22h42m

Now that LLMs run on GPU too, future GPUs will need to juggle between the graphics, the physics and the AI for NPCs. Fun times trying to balance all that.

My guess is that the load will become more and more shared between local and remote computing resources.

teh_infallible
5 replies
22h30m

Maybe in the future, personal computers will have more than one GPU, one for graphics and one for AI?

MaxBarraclough
3 replies
22h19m

Many computers already have 2 GPUs, one integrated into the CPU die, and one external (and typically enormously more powerful).

To my knowledge though it's very rare for software to take advantage of this.

softfalcon
2 replies
22h0m

In high performance scenarios, the GPU is running full blast while the CPU is running at full blast just feeding data and pre-process work to the GPU.

The GPU is the steam engine hurtling forward, the CPU is just the person shoveling coal into the furnace.

Using the integrated GPU heats up the main die where the CPU is because they live together on the same chip. The die heats up, CPU thermal throttles, CPU stops efficiently feeding data to the GPU at max speed, GPU slows down from under utilization.

In high performance scenarios, the integrated GPU is often a waste of thermal budget.

MaxBarraclough
1 replies
20h58m

Doesn't this assume inadequate cooling? A quick google indicates AMD's X3D CPUs begin throttling around 89°C, and that it's not overly challenging to keep them below 80 even under intense CPU load, although that's presumably without any activity on the integrated GPU.

Assuming cooling really is inadequate for running both the CPU cores and the integrated GPU: for GPU-friendly workloads (i.e. no GPU-unfriendly preprocessing operations for the CPU) it would surely make more sense to use the integrated GPU rather than spend the thermal budget having the CPU cores do that work.

softfalcon
0 replies
19h39m

What the specs say they'll do and what they actually do are often very different realities in my experience.

I've seen thermal throttling happening at 60°C because the chip is cool overall but one or two cores are maxed out. Which is common in game dev, with your primary thread feeding the GPU with command buffer queues and another scheduling the main game loop.

Even with water cooling or the high-end air cooling on my server blades, I see that long-term the system just hits a trade-off point of ~60-70°C and ~85% max CPU clock, even when the cooling system is industry grade, loud as hell, and has an HVAC unit backing it. Probably part of why scale-out is so popular for distributing load.

When I give real work to any iGPUs on these systems, I see the temps bump 5-10°C and the clocks on the CPU cores drop a bit. Could be drivers, could be temp curves; I would think these fancy cooling systems I'm running are performing well, though. shrug

Cthulhu_
0 replies
9h6m

What do you mean, in the future? Multiple GPU chips are already pretty common (on the same card, or as multiple cards at the same time). Besides, GPUs are massively parallel chips as well, with specialized units for graphics and AI operations.

photoGrant
3 replies
22h51m

Welp. It used to be that PhysX math would run on a dedicated GPU of your choice. I remember assigning or realising this during the Red Faction game, with its forever-destructing walls.

Almost Minecraft, but with rocket launchers on Mars.

jayGlow
1 replies
12h56m

I'm fairly certain red faction didn't use Nvidia physX at all.

photoGrant
0 replies
4h7m

It didn’t!

bee_rider
0 replies
22h31m

I remember PhysX, but my feeling at the time was “yeah I guess NVIDIA would love for me to buy two graphics cards.” On the other hand, processors without any iGPUs are pretty rare by now.

dexwiz
0 replies
22h45m

You don't have to be gaming to use a GPU. Plenty of rendering software has a GPU mode now. But writing a GPU algorithm is often different from writing a CPU simulation algorithm, because it is highly parallelized.

llm_nerd
15 replies
23h28m

Page is basically unusable on my Intel MBP, and still a beastly consumer on my AS Mac. I presume simulations are happening somewhere on the page, but it would be a good idea to make them toggleable - ideally via user interaction, or, if one insists on auto-play, only when scrolled into the viewport.

kqr
2 replies
22h0m

Same thing on a modern flagship smartphone.

andygeorge
1 replies
20h51m

Pixel 8 Pro here, using DDG browser, smooth for me.

chefandy
0 replies
20h10m

What I can use works great on FF on my S22 Ultra, but it's not interactive because the shader fields won't register taps as click events.

andygeorge
2 replies
20h53m

Same hardware(s) as you, smooth and 0 issues. Pebkac

russelg
1 replies
18h21m

Both you and the GP failed to mention what browser you're using.

andygeorge
0 replies
5h1m

Firefox and DDG browser

pdabbadabba
1 replies
21h51m

Interestingly, no noticeable performance issues on my M2 MacBook Air. Maybe he's already made some of the changes you recommended?

whalesalad
0 replies
21h30m

Testament to the insane power and efficiency of Apple silicon

dplavery92
1 replies
20h59m

I was encountering the same problem on my Intel MBP, and per another one of the comments here, I find that switching from Chrome to Safari lets me view the whole page, view it smoothly, and without my CPU utilization spiking or my fans spinning up.

tonyarkles
0 replies
19h26m

Yeah, I just checked to see what this machine is... Mid 2015 15" Retina MBP with an AMD Radeon R9 and the page is buttery smooth in Safari.

seb1204
0 replies
21h1m

Maybe it is one of the many other tabs you have open.

pudquick
0 replies
21h13m

Runs buttery smooth on my M2 here in Safari on macOS 14.2.1

Tried them out in Chrome and they're mostly all the same though I do notice a slight jitter to the rendering in the smoke example.

lelandbatey
0 replies
20h56m

Works well for me on my Intel MBP, though I'm using Firefox inside of Ubuntu running inside of Parallels; maybe I'm not hitting the same issue that you are. The author may have put in a new "only runs when you click" setting since you wrote your original comment.

jhallenworld
0 replies
19h55m

Runs fine in Ubuntu Linux using Firefox, but not in Google Chrome on my Lenovo Core i7-12700H 32GB laptop with Nvidia T600.

aeyes
0 replies
21h31m

Seems to work only in Safari, using Chrome I can't scroll past the first quarter of the page and CPU is at 100%.

kragen
6 replies
23h57m

it blows my mind that cfd is a thing we can do in real time on a pc

s-macke
4 replies
23h36m

Yes, and we have been able to for a long time. Real-time fluid sims were already a thing 14 years ago on the PS3, in the game "Pixeljunk Shooter" [0].

But real time simulations often use massive simplifications. They aim to look real, not to match exact solutions of the Navier-Stokes equations.

[0] https://www.youtube.com/watch?v=qQkvlxLV6sI

mcphage
2 replies
21h5m

Heck, the Wii had Fluidity around the same time period (2010), and that's a lot weaker than the PS3. Fluidity was pretty neat—you played as a volume of water, changing states as you moved through a level that looked like a classic science textbook: https://youtu.be/j7IooyXp3Pc?si=E79rCrq2mdyZSKoF&t=120

s-macke
1 replies
11h10m

Well, technically yes.

Both use the same technique: Smoothed Particle Hydrodynamics.

But the PS3 was able to fill the whole screen with these particles. Hundreds of them. The game Fluidity seems to have approx. 20.
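For anyone curious, the core of SPH is estimating field quantities at particle i by summing over nearby particles j with a smoothing kernel W of support radius h; the density estimate, for example, is roughly

    \rho_i = \sum_j m_j \, W\left( \| \mathbf{r}_i - \mathbf{r}_j \|, \, h \right),

and pressure and viscosity forces are built from similar kernel-weighted sums, which is why the particle count is the main cost knob.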

mcphage
0 replies
5h27m

The game Fluidity seems to have approx. 20.

It had more than 20, but... not by much.

kragen
0 replies
20h8m

thanks, i had no idea about pixeljunk shooter! it looks like those fluids are 2-d tho. otoh it's apparently performing a multiphysics simulation including thermodynamics

btw, almost unrelatedly, you have no idea how much i appreciate your exegesis of the voxelspace algorithm

mhh__
0 replies
16h43m

I remember reading about finite element tire models and how complicated they were, they can now run in real time give or take a bit.

berniedurfee
6 replies
14h58m

I wrote a super simple flame simulation a long time ago as a toy in C after reading an article somewhere.

You just set each pixel’s brightness to be the average brightness of the immediately adjacent pixels. Calculate from bottom to top.

Add a few “hot” pixels moving back and forth along the bottom and boom, instant fire.

Looks very cool for a tiny amount of code and no calculus. :)
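For the curious, a rough reconstruction of that kind of effect from the description above (assumptions: numpy, a small greyscale buffer, wrap-around at the sides, random hot pixels rather than moving ones, and a small cooling term so the flames taper off; the original C version surely differed):

    import numpy as np

    H, W = 64, 128
    buf = np.zeros((H, W))
    rng = np.random.default_rng(0)

    def step(buf):
        # Seed "hot" pixels along the bottom row.
        buf[-1, :] = rng.random(W) ** 2 * 255.0
        # Bottom to top: each pixel becomes the average of nearby pixels in
        # the row(s) below it, minus a small cooling term.
        for y in range(H - 2, -1, -1):
            below = buf[y + 1]
            left = np.roll(below, 1)    # wraps at the side edges
            right = np.roll(below, -1)
            two_below = buf[min(y + 2, H - 1)]
            buf[y] = (below + left + right + two_below) / 4.0 - 1.5
        np.clip(buf, 0.0, 255.0, out=buf)

    for _ in range(200):
        step(buf)
    # 'buf' now holds one frame of flame intensities; map it through a
    # black->red->yellow->white palette and draw it to see the fire.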

shnock
1 replies
13h19m

Could you share the repo?

berniedurfee
0 replies
4h19m

I will, but I need to dig it out of my archives. It was from a time before repos. :)

hyperthesis
1 replies
10h51m

This is the Laplacian operator (in 1D, just the second derivative, or curvature). The sharper the crest, the more negative; the sharper the trough, the more positive. If you change the value at each point by that much, the effect is averaging (and the discretized form is literally averaging).

You've been doing calculus the whole time. There's a difference between knowing the path, and walking the path.

Here's a 3Blue1Brown video with intuitive graphics on it https://youtube.com/watch?v=ToIXSwZ1pJU
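Spelled out in 1D: replacing a value by the average of its neighbours nudges it by the discrete second derivative, i.e. it is one explicit step of the heat equation,

    \frac{u_{i-1} + u_{i+1}}{2}
    = u_i + \frac{1}{2} \left( u_{i-1} - 2 u_i + u_{i+1} \right)
    \approx u_i + \frac{\Delta x^2}{2} \, \frac{\partial^2 u}{\partial x^2},

which is exactly the averaging-equals-Laplacian point above.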

berniedurfee
0 replies
4h19m

Bah! I knew it! :)

fjkdlsjflkds
1 replies
12h19m

no calculus

"set each pixel’s brightness to be the average brightness of the immediately adjacent pixels" sounds like a convolution ;)

Cthulhu_
0 replies
9h16m

It's a trick! Calculus puts me right off, but explain it in code and visualizable ways and I'm on board.

askonomm
6 replies
23h30m

Interesting. I have 64GB of RAM and yet this page managed to kill the tab entirely.

Fervicus
2 replies
22h49m

Weird. I only have 16GB but seems to run fine for me?

askonomm
1 replies
22h29m

Maybe it's Windows? Seems on MacOS it runs fine with much less RAM.

Fervicus
0 replies
21h25m

I am on Windows.

xboxnolifes
1 replies
17h46m

Do you have hardware acceleration disabled in your browser?

askonomm
0 replies
8h55m

I checked and I have it enabled. But, I don't have a dedicated GPU, just whatever is inside of i7-14700k. Maybe it's just not up for the task?

ghawkescs
0 replies
21h34m

FWIW, works great and animates smoothly on a Surface Laptop 4 with 16 GB RAM (Firefox).

CaptainOfCoit
3 replies
23h28m

EmberGen is absolutely crazy software that does simulation of fire and smoke in real-time on consumer GPUs, and supports a node-based workflow which makes it so easy to create new effects.

Seriously, something that used to take me hours to make now takes minutes to get right.

https://jangafx.com/software/embergen/

I was sure that this submission would be about EmberGen and I'm gonna be honest, I'm a bit sad EmberGen never really got traction on HN (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...)

(Not affiliated with EmberGen/JangaFX, just a happy customer)

PKop
1 replies
22h41m
imiric
0 replies
22h11m

Odin is such a neat language.

I was equally impressed by Spall: https://odin-lang.org/showcase/spall/

ilrwbwrkhv
0 replies
10h21m

Yes, this is fantastic. One of the best pieces of software currently available on the market.

jvans
2 replies
21h58m

Does anyone have any recommendations for a former math major turned SWE to get into CFD simulations? I find this material fascinating but it's been a while since I've done any vector calculus or PDEs so my math is very rusty.

physicsguy
0 replies
8h41m

Apply for jobs at a place like StarCCM or similar

chefandy
0 replies
19h56m

If you're more interested in the physics simulation for research, I can't help ya. However, SideFX Houdini is tough to beat if you're going more for entertainment-focused simulations.

https://www.youtube.com/watch?v=zxiqA8_CiC4

Their free non-commercial "Apprentice" version is only limited in its rendering and collaboration capabilities. It's pretty... uh... deep though. Coming from software and moving into this industry, the workflow for learning these sorts of tools is totally different. Lots of people say Houdini is more like an IDE than a 3D modelling program, and I agree in many ways. Rather than using the visual tools like in, say, blender, it's almost entirely based on creating a network of nodes, and modifying attributes and parameters. You can do most stuff in Python more cleanly than others like 3ds Max, though it won't compile, so the performance is bad in big sims. Their own C-like language, vex, is competent, and there's even a more granular version of their node system for the finer work with more complex math and such. It's almost entirely a data-oriented workflow on the technical side.

However, if you're a "learn by reading the docs" type, you're going to have to learn to love tutorials rather quickly. It's very, very different from any environment or paradigm I've worked with, and the community at large, while generally friendly, suffers from the curse of expertise big-time.

holoduke
2 replies
20h34m

What will be quicker: realtime raytraced volumetric super-realistic particle effects, or some kind of diffusion-model, shader-like implementation?

postalrat
1 replies
17h53m

Real smoke and flames.

Cthulhu_
0 replies
8h14m

Nolan should've used a real nuke

riidom
1 replies
20h55m

While not the main point of the article, the introductory premise is a bit off IMO. When you choose simulation, you trade artistic control for a painful negotiation with an (often overwhelming) number of controls.

For a key scene like the Balrog, you would probably never opt for simulation; you'd go for full control over every frame instead.

To stay with the Tolkien fantasy example: a landscape scene with a nice river doing a lot of bends, some rocks in it, the occasional happy fish jumping out of the water - that would suit a simulation much better.

johnnyanmac
0 replies
20h43m

Not impossible to do both, but very few tools are built where simulation comes for "free". At least not free and real-time as of now. Maybe one day.

But I agree with you. You'd ideally reserve key moments for full control and leave simulation for the background polish (assuming your work isn't bound by a physically realistic world).

JaggerJo
1 replies
23h36m

Does not work on iOS for me.

bee_rider
0 replies
23h19m

It works fine on iOS/safari for me, in the sense that the text is all readable, but the simulations don’t seem to all execute. Which… given that other folks are reporting that the site is slowed to a crawl, I guess because it is running the simulations, I’ll take the iOS experience.

throw10920
0 replies
5h2m

I've never seen (x, t) anywhere other than the Schrödinger equation - seeing it used here for a dye density function was weird, but it makes sense!

sieste
0 replies
8h8m

If you like this, you'll also enjoy Ten Minute Physics, especially chapter 17 "How to write an Eulerian Fluid Simulator with 200 lines of code."

https://matthias-research.github.io/pages/tenMinutePhysics/i...

randyrand
0 replies
20h59m

this is so useful thank you!

pictureofabear
0 replies
17h30m

If you need a refresher on diff eq... https://www.youtube.com/watch?v=ly4S0oi3Yz8

pbowyer
0 replies
22h8m

I'm really impressed by the output of the distill.pub template and page building system used for this article. It's a shame it was abandoned in 2021 and is no longer maintained.

nox100
0 replies
20h11m

This is very nice! Another person explaining this stuff is "10 Minute Physics"

https://matthias-research.github.io/pages/tenMinutePhysics/i...

jbverschoor
0 replies
21h45m

Good explanation why CG explosions suck: https://www.youtube.com/watch?v=fPb7yUPcKhk

jamestweb
0 replies
12h27m

Looks amazing!

danybittel
0 replies
11h48m

It looks to me like you did not add gamma correction to the final color?

andybak
0 replies
8h15m

I think I'm probably smart enough to follow this, but maths notation is so opaque that I get stuck the moment the Greek letters come out.

Phelinofist
0 replies
20h16m

I recently watched this video about implementing a simple fluid simulation and found it quite interesting: https://www.youtube.com/watch?v=rSKMYc1CQHE