CityGaussian: Real-time high-quality large-scale scene rendering with Gaussians

forrestthewoods
34 replies
1d

Can someone convince me that 3D gaussian splatting isn't a dead end? It's an order of magnitude too slow to render and an order of magnitude too much data. It's like raster vs raytrace all over again. Raster will always be faster than raytracing. So even if raytracing gets 10x faster, so too will raster.

I think generating traditional geometry and materials from gaussian point clouds is maybe interesting. But photogrammetry has already been a thing for quite a while. Trying to render a giant city in real time via splats doesn't feel like "the right thing".

It's definitely cool and fun and exciting. I'm just not sure that it will ever be useful in practice? Maybe! I'm definitely not an expert so my question is genuine.

peppertree
4 replies
22h29m

Mesh based photogrammetry is a dead end. GS or radiance field representation is just getting started. Not just rendering but potentially a highly compact way to store large 3D scenes.

forrestthewoods
2 replies
22h20m

> potentially a highly compact way to store large 3D scenes.

Is it? So far it seems like the storage size is massive and the detail is unacceptably low up close.

Is there a demo that will make me go “holy crap I can’t believe how well this scene compressed”?

peppertree
1 replies
19h34m

Here is a paper if you are interested. https://arxiv.org/pdf/2311.13681.pdf

The key is not to compress but to leverage the property of neural radiance fields and optimize for entropy. I suspect NERF can yield more compact storage since it's volumetric.

Not sure what you mean by "unacceptably low up close". Most GS demos don't have LoD lol.

forrestthewoods
0 replies
18h5m

> Not sure what you mean by "unacceptably low up close". Most GS demos don't have LoD lol.

When the camera gets close the "texture" resolution is extremely low. Like, roughly 1/4 what I would expect. Maybe even 1/8. Aka it's very blurry.

jayd16
0 replies
1h36m

Saying its a dead end considering the alternative has no concept of animation or the ability for an artist to remix the asset? That just makes the comment seem naive.

gmerc
4 replies
22h59m

It's not an order of magnitude slower. You can easily get 200-400 fps in Unreal or Unity at the moment.

100+FPS in browser? https://current-exhibition.com/laboratorio31/

900FPS? https://m-niemeyer.github.io/radsplat/

We have 3 decades' worth of R&D in traditional engines, so it'll take a while for this to catch up in terms of tooling and optimization, but when you look at where the papers come from (many from Apple and Meta), you see that this is the technology destined to power the Metaverse/spatial computing era both companies are pushing towards.

The ability to move content into 3D environments at incredibly low production cost (an iPhone video) is going to murder a lot of R&D made in traditional methods.

araes
1 replies
22h40m

Don't know the hardware involved, yet that first link is most definitely not 100 FPS on all hardware. It's a slideshow on the current device.

jasonjmcghee
0 replies
18h8m

Maybe not, but it's relatively smooth on my 3 year old phone, which is crazy impressive

Edit: I was in low power mode, it runs quite smoothly

101008
1 replies
21h31m

Does anyone know how the first link is made?

pierotofy
3 replies
22h33m

Photogrammetry struggles with certain types of materials (e.g. reflective surfaces). It's also very difficult to capture fine details (thin structures, hair). 3DGS is very good at that. And people are working on improving current shortcomings, including methods to extract meshes that we could use in traditional graphics pipelines.

somethingsome
2 replies
20h21m

3DGS is absolutely not good with non-Lambertian materials.

After testing it, it fails in very basic cases. And it is normal that it fails: non-Lambertian materials are not reconstructed correctly with SfM methods.

andybak
1 replies
17h23m

I don't understand the connection you're making between SfM (Structure from Motion) and surface shading.

I might be misunderstanding what you're trying to say. Could you elaborate?

somethingsome
0 replies
12h26m

You use SfM to find the first point cloud. However, SfM is based on the hypothesis that the same point 'moves' linearly between any two views. This hypothesis is important because it allows you to match a point in two pictures, and given the distance between the two images, you can triangulate the point in space and therefore find its depth.

However, non-Lambertian points move non-linearly in viewing space (e.g. a specular point depends on the viewer pose).

So, automatically, their positions in space will be wrong, and you'll have floating points.

Gaussian 'splats' may have the potential to render non-Lambertian stuff using, for example, the spherical harmonics (even if I don't think the viewer uses them, if I'm not mistaken). But capturing non-Lambertian points is very difficult and an open research problem.
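
To make the failure mode concrete, here is a toy two-view triangulation in Python/numpy (my own sketch, not from any of the papers above): a Lambertian point is recovered exactly, while a small view-dependent shift of the match in the second image, like you get with a specular highlight, pulls the triangulated depth away from the true surface, which is where the floaters come from.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        # Linear (DLT) triangulation of one point from two pixel observations.
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

    # Two cameras one unit apart, looking down +Z (identity intrinsics for brevity).
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at the origin
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # camera at x = 1

    X_true = np.array([0.5, 0.0, 5.0])                         # a Lambertian point
    x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
    x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
    print(triangulate(P1, P2, x1, x2))       # recovers ~[0.5, 0, 5]

    # A specular highlight appears shifted differently in each view, so the matched
    # "point" no longer satisfies the epipolar geometry and the depth comes out wrong.
    x2_spec = x2 + np.array([0.02, 0.0])     # small view-dependent shift
    print(triangulate(P1, P2, x1, x2_spec))  # depth drifts to ~5.6, i.e. a floater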

jerf
3 replies
22h40m

You have to ask about what it's a dead end for. It seems pretty cool for the moral equivalent of fully 3D photographs. That's a completely legitimate use case.

For 3D gaming engines? I struggle to see how the fundamental primitive can be made to sing and dance in the way that they demand. People will try, though. But from this perspective, gaussians strike me more as a final render format than a useful intermediate representation. If engines are going to use gaussians, something else will have to be invented to make them practical to work with in the meantime, and there are still an awful lot of questions there.

For other uses? Who knows.

But the world is not all 3D gaming and visual special effects.

gmerc
1 replies
10h28m

You are missing where this is coming from.

Many of the core papers for this came from Meta's VR team (codec avatars), Apple ML (Spatial Compute) and Nvidia - companies deeply invested in VR/Spatial compute. It's clear that they see it as a key technology to further their interests in the space, and they are getting plenty of free help:

After it was open sourced in May last year, 79 papers overall were published on the topic.

It's more than 150 this year, more than one a day, advancing this "dead end" forward.

A small selection:

https://animatable-gaussians.github.io/ https://nvlabs.github.io/GAvatar/ https://research.nvidia.com/labs/toronto-ai/AlignYourGaussia... https://github.com/lkeab/gaussian-grouping

jerf
0 replies
4h13m

Goals aren't results. Maybe gaussian splatting will be the wave of the future and in 10 years it'll be the only graphics tech around.

In the meantime, if it isn't, it will hardly be the first promising new graphics technology to turn out to be completely unsuited for all the things people hoped for.

Most of what you linked to appears to correspond to what I intuitively described: them being an output format rather than useful directly; the last paper appears to go in the other direction, extracting information from them, but again doesn't function on the splats directly. The actual work isn't being done in the gaussians themselves, and the interesting results are precisely in what is not being done through the splats... but pointing that out explicitly is not how you get funding nowadays. Two otherwise-identical proposals, one that sings the praises of the buzzwords while the other is phrased to be critical of them, will have very different outcomes.

jayd16
0 replies
1h45m

How can it be a legitimate use case for a "3D photo"? Realistically how long does it take to capture the photos needed to construct the scene?

maxglute
2 replies
23h40m

Hardware evolves with production in mind. If a method saves 10x time/labour even while using 50x more expensive compute/tools, then industry will figure out a way to optimize/amortize the compute cost on that task over time, and it will eventually disseminate into consumer hardware.

forrestthewoods
1 replies
23h16m

Maybe. That implies that hardware evolution strictly benefits Bar and not Foo. But what has happened so far is that hardware advancements to accelerate NewThing also accelerate OldThing.

maxglute
0 replies
17h51m

I think hardware evolution has to benefit Bar and Foo for production continuity anyway; OldThing still has to be supported until it becomes largely obsolete to both industry and consumers, at which point fringe users have to hold on to old hardware to keep their processes going.

jonas21
2 replies
21h21m

How is it too slow? You can easily render scenes at 60fps in a browser or on a mobile phone.

Heck, you can even train one from scratch in a minute on an iPhone [1].

This technique has been around for less than a year. It's only going to get better.

[1] https://www.youtube.com/watch?v=nk0f4FTcdmM

somethingsome
0 replies
20h19m

This technique has existed for more than 10 years, and real-time renderers have existed for a very long time too.

mthoms
0 replies
20h41m

That's pretty cool. It's not clear if it's incorporating Lidar data or not though. It's very impressive if not.

mschuetz
1 replies
23h52m

It's currently unparalleled when it comes to realism in 3D reconstruction of the real world. Photogrammetry only really works for nice surface data, whereas gaussian splats work for semi-volumetric data such as fur, vegetation, particles, rough surfaces, and also for glossy/specular surfaces and volumes with strong subsurface scattering, or generally stuff with materials that are strongly view-dependent.

tedd4u
0 replies
12h13m

This seems like impressive work. You mention glossy / specular. I wonder why nothing in the city (first video) is reflective, not even the glass box skyscrapers. I noticed there is something funky in the third video with the iron railway rails from about :28 to :35 seconds. They look ghostly and appear to come in and out. Overall these three videos are pretty devoid of shiny or reflective things.

fngjdflmdflg
1 replies
23h13m

> But photogrammetry has already been a thing for quite a while.

Current photogrammetry, to my knowledge, requires much more data than NeRFs/Gaussian splatting. So this could be a way to get more data for the "dumb" photogrammetry algorithms to work with.

Tajnymag
0 replies
8h0m

Right? I'm surprised I don't hear this connection more often. Is it perhaps because photogrammetry algorithms require sharp edges, which the splats don't offer?

thfuran
0 replies
20h44m

> ...much data. It's like raster vs raytrace all over again. Raster will always be faster than raytracing. So even if raytracing gets 10x faster, so too will raster.

And? It's always going to be even faster to not have lighting at all.

rallyforthesun
0 replies
23h50m

In regards to content production for virtual production, it is quicker to capture a scene and process the images into a cloud of 3D gaussians, but on the other hand it is harder to edit the scene after it's shot. Also, the light is already captured and baked into it. The tools to edit scenes will probably rely a lot on AI, like delighting and changing settings; right now there are just a few, and the process is more like using a knife to cut out parts and remove floaters. You can replay this of course with Unreal Engine, but in the long term you could run it in a browser. So in short, if you want to capture a place as it is, with all its tiny details, 3D gaussians are a quicker and cheaper way to do this than modelling and texturing.

kfarr
0 replies
1d

Yes, this has tons of potential. It's analogous but different to patented techniques used by Unreal Engine. Performance is not the focus of most research at the moment. There isn't even alignment on a unified format with compression yet. The potential for optimization is very clear and straightforward to adapt to many devices; it's similar to point cloud LOD, mesh culling, etc. Splat performance could be a temporary competitive advantage for viewers, but similar to video decompression and other 3D standards that are made available via open source, it will likely become commonplace in a few years to have high-quality, high-fps splat viewing on most devices as table stakes. The next question is what the applications thereof are.

chankstein38
0 replies
21h41m

I'll be honest, I don't have a ton of technical insight into these, but anecdotally I found that KIRI Engine's Gaussian Splatting scans (versus its photogrammetry scans) were way more accurate and true to life and required a lot less cleanup!

bodhiandphysics
0 replies
21h24m

Try animating a photogrammetric model! How about one that changes its shape? You get awful geometry from photogrammetry…

In practice the answer to "will this be useful" is yes! Subdivision surfaces coexist with NURBS for different applications.

Legend2440
0 replies
1d

Nothing comes close to this for realism, it's like looking at a photo.

Traditional photogrammetry really struggles with complicated scenes, and reflective or transparent surfaces.

chpatrick
32 replies
1d

"The average speed is 36 FPS (tested on A100)."

Real-Time if you have $8k I guess.

jsheard
13 replies
1d

Good ol' "SIGGRAPH realtime", when a graphics paper describes itself as achieving realtime speeds you always have to double check that they mean actually realtime and not "640x480 at 20fps on the most expensive hardware money can buy". Anything can be realtime if you set the bar low enough.

oivey
5 replies
21h58m

Depending on what you’re doing, that really isn’t a low bar. Saying you can get decent performance on any hardware is the first step.

PheonixPharts
4 replies
21h39m

> get decent performance

The issue is that in Computer Science "real-time" doesn't just mean "pretty fast", it's a very specific definition of performance[0]. Doing "real-time" computing is generally considered hard even for problems that are themselves not too challenging, and involves potentially severe consequences for missing a computational deadline.

Which leads to both confusion and a bit of frustration when sub-fields of CS throw around the term as if it just means "we don't have to wait a long time for it to render" or "you can watch it happen".

[0] https://en.wikipedia.org/wiki/Real-time_computing

dekhn
2 replies
18h55m

I don't think it's reasonable to expect the larger community to not use "real time" to mean things other than "hard real time as understood by a hardware engineer building a system that needs guaranteed interrupt latencies".

Mtinie
1 replies
16h18m

I think it’s reasonable to assume that it means what you described on this site.

dekhn
0 replies
15h35m

Of course. I'm the "Reality is just 100M lit, shaded, textured polygons per second" kind of guy: realtime is about 65 FPS with no jank.

aleksiy123
0 replies
19h35m

That link defines it in terms of simulation as well: "The term "real-time" is also used in simulation to mean that the simulation's clock runs at the same speed as a real clock." and even states that was the original usage of the term.

I think that pretty much meets the definition of "you can watch it happen".

Essentially there are real-time systems and real-time simulations. So it seems that they are using the term correctly in the context of simulation.

phkahler
1 replies
23h37m

> Anything can be realtime if you set the bar low enough.

I was doing "realtime ray tracing" on Pentium class computers in the 1990s. I took my toy ray tracer and made an OLE control and put it inside a small Visual Basic app which handled keypress-navigation. It could run in a tiny little window (size of a large icon) at reasonable frame rates. Might even say it was using Visual Basic! So yeah "realtime" needs some qualifiers ;-)

TeMPOraL
0 replies
20h34m

Fair, but today it could probably run 30FPS full-screen at 2K resolution, without any special effort, on an average consumer-grade machine; better if ported to take advantage of the GPU.

Moore's law may be dead in general, but computing power still increases (notwithstanding the software bloat that makes it seem otherwise), and it's still something to count on wrt. bleeding edge research demos.

cchance
1 replies
22h50m

I mean, A100s were cutting edge a year or so ago; now we're at, what, H200s and B200s (or is it 300s)? It may be a year or 2 more, but A100-level speed will trickle down to the average consumer as well.

TeMPOraL
0 replies
20h31m

And, from the other end, research demonstrations tend to have a lot of low-hanging fruits wrt. optimization, which will get picked if the result is interesting enough.

VelesDude
1 replies
20h0m

Microsoft once set the bar for realtime as 640x480 @ 10fps. But this was just for research purposes. You can make out what it is trying to do and the update rate was JUST acceptable enough to be interactive.

harles
0 replies
19h18m

I’d actually call that a good bar. If you’re looking 5-10 years down the line for consumers, it’s reasonable. If you think the results can influence hardware directions sooner than that (for better performance) it’s also reasonable.

mateo1
0 replies
21h35m

It can be run in real time. It might be 640x480 or 20 fps, but many algorithms out there could never be run in real time even on a $10k graphics card or a computing cluster.

RicoElectrico
3 replies
1d

"Two more papers down the line..." ;)

Fauntleroy
2 replies
1d

Indeed, this very much looks like what we'll likely see from Google Earth within a decade—or perhaps half that.

xyproto
0 replies
15h42m

2 years tops, since the technology is there, it would be a considerable improvement to Google Maps, and Google has the required resources.

mortenjorck
0 replies
20h39m

I’ve seen very impressive Gaussian splatting demos of more limited urban geographies (a few city blocks) running on consumer hardware, so the reason this requires research-tier Nvidia hardware right now is probably down to LOD streaming. More optimization on that front, and this could plausibly come to Google Earth on current devices.

“What a time to be alive” indeed!

pierotofy
2 replies
22h31m

A lot of 3DGS/Nerf research is like this unfortunately (ugh).

Check https://github.com/pierotofy/OpenSplat for something you can run on your 10 year old laptop, even without a GPU! (I'm the author)

somethingsome
1 replies
20h38m

I know, I don't get the fuss either; I coded real-time gaussian splat renderers >7 years ago with LOD, and they were able to show any kind of point cloud.

They worked with a basic 970 GTX on a big 3d screen and also on oculus dk2.

kookamamie
0 replies
1h58m

It's the old story of an outsider group (AI researchers, in this case) re-inventing the wheel discovered ages ago by experts in the domain.

littlestymaar
2 replies
1d

I chuckled a bit too when I saw it.

By the way, what's the compute power difference between an A100 and a 4090?

entropicdrifter
0 replies
23h8m

4090 is faster in terms of compute, but the A100 has 40GB of VRAM.

enlyth
0 replies
23h6m

I believe the main advantage of the A100 is the memory bandwidth. Computationally the 4090 has a higher clock speed and more CUDA cores, so in that way it is faster.

So for this specific application it really depends on where the bottleneck is

mywittyname
1 replies
1d

Presumably, this can be used as the first stage in a pipeline: take the models and textures generated from the source data, cache them, and stream that data to clients for local rendering.

Consumer GPUs are probably 2-3 generations out from being as capable as an A100.

Legend2440
0 replies
23h59m

There are no models or textures, it's just a point cloud of color blobs.

You can convert it to a mesh, but in the process you'd lose the quality and realism that makes it interesting.

aurareturn
1 replies
14h22m

I'm going to guess that the next-gen consumer GPU (5090) will be twice as fast as A100 and will not cost $8k.

So I don't see an insurmountable problem.

diggan
0 replies
5h49m

No, not unless Nvidia is thinking about financial suicide. The current split between "pro" and "consumer" isn't because it was impossible to avoid, it's because Nvidia is doing market segmentation in order to extract more money from the pro segment.

rallyforthesun
0 replies
1d

As it seems to be the first 3DGS approach that uses LODs and blocks, there might be room for optimization. This might become useful for virtual production use cases, probably not for mobile.

datascienced
0 replies
19h39m

Just wait 2 years it’ll be on your phone.

anigbrowl
0 replies
16h53m

You gotta start somewhere

speps
16 replies
22h5m

Note that the dataset from the video is called Matrix city. It's highly likely extracted from the Unreal Engine 5 Matrix demo released a few years ago. The views look very similar, so it's photorealistic but not from photos.

EDIT: here it is, and I was right! https://city-super.github.io/matrixcity/

speps
8 replies
21h58m

Replying to myself with a question, as someone could have the answer: Would it be possible to create the splats without the training phase? If we have a fully modelled scene in Unreal Engine for example (like Matrix city), you shouldn't need to spend all the time training to recreate the data...

fudged71
1 replies
21h36m

Are you referring to the gaussian splat rasterizer?

sorenjan
0 replies
20h59m

I'm referring to using the modeled scene to bind gaussian splats to an existing mesh.

> Binding New 3D Gaussians to the Mesh

> This binding strategy also makes possible the use of traditional mesh-editing tools for editing a Gaussian Splatting representation of a scene

kfarr
1 replies
21h48m

Yes, and then it gets interesting to think about procedurally generated splats, such as spawning a randomized distribution of grass splats on a field for example

dkjaudyeqooe
0 replies
15h58m

To me the big issue is image quality versus generative efficiency. If splats make rendering complicated scenes efficient without requiring a lot of data/calculation "scaffolding", then you could do almost everything procedurally, maybe using AI models to fill in definitional detail.

fudged71
1 replies
21h40m

I could be wrong, but being able to remove the step of estimating the camera position would save a large amount of time. You’re still going to need to train on the images to create the splats

dkjaudyeqooe
0 replies
16h3m

> If we have a fully modelled scene in Unreal Engine for example...

No images involved, so no training required.

somethingsome
0 replies
20h50m

Of course! And this was done many times in the past, probably with better results than current deep-learning-based gaussian splatting, where they use way too many splats to render a scene.

Basically, the problem with sparse pictures and point clouds in general is their lack of topology and imprecise spatial positions. But when you already have the topology (e.g. with a mesh), you can extract (optimally) a set of points and compute the radius of the splats (and their color) such that there are no holes in the final image. That is usually done with the curvature and the normal.

The 'optimally' part is difficult; an easier and faster approach is just to do a greedy pass to select good-enough splats.
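
As a rough illustration of that mesh-to-splats idea (my own minimal sketch, not the optimal curvature-based version described above): one flat splat per face, with the normal taken from the face and the radius taken from the face size so neighbouring discs overlap and leave no holes.

    import numpy as np

    def mesh_to_splats(vertices, faces, colors):
        # vertices: (V, 3), faces: (F, 3) int indices, colors: (V, 3) in [0, 1]
        splats = []
        for f in faces:
            v0, v1, v2 = vertices[f]
            center = (v0 + v1 + v2) / 3.0
            n = np.cross(v1 - v0, v2 - v0)
            area = 0.5 * np.linalg.norm(n)
            normal = n / (np.linalg.norm(n) + 1e-12)
            # Radius on the order of the triangle size so neighbouring discs overlap;
            # a more careful version would use the circumradius or local curvature,
            # and a greedy pass could merge coplanar neighbours into fewer splats.
            radius = np.sqrt(area)
            color = colors[f].mean(axis=0)
            splats.append((center, normal, radius, color))
        return splats

    # Usage on a single quad made of two triangles:
    verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
    faces = np.array([[0, 1, 2], [0, 2, 3]])
    cols = np.full((4, 3), 0.5)
    print(len(mesh_to_splats(verts, faces, cols)))  # -> 2 splats, one per face

No images, no SfM, no optimization loop, which is the point: the topology is already known.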

affgrff2
2 replies
14h20m

Just want to add that using data from a game engine gives you perfect camera poses associated with each image. That makes training NeRFs and GS a little easier since there is no error from camera pose optimization. That's also the reason early NeRF papers used the famous yellow Lego excavator rendered with Blender.

MrSkyNet
1 replies
10h48m

How so?

Through pixel perfect alignment? Resolution?

arussellsaw
0 replies
10h14m

One of the fundamental problems in photogrammetry is determining the position of the camera in 3D space; with a game engine you just have a concrete value for your camera position, removing that entire problem. I don't know too much about photogrammetry, but I'd imagine once your camera position is 100% accurate it's a lot easier to construct the point cloud accurately.

jsheard
1 replies
21h42m

Epic acquired the photogrammetry company Quixel a while ago, so it's quite likely they used their photo-scanned asset library when building the Matrix city. Funnily enough, that would mean the OP is doing reconstructions of reconstructions of real objects.

reactordev
0 replies
20h26m

Or it's just rendering, mixed with some splats; we don't know, because they didn't release their source code. I'm highly skeptical of their claims and their dataset, given that it's trivial to export it into some other viewer to fake it.

ttmb
0 replies
21h28m

Not all of the videos are Matrix City, some are real places.

markedathome
0 replies
7h57m

The Matrix Awakens demo had some of its assets released to the Unreal Engine marketplace as separate packs: [1] the map, [2] buildings, crowds, and vehicles.

The MatrixCity map is different, but is somewhat similar to the map in Matrix Awakens. You can see this from the design breakdown on [3], a page by a technical lead on the Matrix Awakens project.

edit: If you look further into the [4] GitHub codebase, under the MatrixPlugin section it explicitly states that they used the City Sample project.

[1] https://www.unrealengine.com/marketplace/en-US/product/city-... [2] https://www.unrealengine.com/marketplace/en-US/learn/city-sa... [3] https://quentinmarmier.artstation.com/projects/xYeKNO [4] https://github.com/city-super/MatrixCity

cchance
1 replies
22h51m

That's really cool, is there a GitHub with the code?

I'm getting errors on that first link in devtools:

Uncaught (in promise) Error: Failed to fetch resource https://tile.googleapis.com/v1/3dti...

aaroninsf
1 replies
19h40m

Are you on Bluesky?

Would love to follow. But not, you know, over there.

spothedog1
0 replies
1h32m

Twitter only shows the first tweet if you're not logged in; can you post the repo here?

sbarre
0 replies
23h37m

This is super cool! Congrats on the PoC ...

ctrlw
0 replies
10h58m

That looks great! I've been playing around with A-Frame and OSM building footprints, but this looks so much better. Will have a look at aframe-loader-3dtiles-component.

aantix
0 replies
22h25m

Wow, amazing work!

UncleOxidant
0 replies
55m

Could you provide a link here to your code repo for those of us not on xitter?

999900000999
7 replies
1d

Excited to see what license this is released under. Would love to see some open source games using this.

jsheard
5 replies
1d

Performance aside, someone needs to figure out a generalizable way to make these scenes dynamic before they will really be usable for games. History is littered with alternatives to triangle meshes that looked promising until we realised there's no efficient way to animate them.

CuriouslyC
2 replies
23h59m

Even if this doesn't replace triangles everywhere, I'm guessing it's still going to be the easiest way to generate a large volume of static art assets, which means we will see hybrid rendering pipelines.

jsheard
1 replies
23h42m

AIUI these algorithms currently bake all of the lighting into the surface colors statically, which mostly works if the entire scene is constructed as one giant blob where nothing moves. But if you wanted to render an individual NeRF asset inside an otherwise standard triangle-based pipeline, it would need to be more adaptable than that. Even if the asset itself isn't animated, it would need to adapt to the local lighting at the bare minimum, which I haven't seen anyone tackle yet; the focus has been on the rendering-one-giant-static-blob problem.

For hybrid pipelines to work, the splatting algorithm would probably need to output the standard G-buffer channels (unlit surface color, normal, roughness, etc.), which can then go through the same lighting pass as the triangle-based assets, rather than the splatting algorithm trying to infer lighting by itself and inevitably getting a result that's inconsistent with how the triangle-based assets are lit.

Think of those old cartoons where you could always tell when part of the scenery was going to move because the animation cel would stick out like a sore thumb against the painted background; that's the kind of illusion break you would get if the lighting isn't consistent.
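
A minimal sketch of that G-buffer idea, with numpy arrays standing in for the render targets (my assumption of how it could fit together, not an existing pipeline): whether the albedo/normal/roughness were written by a triangle rasterizer or by a splatting pass, the same deferred lighting code consumes them, so both kinds of assets end up lit consistently.

    import numpy as np

    def deferred_light(albedo, normal, roughness, light_dir, light_color):
        # albedo: (H, W, 3), normal: (H, W, 3) unit vectors, roughness: (H, W, 1)
        l = light_dir / np.linalg.norm(light_dir)
        n_dot_l = np.clip((normal * l).sum(axis=-1, keepdims=True), 0.0, 1.0)
        diffuse = albedo * n_dot_l
        spec = (1.0 - roughness) * n_dot_l ** 16   # crude specular term, just for shape
        return (diffuse + spec) * light_color

    H, W = 4, 4
    gbuffer = {                                    # could be filled by either pass
        "albedo":    np.full((H, W, 3), 0.6),
        "normal":    np.tile([0.0, 0.0, 1.0], (H, W, 1)),
        "roughness": np.full((H, W, 1), 0.4),
    }
    lit = deferred_light(**gbuffer,
                         light_dir=np.array([0.3, 0.3, 1.0]),
                         light_color=np.array([1.0, 0.9, 0.8]))
    print(lit.shape)  # (4, 4, 3): lit pixels, independent of which pass wrote the G-buffer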

somethingsome
0 replies
20h25m

For NeRFs this problem exists. However, in the past it was already solved for gaussian splatting: usually you define a normal field over the (2D) splat, which allows you to have Phong shading at least.

It is not too difficult to go to a 2D normal field over the 3D gaussians.
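
For instance, a per-splat normal is enough to swap the baked colour for a simple Phong term at render time; a minimal sketch of that (my own, with made-up inputs):

    import numpy as np

    def phong_splat(albedo, normal, view_dir, light_dir, ambient=0.1, shininess=32.0):
        n = normal / np.linalg.norm(normal)
        l = light_dir / np.linalg.norm(light_dir)
        v = view_dir / np.linalg.norm(view_dir)
        diff = max(np.dot(n, l), 0.0)
        r = 2.0 * np.dot(n, l) * n - l           # light direction reflected about the normal
        spec = max(np.dot(r, v), 0.0) ** shininess
        return albedo * (ambient + diff) + spec  # relit splat colour

    print(phong_splat(np.array([0.8, 0.2, 0.2]),   # splat albedo
                      np.array([0.0, 0.0, 1.0]),   # per-splat normal
                      np.array([0.0, 0.0, 1.0]),   # view direction
                      np.array([0.5, 0.5, 1.0])))  # light direction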

999900000999
1 replies
1d

Can you explain what "dynamic" means here?

I was more thinking you'd run this tool, and then have an algorithm convert it (bake the mesh).

lawlessone
0 replies
23h49m

They probably mean animated, changeable etc. Like movement, or changes in lighting.

KaiserPro
0 replies
5h30m

That's super hard, as this is basically a very large point cloud with oddly shaped points.

The objects represented in the point cloud have no inherent metadata embedded (i.e. this is a chair, that's a table, that's a person, etc.), so any kind of interaction is super hard.

It's not impossible, but currently not practical.

Moreover, it's not that optimised for real-time rendering. Yes, a lot of points have been pruned, but it's far more optimal to have lower-resolution meshes.

jnsjjdk
5 replies
23h51m

This does not look significantly better than e.g. Cities: Skylines, especially since they neither zoomed in nor out, always showing only a very limited frame.

Am I missing something?

cchance
1 replies
22h47m

LOL this isn't a game engine, it's real-life photos being converted into Gaussian 3D views.

jayd16
0 replies
1h57m

Actually it sounds like it's from renders of an Unreal demo. So synthetic photos from a game engine. LOL

neuronexmachina
0 replies
23h40m

This is a 3D reconstruction, rather than a game rendering.

dartos
0 replies
23h50m

This was rendered from photographs, I believe

chankstein38
0 replies
21h38m

All 3 of the other commenters are replying without having done any actual thought or research. The paper repeatedly references MatrixCity, and another commenter above found this: https://city-super.github.io/matrixcity/ which, I'd like to add, also calls out that it's fully synthetic. And, from what I understand, it's extracted from Unreal Engine.

mhuffman
3 replies
19h33m

Quick question for anyone who may have more technical insight: is Gaussian splatting the technology that Unreal Engine has been using to produce such jaw-dropping demos with their new releases?

rmccue
0 replies
18h26m

Unreal Engine 5 is a combination of a few technologies:

* Virtualised geometry (Nanite) allowing very detailed models

* Very high quality models and textures from photogrammetry (Megascans)

* Real-time global illumination (Lumen)

Combining these is what allows the very high fidelity demos, as they’re each step changes from the previous techniques in Unreal. Megascans (and the Quixel library) are a big part of the “photorealness” of these demos, because they’re basically literally photos.

andybak
0 replies
19h27m

No. Mostly unrelated.

dukeofdoom
2 replies
12h58m

Does anyone know how to add motion blur to a game? I'm learning pygame. Say I'm making Mario in pygame, and when Mario jumps, I want him to look blurry. I mean, I can take an average of 9 pixels and create a blurry version of Mario, but is that how it's usually done in other games? Since a lot of games are super sharp, with no motion blur, I'm wondering if it's even done. It's kind of a big deal in film, hence the need to shoot at 25fps to achieve cinematic motion blur.

jayd16
1 replies
12h28m

Render the motion vector of objects to another render texture (i.e. calculate the velocity of each object and render that as a color). Use that to define the amplitude and direction of your blur effect in a post pass.

And you might want it to be the motion relative to the camera. For Mario, probably not, but for an FPS you want the edges of the screen to blur as the camera moves forward.
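
In pygame specifically (since that's what the question was about), a cheap approximation of this, short of a full motion-vector post pass, is to smear the sprite along its velocity by blitting a few faded copies behind it. A rough sketch, assuming you already have a `screen` surface and a loaded sprite:

    import pygame

    def blit_motion_blurred(screen, sprite, pos, velocity, samples=6):
        # Draw `sprite` at `pos`, blurred along `velocity` (in pixels per frame).
        vx, vy = velocity
        ghost = sprite.copy()
        for i in range(samples, 0, -1):
            t = i / samples                            # how far back along the velocity
            ghost.set_alpha(20 + int(60 * (1.0 - t)))  # fainter the further back it is
            screen.blit(ghost, (pos[0] - vx * t, pos[1] - vy * t))
        screen.blit(sprite, pos)                       # crisp copy drawn on top

    # Inside the game loop, using the per-frame displacement as the velocity
    # (`mario_img`, `x`, `prev_x`, etc. are placeholders for your own variables):
    #   blit_motion_blurred(screen, mario_img, (x, y), (x - prev_x, y - prev_y))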

dukeofdoom
0 replies
4h32m

Thank you.

satvikpendem
1 replies
19h15m

Funny to see just how prolific Gauss was since so many things are named after him and continue to be newly named after him, such as this example of Gaussian splatting, which, while he obviously didn't directly invent it, contributed to the mathematics of it significantly.

syrusakbary
0 replies
1d

Gaussian splatting is truly amazing for 3d reconstruction.

I can't wait to see once it's applied to the world of driverless vehicles and AI!

rallyforthesun
0 replies
1d

Really advanced approach to rendering larger scenes with 3D Gaussians, can't wait to test the code :-)

boywitharupee
0 replies
22h59m

What are the memory and compute requirements for this?