Voxel Displacement Renderer – Modernizing the Retro 3D Aesthetic

zellyn
11 replies
1d3h

That's a very appealing aesthetic (possibly because I'm old enough to remember downloading games over a 2400 baud modem!). Very nice work!

That video looked great, and (at least for me) felt very evocative. In the section where the roof was too low, I started feeling claustrophobic and cramped.

Those rock and sandy-floored caverns, and the cavern with boulders (which are gorgeous!) made me think I'd love playing a Myst or LucasArts-style adventure game using this as the renderer. Spelunking through caves, or archeological digs, etc.

Can't wait to see where you take this!

Varriount
5 replies
1d

I agree. To me this is very evocative of the pixel-art/retro look, even more so than the low-poly Doom/Wolfenstein look.

anthk
2 replies
19h56m

No polygons in Doom/Wolf3D.

The example from Back To Saturn X2 can be played on a software-rendered engine with no concept of polygons at all.

prox
1 replies
9h53m

Isn’t it referred to as 2.5D?

anthk
0 replies
9h1m

More like a bunch of cardboard figures extruded and raised up to look 3D. You couldn't stack floors in Doom, for instance.

2.5D would be the semi-top-down games, such as most beat-em-ups that let you roam around instead of just going left/right like the typical platform or action game, or most SNES RPGs.

underlipton
0 replies
18h40m

It looks like what those older games felt like.

prox
0 replies
20h3m

If the author realizes its potential, he could make the next Valheim-style hit.

gsliepen
2 replies
23h40m

Imagine a 3D version of Noita with this aesthetic, and perhaps using smoothed particle hydrodynamics to make the falling sand engine scale to 3D.

sph
0 replies
8h51m

3D voxel Noita is something I never knew I needed.

But the engine would need to be crazy optimised to handle decent sand/fluid voxels in 3D space. It would be a technical achievement in itself.

Aeolun
0 replies
18h20m

It’d be really hard to keep track of what is happening in first person, but so totally awesome xD

I don’t think it’d make for a very good game though.

adamredwoods
0 replies
17h45m

Warez on the BBSs. Go Eaglesoft!

abluecloud
0 replies
23h8m

completely agree with the aesthetics!

my issue with it is that it looked so good, but the architecture felt wrong. in the part with the low ceiling, it felt like those blocks hanging from the roof were defying physics

omoikane
10 replies
1d1h

I wonder how this approach compares to Unreal Engine 5's Nanite, or maybe Unreal Engine is actually doing something similar?

I remember one motivation for using voxels in older games (like Comanche[1]) is that you can get seemingly more complex terrains that, when modelled using triangle meshes, would have been more expensive on similar hardware. The author mentions 110 FPS on an RX 5700 XT; I am not sure how that compares to other approaches.

[1] https://en.wikipedia.org/wiki/Comanche_(video_game_series)

Scaevolus
3 replies
1d

Basically unrelated. He's using geometry shaders to generate voxel mesh details, perhaps with some LOD optimizations, while Nanite is a GPU-driven rendering technology that adapts triangle density to aim for a given fidelity target.

Nanite can use displacement maps and perform tessellation, but it uses an alternate pathway that's not necessarily more efficient than feeding it a high-poly asset to render.

nicebyte
2 replies
1d

I'd be very surprised if they were actually using geometry shaders (at least this conclusion can't be drawn from their post). Geom shaders are basically dead weight, a holdover from a bygone time, and are better avoided for modern renderers.

The techniques they're drawing on that are mentioned in the post - parallax mapping, shell mapping - do not generate explicit geometry; rather, they rely on raymarching through heightfields in the frag shader. It's more likely that they're doing something like that.
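
In sketch form, that idea looks something like this (CPU-side C++ standing in for the actual fragment shader; the step count, depth scale, and procedural heightfield are illustrative placeholders, not anything from the post):

    #include <cmath>

    // Stand-in for a heightfield texture fetch; returns a height in [0,1].
    float sampleHeight(float u, float v) {
        return 0.5f + 0.5f * std::sin(u * 40.0f) * std::sin(v * 40.0f);
    }

    struct Hit { float u, v; };

    // Per-fragment heightfield raymarch (the core of parallax-style mapping):
    // step a tangent-space view ray (z pointing out of the surface) down
    // through the height volume until it dips below the heightfield, then
    // shade using the texture coordinates at the hit point.
    Hit raymarchHeightfield(float u, float v,
                            float viewX, float viewY, float viewZ,
                            float depthScale = 0.1f, int steps = 32) {
        float du = -viewX / viewZ * depthScale / steps;
        float dv = -viewY / viewZ * depthScale / steps;
        float rayH = 1.0f;                          // top of the height volume
        for (int i = 0; i < steps; ++i) {
            if (sampleHeight(u, v) >= rayH) break;  // ray is under the surface
            u += du;
            v += dv;
            rayH -= 1.0f / steps;
        }
        return {u, v};
    }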

scheeseman486
0 replies
21h3m

Parallax mapping breaks on the edges of polygons though, while this technique seems to actually add geometry since the edges of surfaces are appropriately detailed and bumpy.

Scaevolus
0 replies
20h40m

Err, right, they're doing the transformations CPU-side. The blog hints that it's related to shell maps, so maybe the new mesh geometry is densest on the sharp edges?

fho
2 replies
1d

IIRC Comanche used a ray tracing approach to render the terrain [1]. No voxels there, just a 2D height map that is sampled.

(They called it "VoxelSpace" ... so some confusion is warranted)

[1] https://github.com/s-macke/VoxelSpace

usrusr
0 replies
19h2m

My mind parses any mention of "voxel" that isn't accompanied by a screenshot from one of the original Comanche levels as a lie.

The reality is that Comanche was more like a displacement-map twist on the Doom "2.5D" than something that really deserves the term "voxel". But it was so magic, did anything else ever come close?

low_tech_love
0 replies
22h32m

Sorry for being pedantic, but it’s ray “casting”, not tracing. It’s similar in some ways but very different in others.

torginus
1 replies
20h0m

My guess is it's the exact same technique used in the Comanche games - or at least the same results can be achieved with it.

Contrary to popular belief, those games didn't use true 3D voxels - they used a heightmap that stored a color and height value in the terrain texture which they raymarched into.

You could recreate the same look by raymarching into the texture in a shader, which I suspect would look very similar to what the blog post achieved.
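
For reference, that terrain loop (per the s-macke/VoxelSpace write-up linked upthread) boils down to roughly the following. This is a simplified CPU sketch with the camera looking straight down +Y and illustrative map/framebuffer globals; the real renderer also rotates the slice endpoints by the camera yaw:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    constexpr int W = 320, H = 200, MAP = 1024;   // illustrative sizes
    std::uint8_t  heightMap[MAP][MAP];            // terrain height, 0..255
    std::uint32_t colorMap[MAP][MAP];             // terrain color
    std::uint32_t frame[H][W];                    // output framebuffer

    // March map slices front to back; for every screen column, project the
    // sampled terrain height to a screen row and fill down to the last slice.
    void renderTerrain(float camX, float camY, float camH,
                       float horizon, float scale, float maxDist) {
        std::vector<int> yBuffer(W, H);           // per-column occlusion
        for (float z = 1.0f; z < maxDist; z += 1.0f) {
            float left = camX - z;
            float dx = 2.0f * z / W;              // slice spans [camX-z, camX+z]
            int my = int(camY + z) & (MAP - 1);   // wrap on power-of-two map
            for (int x = 0; x < W; ++x) {
                int mx = int(left + dx * x) & (MAP - 1);
                // Higher terrain and nearer slices land closer to the top.
                int y = int((camH - heightMap[my][mx]) / z * scale + horizon);
                y = std::max(y, 0);
                for (int row = y; row < yBuffer[x]; ++row)
                    frame[row][x] = colorMap[my][mx];
                yBuffer[x] = std::min(yBuffer[x], y);
            }
        }
    }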

anthk
0 replies
28m

There's a libre game which does that but in 3D and OpenGL, Trigger Rally.

jayd16
0 replies
1d

It's hard to say exactly, because the OP doesn't go into the runtime mesh used, but my guess is it's quite different from Nanite.

Nanite assumes high-poly authoring of objects and works to stream in simplified chunks such that the rendered triangles are no less than a pixel wide. Displacement maps are a bit redundant because the geometry can naturally be very detailed; there's no reason to use a texture map for it. (There is a case for Landscapes, but that's a unique case.)

This seems to be using a displacement map and a low-poly mesh to generate high-poly but 'voxelized' geo on load.

aidenn0
7 replies
18h31m

For someone who has done zero 3D graphics in a couple of decades, why do displacement maps exist? At first blush they don't appear to be more computationally efficient than more complex geometry.

MindSpunk
3 replies
17h45m

It's less about being "less work" and more about what GPUs are and are not good at.

Displacement maps can closely approximate the geometric detail of having 1 triangle per pixel. All the work for displacement maps happens on a per-pixel basis inside a fragment shader (in simple terms, a little program that runs for each pixel of a triangle). You can wrap this displacement map over a single, large triangle and get the visual appearance of a much denser mesh.

The alternative approach of subdividing the mesh is orders of magnitude less efficient because GPUs are _very_ bad at drawing tiny polygons. It's just how GPUs and the graphics pipelines are implemented. A 'tiny polygon' is determined by how many pixels it covers; as you start dropping below a couple dozen pixels per triangle, you start hitting nasty performance cliffs inside the GPU because of the specific ways triangles are drawn. Displacement mapping works around this problem because you're logically only drawing single big polygons but doing the work in a shader, where the GPU is much more efficient.

Etherlord87
2 replies
8h25m

> Displacement maps can closely approximate the geometric detail of having 1 triangle per pixel. All the work for displacement maps happens on a per-pixel basis inside a fragment shader (in simple terms, a little program that runs for each pixel of a triangle). You can wrap this displacement map over a single, large triangle and get the visual appearance of a much denser mesh.

I think you're wrong here... Can you provide an example of an engine where this is actually the case? To my knowledge (and I checked on Wikipedia [1] to make sure), a displacement map simply displaces VERTICES along their normals (so white moves a maximum distance along the normal, and black either doesn't move at all or moves the maximum distance along the inverted normal - inwards). This means you need to heavily subdivide your geometry. Or you can remesh your geometry, with the simplest remeshing algorithms being just voxelizers - and that's what we see the OP doing - except where all the 3D graphics people complain about the limitations of voxelization, he hypes it as this new retro look he invented.

What seems to confirm my interpretation of all of this is how the OP describes this being done on the CPU, and - while he tries to downplay it - using rather decent hardware. I mean, the Steam Deck is not exactly ancient hardware, and for an extremely simple scene with nothing else going on, he's happy to be above 60 FPS at 800p resolution!

[1] https://en.wikipedia.org/wiki/Displacement_mapping
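
To make that concrete, under the Wikipedia definition this is the whole trick - a minimal sketch with illustrative types, using the signed convention where mid-gray leaves a vertex in place:

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Vertex { Vec3 pos, normal; float u, v; };

    // Stand-in for sampling the displacement texture; returns a height in [0,1].
    float sampleHeight(float u, float v) {
        return 0.5f + 0.5f * std::sin(u * 20.0f) * std::cos(v * 20.0f);
    }

    // Classic displacement mapping: every vertex moves along its own normal.
    // With the signed convention, white (1.0) moves +scale, mid-gray (0.5)
    // stays put, and black (0.0) moves -scale (inwards).
    void applyDisplacement(std::vector<Vertex>& mesh, float scale) {
        for (Vertex& vert : mesh) {
            float h = sampleHeight(vert.u, vert.v) - 0.5f;
            vert.pos.x += vert.normal.x * h * scale;
            vert.pos.y += vert.normal.y * h * scale;
            vert.pos.z += vert.normal.z * h * scale;
        }
    }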

avallach
1 replies
5h59m

The magic lies in tessellation. Tessellation is an efficient GPU process for heavily subdividing your mesh, so that displacement maps can add visible geometric detail afterwards. And because it's dynamic, you can selectively apply it only to the meshes that are close to the camera. These are the reasons why it's better than subdividing the mesh at the preprocessing stage and "baking in" the displacement into vertex positions.

Etherlord87
0 replies
3h34m

Good point! It's not just the LOD; it may also be that parallelization makes GPUs a better fit for subdivision than CPUs, and surely there's the matter of connection bandwidth to the GPU: the vertex coordinates, as well as lots of other vertex-specific data, need only be sent for the base vertices; the new vertices get their values by interpolating the old ones, and textures like the displacement map control the difference between the interpolated value and the desired value. Of course, a texture would just be some weird compromise on the resolution of the sent data, except you don't have to provide a texture for every attribute, and more importantly, such a texture might be static (e.g. if encoded in normal space it may work throughout an animation of an object).

pixelesque
0 replies
16h9m

It allows you to defer subdivision of the mesh (maybe down to micropoly level for displacement) until the final render stage, rather than baking in the subdivision earlier and having to deal with dense geometry for things like animation, where it's better to have lighter-weight meshes.

modeless
0 replies
15h53m

I think displacement maps are not that frequently used in game graphics these days. The typical thing is regular geometry with normal maps. However, triangle soup is very difficult to modify in any way. The cool thing about displacement maps is that they can be downsampled trivially, so if you're doing runtime tessellation you can get smooth LOD scaling (at least down to your base mesh), which is nice. Less usefully, they can also be tiled, blended, interpolated, scrolled, animated, etc.
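
(That trivial downsampling is just the mip-chain property: each LOD level is a 2x2 box filter of the previous one. A sketch, assuming a row-major float heightmap with even dimensions:)

    #include <vector>

    // Each LOD level of a heightmap is a 2x2 box filter of the previous one,
    // exactly like building a mip chain; triangle soup has no equivalent.
    std::vector<float> downsampleHeights(const std::vector<float>& src,
                                         int w, int h) {
        std::vector<float> dst((w / 2) * (h / 2));
        for (int y = 0; y < h / 2; ++y)
            for (int x = 0; x < w / 2; ++x)
                dst[y * (w / 2) + x] =
                    0.25f * (src[(2 * y)     * w + 2 * x] +
                             src[(2 * y)     * w + 2 * x + 1] +
                             src[(2 * y + 1) * w + 2 * x] +
                             src[(2 * y + 1) * w + 2 * x + 1]);
        return dst;
    }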

On the other hand, UE5's Nanite achieves the same LOD scaling but far better, without the limitation of not being able to scale down past the base mesh and with a built-in workaround for the issues GPUs have with small triangles. It's possible that people might use displacement maps in the authoring process, which can then be baked into static meshes for Nanite. Then it would just be a convenient-ish way for artists to author and distribute complex materials.

gcr
0 replies
18h18m

The reason that comes to my mind is saving on memory transferred from the CPU to the GPU.

If you do your tessellation inside the vertex shader, then you can send a low-poly mesh to the graphics card, which saves a lot on vertex buffers (e.g. uv coordinates and other per-vertex attributes). The vertex shader still emits the same number of vertices to the rest of the pipeline, but inter-stage bandwidth is more plentiful than CPU-GPU bandwidth, so I can see that coming out ahead.

I’m not an expert though. Perhaps someone with a better understanding can clear this up, I’m curious too…

rspoerri
4 replies
1d4h

Looks nice, but without shadows it's missing an important (and often very complex) feature.

pcdoodle
1 replies
1d4h

Sometimes the lack of something shifts our focus elsewhere. Could be a feature.

septune
0 replies
1d

The benefits of having less are underrated.

JKCalhoun
1 replies
1d4h

Yeah, thinking it may need a second pass to produce shadows. It's not clear to me that you could "bake them in" since textures are reused.

Stretch goal, dynamic lighting — like someone carrying a torch ahead of you in a tunnel and illuminating as they went.

To be sure though, the retro vibe is 100% nailed as is.

ImHereToVote
0 replies
1d2h

Just use a second UV for the lighting.
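
Sketch of the idea, with illustrative stand-ins for the texture fetches: the albedo UVs stay tiled and reused, while every surface gets unique lightmap UVs, so lighting can be baked per surface even though the textures are shared.

    struct Vec2 { float x, y; };
    struct Vec3 { float r, g, b; };

    // Stand-in for a real texture fetch; returns a constant here.
    Vec3 sampleTexture(const char* /*name*/, Vec2 /*uv*/) {
        return {1.0f, 1.0f, 1.0f};
    }

    // Tiled, reused albedo UVs + unique, non-overlapping lightmap UVs:
    // the baked lighting modulates the shared texture per surface.
    Vec3 shadeFragment(Vec2 uvAlbedo, Vec2 uvLightmap) {
        Vec3 albedo = sampleTexture("albedo",   uvAlbedo);
        Vec3 light  = sampleTexture("lightmap", uvLightmap);
        return {albedo.r * light.r, albedo.g * light.g, albedo.b * light.b};
    }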

Animats
4 replies
22h7m

There's another approach - Deep Bump.[1] Addresses the same problem, but in a totally different way.

Deep Bump is a machine-learning tool which takes texture images and creates plausible normal maps from them. It's really good at stone and brick textures like the ones this voxel displacement renderer is using. It's OK at clothing textures - it seems to be able to recognize creases, pockets, and collars, and gives them normals that indicate depth. It's sort of OK on bark textures, and not very good on plants. This probably reflects the training set.

So if you're upgrading games of the Doom/Wolfenstein genre, there's a good open source tool available.

[1] https://github.com/HugoTini/DeepBump

zellyn
2 replies
21h54m

If I read that blog post correctly, it's a model to infer normal maps from textures, not a new way to render geometry.

The article in this thread is more about a small-voxel-based representation of displacement maps. A tool like Deep Bump could conceivably be used to aid in the creation of texture assets for the system discussed in this thread.

Animats
1 replies
21h40m

Yes, it's a model to infer normal maps from textures. Then you can use a modern PBR renderer on old content and have a better illusion of depth. It doesn't introduce the blockiness of voxels.

ricardobeat
0 replies
21h5m

It seems the blockiness of voxels is the whole point. Applying normal maps to low-res textures doesn’t look good and completely changes the look. Generating high-res textures from the originals makes it even worse.

This voxel approach preserves the aesthetics of the old pixelated (now voxelated) graphics in a much more pleasing way.

pixelesque
0 replies
18h53m

As the article points out, though, normal maps don't "work" in all situations, since they don't actually change the geometry, just the lighting and the illusion of the geo. So viewed at the edges of meshes, displacement is still the better high-fidelity option.

DeepBump might be able to extract 1D (height only, not full 3D vector displacement) maps to use with traditional displacement though.

purple-leafy
3 replies
15h7m

Looks amazing. My dream outcome would be something like Warlords Battlecry 3 in a voxel style.

How the hell do people get into graphics programming or voxels? Seems very difficult as a dirty ol webdev

viraptor
2 replies
13h57m

It's just experience. You start with basic things. There are likely lots of game devs saying "how do you even get into web development, seems difficult as a dirty ol pixel pusher".

There's a lot of tutorials around. You can also join a game jam for some motivation / community.

purple-leafy
1 replies
12h46m

I hope to become a dirty ol pixel/voxel pusher hehe.

Time to make a start!

viraptor
0 replies
12h37m

If you want some friendly people to ask for advice and to find examples/resources, I'd meta-recommend checking out the Pirate Software discord (https://discord.com/invite/piratesoftware). There are lots of people there who are/were in a similar position and can help you out.

tcsenpai
2 replies
23h46m

Sorry if I am dumb but... I was wondering whether, in practice, this could be used to convert or directly render old games and graphics (provided the starting point is compatible) in such a "remastered" way.

In any case, I am on my third read, and I think I understood enough to say "wow, looks and feels great. Would play TES Arena like that."

avallach
1 replies
10h21m

No, at least not in the "automatic" way Nvidia RTX Remix does it. You would not only need to generate the displacement maps for the textures but, most importantly, port the game to this new rendering engine. That's an extremely complicated task if done by reverse engineering and hacking the executable, without the ability to read and recompile the source code.

tcsenpai
0 replies
6h49m

Perfectly clear. Who knows, maybe in the future this could be a step toward a mechanism that does so; in any case, the results are impressive in themselves! Kudos!

vsuperpower2020
1 replies
23h21m

This is really cool, but with all homemade voxel engines I have to ask the same thing. Where is the game?

scheeseman486
0 replies
21h1m

It's not a real voxel engine; the world geometry itself isn't much different from Quake.

klooney
1 replies
12h47m

Does anyone else remember Sauerbraten? Gives me similar vibes. Should see if it will still run on a modern system.

cma
0 replies
22h56m

The article has a footnote about Voxel Doom, though it's more about Voxel Doom's environment approach than its monsters:

> Now that I’ve laid out all this context, I want to give a shout out to the Voxel Doom mod for classic Doom. The mod’s author replaced the game’s monsters and other sprites with voxel meshes to give them more depth, some very impressive work. Then, in late 2022, he began experimenting with using parallax mapping to add voxel details to the level geometry. This part of the mod didn’t look as good, in my opinion — not because of the author’s artwork, but because of the fundamental limitations that come from using parallax mapping to render it. This mod wasn’t the inspiration for my project — I was already working on it — but seeing the positive response the mod received online was very encouraging as I continued my own efforts.

It does say, though, that the approach supports animated doors and such, so combined with mesh and texture flipbooks I think it could be used for original-Doom-looking monsters too. But sharp-curvature areas have, I think, the most artifacts with shell mapping, and he mentions limitations to the meshing of levels, so maybe not.

phendrenad2
0 replies
1d3h

This is inspiring. Makes me want to try to duplicate the results using old-fashioned bump mapping.

nikolay
0 replies
1d1h

I wish this was open-sourced! I wish!

navjack27
0 replies
16h36m

I've got textures I worked on for a temporarily paused hobby project, trying to put the game Strife into UE5: https://github.com/navjack/strife-rtx-pbr-textures I handmade a ton of height maps (or displacement maps) for the textures, and they might work really well with this.

kls0e
0 replies
13h15m

hugely impressive.

keyle
0 replies
10h49m

This looks a lot like what Notch is working on (see his twitter feed). Another kind of voxel rendering. This, however, is using C++/Vulkan and looks stunning!

hatenberg
0 replies
1d3h

I love love love it. Ultima Underworld comes to mind.

ggm
0 replies
12h7m

Could you use the algo to render a photo as a carved face or profile? Like a netpbm filter to sculpt from an image.

anthk
0 replies
8h45m

For an actual voxel engine, look up Outcast.

Also, Duke Nukem Forever 2013 with EDuke32 had some voxel models, I think.

SuperHeavy256
0 replies
20h53m

This should be the future of modernizing retro 3D games. God, this is beautiful.

7bit
0 replies
1d4h

Impressive.