
SDL3 new GPU API merged

caspar
57 replies
1d19h

SDL3 is still in preview, but the new GPU API is now merged into the main branch while SDL3 maintainers apply some final tweaks.

As far as I understand: the new GPU API is notable because it should let you write graphics code & shaders once and have it all work cross-platform (including on consoles) with minimal hassle - previously that required Unity or Unreal, or your own custom solution.

WebGPU/WGSL is a similar "cross-platform graphics stack" effort but as far as I know nobody has written console backends for it. (Meanwhile the SDL3 GPU API currently doesn't seem to support WebGPU as a backend.)

shmerl
48 replies
1d18h

Why is an SDL API needed vs gfx-rs / wgpu, though? I.e. was there a need to make yet another one?

dottrap
13 replies
1d16h

The old SDL 2D API was not powerful enough. It was conceived in the rectangle sprite blitting days, when video hardware was designed very differently and had drastically different performance characteristics. If you wanted anything more, OpenGL used to be 'the best practice'. But today the landscape is split between Vulkan, Metal, and Direct3D, and hardware is centered around batching and shaders. Targeting OpenGL is more difficult because OpenGL fragmented into GL vs. GLES, and platform support for OpenGL varies (e.g. Apple stopped updating GL after 4.1).

A good example of where the old SDL 2D API is too limited is the 2D immediate-mode GUI library Nuklear. It has a few simple API stubs to fill in so it can be adapted to work with any graphics system. For performance, though, it wants to batch-submit all the vertices (triangle strip), and SDL's old API didn't support anything like that.

The reluctance was that the SDL maintainers didn't want to create a monster and couldn't decide where to draw the line, so the line was held at the old 2D API. Then a few years ago, a user successfully changed the maintainers' minds after writing a demonstration showing how much could be achieved by just adding a simple batching API to SDL 2D. That shifted the mindset and led to this current effort. I have not closely followed the development, but I think it still aims to be a simple API, and you will still be encouraged to pick a full-blown 3D API if you go beyond 2D needs. But you should no longer need to go to one of the other APIs to do 2D things in modern ways on modern hardware.
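
For a sense of what that batching enables, here's a minimal sketch using SDL_RenderGeometry, the call that landed in SDL 2.0.18 (the helper function and flat-quad layout are mine, for illustration only): every sprite becomes two triangles in one shared vertex array, submitted in a single call instead of one SDL_RenderCopy per sprite.

    #include <SDL.h>

    /* Sketch: batch many sprites into one SDL_RenderGeometry call. */
    static void draw_batch(SDL_Renderer *ren, SDL_Texture *atlas,
                           const SDL_FRect *dst, int count)
    {
        SDL_Vertex *verts = SDL_malloc(count * 4 * sizeof(SDL_Vertex));
        int *index        = SDL_malloc(count * 6 * sizeof(int));
        const SDL_Color white = { 255, 255, 255, 255 };

        for (int i = 0; i < count; i++) {
            const SDL_FRect *d = &dst[i];
            SDL_Vertex *v = &verts[i * 4];
            /* Whole texture as UVs for brevity; a real atlas would vary these. */
            v[0] = (SDL_Vertex){ { d->x,        d->y        }, white, { 0, 0 } };
            v[1] = (SDL_Vertex){ { d->x + d->w, d->y        }, white, { 1, 0 } };
            v[2] = (SDL_Vertex){ { d->x + d->w, d->y + d->h }, white, { 1, 1 } };
            v[3] = (SDL_Vertex){ { d->x,        d->y + d->h }, white, { 0, 1 } };
            int *ix = &index[i * 6];
            int b = i * 4;
            ix[0] = b; ix[1] = b + 1; ix[2] = b + 2;  /* first triangle  */
            ix[3] = b; ix[4] = b + 2; ix[5] = b + 3;  /* second triangle */
        }
        /* One draw call for the whole batch. */
        SDL_RenderGeometry(ren, atlas, verts, count * 4, index, count * 6);
        SDL_free(verts);
        SDL_free(index);
    }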

jandrese
4 replies
23h36m

I was messing around a bit with SDL2 and either I was doing something wrong or it was just plain slow. My machine is plenty fast, but even just blitting a few dozen PNGs around a screen 60 times a second was pushing its limits. I freely admit I may have been doing something wrong, but I was surprised at just how inefficient it was at a task that we used to do without too much trouble on 1 MHz CPUs.

Maybe SDL_RenderCopy is the wrong API to use to blit things from a sprite sheet onto a display? The docs didn't give any warning if this is the case.

krapp
3 replies
23h26m

How recent a version were you using? Plenty of games and graphical apps use SDL2 under the hood, and rendering rects from a spritesheet is trivial. Recent versions use the geometry API for rendering rects, so it should be able to handle tons of sprites without much effort.

jandrese
2 replies
22h35m

I'm using SDL2 2.30.0. The main loop is pretty simple: it does a few SDL_RenderFillRects to create areas, then several SDL_RenderCopy calls where the source is an SDL_Texture created via SDL_CreateTextureFromSurface from an SDL_Surface loaded from files at boot. A final call to SDL_RenderPresent finishes it off. They do include an alpha channel, however.

I was expecting the sprite blitting to be trivial, but it is surprisingly slow. The sprites are quite small, only a few hundred pixels total. I have a theory that it is copying the pixels over the X11 channel each time instead of loading the sprite sheets onto the server once and copying regions using XCopyArea to tell the server to do its own blitting.

krapp
0 replies
21h50m

Whatever the problem is, it probably isn't SDL. Here's a test project I worked on[0], and I'm using a garbage laptop. The sprites aren't that big but if you're just using a single texture it shouldn't matter, since SDL does sprite batching anyway.

Your theory might be right - the first thing I would look for is something allocating every frame.

You might ask the SDL Discourse forum and see what they think: https://discourse.libsdl.org/

[0] https://cdn.masto.host/krappmastohost/media_attachments/file...

dottrap
0 replies
21h52m

This should be plenty fast. SDL_RenderCopy generally should be doing things the 'right' way on any video card made in roughly the last 15 years (basically binding a texture in GPU RAM to a quad).

You probably need to do some debugging/profiling to find where your problem is. Make sure you aren't creating SDL_Textures (or loading SDL_Surfaces) inside your main gameplay loop. You also may want to check what backend the SDL_Renderer is using (e.g. OpenGL, Direct3D, Vulkan, Metal, software). If you are on software, that is likely your problem. Try forcing it to something hardware accelerated.
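
Checking the active backend is only a couple of lines - a sketch using standard SDL2 calls, assuming `renderer` and `window` already exist:

    SDL_RendererInfo info;
    if (SDL_GetRendererInfo(renderer, &info) == 0) {
        SDL_Log("renderer backend: %s", info.name);  /* e.g. "opengl", "software" */
        if (!(info.flags & SDL_RENDERER_ACCELERATED)) {
            SDL_Log("warning: not hardware accelerated");
        }
    }

    /* To force a specific backend, set the hint before creating the renderer: */
    SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");
    SDL_Renderer *ren = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);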

Also, I vaguely recall there was a legacy flag on SDL_Surfaces called "hardware" or "SDL_HWSURFACE" or "SDL_HWACCEL" or something. Don't set that. It was for very legacy hardware from like 25 years ago and is slow on everything now.

phaedrus
3 replies
1d1h

Does SDL3 still use integers for coordinates? I got annoyed enough by coordinates not being floating point in SDL2 that I started learning WebGPU, instead. This was even though the game I was working on was 2D.

The issue is, if you want complete decoupling (in the sense of orthogonality) among all four of:

- screen (window) size & resolution (especially if game doesn't control)

- sprite/tile image quantization into pixels (scaling, resolution)

- sprite display position, with or without subpixel accuracy

- and physics engine that uses floating point natively (BulletPhysics)

then achieving this with integer drawing coordinates requires carefully calculating ratios while understanding where you do and do not want to drop the fractional part. Even then you can still run into problems such as accidentally having a gap (a one-pixel-wide blank column) between every 10th and 11th level tile because your zoom factor has a tenth-of-a-pixel overflow, or jaggy movement with wiggly sprites when the player is moving at a shallow diagonal while the NPC sprites are at different floating point or subpixel integer coords.

A lot of these problems could be (are) because I think of things from bottom up (even as my list above is ordered) where a physics engine, based on floating point math, is the source of Truth, and everything above each layer is just a viewport abstracting something from the layer beneath. I get the impression SDL was written by and for people with the opposite point of view, that the pixels are important and primary.

And all (most) of these have solutions in terms of pre-scaling, tracking remainders, etc. but I have also written an (unfinished) 3D engine and didn't have to do any of that because 3D graphics is floating point native. After getting the 2D engine 90% done with SDL2 (leaving 90% more to go, as we all know), I had a sort of WTF am I even doing moment looking at the pile of work-arounds for a problem that shouldn't exist.

And I say shouldn't exist because I know the final output is actually using floating point in the hardware and the driver; the SDL1/2 API is just applying this fiction that it's integers. (Neither simple, nor direct.) It gets steam coming out my ears knowing I'm being forced to do something stupid to maintain someone else's fiction, so as nice as SDL otherwise is, I ultimately decided to just bite the bullet and learn to program WebGPU directly.

krapp
1 replies
1d

    Does SDL3 still use integers for coordinates?

No, they added float versions for most functions and I think they plan on deprecating the int API in the future. The only exception I can think of offhand is still needing an integer rect to set a viewport.
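
A minimal sketch of what that looks like, using names from the current SDL3 headers (SDL3 is still in preview, so these may change):

    /* Float rects allow subpixel positions directly. */
    SDL_FRect src = { 0.0f, 0.0f, 32.0f, 32.0f };
    SDL_FRect dst = { 100.25f, 64.5f, 32.0f, 32.0f };
    SDL_RenderTexture(renderer, sprite, &src, &dst);

    /* The viewport is the exception: still an integer rect. */
    SDL_Rect view = { 0, 0, 1280, 720 };
    SDL_SetRenderViewport(renderer, &view);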

phaedrus
0 replies
1d

That's good, then. Honestly an integer rect for the viewport is, "not wrong."

shortrounddev2
0 replies
20h28m

SDL2 added floating point versions of most rendering functions

HexDecOctBin
2 replies
1d8h

I think you are getting confused between SDL_Render and SDL_GPU. SDL_Render is the old accelerated API that was only suitable for 2D games (or very primitive looking 3D ones). SDL_GPU is a fully-featured wrapper around modern 3D APIs (well, the rasteriser and compute parts anyway, no raytracing or mesh shaders there yet).

dottrap
1 replies
1d7h

I was referencing the historical motivations that led to where we are today. Yes, I was referring in part to the SDL_Render family APIs. These were insufficient to support things like Nuklear and Dear ImGui, which are reasonable use cases for a simple 2D game, which SDL hoped to help with by introducing the SDL_Render APIs in SDL 2.0 in the first place.

https://www.patreon.com/posts/58563886

Short excerpt:

    One day, a valid argument was made that basic 2D triangles are pretty powerful in themselves for not much more code, and it notably makes wiring the excellent Dear Imgui library to an SDL app nice and clean. Even here I was ready to push back but the always-amazing Sylvain Becker showed up not just with a full implementation but also with the software rendering additions and I could fight no longer. In it went.
    The next logical thing people were already clamoring for back then was shader support. Basically, if you can provide both batching (i.e. triangles) and shaders, you can cover a surprising amount of use cases, including many beyond 2D.

So fast forwarding to today, you're right. Glancing at the commit, the GPU API has 80 functions. It is full-featured beyond its original 2D roots. I haven't followed the development enough to know where they are drawing the lines now, like would raytracing and mesh shaders be on their roadmap, or would those be a bridge too far.

HexDecOctBin
0 replies
1d6h

    where they are drawing the lines now

From what I understand, they are only going to support features that are widely supported and standardised. Thus, even bindless didn't make the cut. Raytracing, mesh shaders, work-graphs, etc. almost certainly won't make it until SDL4 10 years from now; but I am not part of the development team, so don't quote me.

shmerl
0 replies
1d15h

I see, interesting.

adrift
8 replies
1d18h

Having a C API like that is always nice. I don't wanna fight Rust.

westurner
3 replies
1d16h

wgpu supports WebGPU: https://github.com/gfx-rs/wgpu :

    While WebGPU does not support any shading language other than WGSL, we will automatically convert your non-WGSL shaders if you're running on WebGPU.

dartos
2 replies
1d14h

That’s just for the shading language

flohofwoe
1 replies
1d9h

The Rust wgpu project has an alternative C API which is identical to (or at least closely matches - I haven't looked at it in detail yet) the official webgpu.h header. For instance, all the examples in here are written in C:

https://github.com/gfx-rs/wgpu-native/tree/trunk/examples

There's definitely also people using wgpu from Zig via the C bindings.

westurner
0 replies
1d2h

I found:

shlomnissan/sdl-wasm: https://github.com/shlomnissan/sdl-wasm :

A simple example of compiling C/SDL to WebAssembly and binding it to an HTML5 canvas.

erik-larsen/emscripten-sdl2-ogles2: https://github.com/erik-larsen/emscripten-sdl2-ogles2 :

C++/SDL2/OpenGLES2 samples running in the browser via Emscripten

IDK how much work there is to migrate these to SDL3?

Are there WASM compilation advantages to SDL3 vs SDL2?

kbolino
0 replies
1d4h

It exists, but IMO it's not a good choice.

First of all, it doesn't support RenderGeometry or RenderGeometryRaw, which are necessary for high-performance 2D rendering (absent the new GPU API). I doubt it will support any of the GPU API at this rate, as the geometry rendering is a much simpler API. Maybe both will land all at once, though. To wit, the relevant issue hasn't seen much activity: https://github.com/Rust-SDL2/rust-sdl2/issues/1180

Secondly, the abstractions chosen by rust-sdl2 are quite different from those of SDL2 itself. There seems to have been an aggressive attempt by the Rust library authors to make something more Rust-friendly, which maybe has made it more approachable for people who don't know SDL2 already, but it has IMO made it less approachable for people who do know SDL2. The crate gets plenty of downloads, so maybe it's just me.

throwup238
6 replies
1d16h

SDL the library is over a quarter century old. It powers tons of existing software. Why wouldn't people keep working on it?

01HNNWZ0MV43FF
5 replies
1d16h

They already broke compat for 2.x, and existing games don't have shaders in 1.x or 2.x, right? So why make their own API?

badsectoracula
4 replies
1d16h

Yes and no. SDL 2.x is not backwards compatible with SDL 1.x (and that was an annoyance of mine) but at some point someone wrote an SDL 1.x implementation on top of SDL 2.x that got official blessing, so at least games using SDL 1.x can be made to use SDL 2.x "under the hood" be it in source code form or binary-only form.

Though you can't take an SDL 1.x game and convert it piecemeal to SDL 2.x as the APIs are not backwards compatible, it is an all-or-nothing change.

gmueckl
1 replies
1d12h

The API breaks in SDL2 were sorely needed, if you ask me. SDL1 painted itself into a corner in a few places, e.g. simultaneous use of multiple displays/windows.

badsectoracula
0 replies
20h42m

I don't think they were needed but i value not breaking existing programs and code more than some abstract and often highly subjective form of code purity.

The compatibility layer that was introduced a few years later did solve the "SDL1 apps running under SDL2 under the hood" issue (though with some regressions). It somewhat solved the "compile existing code that uses SDL1 against SDL2" issue, depending on your language and SDL bindings (i had to compile the real SDL 1.2 library to make Free Pascal's bindings work, since they didn't work with sdl12-compat). But it did not solve the "update existing code to use the new features without rewriting everything" issue - there was even some user in the PR or on Twitter asking about future compatibility plans because he had spent years updating his code from SDL 1.2 to SDL 2.0 and didn't want to repeat the process. FWIW the answer was that a new major backwards-incompatible version probably won't come any sooner than 10 years out.

badsectoracula
0 replies
20h45m

Yes, that is what i wrote in the very first paragraph.

WhereIsTheTruth
5 replies
1d12h

SDL is for gamedevs - it supports consoles; wgpu is not, and doesn't

hnlmorg
3 replies
1d11h

SDL is for everyone. I use it for a terminal emulator because it’s easier to write something cross platform in SDL than it is to use platform native widgets APIs.

westurner
1 replies
16h25m

Can the SDL terminal emulator handle up-arrow /slash commands, and cool CLI things like Textual and IPython's prompt_toolkit (a readline/.inputrc alternative that supports multi-line editing, argument tab completion, and syntax highlighting) - in a game and/or on a PC?

debugnik
0 replies
9h10m

I think you're confusing the roles of terminal emulator and shell. The emulator mainly hosts the window for a text-based application: print to the screen, send input, implement escape sequences, offer scrollback, handle OS copy-paste, etc. The features you mentioned would be implemented by the hosted application, such as a shell (which they've also implemented separately).

WhereIsTheTruth
0 replies
1d10h

You are right - I was so focused on the gamedev argument that I made an incorrect statement.

anthk
0 replies
1d9h

SDL is great for embedded machines with limited displays.

modeless
4 replies
1d18h

The more the merrier if you ask me. Eventually one will win but we need more experimentation in this space. The existing GPU APIs are too hard to use and/or vendor-specific.

TillE
3 replies
1d17h

Writing bits of Vulkan or D3D12 really isn't that bad if you're working within an engine which does most of the setup for you, which is nearly always the case for practical work. If you're doing everything yourself from scratch, you're probably either a hobbyist tinkering or a well-compensated expert working for a AAA game developer.

monocasa
0 replies
1d15h

If you're targeting SDL, then you probably don't have an engine, or you are the engine.

gmueckl
0 replies
1d12h

Nit: there are also really sophisticated graphics engines for serious applications. It's not only games.

flohofwoe
0 replies
1d9h

Using something like Unreal or Unity if you just want to blast a couple of triangles to the screen in a cross-platform application is a bit overkill.

TinkersW
2 replies
1d13h

WebGPU would be a lot more useful if it hadn't gone with such a needlessly different shader language syntax; it makes it much harder to share any single source between the C++ and the shaders.

wiz21c
0 replies
1d11h

Can't you use naga to reuse OpenGL shaders, for example?

truckerbill
0 replies
1d11h

The bigger problem is the lack of bindless textures etc

flohofwoe
1 replies
1d9h

One important point I haven't seen mentioned yet is that SDL is the de facto minimal compatibility layer on Linux for writing a windowed 'game-y' application if you don't want to directly wrestle with X11, Wayland, GTK or KDE.

ahartmetz
0 replies
22h44m

Yeah - getting an OpenGL (and presumably the same for Vulkan) context is surprisingly annoying if you don't have a library to help you. It also works quite differently on X11, Wayland, or directly on kernel APIs. Many games that don't otherwise use SDL2 (such as ones ported from other platforms, i.e. most games) use it just for that.
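
For comparison, the SDL2 version of "give me a window with a GL context" is just a handful of calls - a sketch using standard SDL2 API, error checks omitted:

    #include <SDL.h>

    int main(void)
    {
        SDL_Init(SDL_INIT_VIDEO);

        /* Request a core profile; SDL handles X11/Wayland/etc. underneath. */
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);

        SDL_Window *win = SDL_CreateWindow("demo",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
            1280, 720, SDL_WINDOW_OPENGL);
        SDL_GLContext ctx = SDL_GL_CreateContext(win);

        /* ... GL rendering goes here ... */
        SDL_GL_SwapWindow(win);

        SDL_GL_DeleteContext(ctx);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }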

flohofwoe
0 replies
1d9h

While WebGPU isn't a bad API, it also isn't exactly the '3D API to end all 3D APIs'.

WebGPU has a couple of design decisions which were necessary to support Vulkan on mobile devices, which make it a very rigid API and even (desktop) Vulkan is moving away from that rigid programming model, while WebGPU won't be able to adapt so quickly because it still needs to support outdated mobile GPUs across all operating systems.

badsectoracula
0 replies
1d15h

SDL provides various "backend agnostic" APIs for a variety of needs - window creation, input (with a gamepad abstraction), audio, system stuff (e.g. threads), etc. - so that programs written against SDL can work on a variety of systems. If linked against it dynamically (or using the "static linking but with a dynamic override" mechanism that lets a statically linked version use a newer dynamic version of the library), they can also use newer/better stuff, which is sometimes needed: e.g. some older games using old versions of SDL 1.x need the DLL/.so replaced with a new version to work on new OSes, especially on Linux.

Exposing a modern (in the sense of how self-proclaimed modern APIs like Vulkan, D3D12 and Metal work) GPU API that lets applications written against it to work with various backends (D3D11, D3D12, Vulkan, Metal, whatever Switch and PS5 uses, etc) fits perfectly with what SDL already does for every other aspect of making a game/game engine/framework/etc.

As if it was "needed", it was needed as much as any other of SDL's "subsystems": strictly speaking, not really as you could use some other library (but that could be said for SDL itself) but from the perspective of what the SDL wants to provide (an API to target so you wont have to target each underlying API separately) it was needed for the sake of completeness (previously OpenGL was used for this task if you wanted 3D graphics but that was when OpenGL was practically universally available for the platforms SDL itself officially supported - but nowadays this is not the case).

jcelerier
0 replies
1d8h

Qt RHI too. Shaders are normal Vulkan-compatible GLSL, and then you get D3D11, 12, GL and Metal.

benlwalker
0 replies
22h18m

Compared to libraries like bgfx and sokol at least, I think there are two key differences.

1) SDL_gpu is a pure C library, heavily focused on extreme portability and no dependencies. And somehow it's also an order of magnitude less code than the other options. Or at least this is a difference from bgfx, maybe not so much sokol_gfx.

2) The SDL_gpu approach is a bit lower level. It exposes primitives like command buffers directly to your application (so you can more easily reason about multi-threading), and your software allocates transfer buffers, fills them with data, and kicks off a transfer to GPU memory explicitly rather than this happening behind the scenes. It also spawns no threads - it only takes action in response to function calls. It does take care of hard things such as getting barriers right, and provides the GPU memory allocator, so it is still substantially easier to use than something like Vulkan. But in SDL_gpu it is extremely obvious to see the data movements between CPU and GPU (and memory copies within the CPU), and to observe the asynchronous nature of the GPU work. I suspect the end result of this will be that people write far more efficient renderers on top of SDL_gpu than they would have on other APIs.
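
To make that explicit data movement concrete, an upload looks roughly like this - a sketch using the names in current SDL3 headers (the API is still in preview), where `device`, `vbuf`, `vertex_data` and `data_size` are assumed to exist:

    /* CPU side: allocate a transfer buffer, map it, copy, unmap. */
    SDL_GPUTransferBuffer *tbuf = SDL_CreateGPUTransferBuffer(device,
        &(SDL_GPUTransferBufferCreateInfo){
            .usage = SDL_GPU_TRANSFERBUFFERUSAGE_UPLOAD,
            .size  = data_size,
        });
    void *p = SDL_MapGPUTransferBuffer(device, tbuf, false);
    SDL_memcpy(p, vertex_data, data_size);
    SDL_UnmapGPUTransferBuffer(device, tbuf);

    /* GPU side: record the copy into a command buffer and submit it. */
    SDL_GPUCommandBuffer *cmd = SDL_AcquireGPUCommandBuffer(device);
    SDL_GPUCopyPass *copy = SDL_BeginGPUCopyPass(cmd);
    SDL_UploadToGPUBuffer(copy,
        &(SDL_GPUTransferBufferLocation){ .transfer_buffer = tbuf },
        &(SDL_GPUBufferRegion){ .buffer = vbuf, .size = data_size },
        false);
    SDL_EndGPUCopyPass(copy);
    SDL_SubmitGPUCommandBuffer(cmd);

Nothing reaches the GPU until the command buffer is submitted, which is what makes the asynchronous nature easy to see.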

runevault
2 replies
1d12h

Worth noting Godot also has cross-platform shaders. Its GDShader language is based heavily on the OpenGL shading language, though not a 1:1 copy, and gets compiled for the target platform. Though for PS5 and Xbox you have to work with a third party (someone released the Nintendo build for anyone who has signed the Nintendo NDA).

jb1991
1 replies
1d12h

SDL is also offering cross-platform GPU compute - is that also available in Godot?

runevault
0 replies
1d12h

So I haven't used compute shaders, though I remembered Godot having them and double-checked. Interestingly, they are direct GLSL, which makes me wonder if they only work in OGL contexts. Which would be... weird, because Godot 4.3 shipped easy DirectX output support. I'm sort of tempted to test out making a compute shader and compiling to DX and see if it works.

Edit: Doing more digging, according to the end of this forum thread they get compiled to SPIR-V and then to whatever backend is needed, be it GLSL, HLSL, etc.

https://forum.godotengine.org/t/compute-shaders-in-godot/461...

bgbernovici
0 replies
1d8h

I previously integrated bgfx [1], which allows you to write graphics code and shaders once and supports consoles, with SDL2 stack and Swift [2]. It was quite a nice experience, especially for someone who had never worked with any of these tools before. I'm excited for SDL3 as it introduces console abstractions, eliminating the need for additional dependencies for the GPU API. Moreover, Godot officially supports the Steam Deck, and hopefully, more consoles will be supported in the future. On a related note, Miguel de Icaza is advocating for Swift adoption in Godot, and he is working on porting the editor to SwiftUI on iPad. It is interesting to see the progress [3].

[1] https://bkaradzic.github.io/bgfx/overview.html

[2] https://github.com/bgbernovici/myndsmith

[3] https://blog.la-terminal.net/xogot-code-editing/

immibis
15 replies
1d17h

Feels like SDL3 suffers from the second-system effect. (SDL2 was just SDL1 with explicit window handles, so SDL3 is the second system, not the third.) SDL1/2 is a thin layer that wraps the platform-specific boilerplate of opening a window and handling input events, so you can get to the OpenGL rendering stuff that you actually wanted to write.

caspar
8 replies
1d17h

If you only want to support Windows/Linux/Android, then sure, you can definitely argue that the SDL GPU API is bloat.

But if you want to support Apple's operating systems then you're stuck with OpenGL 4.1 (officially deprecated by Apple 5 years ago) - so no modern GPU features like compute shaders.

You can go the Vulkan route and use MoltenVK for Apple systems, but Vulkan is quite a step up in complexity from OpenGL ("1000 lines of code for a triangle" as people like to say). The goal for SDL3's GPU API is to give you a more approachable (but still plenty flexible) alternative to that.
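
For a sense of scale, clearing the window in the new API is on the order of a dozen lines rather than a couple thousand - a rough sketch with names from the SDL3 preview headers (subject to change), assuming `device` and `window` are already set up:

    SDL_GPUCommandBuffer *cmd = SDL_AcquireGPUCommandBuffer(device);

    SDL_GPUTexture *backbuffer = NULL;
    SDL_AcquireGPUSwapchainTexture(cmd, window, &backbuffer, NULL, NULL);

    if (backbuffer) {
        SDL_GPUColorTargetInfo target = {
            .texture     = backbuffer,
            .clear_color = (SDL_FColor){ 0.1f, 0.2f, 0.3f, 1.0f },
            .load_op     = SDL_GPU_LOADOP_CLEAR,   /* clear on load...       */
            .store_op    = SDL_GPU_STOREOP_STORE,  /* ...and keep the result */
        };
        SDL_GPURenderPass *pass = SDL_BeginGPURenderPass(cmd, &target, 1, NULL);
        SDL_EndGPURenderPass(pass);
    }
    SDL_SubmitGPUCommandBuffer(cmd);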

And similar story for consoles, presumably.

Apparently lots of people asked for "SDL_render but can you add shader support that works for all platforms", so that's the origin story.

SDL3 does also add a higher level audio API - I don't know much about its merits.

hgs3
3 replies
1d16h

But why does the GPU API need to be in mainline SDL? Couldn't it be a separate project like SDL_net, SDL_mixer, SDL_image, and SDL_ttf? I would think that as a separate project "SDL_gpu" could be versioned independently, evolve independently, and not be obligated to support every platform SDL itself supports. In fact if "SDL_gpu" only required a windowing context, then it could presumably integrate with SDL2 and non-SDL applications!

caspar
1 replies
1d14h

AFAICT, if you don't want to use it then you don't have to - just like you didn't have to use SDL_render in SDL2. That is what was pitched by maintainer Ryan Gordon[0][1] at least.

[0]: https://github.com/libsdl-org/SDL_shader_tools/blob/main/doc... , though the approach that ended up getting merged was an initially-competing approach implemented by FNA folks instead and they seem to have made some different decisions than what was outlined in that markdown doc.

gary_0
0 replies
1d14h

While using SDL for drawing is optional (and seldom done if you're doing 3D) I would like to add that its drawing API is useful to have out-of-the-box so that new/basic users can get stuff on screen right away without having to write their own high-level graphics engine first.

gary_0
0 replies
1d15h

See dottrap's comment: https://news.ycombinator.com/item?id=41397198

SDL needs to be able to render graphics efficiently, but the SDL2 way is no longer sufficient. Since SDL3 is a major version change, it makes sense to overhaul it while a variety of other API-breaking improvements are being made.

creata
2 replies
1d10h

Slightly off-topic, but where's the complexity of Vulkan (the 1000 lines) coming from? My memory tells me that most of the misery is from the window system integration, and that the rest is pretty pleasant.

cyber_kinetist
0 replies
1d6h

Counter-intuitively, it comes when you actually start caring about performance - it's easy to write "working" Vulkan code, hard to write efficient Vulkan code that competes with DX11 driver magic.

caspar
0 replies
1d17h

Ah, I managed to dig up the original announcement post[0]; relevant snippet:

    But this is terrible advice in 2021, because OpenGL, for all intents and purposes, is a deprecated API. It still works, it's still got some reasonably modern features, but even if you add up the 22 years Microsoft spent trying to kill it with Apple's seven-or-maybe-twenty, it doesn't change the fact that the brains behind OpenGL would rather you migrate to Vulkan, which is also terrible advice.

    It seems bonkers to tell people "write these three lines of code to make a window, and then 2000 more to clear it," but that's the migration funnel--and meat grinder--that SDL users are eventually going to get shoved into, and that's unacceptable to me.

[0]: https://www.patreon.com/posts/new-project-top-58563886

gary_0
2 replies
1d14h

SDL2 was not "just SDL1 with explicit window handles". There were a variety of changes and new features all over the API, including (like SDL3) major changes to the graphics subsystem (SDL1 used software rendering, SDL2 added hardware acceleration).

Also, SDL2 has evolved considerably since 2.0.0, and SDL3 continues that evolution while allowing API-breaking changes. SDL3 is not a from-scratch rewrite, and as an SDL user I don't anticipate that migrating from SDL2 to SDL3 will be that difficult.

[edit] And SDL1/2 was never so "thin" that it didn't have its own high-level graphics system, which is useful to have out-of-the-box so that new/basic users can get stuff on screen right away.

[edit2] As ahefner points out, SDL1 was pretty "thin" by modern standards, but it still gave you enough to draw basic stuff on screen without writing your own pixel math, which was pretty helpful back in the 90's.

ahefner
1 replies
1d13h

SDL1 had no high-level graphics system - you either got a raw framebuffer, or an OpenGL context.

gary_0
0 replies
1d13h

True, now that I think back, all it had was a blit function, and nowadays that's not a graphics system. (But back in the old days, I was impressed that it handled alpha blending for me! Fancy!)

flohofwoe
2 replies
1d9h

The problem is that OpenGL is (pretty much) dead, while Vulkan is a poor replacement for OpenGL when it comes to ease of use.

sylware
0 replies
20h53m

I don't run GL games anymore on elf/linux. And it has been a while. Most cross-platform game engines have a vulkan backend now.

Very small teams are able to show games running the latest UE5.x engine on native elf/linux, vulkan ("vein", "shmaragon" something).

But the steam client... is still 32bits and x11/GL hard dependent...

I still plan to code my own wayland compositor once the steam client is ELF64 and does proper wayland->x11/vulkan->CPU fallbacks. It will feel weird to have a clean 64bits system.

shortrounddev2
0 replies
20h21m

Yeah, there needs to be a DirectX 11-like API between Vulkan and OpenGL

ivars
12 replies
1d11h

How did they manage to pull this off so quickly? Given how long WebGPU native has been in development and still isn't finalized, you would think the SDL GPU API would take even longer, since it supports more platforms.

HexDecOctBin
7 replies
1d8h

The reason WebGPU took so long was that they decided to write their own shading language instead of using SPIR-V. SDL didn't make that mistake: you bring your own shader compilers and translation tools.

There is a sister project for a cross-platform shading language [1] and another for translating existing ones between each other [2], but they get done when they get done, and the rest of the API doesn't have to wait for them.

WebGPU was made by a committee of vendors and language-lawyers (standards-lawyers?) with politics and bureaucracy, and it shows. SDL_GPU is made by game developers who value pragmatism above all (and often are looked down upon from the ivory tower because of that).

[1]: https://github.com/libsdl-org/SDL_shader_tools

[2]: https://github.com/flibitijibibo/SDL_gpu_shadercross

grovesNL
4 replies
1d4h

I don't think that's accurate. Creating a shading language is obviously a huge effort, but there were already years of effort put into WebGPU as well as implementations/games building on top of the work-in-progress specification before the shading language decision was made (implementations at the time accepted SPIR-V).

HexDecOctBin
3 replies
1d3h

The PoC was made in 2016, the work started in 2017, but the first spec draft was released on 18 May 2021. [1] This first draft already contained references to WGSL. There is no reference to SPIR-V.

Why did it take this long to release the first draft? Compare it to SDL_GPU timeline, start to finish in 6 months. Well, because the yak shaving on WGSL had already begun, and was eating up all the time.

[1]: https://www.w3.org/TR/2021/WD-webgpu-20210518/

grovesNL
2 replies
1d2h

SPIR-V was never in the specification, but both wgpu and Dawn used SPIR-V in the meantime until a shading language decision was made.

HexDecOctBin
1 replies
21h34m

Sure, but that proves my point. They took so long to decide on the shading language that implementations had to erect separate scaffolding just to be able to test things out.

grovesNL
0 replies
19h28m

Scaffolding wasn’t a problem at all. Both used SPIRV-Cross for shader conversions at the time and focused on implementing the rest of the API. The shading language barely matters to the rest of the implementation. You can still use SPIR-V with wgpu on its Vulkan backend today for example.

hmry
1 replies
1d7h

Yeah, legal strikes again. Unfortunately SPIR-V was never going to be an option for WebGPU, because Apple refuses to use any Khronos projects due to a confidential legal dispute between them.[0] If WebGPU used SPIR-V, it just wouldn't be available in Safari.

See also: Not supporting Vulkan or OpenXR at all, using USD instead of glTF for AR content even though it's less well suited for the task, etc. (Well, they probably don't mind that it helps maintain the walled garden either... There's more than one reason for everything)

0: https://docs.google.com/document/d/1F6ns6I3zs-2JL_dT9hOkX_25...

davemp
0 replies
1d6h

# Attendance

## Khronos

Neil Trevett

## Apple

Dean Jackson, Myles C. Maxfield, Robin Morisset, Maciej Stachowiak, Saam Barati

## Google

Austin Eng, Corentin Wallez, Dan Sinclair, David Neto, James Darpinian, Kai Ninomiya, Ken Russell, Shrek Shao, Ryan Harrison

## Intel

Yunchao He

## Mozilla

Dzmitry Malyshau

## W3C

François Daoust, Dominique Hazael-Massieux

## Timo de Kort [sic?]

———

I get that Apple/Google have significantly more resources than most organizations on the planet but if these demographics are representative of other (web) standards committees that’s depressing.

bartwe
2 replies
1d10h

No committee, and motivated devs who need the result for their own projects. Especially the FNA folks.

flohofwoe
1 replies
1d9h

Also, tbf, the WebGPU peeps did a lot of investigation into what the actual set of common and web-safe features across D3D, Vulkan and Metal is, and all those investigation results are in the open.

In that sense the WebGPU project is an extremely valuable resource for other wrapper APIs, and saves those other APIs a ton of time.

jms55
0 replies
1d1h

Yeah. SDL went the path of "wrap native APIs". WebGPU went the path of "exactly what level of floating point precision can we guarantee across all APIs" along with "how do we prevent absolutely all invalid behavior at runtime, e.g. out of bounds accesses in shaders, non-dynamically uniform control flow at invalid times, indirect draws that bypass the given limits, preventing too-large shaders that would kill shader compilers, etc".

WebGPU spends a _lot_ of time investigating buggy driver behavior and trying to make things spec-conformant across a lot of disparate and frankly janky platforms. There's a big difference between writing an RHI, and writing a _spec_.

kevingadd
0 replies
1d

The core contributors of the SDL3 GPU project have experience with two cross-platform (PC + consoles) GPU abstraction layers, FNA3D and Refresh, which provided a lot of knowledge and existing open source code to use as a springboard to assemble this quickly with high quality.

kookamamie
6 replies
1d8h

Sorry, but the proposal for the included shading language looks pretty braindead to me.

See for yourself: https://github.com/libsdl-org/SDL_shader_tools/blob/main/doc...

Deviations from C-language families, such as "Flow control statements don't need parentheses." are completely unnecessary, I think. Same goes for "Flow control statements must use braces."

e4m2
2 replies
1d7h

The current SDL GPU API does not intend to use this shader language. Instead, users are expected to provide shaders in the relevant format for each underlying graphics API [1], using whatever custom content pipeline they desire.

One of the developers made an interesting blog post motivating this decision [2] (although some of the finer details have changed since that was written).

There is also a "third party" solution [3] by another one of the developers that enables cross-platform use of SPIR-V or HLSL shaders using SPIRV-Cross and FXC/DXC, respectively (NB: It seems this currently wouldn't compile against SDL3 master).

[1] https://github.com/libsdl-org/SDL/blob/d1a2c57fb99f29c38f509...

[2] https://moonside.games/posts/layers-all-the-way-down

[3] https://github.com/flibitijibibo/SDL_gpu_shadercross
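
To make "bring your own shaders" concrete, creating a shader from a precompiled blob looks roughly like this - a sketch with names from current SDL3 headers, where `spirv_bytes`/`spirv_len` are a hypothetical blob compiled offline (e.g. with glslc):

    SDL_GPUShader *vs = SDL_CreateGPUShader(device,
        &(SDL_GPUShaderCreateInfo){
            .code       = spirv_bytes,   /* precompiled SPIR-V blob */
            .code_size  = spirv_len,
            .entrypoint = "main",
            .format     = SDL_GPU_SHADERFORMAT_SPIRV,
            .stage      = SDL_GPU_SHADERSTAGE_VERTEX,
        });

On a D3D12 backend you would hand in DXIL instead, or route through something like SDL_gpu_shadercross.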

kookamamie
1 replies
1d6h

Thanks for the clarification. From the sparse documentation of SDL_GPU it was somewhat difficult to understand which parts are part of the SDL 3 merge, and which parts are something else.

I did find an example of using the GPU API, but I didn't see any mention of selecting a backend (Vk, etc.) in the example - is this possible or is the backend selected e.g. based on the OS?

e4m2
0 replies
1d5h

    is this possible or is the backend selected e.g. based on the OS?

Selected in a reasonable order by default, but can be overridden.

There are three ways to do so:

- Set the SDL_HINT_GPU_DRIVER hint with SDL_SetHint() [1].

- Pass a non-NULL name to SDL_CreateGPUDevice() [2].

- Set the SDL_PROP_GPU_DEVICE_CREATE_NAME_STRING property when calling SDL_CreateGPUDeviceWithProperties() [3].

The name can be one of "D3D11", "D3D12", "Metal" or "Vulkan" (case-insensitive). Setting the driver name for NDA platforms would presumably work as well, but I don't see why you would do that.

The second method is just a convenient, albeit limited, wrapper for the third, so that the user does not have to create and destroy their own properties object.

The global hint takes precedence over the individual properties.
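
Put together, forcing a particular backend looks something like this (a sketch; names from the SDL3 preview headers):

    /* Method 1: the global hint. */
    SDL_SetHint(SDL_HINT_GPU_DRIVER, "vulkan");

    /* Method 2: name the driver when creating the device. */
    SDL_GPUDevice *dev = SDL_CreateGPUDevice(
        SDL_GPU_SHADERFORMAT_SPIRV,  /* shader formats you can supply */
        true,                        /* debug mode */
        "vulkan");                   /* driver name, or NULL for the default */

    SDL_Log("using GPU driver: %s", SDL_GetGPUDeviceDriver(dev));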

[1] https://wiki.libsdl.org/SDL3/SDL_HINT_GPU_DRIVER

[2] https://wiki.libsdl.org/SDL3/SDL_CreateGPUDevice

[3] https://wiki.libsdl.org/SDL3/SDL_CreateGPUDeviceWithProperti...

mahkoh
0 replies
1d7h

If control flow statements don't require parentheses to be parseable, doesn't that mean that it is the parentheses that are completely unnecessary?

dkersten
0 replies
1d3h

I, on the other hand, find the C way brain dead and would be very happy with these changes.

BigJono
0 replies
1d7h

Deviating from conventions to avoid footguns is so misguided. I've been writing C family languages for like 15 years and never once accidentally done a if (foo); whatever;

The convention itself IS the thing that stops you from fucking that up. It's the kind of thing you do once 2 days into a 30 year career and never again.

I still think it's dumb in Javascript, where you could be using the language on day 2 of learning programming. But in a GPU shader language, which would be almost impossible to understand with no programming experience anyway? It's actually insane.

Having said that everything else about this project looks pretty good, so I guess they can get a pass lol.

kvark
0 replies
2h37m

This rubs me the wrong way. Resource renaming is a high level concept, which was one of the features of OpenGL that all the modern APIs have successfully dropped. A modern low level GPU API should not do that.

bartwe
2 replies
1d10h

Glad to have contributed to the dx12 part :)

corysama
0 replies
1d2h

What resources would you recommend for learning DX12?

HexDecOctBin
0 replies
1d4h

Bravo, thanks! Since I'll be targeting modern HLSL, your backend is the one I'll be using to begin with. Hopefully DXC produces decent SPIR-V at the end.

davikr
1 replies
1d17h

Are there any examples?

dottrap
0 replies
1d12h

This is a separate thing with the same name. Although both share some common ideas. The grimfang4/sdl-gpu is a separate library used with SDL, while the new SDL GPU API is directly part of SDL. grimfang4/sdl-gpu is much older and works with today's SDL 2.

The grimfang4/sdl-gpu was one good way to take advantage of modern GPUs in a simple way and work around the holes/limitations of the old SDL 2D API. The new SDL 3 GPU API will likely make things like grimfang4/sdl-gpu redundant.

JoeyJoJoJr
1 replies
1d9h

I’d love to see Raylib get an SDL GPU backend. I’d pick it up in a heartbeat.

rudedogg
0 replies
1d13h

It's exciting to see how this all shakes out. Hopefully we end up with more options for building custom game engines and apps.

I've been going down the Vulkan rabbit hole. It's been fun/enlightening to learn, but the nature of Vulkan makes progress feel slow. I think if SDL3 were available when I started, I would have happily gone that route and had more to show for the amount of time I've invested.

kvark
0 replies
2h24m

Time will tell if this API is usable. In particular, the way resource synchronization works, and object renaming. Time will tell if it will perform better than WebGPU or other abstractions. Time will tell if it remains small in the presence of driver bugs that need working around…

I’m also sceptical about their new bytecode for a shading language. Parsing shaders at runtime was not a concern with WebGPU - it’s very fast. Even producing native shaders is fast [1]. It’s the pipeline creation that’s slow, and this bytecode wouldn’t help here.

[1] http://kvark.github.io/naga/shader/2022/02/17/shader-transla...

jb1991
0 replies
1d13h

I’ve never used this library before, but I’m very interested to see some examples of its cross-platform GPU compute abilities, if I understand from the link thread that they are now available. Does anyone have a suggestion on where to get started?

ammar-DLL
0 replies
1d9h

I'm looking forward to native Wayland support

Ono-Sendai
0 replies
1d11h

I might try this out. SDL I have found to be high quality software - compiles fast, compiles easily on multiple platforms, always works. So I have some hopes for this new API.

JKCalhoun
0 replies
1d1h

Huge fan of SDL generally.

When I went looking for a cross-platform gaming library, SDL and its API struck the right balance for me. I just wanted a C (++) library I could call to create windows and graphical contexts — a fast sprite rendering framework. I didn't need a whole IDE or a bloated library, didn't want to learn a new language, etc.