SDL3 is still in preview, but the new GPU API is now merged into the main branch while SDL3 maintainers apply some final tweaks.
As far as I understand, the new GPU API is notable because it should let you write graphics code & shaders once and have it all work cross-platform (including on consoles) with minimal hassle - previously that required Unity or Unreal, or your own custom solution.
WebGPU/WGSL is a similar "cross-platform graphics stack" effort but as far as I know nobody has written console backends for it. (Meanwhile the SDL3 GPU API currently doesn't seem to support WebGPU as a backend.)
Why is the SDL API needed vs. gfx-rs / wgpu, though? I.e., was there a need to make yet another one?
The old SDL 2D API was not powerful enough. It was conceived in the rectangle-sprite-blitting days, when video hardware was designed very differently and had drastically different performance characteristics. If you wanted anything more, OpenGL used to be the best practice. But today the landscape is split between Vulkan, Metal, and Direct3D, and hardware is centered around batching and shaders. Targeting OpenGL has become more difficult because it fragmented into GL vs. GLES, and platform support varies (e.g. Apple stopped updating GL after 4.1).
A good example of where the old SDL 2D API is too limited is the 2D immediate-mode GUI library Nuklear. It has a few simple API stubs to fill in so it can be adapted to any graphics system, but for performance it wants to batch-submit all its vertices (as triangle strips) - and SDL's old API didn't support anything like that.
The reluctance was that the SDL maintainers didn't want to create a monster and couldn't decide where to draw the line, so the line was held at the old 2D API. Then a few years ago a user changed the maintainers' minds with a demonstration of how much could be achieved by adding just a simple batching API to SDL 2D. That shifted the mindset and led to the current effort. I have not closely followed the development, but I think it still aims to be a simple API, and you will still be encouraged to pick a full-blown 3D API if you go beyond 2D needs. But you should no longer need to go to one of the other APIs to do 2D things in modern ways on modern hardware.
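For reference, the batching entry point that did land in the SDL 2D API is SDL_RenderGeometry (SDL 2.0.18+, mentioned again further down). A minimal sketch, assuming an existing `renderer` and a texture atlas `atlas` (both placeholder names):

    /* Sketch: one call submits a whole batch of textured, colored
     * triangles, instead of one SDL_RenderCopy per sprite. */
    SDL_Vertex verts[3] = {
        /* position           color (RGBA)            tex coord (0..1) */
        { { 100.0f,  40.0f }, { 255, 255, 255, 255 }, { 0.5f, 0.0f } },
        { {  40.0f, 160.0f }, { 255, 255, 255, 255 }, { 0.0f, 1.0f } },
        { { 160.0f, 160.0f }, { 255, 255, 255, 255 }, { 1.0f, 1.0f } },
    };
    SDL_RenderGeometry(renderer, atlas, verts, 3, NULL, 0); /* no indices */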
I was messing around a bit with SDL2, and either I was doing something wrong or it was just plain slow. My machine is plenty fast, but even just blitting a few dozen PNGs around a screen 60 times a second was pushing its limits. I freely admit I may have been doing something wrong, but I was surprised at just how inefficient it was at a task we used to do without too much trouble on 1 MHz CPUs.
Maybe SDL_RenderCopy is the wrong API for blitting things from a sprite sheet onto a display? The docs don't give any warning if that's the case.
How recent a version were you using? Plenty of games and graphical apps use SDL2 under the hood, and rendering rects from a spritesheet is trivial. Recent versions use the geometry API for rendering rects, so it should be able to handle tons of sprites without much effort.
I'm using SDL2 2.30.0. The main loop is pretty simple: it does a few SDL_RenderFillRects calls to create areas, then several SDL_RenderCopy calls whose source is an SDL_Texture created (via SDL_CreateTextureFromSurface) from an SDL_Surface loaded from files at boot. A final call to SDL_RenderPresent finishes it off. The images do include an alpha channel, however.
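Roughly the pattern described, for concreteness (a sketch - `renderer`, the rect array, and SDL_image's IMG_Load for the file loading are my assumptions):

    /* At boot: load once, upload once. */
    SDL_Surface *surf = IMG_Load("sprites.png");            /* SDL_image */
    SDL_Texture *sheet = SDL_CreateTextureFromSurface(renderer, surf);
    SDL_FreeSurface(surf);  /* surface no longer needed after the upload */

    /* Each frame: */
    SDL_RenderClear(renderer);
    SDL_RenderFillRects(renderer, areas, num_areas);
    SDL_Rect src = { 0, 0, 16, 16 };  /* region within the sprite sheet */
    SDL_Rect dst = { 64, 64, 16, 16 };
    SDL_RenderCopy(renderer, sheet, &src, &dst);
    SDL_RenderPresent(renderer);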
I was expecting the sprite blitting to be trivial, but it is surprisingly slow. The sprites are quite small, only a few hundred pixels total. My theory is that it is copying the pixels over the X11 channel every frame instead of loading the sprite sheets onto the server once and using XCopyArea to have the server do its own blitting.
Whatever the problem is, it probably isn't SDL. Here's a test project I worked on[0], and I'm using a garbage laptop. The sprites aren't that big but if you're just using a single texture it shouldn't matter, since SDL does sprite batching anyway.
Your theory might be right - the first thing I would look for is something allocating every frame.
You might ask the SDL Discourse forum and see what they think: https://discourse.libsdl.org/
[0]https://cdn.masto.host/krappmastohost/media_attachments/file...
This should be plenty fast. SDL_RenderCopy generally does things the 'right' way on any video card made in roughly the last 15 years (basically binding a texture in GPU RAM to a quad).
You probably need to do some debugging/profiling to find where your problem is. Make sure you aren't creating SDL_Textures (or loading SDL_Surfaces) inside your main game loop. You may also want to check which backend the SDL_Renderer is using (e.g. OpenGL, Direct3D, Vulkan, Metal, software). If you are on the software renderer, that is likely your problem; try forcing it to something hardware-accelerated.
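A sketch of both checks, assuming an existing `window` (the hint has to be set before the renderer is created):

    /* Ask for a hardware backend before creating the renderer. */
    SDL_SetHint(SDL_HINT_RENDER_DRIVER, "opengl");
    SDL_Renderer *renderer = SDL_CreateRenderer(window, -1,
                                                SDL_RENDERER_ACCELERATED);

    /* Then verify which backend was actually picked. */
    SDL_RendererInfo info;
    if (SDL_GetRendererInfo(renderer, &info) == 0) {
        SDL_Log("backend: %s", info.name); /* e.g. "opengl", "software" */
    }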
Also, I vaguely recall there was a legacy flag on SDL_Surfaces called "SDL_HWSURFACE" or "SDL_HWACCEL" or something like that. Don't set it. It was for very legacy hardware from like 25 years ago and is slow on everything now.
Does SDL3 still use integers for coordinates? I got annoyed enough by coordinates not being floating point in SDL2 that I started learning WebGPU instead - even though the game I was working on was 2D.
The issue is, if you want complete decoupling (in the sense of orthogonality) among all four of:
- screen (window) size & resolution (especially if game doesn't control)
- sprite/tile image quantization into pixels (scaling, resolution)
- sprite display position, with or without subpixel accuracy
- and a physics engine that uses floating point natively (BulletPhysics)
then achieving this with integer drawing coordinates requires carefully calculating ratios while understanding where you do and do not want to drop the fractional part. Even then you can still run into problems such as accidentally having a gap (a one-pixel blank column) between every 10th and 11th level tile because your zoom factor has a tenth of a pixel of overflow, or jaggy movement with wiggly sprites when the player is moving at a shallow diagonal at the same time as the NPC sprites are at different floating-point or subpixel integer coords.
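To make the tile-gap failure concrete, a standalone sketch (the 32 px tile width and 1.1x zoom are made-up numbers): stepping by a once-truncated tile width drifts 0.2 px per tile, while rounding each edge computed in floats does not.

    #include <stdio.h>

    int main(void) {
        const int   tile_w = 32;   /* source tile width in px (made up) */
        const float zoom   = 1.1f; /* scale factor: 32 * 1.1 = 35.2 px  */
        for (int i = 0; i <= 10; i++) {
            int naive = i * (int)(tile_w * zoom);        /* 35 px steps: drifts */
            int exact = (int)(i * tile_w * zoom + 0.5f); /* round each edge     */
            printf("tile %2d: naive x=%3d  exact x=%3d\n", i, naive, exact);
        }
        /* By tile 10 the naive edge sits at 350 instead of 352; those
         * 2 px show up on screen as a blank column (or, going the other
         * way, overlapping tiles). */
        return 0;
    }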
A lot of these problems could be (are) because I think of things bottom-up (which is how my list above is ordered), where a physics engine based on floating-point math is the source of truth, and each layer above is just a viewport abstracting the layer beneath it. I get the impression SDL was written by and for people with the opposite point of view: that the pixels are what's important and primary.
And all (most) of these have solutions in terms of pre-scaling, tracking remainders, etc. But I have also written an (unfinished) 3D engine and didn't have to do any of that, because 3D graphics is floating-point native. After getting the 2D engine 90% done with SDL2 (leaving 90% more to go, as we all know), I had a sort of "WTF am I even doing" moment looking at the pile of workarounds for a problem that shouldn't exist.
And I say shouldn't exist because I know the final output actually uses floating point in the hardware and the driver; the SDL1/2 API is just applying the fiction that it's integers. (Neither simple, nor direct.) It gets steam coming out of my ears knowing I'm being forced to do something stupid to maintain someone else's fiction, so as nice as SDL otherwise is, I ultimately decided to just bite the bullet and learn to program WebGPU directly.
No, they added float versions for most functions and I think they plan on deprecating the int API in the future. The only exception I can think of offhand is still needing an integer rect to set a viewport.
That's good, then. Honestly, an integer rect for the viewport is "not wrong."
SDL2 added floating-point versions of most rendering functions.
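E.g. SDL_RenderCopyF (since SDL 2.0.10) takes a float destination rect, so subpixel positions survive down to the backend. A tiny sketch with placeholder names (note the source rect stays integer):

    SDL_FRect dst = { 103.25f, 42.75f, 32.0f, 32.0f }; /* subpixel position */
    SDL_RenderCopyF(renderer, sprite, NULL, &dst);     /* NULL = whole texture */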
I think you are getting confused between SDL_Render and SDL_GPU. SDL_Render is the old accelerated API that was only suitable for 2D games (or very primitive looking 3D ones). SDL_GPU is a fully-featured wrapper around modern 3D APIs (well, the rasteriser and compute parts anyway, no raytracing or mesh shaders there yet).
I was referencing the historical motivations that led to where we are today. Yes, I was referring in part to the SDL_Render family of APIs. These were insufficient to support things like Nuklear and Dear ImGui - reasonable use cases for the kind of simple 2D game that SDL hoped to help with by introducing the SDL_Render APIs in SDL 2.0 in the first place.
https://www.patreon.com/posts/58563886
Short excerpt:
The next logical thing people were already clamoring for back then was shader support. Basically, if you can provide both batching (i.e. triangles) and shaders, you can cover a surprising amount of use cases, including many beyond 2D.

So fast-forwarding to today, you're right. Glancing at the commit, the GPU API has 80 functions. It is full-featured beyond its original 2D roots. I haven't followed the development enough to know where they are drawing the lines now - e.g. whether raytracing and mesh shaders are on the roadmap, or a bridge too far.
From what I understand, they are only going to support features that are widely supported and standardised. Thus, even bindless didn't make the cut. Raytracing, mesh shaders, work-graphs, etc. almost certainly won't make it until SDL4 10 years from now; but I am not part of the development team, so don't quote me.
I see, interesting.
Having a C API like that is always nice. I don't wanna fight Rust.
WebGPU has a (mostly) standardized C API: https://github.com/webgpu-native/webgpu-headers
wgpu supports WebGPU: https://github.com/gfx-rs/wgpu
That’s just for the shading language
The Rust wgpu project has an alternative C API which is identical to (or at least closely matches - I haven't looked at it in detail yet) the official webgpu.h header. For instance, all the examples in here are written in C:
https://github.com/gfx-rs/wgpu-native/tree/trunk/examples
There's definitely also people using wgpu from Zig via the C bindings.
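For a taste of what that shared C header looks like, a minimal fragment (just instance creation and release; the include path may differ per implementation):

    #include <webgpu/webgpu.h>

    int main(void) {
        /* Create a WebGPU instance through the C API. */
        WGPUInstanceDescriptor desc = {0};
        WGPUInstance instance = wgpuCreateInstance(&desc);
        if (!instance) return 1;
        wgpuInstanceRelease(instance);  /* release when done */
        return 0;
    }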
I found:
shlomnissan/sdl-wasm: https://github.com/shlomnissan/sdl-wasm
erik-larsen/emscripten-sdl2-ogles2: https://github.com/erik-larsen/emscripten-sdl2-ogles2
IDK how much work there is to migrate these to SDL3?
Are there WASM compilation advantages to SDL3 vs SDL2?
There are Rust SDL2 bindings: https://github.com/Rust-SDL2/rust-sdl2#use-sdl2render
use::sdl2render, gl-rs for raw OpenGL: https://github.com/Rust-SDL2/rust-sdl2?tab=readme-ov-file#op...
*sdl2::render
src/sdl2/render.rs: https://github.com/Rust-SDL2/rust-sdl2/blob/master/src/sdl2/...
SDL/test/testautomation_render.c: https://github.com/libsdl-org/SDL/blob/main/test/testautomat...
It exists, but IMO it's not a good choice.
First of all, it doesn't support RenderGeometry or RenderGeometryRaw, which are necessary for high-performance 2D rendering (absent the new GPU API). I doubt it will support any of the GPU API at this rate, as geometry rendering is a much simpler API - though maybe both will land at once. Either way, the relevant issue hasn't seen much activity: https://github.com/Rust-SDL2/rust-sdl2/issues/1180
Secondly, the abstractions chosen by rust-sdl2 are quite different from those of SDL2 itself. There seems to have been an aggressive attempt by the Rust library authors to make something more Rust-friendly, which has maybe made it more approachable for people who don't know SDL2 already, but IMO it has made it less approachable for people who do. The crate gets plenty of downloads, though, so maybe it's just me.
SDL the library is over a quarter century old. It powers tons of existing software. Why wouldn't people keep working on it?
They already broke compat for 2.x, and existing games don't have shaders in 1.x or 2.x, right? So why make their own API?
Yes and no. SDL 2.x is not backwards compatible with SDL 1.x (and that was an annoyance of mine), but at some point someone wrote an SDL 1.x implementation on top of SDL 2.x that got official blessing, so games using SDL 1.x can at least be made to use SDL 2.x "under the hood", be it in source form or binary-only form.
You can't, though, take an SDL 1.x game and convert it piecemeal to SDL 2.x, as the APIs are not backwards compatible; it's an all-or-nothing change.
The API breaks in SDL2 were sorely needed, if you ask me. SDL1 painted itself into a corner in a few places, e.g. simultaneous use of multiple displays/windows.
I don't think they were needed, but I value not breaking existing programs and code more than some abstract and often highly subjective form of code purity.
The compatibility layer introduced a few years later did solve the "SDL1 apps running on top of SDL2 under the hood (though with some regressions)" issue. It somewhat solved the "compile existing SDL1 code against SDL2" issue, depending on your language and bindings - I had to compile the real SDL 1.2 library to make Free Pascal's bindings work, since they didn't work with sdl12-compat. But it did not solve the "update existing code to use the new features without rewriting everything" issue. There was even a user in the PR (or on Twitter) asking about future compatibility plans, because he had spent years updating his code from SDL 1.2 to SDL 2.0 and didn't want to repeat the process - FWIW, the answer was that a new major backwards-incompatible version probably won't come for at least 10 years.
SDL2 has a compat library for SDL1:
https://github.com/libsdl-org/sdl12-compat
Yes, that is what I wrote in the very first paragraph.
SDL is for gamedevs and supports consoles; wgpu is not and doesn't.
SDL is for everyone. I use it for a terminal emulator, because it's easier to write something cross-platform in SDL than to use the platform-native widget APIs.
Can the SDL terminal emulator handle up-arrow and /slash commands, and cool CLI things like Textual and IPython's prompt_toolkit (a readline/.inputrc alternative that supports multi-line editing, argument tab completion, and syntax highlighting) - in a game and/or on a PC?
I think you're confusing the roles of terminal emulator and shell. The emulator mainly hosts the window for a text-based application: print to the screen, send input, implement escape sequences, offer scrollback, handle OS copy-paste, etc. The features you mentioned would be implemented by the hosted application, such as a shell (which they've also implemented separately).
You are right - I was so focused on the gamedev argument that I made an incorrect statement.
SDL is great for embedded machines with limited displays.
The more the merrier if you ask me. Eventually one will win but we need more experimentation in this space. The existing GPU APIs are too hard to use and/or vendor-specific.
Writing bits of Vulkan or D3D12 really isn't that bad if you're working within an engine which does most of the setup for you, which is nearly always the case for practical work. If you're doing everything yourself from scratch, you're probably either a hobbyist tinkering or a well-compensated expert working for a AAA game developer.
If you're targeting SDL, then you probably don't have an engine, or you are the engine.
Nit: there are also really sophisticated graphics engines for serious applications. It's not only games.
Using something like Unreal or Unity if you just want to blast a couple of triangles to the screen in a cross-platform application is a bit overkill.
WebGPU would be a lot more useful if it hadn't gone with such a needlessly different shader language syntax; that makes it much harder to maintain a single shader source between the C++ side and WebGPU.
Can't you use naga to reuse OpenGL shaders, for example?
The bigger problem is the lack of bindless textures etc
One important point I haven't seen mentioned yet is that SDL is the de facto minimal compatibility layer on Linux for writing a windowed, "game-y" application if you don't want to wrestle directly with X11, Wayland, GTK, or KDE.
Yeah - getting an OpenGL context (and presumably the same goes for Vulkan) is surprisingly annoying if you don't have a library to help you, and it works quite differently on X11, Wayland, or directly on kernel APIs. Many games that don't otherwise use SDL2 (such as ones ported from other platforms, i.e. most games) use it just for that.
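For comparison, the SDL2 version of "get me a GL context" collapses to a few calls (a sketch; error handling omitted):

    /* SDL hides the X11/Wayland/EGL differences behind these calls. */
    SDL_Init(SDL_INIT_VIDEO);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 3);
    SDL_Window *win = SDL_CreateWindow("demo",
                                       SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED,
                                       800, 600, SDL_WINDOW_OPENGL);
    SDL_GLContext ctx = SDL_GL_CreateContext(win);
    /* ... issue GL calls, then once per frame: */
    SDL_GL_SwapWindow(win);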
While WebGPU isn't a bad API, it also isn't exactly the '3D API to end all 3D APIs'.
WebGPU made a couple of design decisions that were necessary to support Vulkan on mobile devices, and these make it a very rigid API. Even (desktop) Vulkan is moving away from that rigid programming model, while WebGPU won't be able to adapt as quickly because it still needs to support outdated mobile GPUs across all operating systems.
SDL provides various "backend-agnostic" APIs for a variety of needs - window creation, input (with a gamepad abstraction), audio, system stuff (e.g. threads), etc. - so that programs written against SDL work on a variety of systems. If linked against SDL dynamically (or statically with the dynamic-override mechanism, which lets a statically linked build use a newer dynamic version of the library), they can also pick up newer/better implementations. That is sometimes necessary: some older games using old SDL 1.x versions need the DLL/.so replaced with a newer one to work on new OSes, especially on Linux.
Exposing a modern GPU API (modern in the sense of how self-proclaimed modern APIs like Vulkan, D3D12, and Metal work) that lets applications written against it work with various backends (D3D11, D3D12, Vulkan, Metal, whatever the Switch and PS5 use, etc.) fits perfectly with what SDL already does for every other aspect of making a game/game engine/framework/etc.
As for whether it was "needed": it was needed as much as any of SDL's other subsystems. Strictly speaking, not really, since you could use some other library (but that could be said of SDL itself). From the perspective of what SDL wants to provide (an API to target so you won't have to target each underlying API separately), though, it was needed for the sake of completeness. Previously OpenGL filled that role if you wanted 3D graphics, but that was when OpenGL was practically universally available on the platforms SDL officially supported - which is no longer the case.
Unreal/Unity are not the only solutions. There are also bgfx (https://github.com/bkaradzic/bgfx), which is quite popular, and sokol gfx (https://github.com/floooh/sokol), which I also know of. Of course there are many more lesser-known ones.
There is NVIDIA's NVRHI:
https://github.com/NVIDIAGameWorks/nvrhi
Qt RHI too. Shaders are normal Vulkan-compatible GLSL, and then you get D3D11, D3D12, GL, and Metal.
Compared to libraries like bgfx and sokol at least, I think there are two key differences.
1) SDL_gpu is a pure C library, heavily focused on extreme portability, with no dependencies. And somehow it's also an order of magnitude less code than the other options - or at least that's a difference from bgfx; maybe not so much from sokol_gfx.
2) The SDL_gpu approach is a bit lower level. It exposes primitives like command buffers directly to your application (so you can more easily reason about multi-threading), and your software allocates transfer buffers, fills them with data, and kicks off a transfer to GPU memory explicitly rather than this happening behind the scenes. It also spawns no threads - it only takes action in response to function calls. It does take care of hard things such as getting barriers right, and provides the GPU memory allocator, so it is still substantially easier to use than something like Vulkan. But in SDL_gpu it is extremely obvious to see the data movements between CPU and GPU (and memory copies within the CPU), and to observe the asynchronous nature of the GPU work. I suspect the end result of this will be that people write far more efficient renderers on top of SDL_gpu than they would have on other APIs.
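A sketch of that explicit upload flow against the SDL3 GPU API as I read the headers (`device`, `vertex_buf`, and `vertices` are placeholders; error handling omitted):

    /* 1. Create a transfer buffer and fill it from the CPU. */
    SDL_GPUTransferBufferCreateInfo tb_info = {
        .usage = SDL_GPU_TRANSFERBUFFERUSAGE_UPLOAD,
        .size  = sizeof(vertices),
    };
    SDL_GPUTransferBuffer *tb = SDL_CreateGPUTransferBuffer(device, &tb_info);
    void *mapped = SDL_MapGPUTransferBuffer(device, tb, false);
    SDL_memcpy(mapped, vertices, sizeof(vertices));
    SDL_UnmapGPUTransferBuffer(device, tb);

    /* 2. Record and submit an explicit copy into GPU memory. */
    SDL_GPUCommandBuffer *cmd = SDL_AcquireGPUCommandBuffer(device);
    SDL_GPUCopyPass *copy = SDL_BeginGPUCopyPass(cmd);
    SDL_UploadToGPUBuffer(copy,
        &(SDL_GPUTransferBufferLocation){ .transfer_buffer = tb, .offset = 0 },
        &(SDL_GPUBufferRegion){ .buffer = vertex_buf, .offset = 0,
                                .size = sizeof(vertices) },
        false /* cycle */);
    SDL_EndGPUCopyPass(copy);
    SDL_SubmitGPUCommandBuffer(cmd);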
Worth noting Godot also has cross-platform shaders. Its GDShader language is based heavily on the OpenGL shading language, though it's not a 1:1 copy, and it gets compiled for the target platform. For PS5 and Xbox, though, you have to work with a third party (someone released the Nintendo build for anyone who's signed the Nintendo NDA).
SDL is also offering cross-platform GPU compute - is that also available in Godot?
I haven't used compute shaders myself, though I remembered Godot having them and double-checked. Interestingly, they are direct GLSL, which makes me wonder if they only work in OGL contexts. Which would be... weird, because Godot 4.3 shipped easy DirectX output support. I'm sort of tempted to test making a compute shader and compiling to DX to see if it works.
Edit: After more digging, according to the end of this forum thread they get compiled to SPIR-V and then to whatever the backend needs, be it GLSL, HLSL, etc.
https://forum.godotengine.org/t/compute-shaders-in-godot/461...
I previously integrated bgfx [1] - which lets you write graphics code and shaders once and supports consoles - with an SDL2 stack and Swift [2]. It was quite a nice experience, especially for someone who had never worked with any of these tools before. I'm excited about SDL3 introducing console abstractions, eliminating the need for additional dependencies for the GPU API. Moreover, Godot officially supports the Steam Deck, and hopefully more consoles will be supported in the future. On a related note, Miguel de Icaza is advocating for Swift adoption in Godot, and he is working on porting the editor to SwiftUI on iPad. It is interesting to see the progress [3].
[1] https://bkaradzic.github.io/bgfx/overview.html
[2] https://github.com/bgbernovici/myndsmith
[3] https://blog.la-terminal.net/xogot-code-editing/