I think Vulkan is great, but its only real purpose is taking full advantage of advanced GPU features, where it also delivers better performance than OpenGL.
Generally, I feel OpenGL is the recommended route if you don't really aim for advanced rendering techniques.
There are plenty of 2D/low-poly/PS1-graphics games right now, and those don't need Vulkan.
Vulkan is an example of how the AAA gaming industry is skewed towards rendering quality and appearance. AAA game studios justify their budgets with those very advanced engines and content, but there is a growing market of 2D/low-poly games, because players are tired of the graphics race and have realized they want gameplay, not graphics.
Also, if you are a game developer, you don't want to focus on rendering quality; you want to focus on gameplay and features.
A middle ground is WebGPU. It is much less verbose than Vulkan and is guaranteed to run anywhere, including the browser. At the same time, it has access to "modern" features like compute shaders, which are not available in WebGL. It also doesn't have much legacy cruft leading to multiple ways of doing the same thing, unlike OpenGL. The main downside is that it's new, so there are far fewer tutorials available for it, and that is a very serious disadvantage.
It's a JavaScript API, it only runs in the browser. It's not real.
That's just not true. https://wgpu.rs/
I'm literally writing native code running on linux with winit and wgpu right now.
Wait, is that the same as using WebGPU? It says outside of browsers it uses opengl and vulkan and directx.
wgpu is not WebGPU. One is a Web specification, the other is a Rust library.
wgpu is also not a graphics API, it's a library that abstracts the underlying graphics API.
Even more confused now, lol.
(edit: thanks for editing your post. the revised version clears it up!)
Vulkan, Direct3D, Metal and OpenGL are graphics APIs - the implementation comes with your GPU driver, and they're as close as you can reasonably get to writing code "directly for the GPU". When you call a Vulkan function you're directly calling driver code.
wgpu is a regular library that uses the native APIs above and abstracts them from you. I don't like calling it a graphics API because it implies it's the same as the vendor-provided APIs - it's a completely different beast.
WebGPU and WebGL are Web standards that the browser implements, and you program them via JS. Similarly to wgpu, they're implemented on top of the native graphics APIs.
The relationship between wgpu and WebGPU is that they're basically made by the same people, and in Firefox WebGPU is implemented on top of wgpu.
But saying "WebGPU runs everywhere" is plain wrong - it's a browser-exclusive API, and on top of that, at the point of writing this it doesn't even run on all browsers (71% support according to https://caniuse.com/webgpu)
I think that’s understating the relationship a bit. The wgpu API follows the WebGPU spec quite closely. It’s like saying “taffy isn’t flexbox”. You can make a technical argument that it’s not identical, and that to be flexbox it must be in a browser, but anyone working with taffy will recognize that it’s implementing flexbox and will find that all their understanding of flexbox applies to taffy.
Similarly, most of what I’m learning working with wgpu (outside of, maybe, threading) is directly transferable to WebGPU. And vice versa.
That's fair, my comment did understate their relation quite a lot. They are very closely linked, to the point that stuff has been added to the WebGPU spec pretty much because "eh, it's already in wgpu".
But the "knowledge is transferable to WebGPU" is key. Even when you're compiling your wgpu code to WASM, you're not using WebGPU directly - for every API call, you're going through generated bindings on the Rust side and then generated bindings on the JS side where the actual call is made. The JS wrappers aren't trivial either - Rust structs need to be converted to the nested JS objects required by WebGPU spec, numeric enums need to be converted strings, etc.
Considering that WebGPU is supposed to be the low-level graphics API for the Web, DX12/VK/Metal equivalent, all this is at least worth a mention, and "wgpu == WebGPU" brushes over a all that, to the point where we're now casually throwing statements like "[WebGPU] is guaranteed to run anywhere, including the browser".
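To make the bindings point concrete, here is a rough sketch of the same buffer creation on both sides. This is a hedged illustration: field names follow wgpu around v0.20 and the WebGPU spec, and exact signatures drift between releases.

```rust
// wgpu (Rust): a typed struct with Rust enums and bitflag usages.
let buffer = device.create_buffer(&wgpu::BufferDescriptor {
    label: Some("particles"),
    size: 1024,
    usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_DST,
    mapped_at_creation: false,
});

// Browser WebGPU (JS): the generated bindings must build this nested object.
// Enums become strings too, e.g. wgpu::TextureFormat::Rgba8Unorm -> "rgba8unorm".
//   const buffer = device.createBuffer({
//     label: "particles",
//     size: 1024,
//     usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
//   });
```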
But clearly libraries like wgpu implement WebGPU while not requiring a browser. So WebGPU is not browser-exclusive.
Yea, wgpu is basically an adapting layer between the WebGPU API and OpenGL, Vulkan, and DirectX. So it's the same† API.
But WebGPU in the browser is also an adapting layer between those technologies, it's just the browser that does the adapting instead of the wgpu library. For Firefox, WebGPU is adapted to those underlying systems by wgpu: https://github.com/gpuweb/gpuweb/wiki/Implementation-Status
† There are some "native-only" extensions beyond the WebGPU spec that wgpu provides, so it does go a little beyond WebGPU. But for the most part, it's very similar.
Wait... so if webgpu (the API) is to allow browsers to use the underlying graphics libs... but desktop apps can access those directly... why would they want to go through the browser abstraction API instead? Better dev ex and ergonomics?
That and portability. The ergonomics are always up for debate, but I find it a much nicer, more modern interface than OpenGL, which feels... quite dated. How it compares to something like Vulkan or Metal is up for debate.
But for portability: if I write my code using DirectX, then I can only run it on systems with DirectX. If I write it for Vulkan, I can only target systems with Vulkan. If I write for Metal, I can only target systems with Metal.
However, if I use wgpu and the WebGPU API, I can target any system that has DirectX or Vulkan or Metal or OpenGL. I can also target wasm and compile my application for the web.
So I can really easily write code that will run natively on Linux, on macOS, on Windows, and on the web, and it will use the graphics library native to that platform.
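For illustration, a minimal init sketch, assuming wgpu ~0.20 (the request_adapter/request_device signatures change between releases). The identical source compiles natively, where it picks Vulkan, Metal, DX12 or GL, and to wasm, where it calls the browser's WebGPU:

```rust
// Minimal wgpu setup sketch (wgpu ~0.20 signatures).
async fn init_gpu() -> Option<(wgpu::Device, wgpu::Queue)> {
    // The instance picks an available backend for the current platform.
    let instance = wgpu::Instance::default();
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await?;
    let (device, queue) = adapter
        .request_device(&wgpu::DeviceDescriptor::default(), None)
        .await
        .ok()?;
    Some((device, queue))
}
```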
I see now. Thanks for explaining! That makes a lot of sense.
Not to downplay it or anything, but I'm curious: why is WebGPU receiving so much attention? There have been many low-level cross-platform graphics abstractions over the years. The bgfx [1] project had its first commit ~12 years ago and it's still going! It's much more mature than WebGPU. I'm guessing being W3C-backed is what's propelling it?
[1] https://github.com/bkaradzic/bgfx
WebGPU is a modern graphics API for modern GPUs, meant to replace/complement these two APIs in the browser: WebGL (a very limited, outdated subset of OpenGL) and Canvas (basically WinAPI drawing from the 2000s).
It's heavily influenced by Metal (based on original work by Apple and Mozilla).
People are talking about it because it's on the web and it's finally something modern to work with.
TBF, WebGPU is at least a decade behind too. For instance, it implements a Vulkan-style rigid render pipeline model, which even Vulkan is moving away from because it turned out to be too rigid. So once WebGPU becomes mainstream, it may very well be the most "awkward" 3D API.
Don't worry, browsers will then come up with a fourth graphics API to solve all the issues of the previous three :)
One remarkable feature of WebGPU is that it has no undefined behaviour; traditional 3D APIs are riddled with subtle UB. It also receives much better testing across all sorts of hardware and driver configs, just by being integrated with web browsers.
My perspective is it’s a couple things. Mainly because it’s backed by several implementations with market pressure to deliver high quality working implementations (wgpu in Firefox, Dawn in Chrome). Not saying bgfx or others are bad, but the Chrome team will be putting in a lot of work for you to make sure Dawn works well everywhere Chrome needs to be. That plus it’s super portable. First time in a long time that we could truly be looking at a “write once run anywhere” GPU API. It also fills OpenGL’s niche by being much more approachable than Vulkan. I guess this really just adds up to it being the “OpenGL 2” a lot of people have been wanting in light of Vulkan’s complexity and weaker portability.
I thought its browser support was pretty bad? https://caniuse.com/webgpu
Firefox and Safari don't support it yet. And how would you even deploy it to Steam (edit: or rather, make a desktop or mobile game with it)? Can you wrap it in a webview?
Doesn't seem mature...
There is a native API too, not just JS. You can link it into your game like any other library.
I'm confused now. I thought by definition it's a browser API that allows them to make Vulkan/OpenGL/DirectX calls, like an abstraction layer. Am I wrong?
Edit: wgpu is a reimplementation of the WebGPU api for use with desktop apps outside of browsers. See sibling thread by another poster for an explanation: https://news.ycombinator.com/item?id=40598416
wgpu isn’t a reimplementation of WebGPU, it’s Firefox’s WebGPU implementation. It also conveniently runs outside of a browser and is accessible with a Rust or C API. You can also do the same thing with Dawn, which is Chrome’s implementation of WebGPU.
Thanks for the correction!
Compute shaders aren't available in WebGL thanks to Google; OpenGL ES does them just fine.
WebGL isn't WebGPU. WebGPU has compute shaders: https://docs.rs/wgpu/latest/wgpu/struct.ComputePass.html https://www.w3.org/TR/WGSL/#compute-attr
(Edit: ah, I missed the bit in GP that you replied to.)
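For the curious, here is a hedged sketch of what a WebGPU compute dispatch looks like through wgpu (~0.20 signatures; `device`, `queue`, `pipeline`, `bind_group`, and the element count `n` are assumed to be set up beforehand, e.g. via an init like the one sketched earlier in the thread):

```rust
// The WGSL below doubles every element of a storage buffer.
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
    label: Some("doubler"),
    source: wgpu::ShaderSource::Wgsl(
        r#"
        @group(0) @binding(0) var<storage, read_write> data: array<f32>;
        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id: vec3<u32>) {
            data[id.x] = data[id.x] * 2.0;
        }
        "#
        .into(),
    ),
});

// `pipeline` and `bind_group` (built from `shader` and a storage buffer of
// `n: u32` floats) are assumed to be created earlier.
let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
{
    let mut pass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor::default());
    pass.set_pipeline(&pipeline);
    pass.set_bind_group(0, &bind_group, &[]);
    pass.dispatch_workgroups(n.div_ceil(64), 1, 1); // one thread per element
}
queue.submit([encoder.finish()]);
```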
That doesn't invalidate what I said; rather, we could have gotten WebGL 2.0 Compute much sooner than WebGPU if it weren't for bloody politics among browser vendors.
https://github.com/9ballsyndrome/WebGL_Compute_shader
https://github.com/9ballsyndrome/WebGL_Compute_shader/issues...
https://issues.chromium.org/issues/40150444
"Intel spearheaded the webgl2-compute context to provide a way to run GPU compute workloads on the web. At the same time, the WebGPU effort at the W3C aimed to design a new, lower-level graphics API, including GPU compute. The webgl2-compute approach encountered some technical barriers, including that macOS' OpenGL implementation never supported compute shaders, meaning that it wasn't easily portable. webgl2-compute has so far been used by customers for prototyping.
At present, WebGPU is close to shipment, and its shader pipeline is nearing completion. It's possible to run combined GPU rendering and compute workloads in WebGPU.
In order to reclaim code space in Chromium's installer that is needed by WebGPU, the webgl2-compute context must be removed."
Because webgl2-compute took soooo much space on the installer!
And it isn't as if Google isn't doing WebGPU compute on top of DirectX on Windows, but doing it on top of Metal on Apple, unthinkable!
By the way, where are the Safari and Firefox implementations of WebGPU compute, running on stable without browser flags?
Another good option is Direct3D 11. It's IMO even easier to use than OpenGL but still allows you to implement pretty good visuals; see GTA 5 or Baldur's Gate 3.
Eh that's really only an option if you want to only ever target Microsoft platforms. Which is fine for some people, sure, but in a lot of situations, closing the door on Linux (Steam Deck), macOS, iOS, Android and the non-xbox consoles is a tough sell.
Valve's efforts on playing Microsoft-only games started ~10 years ago and led to Proton (which is more like an umbrella project for ~15 software components like Wine and DXVK). Today, Proton is surprisingly good. Their bet paid off and allowed them to make the Steam Deck. Most if not all new games play fine out of the box, with zero tinkering. Older games sometimes have issues.
True, and it’s not just games: https://github.com/Const-me/Whisper/issues/42
Contrary to urban myths, non-Xbox consoles aren't that fond of Khronos standards either.
Since DXVK exists and Apple dropped OpenGL, OpenGL only gains you access to Android, and that's about it, unless you're OK with ES 3.0 contexts and a lackluster set of extensions (ANGLE on Metal).
AFAIK the non-Xbox consoles either don't support OpenGL, or you reportedly don't want to use their implementations.
Linux you can get for “free” with DXVK or VKD3D-Proton. macOS is an even more niche platform than Linux as far as gaming is concerned, and it is not a tough sell to skip considering its total lack of market share in games. iOS and Android generally come as a pair, so unless you're only targeting one, you need to support multiple APIs anyway. Xbox requires Xbox-specific APIs, so you need explicit work there. Every other console uses a bespoke API that will require another backend in your engine (yes, the Switch technically supports Vulkan, but that is not the recommended path from Nintendo; you will likely end up with an NVN backend).
D3D11 gives you both desktop platforms that matter for games, everything else will require multiple backends anyway so if you only care about desktop it’s a fair choice.
I think Vulkan was a mistake. Maybe I should say OpenGL Next should've been a (priority) thing and Vulkan could still have happened on the side, not this push towards everything Vulkan. It's not for everybody and not everybody needs it (nor deserves it).
I don't think it's a mistake, I think it's targeted and leveraged poorly.
Vulkan is a great idea for a general game engine or for backing something like OpenGL. It's a low-level abstraction that should allow you to do something like use OpenGL 8.0 on a GPU that has long since lost support from its manufacturer.
Exactly this. Vulkan, especially Vulkan 1.0, was never really meant directly for graphics developers. It was meant specifically for engine developers that didn't want to have to deal with buggy and inconsistent drivers, e.g. Nvidia vs AMD vs crappy Android phones.
The solution Vulkan came up with was "make everything as explicit and user-controlled as possible", and leave "implementing a more user friendly driver" up to user-space, so that it could be tweaked by users, wouldn't be tied to driver updates users would never do, would be consistent across platforms, etc.
Except that never really happened. There are some graphics-driver-as-a-library projects (daxa, nabla, etc), but none with a large amount of community support.
Meanwhile GPUs evolved over time (AMD RDNA1, Nvidia Turing, Intel Arc, Apple Silicon), and it turns out that some of the choices Vulkan 1.0 bet on weren't great (extensive explicit pipeline state, the binding model, render passes).
Vulkan 1.2/1.3 cleaned up a lot of this (dynamic state, descriptor indexing, sync v2), which also happens to improve ergonomics a lot to the point that it's much less painful to use Vulkan directly nowadays, as long as you're ok dropping support for older platforms.
What happened was wgpu, which runs on top of Vulkan and can also run on the web as WebGPU.
Many Rust programs are written against wgpu rather than Vulkan.
That's more on the community then, no? Same issue as OpenGL: most of the market is Windows or consoles, Windows/consoles have their own graphics API and professional support, and the devs choose convenience over control and cross-compatibility.
Seems like a self-fulfilling prophecy if devs relinquish their control to the corporations.
I can somewhat agree… I don’t dislike/fear Vulkan now, but I feel like there should have been something in-between OpenGL and Vulkan, because in most cases you don’t need the level of control over the GPU that AAA games require. Thankfully, libraries like VMA and vk-bootstrap plus new extensions like dynamic rendering make Vulkan start to resemble what “OpenGL Next” could have been. It’s still not perfect, but I’m glad that things have moved in that direction.
I think the driver here is more likely the financial reality of game development. High-fidelity graphics are incredibly expensive, and small game studios simply cannot produce them on a realistic timeline and budget. Would consumers reject indie games with AAA-quality graphics? I think not. It's just that few such games exist because it's not financially viable, and there is a large enough market that is fine with more stylized, lower-fidelity graphics.
Yes, I agree.
With current hardware and tools, it has become much cheaper to reach graphics quality from 10 or 15 years ago, so such a game can be good enough and still be profitable.
I think that high-quality rendering is reaching a tipping point where it's mostly diminishing returns. This means AAA studios could differentiate with good graphics, but it becomes less and less true.
Gameplay matters, and the innovation is not on the side of AAA studios.
Gameplay matters, but graphics sell. Look no further than the modern state of hardcore reviewers and hyper-nitpickers on social media. See how every Switch game is criticized for not being 1080p60 on 2017 mobile hardware. People demand that kind of quality.
That's a loud minority, though; the biggest games are, by and large, selling more and more every year. So non-vocal gamers do seem to be drawn to it.
There’s a very big difference between ultra realistic PBR graphics and running games at 60 FPS at 1080p.
You think the people trashing the switch know any difference? That's my point.
Vulkan has advantages over OpenGL even if you don't care about visual fidelity.
- No global state
- You can select which GPU you want to use at runtime
- OpenGL error handling is terrible
- Validation layers!! (see the sketch after this list)
- Cool official artwork
- Fantastic documentation
- You can upload data to the GPU asynchronously from a second thread in a sane way
- Fancy GPU features - Mesh shaders, RTX
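As a rough illustration of the validation-layers point, here is how you might enable `VK_LAYER_KHRONOS_validation` at instance creation using the `ash` crate (the crate choice and the ~0.38 setter style are assumptions, not something from this thread; older ash releases use `vk::InstanceCreateInfo::builder()` instead):

```rust
use ash::vk;
use std::ffi::CString;

fn make_instance() -> Result<ash::Instance, Box<dyn std::error::Error>> {
    let entry = unsafe { ash::Entry::load()? };
    let layer = CString::new("VK_LAYER_KHRONOS_validation")?;
    let layer_ptrs = [layer.as_ptr()];
    let app_info = vk::ApplicationInfo::default().api_version(vk::API_VERSION_1_3);
    // With the layer enabled, API misuse is reported (to stdout by default,
    // or to a callback if you also set up a debug messenger).
    let create_info = vk::InstanceCreateInfo::default()
        .application_info(&app_info)
        .enabled_layer_names(&layer_ptrs);
    Ok(unsafe { entry.create_instance(&create_info, None)? })
}
```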
I will say that validation layers alone are worth it. Damn, they are nice; so many bugs in my code have been found by them, which saved a lot of time.
I switched from OpenGL because I was tired of staring at a black screen with no debugging information (sure... glGetError) when I tried to implement a non-trivial feature. After tons of work to learn Vulkan and get it up and running, I still think it was worth it.
Agreed. I don’t even need all these fancy Vulkan features most of the time… Just validation layers and RenderDoc debugging make the whole experience of doing graphics programming so much more pleasant for me, compared to what I had to deal with in OpenGL.
OpenGL is deprecated on macOS, so there's an argument that it's on its way out and may not be supported in the future.
OpenGL is being sunset, no one cares about Metal, and the only people who voluntarily write Vulkan are bearded code wizards taking the weekend off from writing the kernel.
There's no end of frameworks and engines out there, most unfinished, and all extremely opinionated about what your code must look like. ("You will build scene trees that hold callbacks. Actually no, you will write entity and component classes and objects. Wait, forget that, everything is immediate mode and functional and stateless now!")
Add all the platform nonsense on top (push notifications for a mobile game? In-app purchases? Mandatory signing via Xcode?) and it's all just a hot mess. There's a reason Unity has the market share it does, and it's not because it's a fantastic piece of software. It's because cross-platform for anything more complex than a glorified web view (which is, to be fair, most non-game apps) is still a massive pain.
I don't think this is true. You can pretty much implement everything in OpenGL that you can do in Vulkan. The magic's in the shaders usually and I think there's pretty much feature parity there.
Performance isn't an issue either: with the advent of GPU-resident drawing, meaning a compute shader filling buffers with draw parameters and drawing multiple objects with complex materials in a single draw call, there isn't much of a CPU bottleneck. Look up OpenGL AZDO if you are interested in this.
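A sketch of that AZDO idea, using the raw `gl` crate bindings (an assumed binding; `indirect_buffer` and `num_draws` are placeholders for state set up elsewhere). One CPU call issues many draws whose parameters live in a GPU buffer, possibly written by a compute shader:

```rust
// Standard layout of one indirect draw command (OpenGL 4.3+).
#[repr(C)]
struct DrawElementsIndirectCommand {
    count: u32,          // indices per draw
    instance_count: u32, // instances per draw
    first_index: u32,    // offset into the index buffer
    base_vertex: i32,    // added to every index
    base_instance: u32,  // handy for indexing per-object material data
}

unsafe {
    // `indirect_buffer` holds `num_draws` packed commands, assumed filled earlier.
    gl::BindBuffer(gl::DRAW_INDIRECT_BUFFER, indirect_buffer);
    gl::MultiDrawElementsIndirect(
        gl::TRIANGLES,
        gl::UNSIGNED_INT,
        std::ptr::null(), // read commands from the bound DRAW_INDIRECT_BUFFER
        num_draws,
        0, // stride 0 = tightly packed commands
    );
}
```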
It's not just about performance. I think one of the reasons for moving away from OpenGL is that the high-level functions are inconsistent across driver implementations/GPUs. What's the point of an abstraction layer if you keep having to write GPU/driver-specific code? It's better to have a low-level driver and a high-level library instead.
Unfortunately, Apple is holding every OpenGL app and dev hostage. It's officially deprecated by Apple. So OpenGL is not a viable cross-platform solution moving forward. Eventually, it will be completely removed from macOS, at which point it will just be an OpenGL to Metal converter that everyone uses, just like Vulkan with MoltenVK. So I feel it's better to use something like WebGPU moving forward that directly takes the stance that each OS will implement its own (perhaps multiple) backends. Unfortunately, it doesn't have the tutorials and learning material that OpenGL does. (This is from an inexperienced 3D person that's been looking into this a lot.)
The general idea is that if you care more about gameplay and features, use an existing engine; the engine itself takes advantage of Vulkan, or whatever backend is most appropriate, so you don't have to.
OpenGL is kind of a middle ground: for when you want to get technical but not micromanage every detail and write pages of boilerplate. It is also a great introduction to graphics programming. More than an introduction, actually; there is a lot you can do with OpenGL.