Kind of crazy to think that the only reason OpenGL was ever a thing for 3D gaming was because of John Carmack's obsession with using it for Quake II back in the 90s.
One day, Apple will deprecate OpenGL 3.3 core, and I guess everybody might end up deprecating it.
I've read that OpenGL is generally just easier to use than Vulkan. I don't know if that's true, but if an API is too complicated, it becomes too hard for less experienced devs to exploit the GPU, and that becomes a barrier to entry which might discourage some indie game developers.
Everyone uses Unity and Unreal now; building things from scratch or using other engines is just seen as weird, for some reason. It's really annoying, and it's been fun to see gamedev wake up after Unity tried to lock things down more.
Open source in gaming has always been stretched thin. Godot is there, but I doubt it can seriously compete with Unity and Unreal even though I want it to; even where Godot is capable, indie gamedevs are more experienced with Unity and Unreal and will stick with those.
The state of open source in game dev feels really hopeless sometimes, and the rise of next-gen graphics APIs isn't making things any easier.
I've read that OpenGL is generally just easier to use than Vulkan
[here's](https://learnopengl.com/code_viewer_gh.php?code=src/1.gettin...) an opengl triangle rendering example code (~200 LOC)
[here's](https://vulkan-tutorial.com/code/17_swap_chain_recreation.cp...) a vulkan triangle rendering example code (~1000 LOC)
ye it's fair to say OpenGL is a bit easier to use ijbol
This is a bit misleading. Much of the extra code that you'd have to write in Vulkan to get to first-triangle is just that, a one-time cost. And you can use a third-party library, framework or engine to take care of it. Vulkan merely splits out the hardware-native low level from the library support layer, that were conflated in OpenGL, and lets the latter evolve freely via a third party ecosystem. That's just a sensible choice.
And often those LOC examples use GLFW or some other library to load OpenGL. Loading a Vulkan instance is a walk in the park compared to initializing an OpenGL context, especially on Windows. It's incredibly misleading. If you allowed utility libraries for Vulkan when comparing LOC-to-triangle, Vulkan would be much closer to OpenGL.
It depends on the operating system. On macOS and iOS it was always just a few lines of code to set up a GL context. On Windows via WGL and Linux via GLX it's a nightmare though. Linux with EGL is also okay-ish.
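To give a feel for the "okay-ish" EGL path, here's a rough sketch (not from the commenter above; untested, with all error checking omitted) of bringing up a headless desktop-GL context on Linux:

```c
#include <EGL/egl.h>

// Rough sketch of a headless desktop-GL context via EGL on Linux.
// All error checking omitted; real code must verify every call.
EGLContext create_gl_context(EGLDisplay *out_dpy, EGLSurface *out_surf)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    const EGLint cfg_attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint num_cfg;
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &num_cfg);

    eglBindAPI(EGL_OPENGL_API);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);

    // A tiny off-screen surface is enough to make the context current.
    const EGLint pbuf_attribs[] = { EGL_WIDTH, 64, EGL_HEIGHT, 64, EGL_NONE };
    EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbuf_attribs);
    eglMakeCurrent(dpy, surf, surf, ctx);

    *out_dpy = dpy;
    *out_surf = surf;
    return ctx;
}
```

Compare that with raw WGL on Windows, where you typically have to create a dummy window and a dummy legacy context just to fetch wglCreateContextAttribsARB before you can create the context you actually wanted.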
Drawing conclusions from a hello world example is not representative of which API is "easier". You are also using lines of code as a measure of "ease" when it's really a measure of "verbosity".
Further, the OpenGL example is not following modern graphics best practices and relies on defaults from OpenGL which cuts down the lines of code but is not practical in real applications.
Getting Vulkan initialized is a bit of a chore, but once it's set up, it's not much more difficult than OpenGL. GPU programming is hard no matter which way you put it.
I'm not claiming Vulkan initialization is not verbose, it certainly is, but there are libraries to help you with that (e.g. vk-bootstrap, VMA, etc.). The init routine requires you to explicitly state which HW and SW features you need, reducing the "it works on my computer" problem that plagues OpenGL.
If you use a recent Vulkan version (1.2+), namely the dynamic rendering and dynamic state features, it's actually very close to OpenGL because you don't need to configure render passes, framebuffers etc. This greatly reduces the amount of code needed to draw stuff. All of this is available on all desktop platforms, even on quite old hardware (~10 year old gpus) if your drivers are up to date. The only major difference is the need for explicit pipeline barriers.
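As a rough sketch of what that looks like (illustrative only, not production code; the command buffer, swapchain image view and pipeline are assumed to exist, and the image layout transitions via pipeline barriers are left out):

```c
#include <vulkan/vulkan.h>

// Record a hello-triangle draw with Vulkan 1.3 dynamic rendering:
// no VkRenderPass / VkFramebuffer objects, the attachments are described
// right at record time. Assumes dynamicRendering was enabled on the device.
void record_triangle(VkCommandBuffer cmd, VkImageView swapchain_view,
                     VkPipeline pipeline, uint32_t width, uint32_t height)
{
    VkRenderingAttachmentInfo color = {
        .sType = VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO,
        .imageView = swapchain_view,
        .imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL,
        .loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR,
        .storeOp = VK_ATTACHMENT_STORE_OP_STORE,
        .clearValue = { .color = { .float32 = { 0.0f, 0.0f, 0.0f, 1.0f } } },
    };
    VkRenderingInfo rendering = {
        .sType = VK_STRUCTURE_TYPE_RENDERING_INFO,
        .renderArea = { .extent = { width, height } },
        .layerCount = 1,
        .colorAttachmentCount = 1,
        .pColorAttachments = &color,
    };

    vkCmdBeginRendering(cmd, &rendering);
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdDraw(cmd, 3, 1, 0, 0);   // three vertices, one instance
    vkCmdEndRendering(cmd);
}
```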
Just to give you a point of reference, drawing a triangle with Vulkan, with the reusable framework excluded, is 122 lines of Rust code including the GLSL shader sources.
Another data point from my past projects: a practical setup for OpenGL is about 1500 lines of code, while Vulkan is perhaps 3000-4000 LOC, of which ~1000 LOC is trivial setup code for enabled features (verbose, but not hard).
As a graphics programmer, going from OpenGL to Vulkan has been a massive quality of life improvement.
I also am a graphics programmer and lead Vulkan developer at our company. I love Vulkan. I wouldn’t touch OpenGL with a 10 foot pole. But I also have years of domain expertise and OpenGL is hands down the better beginner choice.
The Vulkan hello triangle is terrible, it’s not at all production level code. Yeah, neither is the OpenGL one, but that’s much closer. Getting Vulkan right requires quite a bit of understanding of the underlying hardware. There’s very little to no hand holding, even with the validation layers in place it’s easy to screw up barriers, resource transitions and memory management.
Vulkan is fantastic for people with experience and a good grasp of the underlying concepts, like you and me. It’s awful for beginners who are new to graphics programming.
I've used OpenGL for over 20 years and Vulkan since it came out. Neither of them is easy, but OpenGL's complexity and awkward stateful programming model is quite horrific.
I've also watched and helped beginners struggling with both APIs on many internet forums over the years, and while getting that first triangle is easier in OpenGL, the curve gets a lot steeper right after that. Things like managing vertex array objects (VAO) and framebuffer objects (FBO) can be really confusing and they are kind of retrofitted to the API in the first place.
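For anyone who hasn't hit this yet, the bind-to-modify pattern in question looks roughly like this (a generic sketch; the vertex data is assumed to come from elsewhere, and glad stands in for whatever GL loader you prefer):

```c
#include <stddef.h>
#include <glad/glad.h>   // or any other GL loader

// Typical OpenGL VAO/VBO setup. Every call operates on whatever object is
// currently bound, which is the stateful model being criticized above.
GLuint make_triangle_vao(const float *vertices, size_t vertex_count)
{
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);

    glBindVertexArray(vao);              // attrib setup below is recorded into this VAO
    glBindBuffer(GL_ARRAY_BUFFER, vbo);  // buffer calls below target this VBO
    glBufferData(GL_ARRAY_BUFFER, vertex_count * 3 * sizeof(float),
                 vertices, GL_STATIC_DRAW);

    // Attribute 0: three floats per vertex, tightly packed, offset 0.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void *)0);
    glEnableVertexAttribArray(0);

    glBindVertexArray(0);                // unbind so later code can't clobber it by accident
    return vao;
}
```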
I actually think that beginners shouldn't be using either of them, and should instead learn the basics of 3D graphics in a friendlier environment like Godot or Unity or something.
Vulkan 1.3 makes graphics programming fun again. Now you don't need to build render passes and pipeline states up front, it's really easy to just set the pipeline states ad-hoc and fire off your draw calls.
But yeah, judging by the downvotes my GP comment is receiving, seems like a lot of readers disagree. I'm not sure how many of them have actually used both APIs beyond beginner level, but I don't know anyone who has used both professionally and wants to go back to OpenGL with its awkward API and GLSL compiler bugs and whatnot.
You’re getting downvoted for some reason, but OpenGL is absolutely easier. It abstracts so much (and for beginners there’s still a ton to learn even with all the abstraction!). With OpenGL, unlike Vulkan, there’s no need to think about how to prep pipelines, optimally upload your data, manually synchronize your rendering, and more. The low-level nature of Vulkan allows you to eke out every bit of performance, but for indie game developers and the majority of graphics development that doesn’t depend on realtime PBR with giant amounts of data, OpenGL is still immensely useful.
If anything, an OpenGL-like API will naturally be developed on top of Vulkan for the users that don’t care about all that stuff. And once again, I can’t stress this enough, OpenGL is still a lot for beginners. Shaders, geometric transformations, the fixed function pipeline, vertex layouts, shader buffer objects, textures, mip maps, instancing, buffers in general, there’s sooo much to learn and these foundations transcend OpenGL and apply to all graphics rendering. As a beginner, OpenGL allowing me to focus on the higher level details was immensely beneficial for me getting started on my graphics programming journey.
It won't be OpenGL-like, it will probably just be OpenGL https://docs.mesa3d.org/drivers/zink.html
OpenGL is not deprecated; it is simpler and continues to be used where Vulkan is overkill. Using it for greenfield projects is a good choice if it covers all your needs (and if you don't mind the stateful render pipeline).
It is officially deprecated on all Apple platforms, and has been for five years now.
Whether it will actually stop working anytime soon is a different question; but it is not a supported API.
For context: https://developer.apple.com/documentation/opengles
It is marked as being deprecated as of iOS 12, which came out in September 2018.
The non-ES version was deprecated in the corresponding macOS release, 10.14: https://developer.apple.com/library/archive/documentation/Gr...
It kind of is: OpenGL 4.6 is the very last version, the Red Book only covers up to OpenGL 4.5, and some hardware vendors are now shipping OpenGL on top of Vulkan or DirectX instead of providing native OpenGL drivers.
While not officially deprecated, it is standing still and won't get anything newer past 2017 hardware, not even newer extensions are being made available.
The focus has already moved to other APIs (Vulkan and Metal), and the side effect of this will be that bitrot sets in, first in OpenGL debugging and profiling tools (older tools won't be maintained, new tools won't support GL), then in drivers.
FWIW Metal is actually easier to use than Vulkan in my opinion, as Vulkan is kind of designed to be super flexible and doesn't have as much niceties in it. Either way, OpenGL was simply too high level to be exposed as the direct API of the drivers. It's much better to have a lower level API like Vulkan as the base layer, and then build something like OpenGL on top of Vulkan instead. It maps much better to how GPU hardware works this way. There's a reason why we have a concept of software layers.
It's also not quite true that everyone uses Unity and Unreal. Just look at the Game of the Year nominees from The Game Awards 2023: all six of them were built on in-house game engines. Among indies there are also still some developers who build their own engines (e.g. Hades), but it's true that the majority will just use an off-the-shelf one.
Metal is probably the most streamlined and easiest to use GPU API right now. It's compact, adapts to your needs, and can be intuitively understood by anyone with basic C++ knowledge.
WGPU is kinda supposed to solve the problem by making a cross platform API more user friendly than Vulkan. The problem with OpenGL is that it is too far from how GPUs work and it's hard to get good performance out of it.
It is hard to get the absolute best performance out of OpenGL but it isn't really hard to get good performance. Unless you're trying to make some sort of seamless open world game with modern AAA level of visual fidelity or trying to do something very out of the ordinary, OpenGL is fine.
A bigger issue you may face is OpenGL driver bugs, but AFAIK the main culprit here was AMD, and a couple of years ago they improved their OpenGL driver considerably.
Also, at this point OpenGL still has no hardware raytracing extension/API, so if you need that you need to use Vulkan (either just for the RT bits with OpenGL interop, or switching to it completely). My own 3D engine uses OpenGL, and while the performance is perfectly fine, I'm considering switching to Vulkan at some point in the future to have raytracing support.
My understanding is that one of the primary reasons Vulkan was developed was because OpenGL was not a good model for GPUs, and supporting it prevented people from taking advantage of the hardware in many cases.
It's because Vulkan is designed for driver developers and (to a lesser degree) for middleware engine developers. As far as APIs go, it's pretty much awful. I was very pumped for Vulkan when it was initially announced, but seeing the monstrosity the committee has produced has cooled down my enthusiasm very quickly.
One day, Apple will deprecate OpenGL 3.3 core, and I guess everybody might end up deprecating it.
And here I am, recalling all the games and programs that failed once OpenGL 2.0 was implemented because they required OpenGL 1.1 or 1.2 but just checked the minor version number... time flies!
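The classic bug looked something like this (an illustrative sketch, not code from any particular game):

```c
#include <stdio.h>
#include <GL/gl.h>

// GL_VERSION is a string like "1.2.1 vendor stuff" or "2.0.6914 ...".
// The buggy check below only looks at the minor number, so a "2.0" driver
// reads as minor 0 and a game demanding "1.1 or later" refuses to start
// on hardware that is strictly more capable.
void check_gl_version(void)
{
    const char *version = (const char *)glGetString(GL_VERSION);
    int major = 0, minor = 0;
    sscanf(version, "%d.%d", &major, &minor);

    if (minor < 1) {
        // BUG: rejects 2.0, 3.0, ... even though they support all GL 1.1 features.
        printf("OpenGL 1.1 or later required\n");
    }

    if (major < 1 || (major == 1 && minor < 1)) {
        // Correct: compare the full (major, minor) pair.
        printf("OpenGL 1.1 or later required\n");
    }
}
```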
One day, Apple will deprecate OpenGL 3.3 core
OpenGL has already been deprecated on macOS and iOS for a couple of years. It still works (nowadays running as a layer on top of Metal), but when building GL code for macOS or iOS you're spammed with deprecation warnings (which can be turned off with a define though).
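For reference, the define in question on macOS:

```c
// Define before including the system GL headers on macOS to silence the
// deprecation warnings (GLES_SILENCE_DEPRECATION is the iOS/OpenGL ES equivalent).
#define GL_SILENCE_DEPRECATION 1
#include <OpenGL/gl3.h>
```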
I've read that generally OpenGL is just easier to use than Vulkan.
OpenGL mostly only makes sense if you followed its progress from the late 90's and understand the reasons behind all the accumulated design warts, sediment layers and just plain weird design decisions. For newcomers, OpenGL is just one weirdness after another.
Unfortunately Vulkan seems to be on the same track, which makes me think that the underlying problem is organisational, not technical: both APIs are led by Khronos, resulting in the same 'API design and maintenance philosophy'. And frankly, the approach to API design was OpenGL's main problem, not that it didn't map to modern GPU architectures (which could have been fixed with a different API design approach without throwing the baby out with the bath water).
But on Mac, what matters more is how OpenGL compares to Metal, and the answer is much simpler: Metal both has a cleaner design, and is easier to use than OpenGL.
"Unlike the vendor’s non-conformant 4.1 drivers, our open source Linux drivers are conformant to the latest OpenGL versions, finally promising broad compatibility with modern OpenGL workloads, like Blender, Ryujinx, and Citra."
Looks like Apple Silicon is currently the best hardware for running Linux, and Linux is the best OS for Apple Silicon machines.
I would love for my employer to support that config at work. We have quite lovely Linux dev laptops, but the battery life of the M1/M2 machines in the IT shop is definitely enticing, and Asahi Linux gets closer to macOS in that regard than you might think, given its relative maturity and optimization.
It definitely isn’t ready for use as a daily driver. There are lots of bits missing (see below for an example) and power management isn’t great compared to macOS.
How so? I've been daily driving it as my only machine since November. Sure, there are missing features, but none that are really essential for most people.
You and I have very different work environments for you to be able to claim that microphones aren't essential for most people.
I use a headset anyway. Built-in microphones are really shitty, so they are frowned upon at my job (although Apple's is one of the best).
Looks like Apple Silicon is currently the best hardware for running Linux
I wonder if this effort to run Linux on Apple Silicon will continue if Snapdragon X laptops become mainstream.
I think it will. One of the main issues with desktop Linux is still broad hardware support. Random crap like fingerprint readers or Wi-Fi cards still doesn't work on certain machines. Having a very constrained set of hardware options makes it a lot easier to support. The Snapdragon devices are also starting way behind: both the Surface X and Lenovo X13s Snapdragon devices exist today, but Linux support isn't close to Asahi.
the Surface X and Lenovo X13S snapdragon devices exist today but Linux support isn't close to Asahi.
Is that true? When the X13s Snapdragon was released, I seem to remember it shipping with first-party Linux drivers for almost everything. Same goes for the Surface X, actually.
Now, both of those devices definitely don't get the same attention Macs do, but they did ship day-and-date with decent Linux support. For example, the Adreno GPU that Qualcomm uses has upstream Mesa support for Vulkan and OpenGL. In many senses, Asahi isn't close to the vendor support those devices received.
Strange that the article doesn't use the word "Apple" once, and instead awkwardly uses "the vendor" to refer to Apple.
It's in line with how the Linux/FLOSS community refers to hardware vendors in general, even if Apple is a unique case in many circumstances.
Didn't CentOS use "the vendor" instead of RHEL like this too?
Looks like Apple Silicon is currently the best hardware for running Linux and Linux is the best OS for Apple Silicon machines
Blender has Metal support for Apple Silicon Macs. The Metal API is better architected (largely due to being more modern and developed with the benefit of hindsight), so all things equal I'd pick the Metal version on a Mac.
In case you missed it in the article, the M1 GPU does not natively support OpenGL 4.6. They had to emulate certain features. The blog post goes into some of the performance compromises that were necessary to make the full OpenGL emulation work properly. Absolutely a good compromise if you're on Linux and need to run a modern OpenGL app, but if your only goal is to run Blender as well as possible then you'd want to stick to macOS.
Ryujinx is a Nintendo Switch emulator. They added support for Apple Silicon Macs a couple years ago and have been improving since then: https://blog.ryujinx.org/the-impossible-port-macos/
Linux on Apple hardware has come a long way due to some incredible feats of engineering, but it's far from perfect. Calling it the "best OS for Apple Silicon" is a huuuuge reach.
It's great if you need to run Linux for day to day operations, though.
Right. Blender Cycles, for example, can run on Metal but not on OpenGL or Vulkan. So while it's nice to have a working OpenGL, it depends on whether your workflow requires OpenGL apps.
I would be very surprised if Blender Cycles ever ran on top of OpenGL or Vulkan, other than using them as a loader for compute shaders.
That's why it is running as CUDA/OptiX/HIP/oneAPI on Windows and Linux.
apparently a lot of hardware is still not properly supported, like speakers, microphones and energy saving
Here’s the list of very detailed support status: speakers are generally supported but microphones are not. They have a driver for some energy savings but it has some rough edges.
Still waiting for some hardware support and hardware video decoding.
Hardware video decoding is well on the way: https://github.com/eiln/avd
...and you came to that conclusion because of OpenGL 4.6 - something that several other platforms enjoyed under GNU/Linux with FLOSS drivers for more than half a decade now?
And Sodium! (For minecraft)
This is for Fedora on the M1. It would be amazing to get this for macOS. What's involved in pulling something like that off?
Perhaps already possible via MoltenVK -> Vulkan -> Zink?
Probably needs one or two more layers just to be sure.
According to the devs, it isn't really possible due to Apple not having a stable public kernel API: https://social.treehouse.systems/@AsahiLinux/111930744188229...
I think Apple bans third party kernel drivers? To write a proper Vulkan or OpenGL implementation you need a kernel counterpart for handling the GPU if I understand correctly. That's probably the reason no one bothers implementing native Vulkan for macOS.
Whether it's doable with Apple's own driver, though, I'm not sure.
You can implement an OpenGL driver on top of Metal. But why bother dedicating so many resources for the sake of a suboptimal legacy API?
Ultimately they build command buffers and send them to the GPU. You'd need a way to do that from macOS.
The original Mesa drivers for the M1 GPU were bootstrapped by doing just that, sending command buffers to the AGX driver in macOS using IOKit.
https://rosenzweig.io/blog/asahi-gpu-part-2.html
https://github.com/AsahiLinux/gpu/blob/main/demo/iokit.c
So you'd need a bit more glue in Mesa to get the surfaces from the GPU into something you can composite onto the screen in macOS.
This is obviously very exciting, but—why not target Vulkan first? It seems like the more salient target these days and one on top of which we already have an OpenGL implementation.
They started with targeting older OpenGL to get a basic feature set working first. I guess from there, getting up to a more recent OpenGL was less work than doing a complete Vulkan implementation, and they probably learned a lot about what they'll need to do for Vulkan.
Ok, this makes a lot of sense—OpenGL sort of forms a pathway of incremental support.
Along with that, it's more immediately useful since it's still used by desktops and compositors, so getting a useful environment necessitates it.
I thought something similar, but from their comments, to support OpenGL over Vulkan you need higher versions of Vulkan anyway and it's still a big effort. So they decided to go with (lower versions of) OpenGL first to get something functional sooner.
OpenGL-on-Vulkan compat layers aren't magic. For them to support a given OpenGL feature, an equivalent feature must be supported by the Vulkan driver (often as an extension). That means you can't just implement a baseline Vulkan driver and get OGL 4.6 support for free; you must put in the work to implement all the OGL 4.6 features in your Vulkan driver if you want Mesa to translate OGL 4.6 to Vulkan for you.
Plus, this isn't Alyssa's first reverse engineering + OpenGL driver project. I don't know the details but I'd imagine it's much easier and quicker to implement a driver for an API you're used to making drivers for, than to implement a driver for an API you aren't.
I find it very amusing that transitioning out of bounds accesses from traps to returning some random data is called “robustness”. Graphics programming certainly is weird.
It makes sense from the perspective of writing graphics drivers, and aligns with Postel's law (also called the robustness principle). GPU drivers are all about making broken applications run, or run faster. Making your GPU drivers strict by default won't fix the systemic problems with the video game industry shipping broken code, it'll just drive away all of your users.
And on hardware where branches are generally painfully expensive, it sounds really useful to have a flag to tell the system to quietly handle edge cases in whatever way is most efficient. I suspect there are a lot of valid use cases for such a mode where the programmer can be reasonably sure that those edge cases will have little or no impact on what the user ends up seeing in the final rendered frame.
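For what it's worth, in Vulkan this is literally an opt-in feature bit at device creation. A minimal sketch (assuming the physical device and queue family index were already selected elsewhere):

```c
#include <vulkan/vulkan.h>

// Sketch: request robustBufferAccess so out-of-bounds buffer accesses in
// shaders produce defined (if arbitrary-looking) results instead of faulting.
VkDevice create_robust_device(VkPhysicalDevice physical_device,
                              uint32_t queue_family_index)
{
    VkPhysicalDeviceFeatures features = { 0 };
    features.robustBufferAccess = VK_TRUE;   // just another opt-in feature bit

    float priority = 1.0f;
    VkDeviceQueueCreateInfo queue_info = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
        .queueFamilyIndex = queue_family_index,
        .queueCount = 1,
        .pQueuePriorities = &priority,
    };
    VkDeviceCreateInfo device_info = {
        .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .queueCreateInfoCount = 1,
        .pQueueCreateInfos = &queue_info,
        .pEnabledFeatures = &features,
    };

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(physical_device, &device_info, NULL, &device);
    return device;
}
```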
This is one of the reasons why C and C++ have a rosy life ahead of them in the graphics, HPC, HEP, and HFT domains.
In domains where the "performance trumps safety" culture reigns, talking about other programming languages is like talking to a wall.
The out-of-bounds accesses don't necessarily trap without the robustness checks, so the robustness is about delivering known results in those goofy cases. It makes sense when you combine that with the fact that GPUs are pretty averse to traps in general. Carmack remarked once that it was a pain to get manufacturers interested in the idea of virtual memory when he was designing megatexture.
Another upvote, another article I wish I had the knowledge and patience to understand better in context. Still, Alyssa's writeups are a fun read.
Same, I wish I knew more about graphics programming. It seems like such a steep learning curve though so I get discouraged.
Don't be discouraged, modern graphics APIs really are a mess, but you don't need to understand 1/100th of them to get graphics going. Also, this post is more about programming drivers than programming graphics.
Regrettably, the M1 doesn’t map well to any graphics standard newer than OpenGL ES 3.1. While Vulkan makes some of these features optional, the missing features are required to layer DirectX and OpenGL on top. No existing solution on M1 gets past the OpenGL 4.1 feature set.
I'm very curious to know the performance impact of this, particularly compared to using Metal on macOS. (I'm sure the answer is "it depends", but still.)
It's possible the article answers this question, but I didn't understand most of it. :(
Alyssa chooses some very odd language here, it seems to me. Yes, Apple GPUs do not support geometry shaders natively, because geometry shaders are a bad design and do not map well to GPU hardware (geometry shaders are known to be slow even on hardware that allegedly supports them; there is a reason why Nvidia went ahead and designed mesh shading). Transform feedback (the ability to write transformed vertex data back to memory) is another feature that is often brought up in these discussions, but Apple GPUs can write to arbitrary memory locations from any shader stage, which makes transform feedback entirely superfluous.
The core of the issue is that Apple chose to implement a streamlined compute architecture, and they cut a lot of legacy cruft and things that were known not to work well in the process. I don't think the rhetoric of "M1 getting stuck at OpenGL 4.1" is appropriate. I stopped following OpenGL many years ago, so I don't know specifically which features past 4.1 she might be referring to. What I can say is that I'd be very surprised if there is something OpenGL offers that cannot be done in Metal, but there are plenty of things possible in Metal that cannot be done at all in OpenGL (starting with the fact that the Metal shading language has fully featured pointers).
There isn't necessarily much difference between implementing features in driver compute code versus GPU hardware support. Even the "hardware support" is usually implemented in GPU microcode. It often goes through the same silicon. Any feature could hit a performance bottleneck and it's hard to know which feature will bottleneck until you try.
How do we break the 4.1 barrier? Without hardware support, new features need new tricks. Geometry shaders, tessellation, and transform feedback become compute shaders. Cull distance becomes a transformed interpolated value. Clip control becomes a vertex shader epilogue. The list goes on.
I wonder how much of this work is in M1 GPU code, versus how much of the feature-implemented-on-another-feature work could be reused by others.
This feels very similar to what Zink does (running complex OpenGL capabilities on a more primitive Vulkan), except there is no Vulkan backend to target for the M1. Yet.
More generally, you could execute complex OpenGL or Vulkan on some more-or-less arbitrary combination of CPU soft-rendering and hardware-specific native acceleration support. It would just be a matter of doing the work, and it could be reused across a wide variety of hardware - including perhaps older hardware that may be quite well understood but not usable on its own for modern workloads.
Alyssa Rosenzweig is a gift to the community that keeps on giving. Every one of her blog posts is a guarantee to learn something you didn't know about the internals of modern graphics hardware.
This endeavour proves to me that skills beat talkativeness every single day. Just reading the blogs sets my brain on fire. There is so much to unpack. The punch line is not the last sentence but the second; nevertheless, you're pulled down the rabbit hole until you find yourself enjoying one bit manipulation after another.
If there are ever benchmarks measuring eureka moments per paragraph, Alyssa will lead them all.
Just thanks!
Quake is just a (probably small, not wanting to diminish it, of course) part of the history. SGI and the enormous effort to get compliant implementations on many different systems and architectures are what made OpenGL what it eventually became.
I think both SGI and Quake were absolutely crucial.
Without Quake, OpenGL would have remained an extremely niche thing for professional CAD and modeling software. And Microsoft would have completely owned the 3D gaming API space.
Quake (and Quake 2, and Quake 3, and the many games that licensed those engines) really opened the floodgates in terms of mass market users demanding OpenGL capabilities (or at least a subset of them) from their hardware and drivers.
I'm not sure how to measure this in an objective way, but if the mass market of PC gamers didn't dwarf the professional CAD/modeling market by several orders of magnitude, I will print out my HN posting history and eat it.
Microsoft never owned the 3D gaming API space; SEGA, Sony and Nintendo also have/had their own APIs.
I've never heard of SEGA, Sony or Nintendo 3D APIs being used on a PC. I guess somebody somewhere did it, but it's so insignificant.
I never heard about 3D gaming API space being something PC only, maybe in some fancy FOSS circles.
You're having a different discussion than everybody else.
Everybody else in this discussion is talking about the PC 3D API space. The place where OpenGL lives. It's right there in the title of the linked article.
"Without Quake, OpenGL would have remained an extremely niche thing for professional CAD and modeling software. And Microsoft would have completely owned the 3D gaming API space."
Ctrl+F "PC" => zero results found.
And yet the modern GPU features are pretty much the result of a close symbiosis between hardware vendors and the Direct3D team. For a while, GPUs were categorized by the D3D version they supported (today the focus has moved more towards raw compute performance I guess).
On the PC, yes; many of those features were originally developed for arcades and game consoles.
TMS34010 (1986), RenderMan Shading Language (1990), Cell (2005), ...
And yet it still got its ass kicked by Direct3D because Microsoft made better stuff. Better API, better tooling, better debuggability.
Honestly, it would've been better to leave OpenGL to the legacy CAD vendors and standardize on Direct3D roundabout 1997 or so.
Well, except for only working on xbox and windows, which pretty much destroys it as a viable direct target for modern games or apps.
If you remember what Microsoft was like in those days, the chances of D3D being standardized in a viable way on any platform but Windows were about the same as an ice cube's chances in hell.
Technically, also Linux (and probably other Vulkan platforms) with dxvk.
I had no idea this was a thing! Cheers.
There's also VKD3D for dx12!
Aside from PlayStation exclusives, nearly every AAA game in the past 20+ years has targeted Direct3D and HLSL first. Any other backend is a port.
The XBox versions of DirectX aren't exactly compatible (in some pretty significant ways, IIRC)
Nintendo and Sony 3D APIs are also exclusive to their consoles; people keep forgetting about them.
Ye good ol' Microsoft stifled OpenGL on Windows, hence the open letter https://www.chrishecker.com/OpenGL/Press_Release, not to mention the insidious thing they did with Fahrenheit (the next-gen OpenGL+Direct3D, one API to rule them all) when they were supposed to be working on it together with SGI. Microsoft did a good job with Direct3D afterwards, but they were and are a shit company that made sus maneuvers to get their success; not all of them technical.
I don’t know that it was the only reason, but Carmack’s push for OpenGL certainly helped. A lot of things related to 3D games are thanks to Doom and Quake.
Quake sure, but Doom? IIRC Doom is far more like Wolf3D's 2.5D/raycasting than the "true 3D" of Quake; it was CPU-rendered to a framebuffer with zero hardware acceleration. I find it hard to believe it made any lasting impact on any subsequent 3D rendering APIs.
Quake didn't use hardware acceleration either. It was only the later VQuake and GLQuake releases that did.
I think for the purposes of this discussion "Quake" is acceptable shorthand for GLQuake, Quake 2, Quake 3, all the games that used those engines, etc.
Quake got official 3D accelerated versions like GLQuake and VQuake. The improved visuals and better performance these versions offered drove a lot of early 3D accelerator sales in the consumer space.
It also helped that the API was actually user friendly compared to the earlier versions of Direct3D.
John Carmack a couple of years later, back in 2011:
From https://www.bit-tech.net/news/gaming/pc/carmack-directx-bett...
Back then OpenGL was objectively a much better API than Direct3D. A few years later it was the other way around though (thanks to D3D's radical approach of creating a completely new API with each major version).
Fun fact: the earliest archived OpenGL site was a big "FAST GAMES GRAPHICS" banner with an animated Quake 1 graphic and a menu for other stuff :-P
https://web.archive.org/web/19970707113513/http://www.opengl...
For context https://www.chrishecker.com/OpenGL/Press_Release