
Conformant OpenGL 4.6 on the M1

ryandvm
28 replies
1d

Kind of crazy to think that the only reason OpenGL was ever a thing for 3D gaming was because of John Carmack's obsession with using it for Quake II back in the 90s.

marcodiego
8 replies
23h52m

Quake is just a part of the history, and probably a small one (not wanting to diminish it, of course). SGI and the enormous effort to get compliant implementations on many different systems and architectures are what made OpenGL what it eventually became.

JohnBooty
7 replies
21h27m

I think both SGI and Quake were absolutely crucial.

Without Quake, OpenGL would have remained an extremely niche thing for professional CAD and modeling software. And Microsoft would have completely owned the 3D gaming API space.

Quake (and Quake 2, and Quake 3, and the many games that licensed those engines) really opened the floodgates in terms of mass market users demanding OpenGL capabilities (or at least a subset of them) from their hardware and drivers.

I'm not sure how to measure this in an objective way, but if the mass market of PC gamers doesn't dwarf the professional CAD/modeling market by several orders of magnitude, I will print out my HN posting history and eat it.

pjmlp
6 replies
11h1m

Microsoft never owned the 3D gaming API space, SEGA, Sony and Nintendo also have/had their own APIs.

throwaway11460
3 replies
8h44m

I never heard about SEGA, Sony or Nintendo 3D APIs being used on a PC. I guess somebody somewhere did it, but it's so insignificant.

pjmlp
2 replies
8h11m

I never heard about 3D gaming API space being something PC only, maybe in some fancy FOSS circles.

JohnBooty
1 replies
3h38m

You're having a different discussion than everybody else.

Everybody else in this discussion is talking about the PC 3D API space. The place where OpenGL lives. It's right there in the title of the linked article.

pjmlp
0 replies
3h25m

"Without Quake, OpenGL would have remained an extremely niche thing for professional CAD and modeling software. And Microsoft would have completely owned the 3D gaming API space."

Ctrl+F "PC" => zero results found.

flohofwoe
1 replies
3h50m

And yet the modern GPU features are pretty much the result of a close symbiosis between hardware vendors and the Direct3D team. For a while, GPUs were categorized by the D3D version they supported (today the focus has moved more towards raw compute performance I guess).

pjmlp
0 replies
3h26m

On the PC, yes; many of those features were originally developed on arcades and game consoles.

TMS34010 (1986), RenderMan Shading Language (1990), Cell (2005), ...

bitwize
8 replies
23h11m

And yet it still got its ass kicked by Direct3D because Microsoft made better stuff. Better API, better tooling, better debuggability.

Honestly, it would've been better to leave OpenGL to the legacy CAD vendors and standardize on Direct3D roundabout 1997 or so.

breather
6 replies
22h58m

Well, except for only working on Xbox and Windows, which pretty much destroys it as a viable direct target for modern games or apps.

> Honestly, it would've been better to leave OpenGL to the legacy CAD vendors and standardize on Direct3D roundabout 1997 or so.

If you remember what Microsoft was like in those days, the chances of D3D being standardized in a viable way on any platform but Windows were about the same as an ice cube's chances in hell.

malermeister
2 replies
22h32m

Technically, also Linux (and probably other Vulkan platforms) with DXVK.

breather
1 replies
21h2m

I had no idea this was a thing! Cheers.

malermeister
0 replies
20h33m

There's also VKD3D for DX12!

TillE
1 replies
18h59m

> a viable direct target for modern games

Aside from PlayStation exclusives, nearly every AAA game in the past 20+ years has targeted Direct3D and HLSL first. Any other backend is a port.

p_l
0 replies
17h33m

The Xbox versions of DirectX aren't exactly compatible (in some pretty significant ways, IIRC).

pjmlp
0 replies
11h6m

Nintendo and Sony 3D APIs are also exclusive to their consoles; people keep forgetting about them.

Keyframe
0 replies
22h47m

Ye good ol' Microsoft stifled OpenGL on Windows, hence the open letter https://www.chrishecker.com/OpenGL/Press_Release, not to mention the insidious thing they did on Fahrenheit (the next-gen OpenGL+Direct3D, one to rule them all) when they were supposed to be working on it together with SGI. Microsoft did well with Direct3D afterwards, but they were and are a shit company that pulled sus maneuvers on the way to success; not all of them technical.

cmovq
5 replies
23h58m

I don't know that it was the only reason, but Carmack's push for OpenGL certainly helped. A lot of things related to 3D games are thanks to Doom and Quake.

pengaru
3 replies
23h15m

> A lot of things related to 3D games are thanks to Doom and Quake.

Quake sure, but Doom? IIRC Doom is far closer to Wolf3D's 2.5D/raycasting than to the "true 3D" of Quake; it rendered on the CPU to a frame buffer with zero hardware acceleration. I find it hard to believe it made any lasting impact on any subsequent 3D rendering APIs.

Narishma
1 replies
21h50m

Quake didn't use hardware acceleration either. It was only the later VQuake and GLQuake releases that did.

JohnBooty
0 replies
21h26m

I think for the purposes of this discussion "Quake" is acceptable shorthand for GLQuake, Quake 2, Quake 3, all the games that used those engines, etc.

babypuncher
0 replies
20h19m

Quake got official 3D accelerated versions like GLQuake and VQuake. The improved visuals and better performance these versions offered drove a lot of early 3D accelerator sales in the consumer space.

badsectoracula
0 replies
20h3m

It also helped that the API was actually user friendly compared to the earlier versions of Direct3D.
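
For anyone who never saw it: the appeal was that GL 1.x let you draw your first triangle in a couple dozen lines, versus the COM boilerplate and execute buffers of early Direct3D. A minimal sketch (assuming GLFW for the window and context; error handling omitted):

    // Complete GL 1.x "hello triangle"; GLFW provides the window and context.
    #include <GLFW/glfw3.h>

    int main() {
        if (!glfwInit()) return 1;
        GLFWwindow* win = glfwCreateWindow(640, 480, "triangle", nullptr, nullptr);
        glfwMakeContextCurrent(win);
        while (!glfwWindowShouldClose(win)) {
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_TRIANGLES);  // immediate mode: no buffers, no shaders
            glColor3f(1, 0, 0); glVertex2f(-0.6f, -0.5f);
            glColor3f(0, 1, 0); glVertex2f( 0.6f, -0.5f);
            glColor3f(0, 0, 1); glVertex2f( 0.0f,  0.6f);
            glEnd();
            glfwSwapBuffers(win);
            glfwPollEvents();
        }
        glfwTerminate();
        return 0;
    }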

pjmlp
0 replies
11h3m

John Carmack a couple of years later, back in 2011:

Speaking to bit-tech for a forthcoming Custom PC feature about the future of OpenGL in PC gaming, Carmack said 'I actually think that Direct3D is a rather better API today.' He also added that 'Microsoft had the courage to continue making significant incompatible changes to improve the API, while OpenGL has been held back by compatibility concerns. Direct3D handles multi-threading better, and newer versions manage state better.'

'It is really just inertia that keeps us on OpenGL at this point,' Carmack told us. He also explained that the developer has no plans to move over to Direct3D, despite its advantages.

From https://www.bit-tech.net/news/gaming/pc/carmack-directx-bett...

flohofwoe
0 replies
3h56m

Back then OpenGL was objectively a much better API than Direct3D. A few years later it was the other way around though (thanks to D3D's radical approach of creating a completely new API with each major version).

badsectoracula
0 replies
20h5m

Fun fact: the earliest archived OpenGL site was a big "FAST GAMES GRAPHICS" banner with an animated Quake 1 graphic and a menu for other stuff :-P

https://web.archive.org/web/19970707113513/http://www.opengl...

jokoon
23 replies
18h36m

One day, Apple will deprecate OpenGL 3.3 core, and I guess everybody might end up deprecating it.

I've read that OpenGL is generally just easier to use than Vulkan. I don't know if that's true, but if something is too complicated it becomes too hard for less experienced devs to exploit those GPUs, and that's a barrier to entry which might discourage some indie game developers.

Although everyone uses Unity and Unreal now, building things from scratch or using other engines is just seen as weird, for some reason. It's really annoying, and it's fun to see gamedev wake up after Unity tried to tighten its lock-in.

Open source in gaming has always been stretched thin. Godot is there, but I doubt it's able to seriously compete with Unity and Unreal even if I want it to; even if Godot is capable, indie gamedevs are more experienced with Unity and Unreal and will stick to those.

The state of open source in game dev feels really hopeless sometimes, and the rise of next-gen graphics APIs is not making things easy.

taminka
8 replies
18h18m

> I've read that OpenGL is generally just easier to use than Vulkan

[here's](https://learnopengl.com/code_viewer_gh.php?code=src/1.gettin...) an opengl triangle rendering example code (~200 LOC)

[here's](https://vulkan-tutorial.com/code/17_swap_chain_recreation.cp...) a vulkan triangle rendering example code (~1000 LOC)

ye, it's fair to say OpenGL is a bit easier to use, ijbol

zozbot234
2 replies
18h7m

This is a bit misleading. Much of the extra code that you'd have to write in Vulkan to get to first-triangle is just that, a one-time cost. And you can use a third-party library, framework or engine to take care of it. Vulkan merely splits out the hardware-native low level from the library support layer, which were conflated in OpenGL, and lets the latter evolve freely via a third-party ecosystem. That's just a sensible choice.
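
For instance, helper libraries like vk-bootstrap compress most of the init boilerplate down to a few builder calls. A rough sketch (builder method names from memory, so double-check against the library's docs):

    // Vulkan instance creation via the vk-bootstrap helper library.
    #include <VkBootstrap.h>
    #include <stdexcept>

    vkb::Instance make_instance() {
        vkb::InstanceBuilder builder;
        auto result = builder.set_app_name("demo")
                             .request_validation_layers()
                             .require_api_version(1, 3, 0)
                             .build();  // wraps the extension/layer plumbing
        if (!result) throw std::runtime_error(result.error().message());
        return result.value();
    }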

MindSpunk
1 replies
17h34m

And often those LOC examples use GLFW or some other library to load OpenGL. Creating a Vulkan instance is a walk in the park compared to initializing an OpenGL context, especially on Windows. It's incredibly misleading. If you allowed utility libraries for Vulkan too, the LOC-to-triangle comparison would put Vulkan much closer to OpenGL.

flohofwoe
0 replies
4h4m

It depends on the operating system. On macOS and iOS it was always just a few lines of code to set up a GL context. On Windows via WGL and Linux via GLX it's a nightmare though. Linux with EGL is also okay-ish.
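
For comparison, the "okay-ish" EGL path looks roughly like this (surfaceless sketch; relies on EGL_KHR_surfaceless_context, error checks omitted):

    // Minimal EGL context creation on Linux.
    #include <EGL/egl.h>

    EGLContext create_gl_context() {
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(dpy, nullptr, nullptr);
        EGLint attrs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE };
        EGLConfig cfg; EGLint n;
        eglChooseConfig(dpy, attrs, &cfg, 1, &n);
        eglBindAPI(EGL_OPENGL_API);  // desktop GL, not GLES
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
        eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);
        return ctx;
    }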

exDM69
2 replies
6h15m

Drawing conclusions from a hello-world example is not representative of which API is "easier". You are also using lines of code as a measure of "ease" when it's really a measure of "verbosity".

Further, the OpenGL example is not following modern graphics best practices and relies on defaults from OpenGL which cuts down the lines of code but is not practical in real applications.

Getting Vulkan initialized is a bit of a chore, but once it's set up, it's not much more difficult than OpenGL. GPU programming is hard no matter which way you put it.

I'm not claiming Vulkan initialization is not verbose, it certainly is, but there are libraries to help you with that (f.ex. vkbootstrap, vma, etc). The init routine requires you to explicitly state which HW and SW features you need, reducing the "it works on my computer" problem that plagues OpenGL.

If you use a recent Vulkan version (1.2+), namely the dynamic rendering and dynamic state features, it's actually very close to OpenGL because you don't need to configure render passes, framebuffers etc. This greatly reduces the amount of code needed to draw stuff. All of this is available on all desktop platforms, even on quite old hardware (~10 year old gpus) if your drivers are up to date. The only major difference is the need for explicit pipeline barriers.
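
To illustrate, with dynamic rendering the per-frame setup shrinks to roughly this (fragment only; assumes cmd, swapchainView, width and height already exist):

    // Vulkan 1.3 dynamic rendering: no VkRenderPass, no VkFramebuffer.
    VkRenderingAttachmentInfo color{VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO};
    color.imageView   = swapchainView;
    color.imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    color.loadOp      = VK_ATTACHMENT_LOAD_OP_CLEAR;
    color.storeOp     = VK_ATTACHMENT_STORE_OP_STORE;

    VkRenderingInfo info{VK_STRUCTURE_TYPE_RENDERING_INFO};
    info.renderArea           = {{0, 0}, {width, height}};
    info.layerCount           = 1;
    info.colorAttachmentCount = 1;
    info.pColorAttachments    = &color;

    vkCmdBeginRendering(cmd, &info);
    vkCmdDraw(cmd, 3, 1, 0, 0);  // pipeline binding etc. elided
    vkCmdEndRendering(cmd);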

Just to give you a point of reference, drawing a triangle with Vulkan, with the reusable framework excluded, is 122 lines of Rust code including the GLSL shader sources.

Another data point from my past projects: a practical setup for OpenGL is about 1500 lines of code, while Vulkan is perhaps 3000-4000 LOC, of which ~1000 LOC is trivial setup code for enabled features (verbose, but not hard).

As a graphics programmer, going from OpenGL to Vulkan has been a massive quality of life improvement.

justsid
1 replies
2h13m

I also am a graphics programmer and lead Vulkan developer at our company. I love Vulkan. I wouldn’t touch OpenGL with a 10 foot pole. But I also have years of domain expertise and OpenGL is hands down the better beginner choice.

The Vulkan hello triangle is terrible; it's not at all production-level code. Yeah, neither is the OpenGL one, but that's much closer. Getting Vulkan right requires quite a bit of understanding of the underlying hardware. There's very little to no hand-holding; even with the validation layers in place it's easy to screw up barriers, resource transitions and memory management.

Vulkan is fantastic for people with experience and a good grasp of the underlying concepts, like you and me. It's awful for beginners who are new to graphics programming.

exDM69
0 replies
1h25m

I've used OpenGL for over 20 years and Vulkan since it came out. Neither of them is easy, but OpenGL's complexity and awkward stateful programming model is quite horrific.

I've also watched and helped beginners struggling with both APIs on many internet forums over the years, and while getting that first triangle is easier in OpenGL, the curve gets a lot steeper right after that. Things like managing vertex array objects (VAO) and framebuffer objects (FBO) can be really confusing, and they were kind of retrofitted onto the API in the first place.
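
A taste of the confusion, since the bind-to-edit model hides which state lands where (sketch; a real app needs a loader like glad):

    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);

    glBindVertexArray(vao);              // attribute state below is recorded in vao
    glBindBuffer(GL_ARRAY_BUFFER, vbo);  // the binding itself is NOT vao state...
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    // ...yet this call silently captured the GL_ARRAY_BUFFER binding into vao.
    glEnableVertexAttribArray(0);
    glBindVertexArray(0);                // forget this, and later binds mutate vao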

I actually think that beginners shouldn't be using either of them, and should instead learn the basics of 3D graphics in a friendlier environment like Godot or Unity or something.

Vulkan 1.3 makes graphics programming fun again. Now you don't need to build render passes and pipeline states up front, it's really easy to just set the pipeline states ad-hoc and fire off your draw calls.

But yeah, judging by the downvotes my GP comment is receiving, seems like a lot of readers disagree. I'm not sure how many of them have actually used both APIs beyond beginner level, but I don't know anyone who has used both professionally and wants to go back to OpenGL with its awkward API and GLSL compiler bugs and whatnot.

_gabe_
1 replies
14h15m

You're getting downvoted for some reason, but OpenGL is absolutely easier. It abstracts so much (and for beginners there's still a ton to learn even with all the abstraction!). With OpenGL, unlike Vulkan, there's no need to think about how to prep pipelines, optimally upload your data, manually synchronize your rendering, and more. The low-level nature of Vulkan allows you to eke out every bit of performance, but for indie game developers and the majority of graphics development that doesn't depend on realtime PBR with giant amounts of data, OpenGL is still immensely useful.

If anything, an OpenGL-like API will naturally be developed on top of Vulkan for the users that don’t care about all that stuff. And once again, I can’t stress this enough, OpenGL is still a lot for beginners. Shaders, geometric transformations, the fixed function pipeline, vertex layouts, shader buffer objects, textures, mip maps, instancing, buffers in general, there’s sooo much to learn and these foundations transcend OpenGL and apply to all graphics rendering. As a beginner, OpenGL allowing me to focus on the higher level details was immensely beneficial for me getting started on my graphics programming journey.

scheeseman486
0 replies
13h13m

It won't be OpenGL-like; it will probably just be OpenGL: https://docs.mesa3d.org/drivers/zink.html

zamalek
4 replies
12h1m

OpenGL is not deprecated; it is simpler and continues to be used where Vulkan is overkill. Using it for greenfield projects is a good choice if it covers all your needs (and if you don't mind the stateful render pipeline).

klausa
1 replies
10h58m

It is officially deprecated on all Apple platforms, and has been for five years now.

Whether it will actually stop working anytime soon is a different question, but it is not a supported API.

klausa
0 replies
8h24m

For context: https://developer.apple.com/documentation/opengles

It is marked as being deprecated as of iOS 12, which came out in September 2018.

The non-ES version was deprecated in the corresponding macOS release, 10.14: https://developer.apple.com/library/archive/documentation/Gr...

pjmlp
0 replies
11h11m

It kind of is: OpenGL 4.6 is the very last version, the Red Book only covers up to OpenGL 4.5, and some hardware vendors are now shipping OpenGL on top of Vulkan or DirectX instead of providing native OpenGL drivers.

While not officially deprecated, it is standing still and won't get anything newer than 2017-era hardware features; not even newer extensions are being made available.

flohofwoe
0 replies
4h9m

The focus has already moved to other APIs (Vulkan and Metal), and the side effect of this will be that bitrot sets in, first in OpenGL debugging and profiling tools (older tools won't be maintained, new tools won't support GL), then in drivers.

xign
1 replies
14h11m

FWIW Metal is actually easier to use than Vulkan in my opinion, as Vulkan is designed to be super flexible and doesn't have as many niceties in it. Either way, OpenGL was simply too high-level to be exposed as the direct API of the drivers. It's much better to have a lower-level API like Vulkan as the base layer and then build something like OpenGL on top of it; that maps much better to how GPU hardware works. There's a reason why we have a concept of software layers.

It's also not quite true that everyone uses Unity and Unreal. Just look at the Game of the Year nominees from The Game Awards 2023: all six of them were built on in-house game engines. Among indies there are also still some developers who build their own engines (e.g. Hades), but it's true that the majority will just use an off-the-shelf one.

ribit
0 replies
6h41m

Metal is probably the most streamlined and easiest to use GPU API right now. It's compact, adapts to your needs, and can be intuitively understood by anyone with basic C++ knowledge.
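
With Apple's metal-cpp bindings it really does read like plain C++. A sketch from memory (the names mirror the Objective-C API, so verify against the headers):

    // Acquire the default GPU and submit an empty command buffer via metal-cpp.
    #define NS_PRIVATE_IMPLEMENTATION
    #define MTL_PRIVATE_IMPLEMENTATION
    #include <Metal/Metal.hpp>

    int main() {
        MTL::Device*        dev   = MTL::CreateSystemDefaultDevice();
        MTL::CommandQueue*  queue = dev->newCommandQueue();
        MTL::CommandBuffer* cmd   = queue->commandBuffer();
        // render/compute/blit encoders would be created and ended here
        cmd->commit();
        cmd->waitUntilCompleted();
        queue->release();
        dev->release();
        return 0;
    }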

SkiFire13
1 replies
9h35m

WGPU is kinda supposed to solve the problem by making a cross-platform API more user-friendly than Vulkan. The problem with OpenGL is that it is too far from how GPUs actually work, and it's hard to get good performance out of it.

badsectoracula
0 replies
9h7m

It is hard to get the absolute best performance out of OpenGL but it isn't really hard to get good performance. Unless you're trying to make some sort of seamless open world game with modern AAA level of visual fidelity or trying to do something very out of the ordinary, OpenGL is fine.

A bigger issue you may face is OpenGL driver bugs but AFAIK the main culprit here was AMD and a couple of years ago they improved their OpenGL driver to be much better.

Also, at this point OpenGL still has no hardware raytracing extension/API, so if you need that you need to use Vulkan (either just for the RT bits with OpenGL interop, or switching to it completely). My own 3D engine uses OpenGL and while the performance is perfectly fine, I'm considering switching to Vulkan at some point in the future to have raytracing support.

saagarjha
0 replies
17h31m

My understanding is that one of the primary reasons Vulkan was developed was because OpenGL was not a good model for GPUs, and supporting it prevented people from taking advantage of the hardware in many cases.

ribit
0 replies
6h45m

It's because Vulkan is designed for driver developers and (to a lesser degree) for middleware engine developers. As far as APIs go, it's pretty much awful. I was very pumped for Vulkan when it was initially announced, but seeing the monstrosity the committee has produced has cooled down my enthusiasm very quickly.

magicalhippo
0 replies
12h53m

> One day, Apple will deprecate OpenGL 3.3 core, and I guess everybody might end up deprecating it.

And here I am, recalling all the games and programs that failed once OpenGL 2.0 was implemented because they required OpenGL 1.1 or 1.2 but just checked the minor version number... time flies!
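
The bug was usually some variation of this (illustrative sketch, not any specific game's code):

    // Classic broken check: parse GL_VERSION but compare only the minor number.
    #include <GL/gl.h>
    #include <cstdio>

    bool version_ok() {
        const char* v = (const char*)glGetString(GL_VERSION);  // "1.2", "2.0", ...
        int major = 0, minor = 0;
        std::sscanf(v, "%d.%d", &major, &minor);
        return minor >= 1;  // BUG: accepts 1.1 and 1.2, rejects 2.0 (minor == 0)
    }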

flohofwoe
0 replies
4h7m

> One day, Apple will deprecate OpenGL 3.3 core

OpenGL has already been deprecated on macOS and iOS for a couple of years. It still works (nowadays running as a layer on top of Metal), but when building GL code for macOS or iOS you're spammed with deprecation warnings (they can be turned off with a define though).
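
The define in question is GL_SILENCE_DEPRECATION (GLES_SILENCE_DEPRECATION for the ES headers):

    // Define before including the macOS GL headers to mute the warnings.
    #define GL_SILENCE_DEPRECATION 1
    #include <OpenGL/gl3.h>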

flohofwoe
0 replies
5h28m

> I've read that OpenGL is generally just easier to use than Vulkan.

OpenGL mostly only makes sense if you followed its progress from the late 90's and understand the reasons behind all the accumulated design warts, sediment layers and just plain weird design decisions. For newcomers, OpenGL is just one weirdness after another.

Unfortunately Vulkan seems to be on the same track, which makes me think that the underlying problem is organisational, not technical; both APIs are led by Khronos, resulting in the same 'API design and maintenance philosophy'. And frankly, the approach to API design was OpenGL's main problem, not that it didn't map to modern GPU architectures (which could have been fixed with a different API design approach without throwing the baby out with the bathwater).

But on Mac, what matters more is how OpenGL compares to Metal, and the answer is much simpler: Metal both has a cleaner design, and is easier to use than OpenGL.

marcodiego
20 replies
23h57m

"Unlike the vendor’s non-conformant 4.1 drivers, our open source Linux drivers are conformant to the latest OpenGL versions, finally promising broad compatibility with modern OpenGL workloads, like Blender, Ryujinx, and Citra."

Looks like Apple Silicon is currently the best hardware for running Linux, and Linux is the best OS for Apple Silicon machines.

sho_hn
4 replies
23h47m

I would love for my employer to support that config at work. We have quite lovely Linux dev laptops, but the battery life of the M1/M2 machines in the IT shop is definitely enticing, and Asahi Linux gets closer to macOS in that regard than you might think, given its relative maturity and optimization.

eek2121
3 replies
19h57m

It definitely isn’t ready for use as a daily driver. There are lots of bits missing (see below for an example) and power management isn’t great compared to macOS.

rowanG077
2 replies
13h10m

How so? I've been daily driving it as my only machine since November. Sure, there are missing features, but none that are really essential for most people.

klausa
1 replies
10h53m

You and I have very different work environments for you to be able to claim that microphones aren't essential for most people.

rowanG077
0 replies
5h29m

I use a headset anyway. Built-in microphones are really shitty, so they are frowned upon at my job (although Apple's is one of the best).

tarruda
2 replies
19h58m

> Looks like Apple Silicon is currently the best hardware for running Linux

I wonder if this effort to run Linux on Apple Silicon will continue if Snapdragon X laptops become mainstream.

seabrookmx
1 replies
10h2m

I think it will. One of the main issues with desktop Linux is still broad hardware support. Random crap like fingerprint readers or Wi-Fi cards still doesn't work on certain machines. By having a very constrained set of hardware options, it becomes a lot easier to support. The Snapdragon devices are also starting way behind: both the Surface X and Lenovo X13S Snapdragon devices exist today, but their Linux support isn't close to Asahi.

smoldesu
0 replies
52m

> both the Surface X and Lenovo X13S Snapdragon devices exist today, but their Linux support isn't close to Asahi.

Is that true? When the X13S Snapdragon released I seem to remember it shipping with first-party Linux drivers for almost everything. Same goes for the Surface X actually.

Now, both of those devices definitely don't get the same attention Macs do, but they did ship day-and-date with decent Linux support. For example, the Adreno GPU that Qualcomm uses has upstream Mesa support for Vulkan and OpenGL. In many senses, Asahi isn't close to the vendor support those devices received.

johnbatch
2 replies
23h21m

Strange that the article doesn't use the word "Apple" once, and instead awkwardly uses "the vendor" to refer to Apple.

breather
1 replies
23h4m

It's in line with how the Linux/FLOSS community refers to hardware vendors in general, even if Apple is a unique case in many circumstances.

doubled112
0 replies
18h30m

Didn't CentOS use "the vendor" instead of RHEL like this too?

Aurornis
2 replies
23h47m

> Looks like Apple Silicon is currently the best hardware for running Linux, and Linux is the best OS for Apple Silicon machines

Blender has Metal support for Apple Silicon Macs. The Metal API is better architected (largely due to being more modern and developed with the benefit of hindsight), so all things being equal I'd pick the Metal version on a Mac.

In case you missed it in the article, the M1 GPU does not natively support OpenGL 4.6. They had to emulate certain features. The blog post goes into some of the performance compromises that were necessary to make the full OpenGL emulation work properly. Absolutely a good compromise if you're on Linux and need to run a modern OpenGL app, but if your only goal is to run Blender as well as possible then you'd want to stick to macOS.

Ryujinx is a Nintendo Switch emulator. They added support for Apple Silicon Macs a couple years ago and have been improving since then: https://blog.ryujinx.org/the-impossible-port-macos/

Linux on Apple hardware has come a long way due to some incredible feats of engineering, but it's far from perfect. Calling it the "best OS for Apple Silicon" is a huuuuge reach.

It's great if you need to run Linux for day to day operations, though.

galad87
1 replies
23h26m

Right, Blender Cycles for example can run on Metal, but not on OpenGL or Vulkan. So while it's nice to have a working OpenGL, it depends on whether your workflow requires OpenGL apps.

vetinari
0 replies
22h26m

I would be very surprised if Blender Cycles ever ran on top of OpenGL or Vulkan, other than using them as a loader for compute shaders.

That's why it runs via CUDA/OptiX/HIP/oneAPI on Windows and Linux.

vrodic
1 replies
23h47m

Apparently a lot of hardware is still not properly supported, like speakers, microphones and energy saving.

acdha
0 replies
23h17m

Here's the very detailed list of support status: speakers are generally supported but microphones are not. They have a driver for some energy savings, but it has some rough edges.

https://github.com/AsahiLinux/docs/wiki/Feature-Support

specto
1 replies
23h6m

Still waiting for some hardware support and hardware video decoding.

stirlo
0 replies
22h18m

Hardware video decoding is well on the way: https://github.com/eiln/avd

seba_dos1
0 replies
23h32m

...and you came to that conclusion because of OpenGL 4.6, something that several other platforms have enjoyed under GNU/Linux with FLOSS drivers for more than half a decade now?

brucethemoose2
0 replies
23h11m

And Sodium! (For Minecraft)

politician
6 replies
1d

This is for Fedora on the M1. It would be amazing to get this for macOS. What's involved in pulling something like that off?

fayalalebrun
1 replies
1d

Perhaps already possible via MoltenVK -> Vulkan -> Zink?

Guzba
0 replies
23h5m

Probably needs one or two more layers just to be sure.

skykooler
0 replies
10h10m

According to the devs, it isn't really possible due to Apple not having a stable public kernel API: https://social.treehouse.systems/@AsahiLinux/111930744188229...

shmerl
0 replies
12h56m

I think Apple bans third-party kernel drivers? To write a proper Vulkan or OpenGL implementation you need a kernel counterpart for handling the GPU, if I understand correctly. That's probably the reason no one bothers implementing native Vulkan for macOS.

Whether it's doable on top of Apple's own driver, I'm not sure.

ribit
0 replies
6h29m

You can implement an OpenGL driver on top of Metal. But why bother dedicating so many resources for the sake of a suboptimal legacy API?

jamesfmilne
0 replies
1d

Ultimately they build command buffers and send them to the GPU. You'd need a way to do that from macOS.

The original Mesa drivers for the M1 GPU were bootstrapped by doing just that, sending command buffers to the AGX driver in macOS using IOKit.

https://rosenzweig.io/blog/asahi-gpu-part-2.html

https://github.com/AsahiLinux/gpu/blob/main/demo/iokit.c

So you'd need a bit more glue in Mesa to get the surfaces from the GPU into something you can composite onto the screen in macOS.

breather
5 replies
22h59m

This is obviously very exciting, but why not target Vulkan first? It seems like the more salient target these days, and one on top of which we already have an OpenGL implementation.

wtallis
2 replies
22h52m

They started with targeting older OpenGL to get a basic feature set working first. I guess from there, getting up to a more recent OpenGL was less work than doing a complete Vulkan implementation, and they probably learned a lot about what they'll need to do for Vulkan.

breather
1 replies
21h17m

Ok, this makes a lot of sense: OpenGL sort of forms a pathway of incremental support.

simcop2387
0 replies
20h32m

Along with that, it's more immediately useful, as it's still used by desktops and compositors, so getting a useful environment necessitates it.

shmerl
0 replies
22h27m

I thought something similar, but from their comments, to support OpenGL over Vulkan you need higher versions of Vulkan anyway and it's still a big effort. So they decided to go with (lower versions of) OpenGL first to get something functional sooner.

mort96
0 replies
19h41m

OpenGL-on-Vulkan compat layers aren't magic. For them to support a given OpenGL feature, an equivalent feature must be supported by the Vulkan driver (often as an extension). That means you can't just implement a baseline Vulkan driver and get OGL 4.6 support for free; you must put in the work to implement all the OGL 4.6 features in your Vulkan driver if you want Mesa to translate OGL 4.6 to Vulkan for you.
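
Concretely, a layer like that has to probe the Vulkan driver for each such feature, e.g. (sketch; transform feedback is core GL but only an extension in Vulkan):

    // Does this physical device expose an extension the GL layer needs?
    #include <vulkan/vulkan.h>
    #include <cstring>
    #include <vector>

    bool has_extension(VkPhysicalDevice gpu, const char* name) {
        uint32_t count = 0;
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
        std::vector<VkExtensionProperties> props(count);
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, props.data());
        for (const auto& p : props)
            if (std::strcmp(p.extensionName, name) == 0) return true;
        return false;
    }

    // e.g. has_extension(gpu, "VK_EXT_transform_feedback")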

Plus, this isn't Alyssa's first reverse engineering + OpenGL driver project. I don't know the details but I'd imagine it's much easier and quicker to implement a driver for an API you're used to making drivers for, than to implement a driver for an API you aren't.

saagarjha
3 replies
17h25m

I find it very amusing that transitioning out of bounds accesses from traps to returning some random data is called “robustness”. Graphics programming certainly is weird.

wtallis
0 replies
15h1m

It makes sense from the perspective of writing graphics drivers, and aligns with Postel's law (also called the robustness principle). GPU drivers are all about making broken applications run, or run faster. Making your GPU drivers strict by default won't fix the systemic problems with the video game industry shipping broken code, it'll just drive away all of your users.

And on hardware where branches are generally painfully expensive, it sounds really useful to have a flag to tell the system to quietly handle edge cases in whatever way is most efficient. I suspect there are a lot of valid use cases for such a mode where the programmer can be reasonably sure that those edge cases will have little or no impact on what the user ends up seeing in the final rendered frame.
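
In Vulkan terms that flag is literally a device-creation feature bit (fragment; queue setup and error handling elided):

    // Opt in to defined out-of-bounds buffer access instead of undefined behavior.
    VkPhysicalDeviceFeatures features{};
    features.robustBufferAccess = VK_TRUE;  // OOB reads return in-buffer data or zero

    VkDeviceCreateInfo info{VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO};
    info.pEnabledFeatures = &features;
    // ... fill in queue create info, then vkCreateDevice(gpu, &info, nullptr, &device);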

pjmlp
0 replies
11h8m

This is one of the reasons why C and C++ have a rosy life ahead of them in the graphics, HPC, HEP, and HFT domains.

In domains where the "performance trumps safety" culture reigns, talking about other programming languages is like talking to a wall.

monocasa
0 replies
13h59m

The out-of-bounds accesses don't necessarily trap without the robustness checks, so the robustness is about delivering known results in those goofy cases. It makes sense when you combine that with the fact that GPUs are pretty much against traps in general. Carmack remarked once that it was a pain to get manufacturers interested in the idea of virtual memory when he was designing megatexture.

ics
2 replies
1d

Another upvote, another article I wish I had the knowledge and patience to understand better in context. Still, Alyssa's writeups are a fun read.

randgoog
1 replies
1d

Same, I wish I knew more about graphics programming. It seems like such a steep learning curve, though, so I get discouraged.

yazzku
0 replies
13h29m

Don't be discouraged, modern graphics APIs really are a mess, but you don't need to understand 1/100th of them to get graphics going. Also, this post is more about programming drivers than programming graphics.

Wowfunhappy
2 replies
20h14m

> Regrettably, the M1 doesn’t map well to any graphics standard newer than OpenGL ES 3.1. While Vulkan makes some of these features optional, the missing features are required to layer DirectX and OpenGL on top. No existing solution on M1 gets past the OpenGL 4.1 feature set.

I'm very curious to know the performance impact of this, particularly compared to using Metal on macOS. (I'm sure the answer is "it depends", but still.)

It's possible the article answers this question, but I didn't understand most of it. :(

ribit
0 replies
6h31m

Alyssa chooses some very odd language here, it seems to me. Yes, Apple GPUs do not support geometry shaders natively, because geometry shaders are a bad design and do not map well to GPU hardware (geometry shaders are known to be slow even on hardware that allegedly supports them; there is a reason why Nvidia went ahead and designed mesh shading). Transform feedback (the ability to write transformed vertex data back to memory) is another feature that is often brought up in these discussions, but Apple GPUs can write to arbitrary memory locations from any shader stage, which makes transform feedback entirely superfluous.

The core of the issue is that Apple chose to implement a streamlined compute architecture, and in the process they cut a lot of legacy cruft and things that were known not to work well. I don't think that the rhetoric of "M1 getting stuck at OpenGL 4.1" is appropriate. I stopped following OpenGL many years ago, so I don't know specifically which features past 4.1 she might be referring to. What I can say is that I'd be very surprised if there is something that OpenGL offers that cannot be done in Metal, but there are plenty of things possible in Metal that cannot be done at all in OpenGL (starting with the fact that the Metal shading language has fully featured pointers).

gilgoomesh
0 replies
19h39m

There isn't necessarily much difference between implementing features in driver compute code versus GPU hardware support. Even the "hardware support" is usually implemented in GPU microcode. It often goes through the same silicon. Any feature could hit a performance bottleneck and it's hard to know which feature will bottleneck until you try.

jauntywundrkind
1 replies
23h39m

> How do we break the 4.1 barrier? Without hardware support, new features need new tricks. Geometry shaders, tessellation, and transform feedback become compute shaders. Cull distance becomes a transformed interpolated value. Clip control becomes a vertex shader epilogue. The list goes on.

I wonder how much of this work is in M1 GPU code, versus how much of the feature-implemented-on-another-feature work could be reused by others.

This feels very similar to what Zink does (running complex OpenGL capabilities via a more primitive Vulkan), except there is no Vulkan backend to target for the M1. Yet.
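
For a flavor of the "transform feedback becomes a compute shader" trick, here's an illustrative host-side sketch in plain GL compute terms (not the actual driver code, which does this at a much lower level; the program name is hypothetical):

    // Run the vertex-processing stage as a compute shader that writes its
    // outputs to a buffer, which is essentially what transform feedback is.
    glUseProgram(vertexStageAsCompute);                // hypothetical program
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, inputVertices);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, capturedOutput);
    glDispatchCompute((vertexCount + 63) / 64, 1, 1);  // one thread per vertex
    glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);    // make the writes visible
    // capturedOutput now holds what transform feedback would have captured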

zozbot234
0 replies
20h23m

More generally, you could execute complex OpenGL or Vulkan on some more-or-less arbitrary combination of CPU soft-rendering and hardware-specific native acceleration support. It would just be a matter of doing the work, and it could be reused across a wide variety of hardware - including perhaps older hardware that may be quite well understood but not usable on its own for modern workloads.

zdimension
0 replies
19h23m

Alyssa Rosenzweig is a gift to the community that keeps on giving. Every one of her blog posts guarantees you'll learn something you didn't know about the internals of modern graphics hardware.

noiv
0 replies
22h45m

This endeavour proves to me that skill beats talkativeness every single day. Just reading the blogs sets my brain on fire. There is so much to unpack. The punch line is not the last but the second sentence; nevertheless you're forced to follow the path into the rabbit hole until you enjoy reading one bit manipulation after another.

If there are ever benchmarks measuring eureka effects per paragraph, Alyssa will lead them all.

Just thanks!