I agree entirely with the author on the limitations of Raylib. I'm currently working on a tower-defense style game that I started in Raylib, but I'm running into many of the same limitations (and more): fullscreen toggling not working consistently across platforms, no way to enumerate screen modes, no way to toggle rendering features at runtime, no way to save compiled shaders, and so on. Having said that, I appreciate Ray's work on this library and will continue to sponsor him. Raylib is great for quickly banging out a prototype, but not much beyond that unless you're okay with living with severe limitations.
Lesson learned, for sure, but I'm too far into the development to swap all of the Raylib stuff out for SDL (or something else) now.
Raylib has a lot of issues that are never going to be fixed, but I wouldn't blame fullscreen on it. Fullscreen is just absolutely unusably FUBARed on Windows and has been for decades. It's probably the same for other platforms. The modern strategy is to just do borderless windowed and pretend true fullscreen doesn't exist.
I have no knowledge about such things but…
This explains a lot as to why I experienced some interesting inconsistency going full screen in different applications in Windows ;)
In Linux/Wayland, Windows-style fullscreen does not exist and has no reason to exist either.
In a composited environment, fullscreen windows are just maximized windows without a border, which triggers some heuristic to unredirect windows (i.e. does not have to pay the price of compositing) when they are the sole thing that is drawn on the screen.
So, on Linux, fullscreen and maximized borderless are the same. Given that Windows is a fully compositing desktop as well, I wonder why the difference still exists.
But what does "Windows-style fullscreen" actually do? It feels like it switches into a different video mode (even if the resolution is the same), and back when my monitor was a CRT, I could even hear a faint click inside it whenever a game entered/exited the full-screen mode. So there was definitely something special going on, but what?
Windows style fullscreen made it so that the game controlled the resolution being sent "over the wire" to the display, and as I understand it, basically gave exclusive display access to the game. This had the interesting effect of delegating all resolution scaling to the display, which was great for CRTs and often very bad on digital displays which (usually) do very muddy bilinear scaling even if being sent a resolution that was an exact integer division of the total display resolution.
The current day approach is to instead do the "large borderless window" and then do software scaling of the image in order to match the display resolution. One cool result of this is you can get much better scaling than your display would do naively. Low-res games really benefit from software integer scaling instead of "whatever scaling the display firmware does, usually making all the edges blurry".
Rescaling in software is so fast now that some graphically intense games can do per-frame scaling in order to keep per-frame latencies below a particular goal threshold, at the cost of your game losing fidelity (getting blurrier). It's so common that most big-name titles now do this.
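Since the thread is about raylib anyway, here's a rough sketch of the "borderless window + software integer scaling" idea in raylib; the resolutions, names, and structure are my own illustration, not code from any particular game:

```c
#include "raylib.h"

int main(void) {
    const int GAME_W = 320, GAME_H = 180;              // native low resolution (example)

    SetConfigFlags(FLAG_WINDOW_UNDECORATED);           // borderless
    InitWindow(800, 450, "integer scaling sketch");    // temporary size
    int monitor = GetCurrentMonitor();
    int monW = GetMonitorWidth(monitor), monH = GetMonitorHeight(monitor);
    SetWindowSize(monW, monH);                          // cover the whole monitor
    SetWindowPosition(0, 0);

    RenderTexture2D target = LoadRenderTexture(GAME_W, GAME_H);

    // Largest whole-number factor that still fits the display.
    int scale = monW/GAME_W;
    if (monH/GAME_H < scale) scale = monH/GAME_H;
    if (scale < 1) scale = 1;
    Rectangle src = { 0, 0, (float)GAME_W, -(float)GAME_H };    // flip Y for render textures
    Rectangle dst = { (monW - GAME_W*scale)/2.0f, (monH - GAME_H*scale)/2.0f,
                      (float)(GAME_W*scale), (float)(GAME_H*scale) };

    while (!WindowShouldClose()) {
        BeginTextureMode(target);                       // draw the game at 320x180
        ClearBackground(BLACK);
        DrawText("low-res scene here", 10, 10, 10, RAYWHITE);
        EndTextureMode();

        BeginDrawing();                                 // blit it up by an integer factor
        ClearBackground(BLACK);
        DrawTexturePro(target.texture, src, dst, (Vector2){ 0, 0 }, 0.0f, WHITE);
        EndDrawing();
    }

    UnloadRenderTexture(target);
    CloseWindow();
    return 0;
}
```

The point is that the game always renders at its native resolution and only the final blit is scaled by a whole number, so the display firmware never gets a chance to do its own blurry scaling.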
Some legacy apps (that is, games) react very badly when they're not in full control of their window sizing, priority, input and device polling, etc.
At least on Xorg it isn't an issue, but to ensure cross-compatibility your solution trumps all. I cannot comment on other platforms.
In Xorg it is definitely an issue. Virtually all games use a borderless window because actual X11 fullscreen is awful: it captures all input events and changes the system's screen resolution to whatever the application is running at.
There is no all-encompassing "actual X11 fullscreen"; X11 does not have an API for applications to take control over the entire screen like you'd find in, e.g., DirectX 9 (what most people think of as "real fullscreen" in Windows). What games (and SDL1) did ages ago was to use APIs that changed the video mode and APIs that captured input, both of which are completely separate (they're even from different sources: the video mode API is from an extension, the capture API is from the core protocol) - and then create a borderless window the normal way.
There are three different aspects here, each one can be done differently and all can be (and have been) mixed or ignored:
1. Set the video mode. You can either do that or not, and use whatever the desktop is running at. You may want to change the video mode not just for resolution but to use a different refresh rate. Both native games (e.g. Quake source ports using SDL2) and Wine still do that (you can disable the modesetting in Wine via the registry so that games see the desktop resolution as the only available one).
2. Create a fullscreen window. There are two ways to do that: either create a window that bypasses redirection ("redirection" here means that the window manager won't manage it; it has nothing to do with compositing - if a compositor is used - though some compositors also use it as a hint for that) and have it cover the entire desktop area (you may want to use RandR to figure out the monitor area for multiple monitors), or alternatively set the fullscreen hint on the window and let the window manager handle this. Note that some window managers may not support this or have bugs with it.
3. Handle input. X11 provides an API to capture mouse and/or keyboard input, though strictly speaking this is not really required and TBH I do not see why games ever did that (I can only think of conflicts with other programs, but I'd expect this to be something for the user to deal with). In any case, it is rare for games to do that these days. You can just handle input "normally" like any other program. You can provide a (by default disabled) option if you really want to, though, as the API for capturing is trivial.
For my last game engine I did #1 (with an option to use whatever the desktop resolution is) and #2 (using the fullscreen flag by default, with the redirect flag as an alternative for window managers that had issues with the fullscreen flag), but I didn't bother with #3.
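For what it's worth, approach #2 with the fullscreen hint is only a few lines of Xlib. A minimal sketch (the helper name is mine; the atoms and calls are standard EWMH/Xlib, not code from the engine above):

```c
#include <X11/Xlib.h>
#include <string.h>

/* Ask the window manager to make an existing window fullscreen via the
 * EWMH _NET_WM_STATE_FULLSCREEN hint (approach #2 above). */
static void set_fullscreen_hint(Display *dpy, Window win) {
    Atom wm_state   = XInternAtom(dpy, "_NET_WM_STATE", False);
    Atom fullscreen = XInternAtom(dpy, "_NET_WM_STATE_FULLSCREEN", False);

    XEvent ev;
    memset(&ev, 0, sizeof(ev));
    ev.xclient.type         = ClientMessage;
    ev.xclient.window       = win;
    ev.xclient.message_type = wm_state;
    ev.xclient.format       = 32;
    ev.xclient.data.l[0]    = 1;           /* _NET_WM_STATE_ADD */
    ev.xclient.data.l[1]    = fullscreen;

    /* The request goes to the root window; the WM applies it (if supported). */
    XSendEvent(dpy, DefaultRootWindow(dpy), False,
               SubstructureNotifyMask | SubstructureRedirectMask, &ev);
    XFlush(dpy);
}
```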
I do the same. I do wonder, however, whether there are performance issues at the OS level from running (on a 4K screen) 4K borderless vs. 4K fullscreen.
Like, would the whole "give the application maximum priority" thing work the same?
Whether there's a performance detriment from borderless-fullscreen vs exclusive-fullscreen depends on how a program presents its frames (and on the OS, of course).
On Windows, under ideal circumstances borderless-fullscreen performs identically to exclusive-fullscreen as Windows will let the program skip the compositor and present its frames more-or-less directly to the display. (Under really ideal circumstances the same applies to bordered non-fullscreen windows.)
If the compositor can't be skipped, borderless-fullscreen can be a bit brutal on performance: on a 4K 160Hz screen I've experienced an additional 40+ milliseconds of frame latency purely from borderless-fullscreen being used.
The Special K wiki has some pages that go into more detail about the situation on Windows: https://wiki.special-k.info/SwapChain, https://wiki.special-k.info/Presentation_Model
This is absolutely bonkers.
Hah, good to know I am not the only one.
True, it’s just one of the many issues that came to mind while writing this.
My solution to the issue is also full screen borderless combined with resolution scaling.
Wow this is kind of insane. About this
The developer answered this
in c and c++ this is not just normal but almost unavoidable. if your function takes a t* and someone casts a random int to a t* and passes it in, it is gonna segfault. there's no possible way to validate it, though in theory you could open /proc/self/maps and iterate through it to catch the segfault cases, or install a segfault handler
True, that is not something a function ought to defend against.
However, the complaint was not about that; it was about not checking whether the dataSize parameter is NULL.
I don't really have a problem with functions that segfault when given NULL pointer parameters as long as this is clearly documented!
Sometimes, however, you just need to be familiar with enough projects in C that common sense gets built. In the specified example:
I expected that a function that returns an array needs to tell the caller how large that array is. This specific function is not one I would complain about. These things are usually documented in the headers. In this case it says:
So, yeah, it's pretty clear to me that the number of bytes has to be returned somewhere, and there's only one parameter called `dataSize`, so this isn't something I consider to be a valid complaint. [EDIT: Escaped pointers, and added last paragraph]
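For context, a typical call looks something like the sketch below, assuming raylib's current signature `unsigned char *LoadFileData(const char *fileName, int *dataSize)` (older versions used an `unsigned int *bytesRead` parameter instead):

```c
#include "raylib.h"

void ProcessFile(const char *path) {
    int dataSize = 0;
    unsigned char *data = LoadFileData(path, &dataSize);   // passing NULL here would crash
    if (data != NULL) {
        for (int i = 0; i < dataSize; i++) {
            // ... do something with data[i] ...
        }
        UnloadFileData(data);
    }
}
```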
And if every C project takes this same philosophy to poor documentation, how exactly is a new person learning C supposed to build that common sense? Brushing off poor documentation with "you just need more experience" is not a valid response. The documentation for that function could be much clearer without being much longer.
My point is not that it was good or acceptable, my point is that, while not perfect, it isn't bad enough to warrant the moniker of `insane`.
WARNING unpaired | unescaped *'s !! :-)
grrr, markup
Thanks, fixed.
yeah. unlike an arbitrary garbage pointer, a null pointer is something the callee could detect. but on linux or windows the operating system will reliably detect it for you, and in a way that makes it a lot easier to debug: it segfaults, making it obvious that there's a bug, and your debugger will show you the stack trace leading up to the LoadFileData call with the null pointer parameter. and, as you point out in your other comment, in the very next line after your call to LoadFileData, you need the data size, and if you passed in a statically null pointer, you don't have it, so you realize you're going to need to pass in a pointer to an actual int
so, while the documentation could and should be more explicit than the single telegraphic line you quote, the function's behavior is fine, and detecting and trying to handle the null pointer would make its behavior worse and harder to debug (except on like an arduino or ms-dos or something)
not every c api description needs to be an introductory tutorial for c
I think the word 'insane' is going too far to describe the behaviour of the specified function.
It returns an array of bytes. If you, the programmer, wrote a line that called that function, on the very next line you are going to try to use the array, realise that you don't know the length, and realise that the `NULL` you passed in on the line above is probably the output for the length!
In order to actually write a call with `NULL` for the dataSize argument, the programmer needs to be clueless about how to write a for loop.
So, no, I can't easily see a situation where a programmer accidentally uses a `dataSize` parameter of `NULL`, because that would mean they don't know that arrays in C have no length information, which is C 101.
Arrays in C have length information. Pointers do not.
Sure, but they don't carry that length information when being returned from a function.
Not in any sense that is actually useful to the programmer, at runtime at least. You cannot ask an array how big it is.
Pointers actually do have length information as well, otherwise free() would not work. But it's the same deal as with arrays - there is no way to access it outside of compile-time constants, if those are even present.
Quick appreciation for the detail that Raylib is named after the creator's name Ray and not ray-tracing, fun.
Things Unexpectedly Named After People: https://notes.rolandcrosby.com/posts/unexpectedly-eponymous/
How do you know Ray was not named after ray-tracing?
The author's name is the first hint, and the lack of ray tracing the second
You missed the joke, so let me ruin it by explaining it:
What if Ray the person was named after ray-tracing by his parents?
Plot twist: Ray Tracing was the name of a person that was very important to both of them and unfortunately passed away, so they named their son Ray as a tribute.
I can confirm that. And Ms. Litre went to the same school and was best friends with Mr. Tracing. [https://en.wikipedia.org/wiki/Claude_%C3%89mile_Jean-Baptist...]
Ah shit, I was That Guy on the internet, sorry. I guess it happens to everyone eventually.
I choose to interpret it as: How do you know that *the author* was not named after ray-tracing?
Which is amusing :)
Such a good list, worth a submission of its own IMHO
It got good traction a couple of times before; many more fun examples in the comments.
https://news.ycombinator.com/item?id=39462516
https://news.ycombinator.com/item?id=23888725
I wish they'd add French drains.
This made me want to look at raylib. It comes with some cute examples that run using WebAssembly: https://www.raylib.com/examples.html
One thing that's always bothered me about Wasm and browser 3d/2d graphics is that I often find minor issues such as scrolling. Look at the example called "Background scrolling & parallax" here: https://www.raylib.com/examples.html
I've tested on several devices and it's definitely not smooth scrolling, unless there's something wrong with my eyes.
How can 2D smooth scrolling not be a solved problem in 2024?
In that sample the foreground scrolls perfectly smoothly for me, but the background looks jittery. This indicates to me that it's not a platform issue at all. That sample is just doing something weird with the background.
Yes, the background is odd but the foreground is definitely not smooth. I see small little jitters occasionally. At one point I had to wait 15 seconds for it to jitter, though.
Yes, the background and foreground are quite jittery for me on Firefox, and I'm almost certain it's the browser's own requestAnimationFrame that's the problem.
Update: Although, having a closer look at the scene, I see it's pixel art, so I bet the author is snapping floating point positions to a pixel point to prevent sub-pixel blurring.
Another small update: I was sure requestAnimationFrame was locked to 60fps, but I noticed on Chrome the other day it was 144 Hz, the full speed of my monitor.
yeah, http://canonical.org/~kragen/sw/dev3/qvaders is unplayable on 120-hertz monitors because i'm running the game physics from raf ;)
This jitteriness is because the sample doesn't have antialiasing enabled (since it's pixel art) and the background scrolling is 0.1 pixels per frame, which means every 10 frames it snaps 1 pixel. The scrolling is also updated by a fixed amount per frame instead of looking at deltaTime, so if there are lags or small differences in frame time this might look choppy.
But I think it's more meant to demonstrate drawing parallax layers rather than subpixel scrolling.
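As a rough illustration of the deltaTime point (my own sketch, not the sample's code; the asset path and scroll speed are made up):

```c
#include "raylib.h"

int main(void) {
    InitWindow(800, 450, "delta-time scrolling sketch");
    Texture2D background = LoadTexture("background.png");   // hypothetical asset
    float scrollX = 0.0f;
    const float SCROLL_SPEED = 6.0f;                         // pixels per second (made up)

    while (!WindowShouldClose()) {
        // Advance by elapsed time, not by a fixed amount per frame,
        // so frame-time variations don't change the apparent speed.
        scrollX -= SCROLL_SPEED*GetFrameTime();
        if (scrollX <= -(float)background.width) scrollX = 0.0f;

        BeginDrawing();
        ClearBackground(RAYWHITE);
        // Snap to whole pixels only at draw time to avoid sub-pixel blur on pixel art.
        DrawTexture(background, (int)scrollX, 0, WHITE);
        DrawTexture(background, (int)scrollX + background.width, 0, WHITE);
        EndDrawing();
    }

    UnloadTexture(background);
    CloseWindow();
    return 0;
}
```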
So by modifying it to look at delta time it would be smooth?
Because it's a surprisingly tricky topic on modern operating systems [1], and even trickier in web browsers (smooth scrolling was actually much easier to achieve on hard-realtime systems like the 8- and 16-bit home computers of the '80s and early '90s).
TL;DR: if you base your animation- or scroll-speed on the 'raw' measured time between two frames, you'll get micro-stutter because it's pretty much impossible to obtain a non-jittery frame duration on modern operating systems or web browsers, all you can do is try to remove the noise via filtering, or 'align' your measured frame duration with the display refresh interval, which on some platforms cannot be queried.
In web browsers the most important problem is that you can't measure the exact frame duration (which is fallout from Spectre/Meltdown), or obtain a precise 'presentation timestamp', or even query the display refresh frequency.
Even in the native OS APIs that provide a presentation timestamp (like DXGI on Windows or CVDisplayLink on macOS) that timestamp has considerable jitter and has not much to do with the actual presentation time when the frame becomes visible to the user.
And as soon as you base your animation timings on such a jittery timestamp you'll get micro-stutter (the easiest way to get smooth animation is actually to assume a fixed frame duration, but then your animation speed will be tied to the display refresh rate).
It's often possible to eliminate the timing jitter with 'noise removal' filters or just tracking an average over the last couple dozen frames, but those then may behave funny in situations where the frame duration changes drastically (such as moving a window between displays with different refresh rate, or when rendering stops and then resumes because the window or browser tab is fully obscured and then becomes visible again).
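The "average over the last couple dozen frames" idea is simple enough to sketch; everything here (names, window size) is made up for illustration:

```c
#define SMOOTH_FRAMES 32

typedef struct {
    double samples[SMOOTH_FRAMES];
    int    count;
    int    next;
} FrameTimeSmoother;

// Feed in the raw (jittery) measured frame duration, get back a smoothed value
// to drive animations with. As noted above, you'd also want to reset the history
// when the refresh rate changes drastically (monitor switch, tab resumed).
double smooth_frame_time(FrameTimeSmoother *s, double raw_dt) {
    s->samples[s->next] = raw_dt;
    s->next = (s->next + 1) % SMOOTH_FRAMES;
    if (s->count < SMOOTH_FRAMES) s->count++;

    double sum = 0.0;
    for (int i = 0; i < s->count; i++) sum += s->samples[i];
    return sum/s->count;
}
```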
PS: Raylib's frame pacing code on the web is also a bit on the crude side [2].
...e.g. it just sleeps for 16 milliseconds, and relies on ASYNCIFY to enable a traditional render loop in browsers. It would actually be better to use a frame callback via requestAnimationFrame (or the Emscripten wrapper function emscripten_request_animation_frame), but this means giving up the cross-platform 'own the game loop' application model. Not that requestAnimationFrame alone solves any of the above mentioned time jitter problems though.
[1] https://medium.com/@alen.ladavac/the-elusive-frame-timing-16...
[2] https://github.com/raysan5/raylib/blob/f1007554a0a8145060797...
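For reference, the requestAnimationFrame-driven alternative looks roughly like this with Emscripten's html5.h wrapper (the frame body is a placeholder; only the Emscripten calls are real):

```c
#include <emscripten/html5.h>

// Called by the browser once per display frame (rAF), instead of the
// application sleeping inside an ASYNCIFY'd loop.
static EM_BOOL frame(double time_ms, void *user_data) {
    // time_ms is the rAF timestamp; render one frame here.
    // render_frame(time_ms);    // hypothetical game render function
    return EM_TRUE;              // keep the callback firing every display frame
}

int main(void) {
    emscripten_request_animation_frame_loop(frame, NULL);
    return 0;                    // main() returns; the browser now drives the loop
}
```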
Thanks for this comment; this "micro stutter" thing you mentioned is something that has had me really scratching my head in my game engine project!
Do you have any comment on whether frame blending (actually using a mix of two frame states to produce the rendering) would be a workable solution?
https://github.com/ensisoft/detonator
the answer to your last question is "inner platform effect" and "second system effect".
Raylib is easy to get started with, but once the project gets a little complex it bites back. SDL, on the other hand, takes more time to set everything up but scales extremely well as the project gets bigger and bigger. Also, SDL is exceptionally well-written code.
And exceptionally well-written documentation, too! One of the first big-ish projects of mine was a raytracer I wrote in C with SDL.
Got a link to that raytracer?
It would be like a regular raytracer, but instead of writing pixels to a file, you write them to your buffer/texture.
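A minimal sketch of that buffer-to-texture idea in SDL2 (the window size, pixel format, and the stand-in gradient are my choices, not from the parent's project):

```c
#include <SDL.h>
#include <stdint.h>

#define W 640
#define H 480

int main(void) {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("raytracer sketch", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, W, H, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
    SDL_Texture *tex = SDL_CreateTexture(ren, SDL_PIXELFORMAT_ARGB8888,
                                         SDL_TEXTUREACCESS_STREAMING, W, H);
    static uint32_t pixels[W*H];   // CPU-side framebuffer the tracer writes into

    int running = 1;
    while (running) {
        SDL_Event e;
        while (SDL_PollEvent(&e)) if (e.type == SDL_QUIT) running = 0;

        // This is where the per-pixel ray tracing would happen;
        // a simple gradient stands in for it here.
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                pixels[y*W + x] = 0xFF000000u | ((uint32_t)(x*255/W) << 16)
                                              | ((uint32_t)(y*255/H) << 8);

        SDL_UpdateTexture(tex, NULL, pixels, W*(int)sizeof(uint32_t));
        SDL_RenderCopy(ren, tex, NULL, NULL);
        SDL_RenderPresent(ren);
    }

    SDL_DestroyTexture(tex);
    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```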
I switched from raylib to Sokol some time ago and it's the best. Super simple and single header C code, no dependencies, cross platform code and shaders, etc etc. I've shipped a steam game and I plan to continue using Sokol in the future.
Do you mind sharing a link or the game's name? Despite all the advice out there to pick up a game engine, I too am pursuing the goal of making a game from scratch, using simple parts of C++ and SFML for now, but I was also looking at SDL and Sokol. What was your experience? Are you a solo dev? Was it worth it?
Not the parent, but I also use sokol as the base for my engine. I've shipped three 3D web games with it, and honestly it was the best technical decision I ever made! There's just so much to be said for owning all your dependencies (as far as possible; sokol is still a dependency, but it's way more manageable than a 3rd-party game engine).
Do you mind sharing your games' details? No judging, I promise :) I just need examples from people like me and not AAA studios, for motivation's sake. I literally was called an idiot for not choosing Unity or Unreal, and was turned down for collaboration with non-tech people just for that. Game dev landscape is crazy these days.
Kind of share the same feeling. I started a project about 2 months ago and chose to use Raylib, and while the basic stuff is really simple to get going, the more you use it, the more random minor inconveniences you run into. At this point I've invested too much into this project to back out of using Raylib.

My biggest issue with it right now is the font handling and text rendering; I think I'll have to switch from TTF fonts to pre-baked bitmap fonts (which will suck for localization later). The two biggest features I miss after switching from Love2D are rendering multi-color text (with Raylib, you have to manually split the text into chunks based on color markup, then apply a width offset, then call the draw function for each chunk, while also taking into account things like line breaks; see the sketch below), and being able to easily chop up textures and make them wrap or tile - Raylib used to have a tiled texture draw function but for some reason they removed it. Not to mention that it seems to tank the FPS when you draw a lot of text on the screen (perhaps the draw call batching is broken for text?).
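A rough sketch of that manual chunking (the struct and function names are made up; DrawTextEx/MeasureTextEx are the actual raylib calls):

```c
#include "raylib.h"

typedef struct { const char *text; Color color; } TextChunk;

// Draw pre-split chunks of text side by side, each in its own color.
static void DrawChunks(Font font, const TextChunk *chunks, int count,
                       Vector2 pos, float fontSize, float spacing) {
    float x = pos.x;
    for (int i = 0; i < count; i++) {
        DrawTextEx(font, chunks[i].text, (Vector2){ x, pos.y },
                   fontSize, spacing, chunks[i].color);
        // Offset the next chunk by the measured width of this one.
        x += MeasureTextEx(font, chunks[i].text, fontSize, spacing).x;
        // (Line breaks inside a chunk still need extra handling, as noted above.)
    }
}

// Usage: DrawChunks(GetFontDefault(),
//                   (TextChunk[]){ { "HP: ", WHITE }, { "42", RED } }, 2,
//                   (Vector2){ 20, 20 }, 20.0f, 1.0f);
```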
Why did you switch from Love2D?
I need multithreading, async TCP, and have to implement systems that benefit from type safety. Though, I do still use Lua (using sol2), for interface modules, but I miss the features that Love2D provided on the framework level.
I'm the organizer for the conference [0] mentioned in TFA.
We had a professional UI/UX designer react to ShapeUp [1], and one of the things she commented on was the font being hard to visually parse.
I laughed a little when the author yelled "raylib!" to make sure blame was assigned appropriately XD. I'm currently the top GitHub sponsor for raylib, so there's no hate, but I wish he changed some of his defaults.
[0] https://handmadecities.com/seattle
[1] https://vimeo.com/887532756/2972a82e55#t=49m58s (timestamped)
IIRC it defines some common words too, like all the color names, and uses a lot of names that should be prefixed. Good otherwise.
At least they're all-caps, but as somebody who writes C++ and uses Raylib, I just wrapped it in a namespace in my project that I include, like so (note that cstdio must be included before raylib if you're using it from C++):
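(The original snippet isn't shown in the thread; this is a rough guess at what such a wrapper looks like.)

```cpp
#include <cstdio>          // per the note above: include before raylib when using C++

namespace raylib {
    #include "raylib.h"    // raylib symbols now live under raylib::
}

// e.g. raylib::InitWindow(800, 450, "title"); raylib::DrawText(...);
```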