That's a good article. He's right about many things.
I've been writing a metaverse client in Rust for several years now. Works with Second Life and Open Simulator servers. Here's some video.[1] It's about 45,000 lines of safe Rust.
Notes:
* There are very few people doing serious 3D game work in Rust. There's Veloren, and my stuff, and maybe a few others. No big, popular titles. I'd expected some AAA title to be written in Rust by now. That hasn't happened, and it's probably not going to happen, for the reasons the author gives.
* He's right about the pain of refactoring and the difficulties of interconnecting different parts of the program. It's quite common for some change to require extensive plumbing work. If the client that talks to the servers needs to talk to the 2D GUI, it has to queue an event (a sketch of that kind of event queue follows these notes).
* The rendering situation is almost adequate, but the stack isn't finished and reliable yet. The 2D GUI systems are weak and require too much code per dialog box.
* I tend to agree about the "async contamination" problem. The "async" system is optimized for someone who needs to run a very large web server, with a huge number of clients sending in requests. I've been pushing back against it creeping into areas that don't really need it.
* I have less trouble with compile times than he does, because the metaverse client has no built-in "gameplay". A metaverse client is more like a 3D web browser than a game. All the objects and their behaviors come from the server. I can edit my part of the world from inside the live world. If the color or behavior or model of something needs to be changed, that's not something that requires a client recompile.
The people using C# and Unity on the same problem are making much faster progress.
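For illustration, a minimal sketch of that kind of network-to-GUI event queue; the UiEvent type and its variants here are hypothetical stand-ins, not the client's real events:

    use std::sync::mpsc;
    use std::thread;

    // Hypothetical events crossing the network-client -> GUI boundary.
    enum UiEvent {
        ChatMessage { from: String, text: String },
        TeleportOffer { region: String },
    }

    fn main() {
        let (tx, rx) = mpsc::channel::<UiEvent>();

        // Network thread: never touches GUI state directly, only queues events.
        let net = thread::spawn(move || {
            tx.send(UiEvent::ChatMessage {
                from: "server".into(),
                text: "hello".into(),
            })
            .unwrap();
        });

        // GUI side drains the queue (e.g. once per frame).
        while let Ok(ev) = rx.recv() {
            match ev {
                UiEvent::ChatMessage { from, text } => println!("[{from}] {text}"),
                UiEvent::TeleportOffer { region } => println!("teleport to {region}?"),
            }
        }
        net.join().unwrap();
    }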
> I'd expected some AAA title to be written in Rust by now.
I'm disinclined to believe that any AAA game will be written in Rust (one is free to insert "because Rust's gamedev ecosystem is immature" or "because AAA game development is increasingly conservative and risk-averse" at their discretion), yet I'm curious what led you to believe this. C++ became available in 1985, and didn't become popular for gamedev until the turn of the millennium, in the wake of Quake 3 (buoyed by the new features of C++98).
Exactly, it's all about the ecosystem and very little about the language features
Disagree: the adoption of C++ was more about Moore's law than ecosystem, although having compilers that were beginning to be not completely rubbish also helped.
Also C++ could be adopted incrementally by C developers. You could use it as “C with classes”, or just use operator overloading to make vector math more tolerable, or whatever subset that you happened to like.
So there’s really three forces at play in making C++ the standard:
1) The Microsoft ecosystem. They literally stopped supporting C by not adopting the C99 standard in their compiler. If you wanted any modern convenience, you had to compile in C++ mode. New APIs like Direct3D were theoretically accessible from C (via COM) but in practice designed for C++.
2) Better compilers and more CPU cycles to spare. You could actually count on the compiler to do the right thing often enough.
3) Seamless gradual adoption for C developers.
Rust has a good compiler, but it lacks that big ticket ecosystem push and is not entirely trivial for C++ developers to adopt.
I'd say Rust does have that big ticket ecosystem push. Microsoft has been embracing Rust lately, with things like official Windows bindings [1].
The bigger problem is just inertia: large game engines are enormous.
[1]: https://github.com/microsoft/windows-rs
I'd say the inertia is far more social than codebase-size related. Right now, whilst there are pockets of interest, there is no broader reason to switch. Bevy, as the leading contender, isn't going to magic its way to being capable of shipping AAA titles unless a studio actually adopts it. I don't think it's actually shipped a commercially successful indie game yet.
Also game engines emphatically don't have to be huge. Look at Balatro shipping on Love2d.
There are a few successful games like Tunnet [1] written in Bevy.
[1]: https://store.steampowered.com/app/2286390/Tunnet/
Looks cool and well received but at ~300ish reviews hardly a shining beacon if we extrapolate sales from that. But I'll say that's a good start.
Speaking as a Godot supporter, I don't think sales numbers of shipped games are relevant to anyone except the game's developer.
When evaluating a newer technology, the key question is: are there any major non-obvious roadblocks? A finished game (with presumably decent performance) tells you that if there are problems, they're solvable. That's the data.
It doesn't tell you anything about velocity, which is by far the most important metric for indie devs.
After all, the studio could have expended (maybe) twice as much effort to get a result.
Or maybe Rust allowed them to develop twice as fast. Who knows? We're going by data here, and this data point shows that games can be made in Bevy. No more and no less.
Agreed. We've learned a lot from Godot, by the way. I consider all us open source engines to be in it together :)
Game engines are tools not fan clubs. It’s reasonable to judge them on their performance for which they are designed. As someone who cares about the commercial viability of their technology choices this is a small but positive signal.
What it tells me is someone shipped something and it wasn’t awful. Props to them!
Balatro convinced me that Love2D might be a good contender for my next small 2D game release. I had no idea you could integrate Steamworks or 2D shaders that looked that good into Love2D. And it seems to be very cross-platform, since Balatro released on pretty much every platform on day 1 (with some porting help from a third party developer it seems like).
And since it's Lua based, I should be able to port a slightly simpler version of the game over to the Playdate console.
I'm also considering Godot, though.
There’s a pretty big difference between the Playdate and anything else in performance but also in requirements for assets. So much so I hope your idea is scoped accordingly. But yeah Love2d is great.
It is. I've already half ported one of my games to the Playdate (and own one), I'm pretty aware of its capabilities.
The assets are what I struggle with most. 1-bit graphics that look halfway decent are a challenge for me. In my half-ported game, I just draw the tiles programmatically, like I did in the Pico-8 version (and they don't look anywhere near as good as a lot of Playdate games, so I need to someday sit down and try to get some better art into it).
Repo contributor here, just to curb some expectations a bit: it's one very smart guy (Kenny), his unpaid volunteer sidekick (me), and a few unpaid external contributors. (I'm trying to draw a line between those with and without commit access, hence all the edits.)
There's no other internal or external Microsoft /support/ that I'm aware of. I wouldn't necessarily use it as a signal of the company's intentions at this time.
That said, there are Microsoft folks working on the Rust compiler, toolchain, etc. side of things too. Maybe those are better indicators!
That's disappointing on Microsoft's part, because their docs make it seem like windows-rs is the way of the future.
Thanks for your work, though!
I wish Microsoft had any direction on the 'way of the future' for native apps on Windows
If they did publish a “way of the future” direction, would you believe them?
Fool me N times, shame on them; fool me N+1 times, shame on me. That sort of thing.
I'd have bought into MAUI if there was Linux support in the box.
The most infuriating thing is their habit of rebuilding things just about the time they reach a mature and highly stable state, creating an entirely new unstable and unreliable system. And then, about the time that system almost reaches a stable state, it's scrapped and it all starts over again.
WPF -> UWP -> WinUI -> WinUI 2 -> WinUI 3 is just such a ridiculous chain. WPF was awesome, highly extensible, and could have easily and modularly been extended indefinitely - while also maintaining its widespread (if unofficial) cross platform support and just general rock solid performance/stability. Instead it's the above pattern over and over and over.
And now it seems WinUI 3 is also dead, alas without even bothering with a replacement. Or maybe that's Xamarin, wait, I mean MAUI? Not entirely joking - I never bothered to follow that seemingly completely parallel system doing pretty much the same things. On the bright side, this got me to finally migrate away from Microsoft UI solutions, which has made my life much more pleasant since!
Don't be, they also killed C++/CX, and even went to CppCon 2016 to tell us what a great future C++/WinRT would bring us.
Now, almost a decade later, VS tooling is still not there, stuck in an ATL/VC++ 6.0-like experience (they blame it on the VS team), C++/WinRT is in maintenance mode with only bug fixes, and all the fun is on Rust/WinRT.
I would never trust this work for production development.
Yes, the Google folks are also funding efforts to improve Rust/C++ interop, per https://security.googleblog.com/2024/02/improving-interopera...
Thanks for the link. This one was also posted a while back in a Rust thread, and when I first read it, I thought Google had used Rust in the V8 sandbox. But re-reading it, it seems that the article uses Rust as an ‘example’ of a memory-safe language and does not explicitly say that V8 uses Rust. Maybe someone with more knowledge can confirm that Rust was (or was not) used in the V8 Google Chrome sandbox example…
https://v8.dev/blog/sandbox
Rust is not used in V8, to my knowledge.
So far I am way less productive in rust than in any language I've ever used for actual work, so to rewrite an entire game engine would seem like commercial suicide.
"so far" is doing a lot of heavy lifting there =)
I was the same the first two times I tried to use rust (earnestly). However, one day it just "clicked" and my productivity exceeds that of almost anything else, for the specific type of work I'm doing (scientific computation)
I think we shouldn't expect any language to lead different programmers to the same experiences. Rust has the initial steep learning curve, and after that it's a matter of taste whether one is willing to forge on and turn it into a honed tool. Also, I think it's clear that Rust excels in some fields far more naturally than in others. Making blanket statements about how Rust, or any language, is (un)productive is a disservice to everyone.
Not true anymore; C11 and C17 are either supported or coming:
https://devblogs.microsoft.com/cppblog/c11-and-c17-standard-...
Not really relevant to 30 years ago though.
Theoretically accessible describes the experience of trying to use D3D from C very well!
I was trying to use it with some kind of GCC for Windows. The C++ part was still lacking some required features, so it was advised to use D3D from C instead of C++. There were some helper macros, but overall I was glad when Microsoft started to release their Express (and later Community) Editions of Visual Studio.
I access D3D(11) from C in my libraries and tbh it's not any different from C++ in terms of usability (only difference is that the "this" argument and vtable indirection is implicit in C++, but that's just syntax sugar that can be wrapped in a macro in C).
That description of problems bodes well for Zig
I worked on many of Activision's games from 1995 to 2000, and C++ was the overwhelming choice of programming language for PC games. C was more common for console. In 1996 the quality of the MSFT IDE/compiler, plus the CPUs available at the time, was such that it could take an hour to compile a big game. By 1998 it was a few minutes. As I recall, I think MSFT purchased another company's compiler and that really changed Visual Studio.
I was a developer on the Microsoft C++ compiler team from 1991 to 2006. We definitely didn't purchase someone else's compiler in that time. We looked at the EDG front end at various times but never moved over to it while I was there.
Perhaps the speed-up you remember had something to do with the switch-over from 16 bits to 32, which would have been the early to mid 90s. Or you're thinking of Microsoft's C compiler starting from Lattice C, back in the 80s before my time. There was also a lot of work done on pre-compiled headers to speed compilation in the latter half of the 90s (including some that I was responsible for).
I heard that early versions of C++ IntelliSense from Visual Studio used Edison Design Group's (EDG) front end. Is that true? No trolling here -- honest question. If yes, are they still using it now?
Not true by the time I retired in 2007, but I've got a vague memory of talking to someone on the C++ front-end team some time after that and EDG for IntelliSense being mentioned. So no idea if that's really true or not, and if so, whether that's true today.
I was heavily involved in the first version of C++ IntelliSense, roughly 1997?, and it was all home-grown. It was also a miracle it worked at all. I've blocked out most of the ugly details from my memory, but parsing on the fly with a fast enough response time to be useful in the face of incomplete information about which #if branches to take and, especially, template definitions was a tower of heuristics and hacks that barely held together. Things are much better nowadays with more horsepower available to replace those heuristics.
I was a teenager at that point. I learnt C in the early 90s and C++ after 96 IIRC. Didn’t start professionally in games until 2004 though!
C++ classes with inheritance are a pretty good match for objects in a 3D (or 2D) world, which is why C++ became popular with 3D game programmers.
Yeah, OOP makes sense for games. The language will matter a bit for which one takes off, but anything will work given enough support. Like, Python doesn't inherently make a lot of sense for data processing or AI, but it's good enough.
OOP kind of goes out the window when people start using entity component systems. Of course, like the author, I'm not sure I'll need ECS since I'm not building a AAA game.
Had to look up ECS to be honest, and it's pretty much what I already do in general dev. I don't care to classify things, I care what I can do with something. Which is Rust's model.
Sorry I got lost in that sentence. What is Rust's model?
Rust has traits on structs instead of using inheritance. Aka composition.
You can also have structs be generic over some "tag" type, which when combined with trait definitions gets you quite close to implementation inheritance as seen in C++ and elsewhere. It's just less common because usually composition is all that's required.
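A minimal sketch of that compositional style, with made-up type and trait names; capabilities are expressed as traits rather than as a place in a class hierarchy:

    // Behavior comes from trait impls, not from a base class.
    trait Render {
        fn render(&self);
    }

    trait Physics {
        fn step(&mut self, dt: f32);
    }

    struct Crate {
        pos: (f32, f32),
    }

    impl Render for Crate {
        fn render(&self) {
            println!("crate at {:?}", self.pos);
        }
    }

    impl Physics for Crate {
        fn step(&mut self, dt: f32) {
            self.pos.1 -= 9.8 * dt; // toy gravity
        }
    }

    // Generic code asks "what can you do?", not "what are you?".
    fn draw_all(items: &[&dyn Render]) {
        for item in items {
            item.render();
        }
    }

    fn main() {
        let c = Crate { pos: (0.0, 10.0) };
        draw_all(&[&c]);
    }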
But that... wasn't in your comment at all...
If I say "I don't care about safety, I care about expressiveness. Which is Rust's model"... "which" has to refer to one of the other things I just mentioned (safety or expressiveness) not some other concept.
Even PHP has traits by now. Languages tend to incorporate other languages' successful features. There is, of course, a risk of feature inflation. Some languages take avoiding that inflation as a goal, such as Zig, or arrive there as a byproduct of being very focused on a specific use case, like AWK.
Interfaces or traits are not ECS though. ECS is mostly concerned with how data is laid out in memory for efficient processing. The composability is (more or less) just a nice side effect.
This is correct. I wonder how Rust models SoA (structure-of-arrays) with borrowing. Is it doable, or does it become very messy?
I usually have some kind of object that superficially looks like OOP but points all its features into the SoA. In Rust I assume all that would be borrowing and pointing somewhere else via slices or similar?
AFAIK tagged-index-handles are typically used for this (where the tag is a generation-counter to detect 'dangling handles'), which more or less side-steps the borrow checker restrictions (e.g. see https://floooh.github.io/2018/06/17/handles-vs-pointers.html).
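Roughly the pattern from that article, as a hand-rolled sketch (simplified; a real pool would also deal with generation wrap-around and so on):

    // Generational index handle: the generation detects stale handles
    // after a slot has been freed and reused.
    #[derive(Copy, Clone, PartialEq)]
    struct Handle {
        index: u32,
        generation: u32,
    }

    struct Slot<T> {
        generation: u32,
        value: Option<T>,
    }

    struct Pool<T> {
        slots: Vec<Slot<T>>,
        free: Vec<u32>,
    }

    impl<T> Pool<T> {
        fn new() -> Self {
            Pool { slots: Vec::new(), free: Vec::new() }
        }

        fn insert(&mut self, value: T) -> Handle {
            if let Some(index) = self.free.pop() {
                let slot = &mut self.slots[index as usize];
                slot.value = Some(value);
                Handle { index, generation: slot.generation }
            } else {
                self.slots.push(Slot { generation: 0, value: Some(value) });
                Handle { index: (self.slots.len() - 1) as u32, generation: 0 }
            }
        }

        fn remove(&mut self, h: Handle) {
            if let Some(slot) = self.slots.get_mut(h.index as usize) {
                if slot.generation == h.generation && slot.value.is_some() {
                    slot.value = None;
                    slot.generation += 1; // invalidate outstanding handles
                    self.free.push(h.index);
                }
            }
        }

        fn get(&self, h: Handle) -> Option<&T> {
            self.slots
                .get(h.index as usize)
                .filter(|s| s.generation == h.generation)
                .and_then(|s| s.value.as_ref())
        }
    }

    fn main() {
        let mut pool: Pool<&str> = Pool::new();
        let h = pool.insert("monster");
        pool.remove(h);
        assert!(pool.get(h).is_none()); // stale handle detected, no dangling access
    }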
To be clear, the reason why Python is so popular for data wrangling (including ML/AI) is not due to the language itself. It is due to the popular extensions (libraries) exclusively written in C & C++! Without these libraries, no one would bother with Python for these tasks. They would use C++, Java, or .NET. Hell, even Perl is much faster than Python for data processing using only the language and not native extensions.
Python makes sense because of accessibility and general comfort for relatively small code bases with big data sets.
Those data scientists, at least in my experience, are more into math/business than into the most efficient programming.
Or at least that was the situation at first, and it stuck.
This is not at all my experience.
What I have experienced is that C++ classes with inheritance are good at modeling objects in a game at first, when you are just starting and the hierarchy is super simple. Afterwards, it isn't a good match. You can try to hack around this in several ways, but the short version is that if your game isn't very simple, you are better off starting with an Entity Component System setup. It will be more cumbersome to use than the language-provided features at first, but the lines cross very quickly.
Hmm, no, not really in my experience. Even the old "Entities and Components" system in Unity was better, because it allowed composing GameObject behaviour by attaching Component objects, and this system was often replicated in C++ code bases until it "evolved" into ECS.
This is how I feel about golang and systems programming. The strong concurrency primitives and language simplicity make it easier to write and reason about concurrent code. I have to maintain some low level systems in python and the language is such a worse fit for solving those problems.
Kind of both in my opinion. But rust is bringing nothing to the table that games need.
At best rust fixes crash bugs and not the usual logic and rendering bugs that are far more involved and plague users more often.
The ability of engines like Bevy to automatically schedule dependencies and multithread systems, which relies on Rust's strictness around mutability, is a big advantage. Speaking as someone who's spent a long time looking at Bevy profiles, the increased parallelism really helps.
Of course, you can do job queuing systems in C++ too. But Rust naturally pushes you toward the more parallel path with all your logic. In C++ the temptation is to start sequential to avoid data races; in systems like Bevy, you start parallel to begin with.
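A sketch of what that looks like against a recent Bevy API (0.13-era; the component names are made up). Because `movement` and `regen` declare disjoint data access in their signatures, the scheduler is free to run them on different threads with no locking code in sight:

    use bevy::prelude::*;

    #[derive(Component)]
    struct Position(Vec3);

    #[derive(Component)]
    struct Velocity(Vec3);

    #[derive(Component)]
    struct Health(f32);

    // Reads Velocity, writes Position.
    fn movement(time: Res<Time>, mut q: Query<(&mut Position, &Velocity)>) {
        for (mut pos, vel) in &mut q {
            pos.0 += vel.0 * time.delta_seconds();
        }
    }

    // Writes Health only; no overlap with `movement`'s access,
    // so both systems may run in parallel.
    fn regen(time: Res<Time>, mut q: Query<&mut Health>) {
        for mut hp in &mut q {
            hp.0 = (hp.0 + 1.0 * time.delta_seconds()).min(100.0);
        }
    }

    fn main() {
        App::new()
            .add_plugins(MinimalPlugins)
            .add_systems(Update, (movement, regen))
            .run();
    }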
Aside from a physics simulation, I'm curious as to what you think would be a positive cost benefit from that level of multithreading for the majority of game engines. Graphical pipelines take advantage of the concept but offload as much work as possible to the GPU.
We were doing threading beyond that in 2010; you could easily have rendering, physics, animation, audio and other subsystems chugging along on different threads. As I was leaving the industry, most engines were trending towards very parallel concurrent job execution systems.
The PS3 was also an interesting architecture (i.e. the SPUs) from that perspective, but it was so far removed from everything else at the time that it never really took off. Getting existing things ported to it was a beast.
Bevy really nails the concurrency IMO (having worked on AA/AAA engines in the past). It's missing a ton in other dimensions, but the actual ECS + scheduling APIs are a joy. The last "proper" engine I worked on was a rats-nest of concurrency in comparison.
That said, as a few other people pointed out, the key is iteration, hot-reload and other things. Given the choice I'd probably do (and have done) a Rust-based engine core where you need performance/stability, and some dynamic language on top (Lua, quickjs, etc.) for actual game content.
I fully agree that this will likely be the solution a lot of people want to go with in Bevy: scripting for quick iteration, Rust for the stuff that has to be fast. (Also thank you for the kind words!)
Yeah, it's a fairly clean and natural divide. You see it in most of the major engines, and it was present in all the proprietary engines I worked on (we mostly used Lua/LuaJIT, since this predated some great recent options like quickjs).
We even had things like designers writing scripts for AI in literate programming with Lua using coroutines. We fit code + runtime into 400kb of space using Lua on the PSP (man, that platform was a nightmare, but the scripting worked out really well).
Rust excels when you know what you want to build, and core engine tech fits that category pretty cleanly. Once you get up in game logic/behavior that iteration loop is so dynamic that you are prototyping more than developing.
Animations are an example. I landed code in Bevy 0.13 to evaluate all AnimationTargets (in Unity speak, animators) for all objects in parallel. (This can't be done on GPU because animations can affect the transforms of entities, which can cause collisions, etc. triggering arbitrary game logic.) For my test workload with 10,000 skinned meshes, it bumped up the FPS by quite a bit.
In big-world high-detail games, the rendering operation wants so much time that the main thread has time for little else. There's physics, there's networking, there's game movement, there's NPC AI - those all need some time. If you can get that time from another CPU, rendering tends to go faster.
I tend to overdo parallelism. Load this file into the Tracy profiler, version 0.10.0, and you can see what all the threads in my program are doing.[1] Currently I'm dealing with locking stalls at the WGPU level. If you have application/Rend3/WGPU/Vulkan/GPU parallelism, every layer has to get it right.
Why? Because the C++ clients hit a framerate wall, with the main thread at 100% and no way to get faster.
[1] https://animats.com/sl/misc/traces/clockhavenspeed02.tracy
"Fearless concurrency"
I sometimes wonder if the problem with rust is that we have not yet had a major set of projects which drive solutions to common dev problems.
Go had google driving adoption, which in turn drove open source efforts. The language had to remain grounded to not interfere with the doing of building back-end services.
Rust had Mozilla/Servo, which was ultimately unsuccessful. While there are more than a few companies using Rust for small projects with tough performance guarantees, I haven't seen the “we manage 1-10 MM sloc of complex code using Rust” type of project.
I really think the problem of Rust is the borrow checker. Seriously. It is good, but it is overkill. You have to plan everything around it, and it discourages a lot of patterns or makes them really difficult to refactor.
I would encourage people to understand Hylo's object model and mutable value semantics. I think something like that is far better, more ergonomic and very well-performing (in theory at least).
You can use unsafe code and pointers if you really want, but code will be unsafe, like C or C++.
TBF, unsafe Rust still enforces much more correctness than C or C++ (Rust's "unsafety" is more similar to Zig than C or C++).
TBF this is not really true. Unsafe Rust is a lot harder than comparable C/C++, because it must manually uphold all safety invariants of Safe Rust whenever it interacts with idiomatic Rust code. (These safety invariants are also why Safe Rust can often be compiled into better-optimized code than the idiomatic C/C++ equivalent.)
I wonder if Rust is killing flies with cannons (as we say in Spanish). There are perfectly safe alternatives, or very safe ones.
Even in a project coded in modern C++ with async code included and all warnings activated (it is a card game), I found two segfaults in almost 5 years... It can happen, but it is very rare, at least with my coding patterns.
The code is in the tens of thousands of lines, I would say; not 100% sure, I will measure it.
Is it that bad to put one shared pointer here and there, stick to unique pointers, and try not to escape references? This is what I do, and I use spans and string views carefully (you must with those!). I stick to the rule of zero. With all that, it is not that difficult to have mostly safe code in my experience. I just use safe subsets except in a handful of places.
I am not saying C++ is better than Rust. Rust is still safer. What I am saying is that an evolution of the C++ model is much more ergonomic and less viral than this ton of annotations with a steep learning curve, where you spend a good deal of your time fighting the borrow checker. So my question is:
- when does it stop being worth it to fight the borrow checker and just replace it with some alternative, even smart pointers here and there? Because it seems to have a big viral cost and refactoring cost, besides preventing valid patterns.
That "evolution of the C++ model" (the C++ Core Guidelines) has an even steeper learning curve than Rust itself, and even more invasive annotations if you want to apply it across the board. There is no silver bullet, and Rust definitely has the more principled approach to these issues.
I'm not answering your question here, just saying my opinion on C++ vs Rust. I think that the big high-level difference (before diving into details like ownership and the borrow checker) is that C++'s safety is opt-in, while Rust's safety is opt-out. So in C++ you have to be careful each time you allocate or access memory to do it in a safe way. If you're working in a team, you all have to agree on the safe patterns to use and check that your team members are sticking with it during code rewiews. Rust takes this burden from you, at the expense of having to learn how to cooperate with the borrow checker.
So, going back to your question, I think that the answer is that it depends on many factors, including also some non-strictly-technical ones like the team's size.
Unsafe Rust is not harder or safer than C/C++. If you can uphold all safety invariants for C/C++ code (OMG!), then it will be easier to do the same thing in unsafe Rust, because Rust has better ergonomics.
With more enforced correctness of Rust (also unsafe Rust) I mean small details like Rust not allowing implicit conversion between integer types. That alone eliminates a pretty big source of hidden bugs both in C and C++ (especially when assigning a wider to a narrower type, or mixing signed and unsigned integers).
All in all I'm not a big fan of Rust, but details like this make a lot of sense (even if they may appear a bit draconian at first) - although IMHO Zig has a slightly better solution by allowing implicit conversions that do not lose information. E.g. assigning a narrower to a wider unsigned integer type works, but not the other way around.
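A toy example of what that enforcement looks like in Rust; note that even the lossless widening Zig would allow implicitly still has to be spelled out here:

    fn main() {
        let wide: u32 = 300;

        // let narrow: u8 = wide;  // error[E0308]: mismatched types - no implicit narrowing
        let truncated = wide as u8;       // explicit cast, silently wraps to 44
        let checked = u8::try_from(wide); // explicit and loss-aware: Err(..) here

        let widened = u64::from(wide);    // lossless, but still explicit

        println!("{truncated} {checked:?} {widened}");
    }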
Look at Hylo. Tell me what you think. You do not need all that juggling. Just use value semantics with lazy copying. The rest is handled for you. Without GC. Without dangling pointers.
? I believe the Rust efforts in Firefox were largely successful. I think Servo was for experimental purposes and large parts were then added to Firefox with Quantum: https://en.wikipedia.org/wiki/Gecko_(software)#Quantum
My recollection was that those were separate changes - servo didn’t get to the stage where it could be merged, but it was absolutely the plan to build a rendering engine that outperformed every other browser before budget cuts hit.
It would be interesting to have a postmortem of what went well, what went wrong, etc. for this initial effort.
I believe work continues now somewhere else, but it would be nice to hear more about the experience from others.
Servo is an ongoing project, it has not "failed" or been unsuccessful in any sense.
I think the original poster is perhaps speaking to previous articles (ie https://news.ycombinator.com/item?id=39269949) which, from the outside looking in, made me feel that perhaps this in fact was the case (at least for a period).
Microsoft is rewriting quite a bit of their C# to Rust for performance reasons, especially within their business line products. Rust has also become rather massive in the underlying tech of telecommunications infrastructure in several countries.
So I’m not sure that your take is really so on point. Especially as far as comparing it with Go goes (heehee), at least not in terms of 3rd-party libraries, where most of the Go ecosystem seems to be either maintained by one or two people or abandoned as those two people got new jobs. I think Go is cool by the way, but there is a massive difference in the maturity of the sort of libraries we looked into using during our PoCs.
Anyway. A lot of Rust adoption is a little quiet, and well, rather boring. So maybe that’s why you don’t hear too much about it.
Microsoft rewrote one, maybe two microservices, driven by a lead interested in using Rust, and is rewriting parts of the NT kernel (way more important).
There's lots of Rust code in Firefox!
Meta has a lot of Rust internally.
The problems with Rust for high-level indie game dev logic, where you're doing fast prototyping, are very specific to that domain, and say very little about its applicability in other areas.
This is commonly said but I think it's only correct in the sense that Google is famous and Google engineers started it.
Google never drove adoption; it happened organically.
I really hope that C++ evolves with gamedev and they become more and more symbiotic.
Maybe adoption of Rust by the gamedev community isn't the best thing to wish for the language. Maybe it is better to let another crowd steer the evolution of Rust, letting systems programming and gamedev drift apart.
I think I don't know a single gamedev who's fond of "modern C++" or even the C++ stdlib in general (and stdlib changes are what most of "modern C++" is about). The last good version was basically C++11. In general, the C++ committee seems to be largely disconnected from reality (especially now that Google seems to be doing its own C++ successor, but even before that; Google's requirements are entirely different from gamedev requirements).
I can only comment this like: tell me you have no idea about current state of C++ without telling me you have no idea about current state of C++.
Then let's hear some counter examples please. As far as I'm aware the last important language change since C++11 was designated init in C++20, and that's been butchered so much compared to C99 that it is essentially useless for real world code.
There's a whole bunch of features and fixes in each new version of the standard that greatly improved the usability, expressiveness and convenience of the language. Describing many of them could easily take an hour. I'm sorry, I can only highlight a few of my particular favourites that I regularly use, and let you study the rest of the changes.
https://en.cppreference.com/w/cpp/14
- fixed constexpr, which in C++11 was basically unusable
- great improvements for metaprogramming, which made such gems as `boost::hana` possible, such as variable templates and generic lambdas.
- function return type deduction
https://en.cppreference.com/w/cpp/17
- inline variables finally fixes the biggest pain of developing header-only libraries
- useful noexcept fix
- if constexpr + constexpr lambdas
- structured bindings
- guaranteed copy elision
- fold expressions
I'm in automotive, where due to safety requirements we just barely started to work with C++17, so I don't have much practical experience with the standards past it, though I'm aware there are great updates there too. Overall, C++11 is as horrible compared to C++17 as C++98 (and roughly 03) were compared to the then-groundbreaking C++11. Personally, when I skim through job vacancies and see they are stuck at C++11, I pass. Even C++14 makes me very sceptical, even though I used it a lot. All due to the nice new improvements of C++17.
https://en.cppreference.com/w/cpp/20
https://en.cppreference.com/w/cpp/23
Ok, I'll give you fold expressions and structured bindings as actually important language updates. The rest are mostly just tweaks that plug feature gaps which shouldn't have existed in the first place when the basic feature was introduced in C++11 or earlier.
IMHO by far most things which the C++ committee accepts as stdlib updates should actually be language changes (like, for instance, std::tuple, std::variant or std::ranges), because as stdlib features those things make C++ code more and more unreadable compared to "proper" syntax sugar (Rust suffers from the exact same problem btw).
Let me make one more educated (by your new remark) guess: you have never in your life read any part of a C++ standard, right? This is fine, it's not for everyone. The point here, however, is that to prevent people from smiling when you say such things, you need to understand that you cannot just proclaim "thou, compiler, shalt do this and that!" and assume it magically happens. It doesn't work that way. Each new feature has to be carefully and consistently built into something called the C++ abstract machine. This thing is already very complex, so it is an extremely complex process that has to be carried out with great care so as not to break some subtle thing elsewhere. But that's not the only thing you have to take into account: the result of your feature change should be implementable not just on your platform, but also on the other 100499 various hardware platforms onto which you can just take your sources and cross-compile, to run basically everywhere.
So, no, sorry, but no, it never works the way you've said.
Oh, I followed the C++ standardization process quite closely for about 15 years up until around C++14, and still follow it from the sidelines (having mostly switched back to C since then), and I'm fully aware of the fact that C++ has designed itself into a complexity corner where it is very hard to add new language features (after all, C++ has added more new problems that then had to be fixed in later standards than it inherited from C in the first place).
I still think the C++ committee should mainly be concerned about the language instead of shoehorning stuff into the stdlib, even if fixing the language is the harder problem.
And I can't be alone in this frustration, otherwise Carbon, Circle and Herb Sutter's cppfront wouldn't have happened.
You should probably tone down your speech, and lay off the patronizing attitude, no matter how well justified your arguments are.
A practical example of C++14 and its constexpr + variable template fixes, and why this was important: a while ago I wrote a wrapper over a compile-time fixed-size array that imposed a compile-time fixed tensor layout on it. Basically, it turned a linear array into any matrix, or 3D or 4D or whatever-D is needed tensor, and allowed working with them efficiently at compile time already. There was obviously constexpr construction + constexpr indexing + some constexpr tensor operations. In particular, there was a constexpr trace operation for square matrices (a sum of the elements on the main diagonal, if I'm not mistaken). I decided to showcase the power of constexpr to some juniors on the team. For some reason, I thought that since the indexing operation is constexpr, computing the matrix trace would require the compiler to just take elements of the matrix at addresses precomputed at compile time, which would show up in the disassembly as memory loads from fixed offsets (without computing these offsets at runtime, since the matrix layout is fixed at compile time and index computation is a constexpr operation). So I quickly wrote an example, compiled it with asm output, and looked at it... It was a facepalm moment - I forgot that trace() was also constexpr, so instead of doing any runtime computations at all, the code just had the already-computed trace value as a constant in a register. How is that not cool? Awesome!
Such things are extremely valuable, as they allow writing much more expressive, easy-to-understand and maintainable code for entities known at compile time.
C++17/20 are light-years beyond C++11 in terms of ergonomics and usability. Metaprogramming in C++20 is unrecognizable compared to C++11; things have improved so much. I hated C++ before C++11, but now C++11 feels quite legacy compared to even C++17. The ability to write almost anything, like a logging library, without C macros is a huge improvement for maintainability and robustness.
Most of the features in modern C++ are designed to enable writing really flexible and highly optimized libraries. C++ rarely writes those libraries for you.
Heh, mentioning metaprogramming and logging is not exactly how you convince anybody of superior ergonomics and usability.
Lamothe's Black Art book came out in '95. Abrash's black book came out in '97.
Borland C++ was pretty common and popular in 93 and we even had some not-so-great C++ compilers on Amiga in 92/93 that had some use in gamedev.
SimCity 2000 was written in C++, way back in '93 (although they started with Cfront)
An absolute fuckton of shareware games I was playing in the 90s were built with Turbo C++.
I also remember from the videogame magazines I was reading back in the early 90s that another C++ compiler that was a favourite among devs was Watcom C++, released in '88.
That doesn't mean that it was used primarily with C++ though. IIRC Watcom C/C++ mainly became popular because of Doom, and that was written in C (as all id games until Doom 3 in 2004 - again IIRC though).
The actual killer feature of Watcom C/C++ was not the C or C++ compiler, but its integration with DOS4GW.
Btw, I don't remember Turbo C or Borland C++ being able to compile to 32-bit x86 on DOS.
Kind of true, however they had endless amounts of inline Assembly, as shown on the Black Book as well.
I know of at least one MS-DOS game, published in the Portuguese Spooler magazine, that was using Turbo C++ basically as a macro assembler.
One of the PlayStation's selling points for developers was being the first home console with a C SDK, while SEGA and Nintendo were still doing assembly; C++ support only came later, with the PlayStation 2.
While I agree C++, BASIC, Turbo Pascal and AMOS were being used a lot, especially in the demoscene, they were our Unity, from the point of view of successful game studios.
Many tried C++ in the early 90s, but wasn't it too slow/memory-intensive? You had to use lots of inline C/assembly to get decent performance. Nowadays everything is heavily optimized, but back then, not so much.
If you're referring to game dev specifically, there have been (and continue to be) concerns around the weight of C++ exception handling, which is deeply embedded in the STL. Those concerns drove libraries like the EASTL. C++ itself, however, is intended to have as many zero-cost abstractions as possible/reasonable.
The cost of exception handling is less of a concern these days though.
Exception handling is easy enough to disable. Luckily, or C would probably still be the game developers' go-to.
Comparing how long it took a programming language to spread in the 80s with today is a bad vantage point. Stuff took much longer to bake back then -- but even so the point is moot: as other commenters pointed out, it's been roughly the same amount of time between 2015 and today.
Hmm I don't agree. We're far away from the frantic hardware and software progress in the 80s and 90s. Especially in software development it feels like we're running in circles (but very, very fast!) since the early 2000's, and things that took just a few months or at most 2..3 years in the 80s or 90s to mature take a decade or more now.
Yeah, the gaming industry has become mature enough to build up its own inertia, so it will take some time for new technologies to take off. C# has become a mainstream gamedev language thanks to Unity, but this also took more than a decade.
The concept of AAA games didn't even exist back in 1985, very few people were developing games at that era, and even fewer were writing "complex" games that would need C++.
The SNES came out in 1990, and even then it had its own architecture and most games were written in pure assembly. The PlayStation had a MIPS CPU and was one of the first to popularize 3D graphics, the biggest complexity leap.
I believe you are seeing causation where only correlation exists. C++ and more complex OOP languages just joined the scene when the games themselves became complex, because of the natural evolution of hardware and the market.
Seems like a few contradictory ideas here. Rust is supposed to be a better, safer C/C++.
Then lot of comments here that games are best done in C++.
So why can't Rust be used for games?
What is really missing beyond an improved ecosystem of tools. All also built on Rust.
Even there it's very problematic at scale unless you know what you're doing. async/await isn't zero cost, regardless of what people will tell you.
Absolutely. Async/await typically improves headroom (scalability) at the cost of latency and throughput. It may also make code easier to reason about.
Compared to what?
Doing epoll manually?
Threading, probably.
Async/await isn't related to threading (although many users and implementations confuse them); it's a way of transforming a function into a suspendable state machine.
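A hand-written sketch of that transformation (heavily simplified; the real desugaring produces a Future with poll(), wakers, and pinning):

    // async fn download(url: String) -> Vec<u8> { ... socket.read().await ... }
    // becomes, conceptually, an enum with one variant per suspension point:
    #[allow(dead_code)]
    enum DownloadState {
        Start { url: String },
        WaitingOnSocket, // parked at the `.await`
        Done,
    }

    impl DownloadState {
        // Each poll advances the machine to the next suspension point or completion.
        fn poll(&mut self) -> Option<Vec<u8>> {
            match self {
                DownloadState::Start { .. } => {
                    // kick off the connection, then park at the await point
                    *self = DownloadState::WaitingOnSocket;
                    None // a real future would return Poll::Pending here
                }
                DownloadState::WaitingOnSocket => {
                    // pretend the I/O completed on this wakeup
                    *self = DownloadState::Done;
                    Some(vec![42])
                }
                DownloadState::Done => None,
            }
        }
    }

    fn main() {
        let mut task = DownloadState::Start { url: "http://example".into() };
        let bytes = loop {
            if let Some(b) = task.poll() {
                break b;
            }
        };
        println!("{} bytes", bytes.len());
    }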
I know. But threading, and earlier processes, were less scalable but potentially faster ways of handling concurrent requests.
It's also much easier to reason about, since scheduling is no longer your problem and you can just write sequential code.
That's one way to see it. But the symmetric view is equally valid: async/await is easier to reason about because you see where the blocking points are, instead of having to guess which function is blocking or not.
In any case you aren't writing sequential code, it's still concurrent code, and there's a trade-off between the simplicity of writing it as if it were sequential code and the simplicity of reading it with things written down explicitly.
This “write-time vs read-time” trade-off is everywhere in programming, BTW; it's also the difference between errors-as-return-values and exceptions, or between dynamic typing and static typing, for instance.
Games need async/await for two main reasons:
- coding multi-frame logic in a straightforward way, which is when transforming a function into a suspendable state machine makes sense
- using more cores because you're CPU-bound, which is literally multithreading
Both cases can be covered by other approaches, though:
- submitting multi-frame logic as job parameters to a separate system (e.g., tweening; see the sketch after this list)
- using data parallelism for CPU-intensive work
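For the first alternative, a minimal sketch of multi-frame logic as plain data ticked by a system each frame, instead of a suspended coroutine (names are illustrative):

    // A tween job: "animate from `from` to `to` over `duration` seconds".
    struct Tween {
        from: f32,
        to: f32,
        duration: f32,
        elapsed: f32,
    }

    impl Tween {
        // Advance one frame; yields the current value, or None once finished.
        fn tick(&mut self, dt: f32) -> Option<f32> {
            self.elapsed += dt;
            if self.elapsed >= self.duration {
                None
            } else {
                let t = self.elapsed / self.duration;
                Some(self.from + (self.to - self.from) * t)
            }
        }
    }

    fn main() {
        let mut fade = Tween { from: 1.0, to: 0.0, duration: 0.5, elapsed: 0.0 };
        let dt = 1.0 / 60.0; // fixed 60 Hz frame step
        while let Some(alpha) = fade.tick(dt) {
            println!("alpha = {alpha:.2}");
        }
    }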
Threading is compatible with async
"threading alone" as in a thread per request.
I don't think so, because there isn't a performance drawback compared to threads when using async. In fact there's literally nothing preventing you from using a thread per task as your future runtime and just blocking on `.await` (and implementing something like that is a common introduction to how async executors run under the hood so it's not particularly convoluted).
Sure there's no reason to do that, because non-blocking syscalls are just better, but you can…
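A sketch of that degenerate "executor" (assuming the futures crate for block_on); each task simply gets its own OS thread:

    use futures::executor::block_on; // assumes the `futures` crate
    use std::future::Future;
    use std::thread;

    // "Thread per task": drive each future to completion on its own OS thread.
    fn spawn_task<F>(fut: F) -> thread::JoinHandle<F::Output>
    where
        F: Future + Send + 'static,
        F::Output: Send + 'static,
    {
        thread::spawn(move || block_on(fut))
    }

    fn main() {
        let handle = spawn_task(async { 1 + 1 });
        assert_eq!(handle.join().unwrap(), 2);
    }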
A reactor has to move the pending task to some type of work queue. The task has to be pulled off the work queue. The work queue is oblivious to the priority of your tasks. Tasks aren't as expensive as context switches, but they aren't free either: e.g., they're likely to ruin CPU caches. Less code is fewer instructions is less time.
If you care enough, you generally should be able to outdo the reactor and state machines. Whether you should care enough is debatable.
So yeah, you're thinking about the comparison between async/await and manual state-machine management with epoll. But that's not what most people have in mind when you say async/await has a performance impact; most of them would immediately think you're talking about the difference with threads.
The cache thing is a thing I think a lot of people with a more... naive... understanding of machine architecture don't clue into.
Even just synchronizing on an atomic can thrash branch prediction and L1 caches both, let alone working your way through a task queue and interrupting program flow to do so.
If I'm not doing slow blocking I/O, I'm not doing epoll anyways.
But the moment somebody drops async into my codebase, yay, now I get to pay the cost.
Either you are doing slow IO (in some of your dependencies) or you don't have anyone dropping async into your code, though…
Definitely makes code harder to reason about.
If you were to write the same code without using async you'd be trudging through a mess of callbacks and combinators. This is what writing futures code before 2018 was like. It was doable if you needed the perf but it sucked. Async is a huge improvement to readability and reasoning that we didn't have before.
No, actually that was just JavaScript. Programming environments with threading models don't have to live that way. Separate threads can communicate through channels and do quite well for themselves. See, how it works is, you do something like let data = file.read(); and then it just sits there on that line until the read is done, and then your data has the actual bytes in it and you just use them and go on with your life.
That's exactly how async/await works, except that it translates to state machines under the hood which gives you great performance. No need to mess with threading models, at all.
Yeah, Rust's async/await and lightweight threads are functionally very similar. Function coloring is a problem with async/await, though (for now?).
Until you need cancellation
One rarely really needs that.
Maybe you are both right but your scales are orders of magnitude apart.
I disagree with this, you're probably not paying much (if at all) in latency or throughput for better scaling.
What you're paying for with async/await is a state machine that describes the concurrent task, but that state machine can be incredibly wasteful in size due to the design of futures and the desugaring pass that converts async/await into the state machine.
That's why I said it's not "zero cost" in the loosest definition of the phrase - you can write a better implementation by hand.
That is true. Rust's async/await desugaring is still missing optimizations. I think that will be ironed out eventually. What mainly concerns me about async/await is that, even with Rust's best efforts, the baseline complexity will probably always be somewhat higher than for sync code. I will be pleased if the gap is minimized and people only need to reach for async when they want to. Right now, the latter isn't the case because of the "virality [of] function coloring".
Why? Those kinds of game engines are enormous amounts of code, and there's little incentive to rewrite.
I do strongly disagree that we aren't ever going to see large-scale game development in Rust; it just takes time. Whether games adopt an engine is largely about that engine's maturity rather than anything about the language. Bevy is quite young; 0.13 doesn't even have support for animation blending yet (I landed that for 0.14).
It was a few years back that the question came up to the developers of a Call of Duty title: "Is there still code from Quake 3 in COD?". They dodged around it by saying something like "we cannot deny this, but we use the most appropriate tech where needed".
While not confirmation, I wouldn't be surprised if there are a few nuggets of Q3 in that code base, still doing some of the basics. That would be really cool if it is true.
It seems like unless you are someone like John Carmack or most of Nintendo, game dev tools are about what can get the best results quickest rather than any sort of technical specifics. It is a business after all.
A lot of big projects have amazing longevity in their older architectural decisions. Unreal still has a lot of stuff in it that people who used UE1 would recognize; I did most of my professional development on UE3, and a bunch of that is still pretty recognizable. Similarly, Chrome is a product of the time it was first created. And looking into the Windows source is probably like staring into the stygian abyss.
There is a lot of legacy and tech debt out there!
I remember years back someone from Microsoft calling the Windows code base "The Abyss" because of how much technical legacy there was in it.
I think it was Steve Gibson who said that the Windows code base had some very questionable things in it. For instance they had work experience high school students working on code that made it into the final build that was less than spectacular. Like how Windows used to stall when you put a CD in and wouldn't proceed until the disc spun up and started reading data.
Windows 11 probably would still do that but I don't know because I don't have a disc drive any more.
Damn I forgot about explorer hanging when you put a CD in. That was especially terrible when you didn't have DMA
It wasn't really windows lagging, it was explorer. There used to be more things in explorer that were blocked on something ultimately blocked by I/O.
This tends to not be the case so much any more, so I doubt it would happen today.
Instead you get the dreaded "Working on it....". It seems like hard drives can be just as slow to spin up these days as CDs were back in the day.
If that's the question... Let me assure you that there are decades-old pieces of code inside of, and used to assemble, many modern AAA games coming out of mature studios. The systems and tooling are typically carried forward. I don't think this is some big secret, and you've intuited exactly the reason why:
Not surprised at all that this stuff sticks around. I find it very endearing actually. Ain't broke, don't fix it!
A neat real-world example of ancient Quake code surviving to this day is visible in Valve's games - the hardcoded patterns for flickering lights in Quake 1 survived into GoldSrc and then into Source and then into Source 2, most recently showing up in Half-Life: Alyx, 24 years on from their original appearance in Quake 1.
https://www.alanzucconi.com/2021/06/15/valve-flickering-ligh...
Basically all of the bigger systems will have been Ship-of-Theseus'd several times over by now, but little things like that can slip through the cracks.
That light flickering is quite cool, thanks for sharing. It reminds me of the Wilhelm scream, but on a much smaller scale of course.
Bingo. Rust's biggest strength is correctness. But games aren't mission critical, and gamers are very tolerant of bugs (maybe not on social media, but very few buggy games have had their sales impacted). Your biggest sell to AAA game devs is to engine programmers, to minimize tech debt. But as we are seeing with the current industry, that's not exactly something companies care about until it's too late.
Then on the indie level we get articles like this. Half the article ultimately came down to "it's faster to break things and iterate than to do it right once". Again, similar lack of need for bug-free games. In addition, few indie games are scoped to a point where they need a highly disciplined ECS solution to scale with.
The author even criticizes the "tech specs" community part of Rust gamedev. Different tools, different goals, different needs. IMO, I think Rust will help make for some very robust renderers one day, but ultimately the scripting will be done in another language. Similar to how Unity uses C# scripting on top of a C++ engine, which they then run through IL2CPP to bring back to a full C++ game.
This, exactly. As an embedded developer turned Unreal developer, the first impression I had while using Unreal is how little concern for correctness there is overall. UB is used liberally, and there's clearly a larger focus on development speed and ease of use compared to safety and correctness. If a game has integer overflows or buffer overflows, nobody cares. Conversely, you need to keep the whole thing usable enough for the various 3D artists and such who have a hard time understanding advanced programming.
At one point the studio behind the Finals was writing game server code in Rust with an Unreal engine client. Not sure if that's true still
Backend 3d code?
Server side rendering for games.
That's a thing?
Yep, Stadia might have failed, but GeForce Now and XBox Cloud Gaming have enough customers to keep them going.
That's completely different. They are rendering the client and streaming it to users. That doesn't make the client-side code "server side" any more than you streaming Fortnite on Twitch does.
Nope, XBox XDK has facilities for code to be aware of rendering server side.
Absolutely! Any sort of multiplayer game needs a source of authority if you want to prevent cheats like a hacked client lying about its position, and a really good way to do that is load the geometry of your level and run physics checks server side at a lower frequency than once per frame. Godot and Unity both support headless builds for exactly this reason, it's basically the whole game engine, minus the renderer, audio, and UI systems, usually.
That is not server side rendering. Per your own comment:
(Otherwise you are completely correct.)
Closest I can think of is server side ragdolls that are rendered the same on all screens and similar stuff.
I'm not familiar with the domain, but wouldn't 3D collision checking be considered backend 3D code? Even if it's not rendered, it still needs to be calculated.
The studio you're talking about is Embark Studios, which is openly pretty big on Rust. [1] I think it was rumored that their next project will use a Rust game engine, but I am not sure how it's going now.
[1] https://github.com/EmbarkStudios/rust-ecosystem
Their creative sandbox project is full Rust from client to server I believe. I haven't kept up with it after trying the closed alpha a while ago but it looks like it's still going, and has a name now: https://wim.live
It's still only listed as coming to PC, Mac, Linux and Android so I guess they haven't broken through the barrier of shipping Rust on consoles.
Can you please elaborate on this? I see a lot of similar concerns in other contexts too. Linux kernel's scheduler for example. Is it a throughput/latency tradeoff?
The current popularity of the async stuff has its roots in the classic "c10k" problem. (https://en.wikipedia.org/wiki/C10k_problem)
A perception among some that threads are expensive, especially when "wasted" on blocking I/O. And that using them in that domain "won't scale."
Putting aside that not all of us are building web applications (heterodox here on HN, I know)...
Most people in the real world, with real applications, will not hit the limits of what is possible, efficient, and totally fine with thread-based architectures.
Plus the kernel has gotten more efficient with threads over the years.
Plus hardware has gotten way better, and better at handling concurrent access.
Plus async involves other trade-offs -- running a state machine behind the scenes that does the kinds of context switching the kernel & hardware already potentially do for threads, but in user space. If you ever pull up a debugger and step through an async Rust/tokio codebase, you'll get a good sense of the overhead we're talking about here.
That overhead is fine if you're sitting there blocking on your database server, or some HTTP socket, or some filesystem.
It's ... probably... not what you want if you're building a game or an operating system or an embedded device of some kind.
An additional problem with async in Rust right now is that it involves bringing in an async runtime, and giving it control over execution of async functions... but various things like thread spawning, channels, async locks, etc. are not standardized, and are specific per runtime. Which in the real world is always tokio.
So some piece of code you bring in in a crate, uses async, now you're having to fire up a tokio runtime. Even though you were potentially not building something that has anything to do with the kinds of things that tokio is targeted for ("scalable" network services.)
So even if you find an async runtime that's optimized in some other domain, etc (like glommio or smol or whatever) -- you're unlikely to even be able to use it with whatever famous upstream crate you want, which will have explicit dependencies into tokio.
I've been developing in (mostly async) Rust professionally for about a year -- I haven't written much sync Rust other than my learning projects and a raytracer I'm working on, but what are the kind of common dependencies that pose this problem? Like wanting to use reqwest and things like that?
Yes. Reqwest cranks up Tokio. The amount of stuff it does for a single web request is rather large. It cranks up a thread pool, does the request, and if there's nothing else going on, shuts down the thread pool after a while. That whole reqwest/hyper/tokio stack is intended to "scale", and it's massive overkill for something that's not making large numbers of requests.
There's "ureq", if you don't want Tokio client side. Does blocking HTTP/HTTPS requests. Will set up a reusable connection pool if you want one.
reqwest also has a blocking version, which I use in projects not already using an async rt
https://docs.rs/reqwest/latest/reqwest/blocking/index.html
The blocking implementation still depends on and uses tokio, last I looked.
I've seen this with multiple Rust packages. "Yes, we offer a synchronous blocking version..." and then you look and it's calling rt.block_on behind the scenes.
Which is a pretty large facepalm IMHO
You don't have to do that, Tokio also provides a single-threaded runtime that just runs async tasks on the main thread.
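Something like this (a sketch against tokio's builder API); no worker thread pool gets spawned:

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Single-threaded runtime: tasks run on the current thread only.
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all() // enable the I/O and time drivers
            .build()?;
        rt.block_on(async {
            // async work runs right here on the main thread
        });
        Ok(())
    }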
Perfect moment to mention "rouille" which is a very lightweight synchronous web server framework. So even when you decide to build some web application you do not necessarily have to go down the tokio/async route. I have been using it for a while at work and for private projects and it turned out to be pretty eye-opening.
"I tend to agree about the "async contamination" problem. The "async" system is optimized for someone who needs to run a very large web server, with a huge number of clients sending in requests. I've been pushing back against it creeping into areas that don't really need it."
100% this. As I say elsewhere in these threads: Rust is the language that Tokio ate. It isn't even just the async viral-chain effect; it's that, on the whole, crates for one async runtime are not even compatible with those for another, and so it's all really just about tokio.
Which sucks, if you're doing, y'know, systems programming or embedded (or games). Because tokio has no business in those domains.
Disappointing to hear this after battling the same nonsense in JS for years.
Rust is a language made and used by Dunning-Kruger people who violently react to having to learn the prior art.
What did you really expect?
Rust's async/await design makes a lot of sense when you consider its primary goals (C interop, low level control, zero cost abstractions, etc.). Sure, perhaps most of us should be using a language with different constraints as opposed to Rust.
It's just endemic to the industry. Framework-itis
It does in my domain of systems programming with async data handling. Tokio works like a dream - slipping into the background and just working so I can concentrate on the business logic.
The main reason is that you can't ship that Rust code on PS5 in a sensible manner. People have tried, got useless toys to compile, but in the end even Embark gave up. I remember seeing something from them that they had moved Rust to server-only.
Really - why’s that?
Sony requires that you use their tooling, which you can only get under NDA.
If there was significant pressure from developers Sony would allow Rust. I doubt there is any.
It's a catch 22 - you can't deploy Rust so no one uses Rust for anything, no one uses Rust for anything so there is no reason for Sony to work on Rust deployment.
I think it would be a really good fit for certain parts of the engine - serialization code especially. We have massively complicated C++ code parsing network packets and all sorts of similar sketchy things, always scares me when I see it.
Really a shame that there's that sort of thing going on in 2024 too.
I'm happy to see someone still doing some work in second life.
There's a lot going on. Someone is doing a new third party viewer, Crystal Frost, in Unity. Linden Lab has a mobile viewer in alpha test. Rendering is PBR now for new objects. There are mirrors! Content upload is moving to glTF, to be compatible with everybody else. Voice is switching from Vivox to WebRTC. Game controller support is in test. New users get better avatars. The dev staff is larger.
None of this is yet increasing Second Life usership much, but it remains the best metaverse around.
I thought the metaverse thing was going to be bigger. Meta spent so much money to produce so little.
I'd like to use the opportunity to ask: what happened during the covid pandemic? I haven't heard/read anything about Second Life during the pandemic, even though it was probably a once-in-a-lifetime opportunity.
Are there any news sources that you can recommend to keep an eye on second life, because it doesn't seem that it gets that much press coverage?
Usage went up about 10%, and then leveled off. Logged in right now, at 0020 PDT: 32084 users. Varies between 30,000 and 50,000 around the clock.
* https://modemworld.me/
* https://ryanschultz.com/
Argh, I have the same issue. Sure, if you write JS or Python you probably need async. My current Java back end, which has like 5 concurrent users, does not need async everything, making everything 10x the complexity.