Return to table of contents

Let's write a video game from scratch like it's 1987

ceronman
14 replies
1d21h

The result is a ~300 KiB statically linked executable, that requires no libraries, and uses a constant ~1 MiB of resident heap memory (allocated at the start, to hold the assets). That’s roughly a thousand times smaller in size than Microsoft’s. And it only is a few hundred lines of code.

And even with this impressive reduction in resource usage, it's actually huge for 1987! A PC of that age probably had 1 or 2 MB of RAM. The Super NES from 1990 only had 128 KB of RAM. Super Mario World is only 512 KB.

A PlayStation from 1994 had only 2MB of system RAM. And you had games like Metal Gear Solid or Silent Hill.

Doctor_Fegg
3 replies
1d20h

Yes. I wrote a version of Minesweeper for the Amstrad CPC, a home computer popular in 1987 (though I wrote it a few years later). I think it was about 5-10 KB in size, not 300. The CPC only had 64 KB of memory anyway, though a 128 KB model was available.

pdw
0 replies
1d3h

Even the Windows 95 Minesweeper was only a 24 kilobyte program.

codazoda
0 replies
1d3h

Probably a little later but I had an Amstrad 8086 as a teen. I think it was the first computer I bought with my own money.

aidos
0 replies
1d18h

7yo me could not understand how people could possibly make software but I knew I wanted to be part of it. I loved my CPC 6128.

switchbak
2 replies
1d20h

In 1987, I think you'd be very lucky to have that much RAM. 4MB and higher only started becoming standard as people ran Windows more - so Win 3.1 and beyond, and that was only released in 1992.

aidenn0
0 replies
1d1h

It was over $100/MB for RAM in 1987. The price was declining until about 1990, then froze at about $40/MB for many years due to cartel-like behavior, then plummeted when competition increased around 1995. I was there when the price of RAM dropped 90% in a single year.

II2II
0 replies
1d3h

4 MB was considered a large amount of memory until the release of Windows 95. There were people who had that much, but it tended to be the domain of the workplace or people who ran higher end applications.

If I recall correctly, home computers tended to ship with between 4 MB and 8 MB of RAM just before the release of Windows 95. There were also plenty of people scrambling to upgrade their old PCs to meet the requirements of the new operating system, which was a minimum of 4 MB RAM.

mjbrusso
0 replies
1d8h

A lot of the work being done here by the program code was done in dynamically linked libraries in the original game.

ben7799
1 replies
1d5h

A PC in 1987 didn't run X11 either though.

You needed something way more expensive to run X11 before 1990.

aidenn0
0 replies
1d1h

Yes and no.

Since we are talking about software written today, not just software available in 1987, X386 (which came out with X11R5 in 1991) was more than capable of running on a 386-class machine from 1987. Granted, a 386 class machine with 1MB of ram and a hard-disk would have been pushing $10k in 1987 (~$27k in 2024 dollars), so it wasn't a cheap machine.

pjmlp
0 replies
1d4h

Also, PlayStation was notorious in game development for being the first games console with a C SDK; until then it was only Assembly.

When arcade games started to be written in C, it was still using mainframes and micros, with downlink cables to the development boards.

ido
0 replies
1d12h

A PC in 1987 was more likely to have max 640 KB of RAM; the "AT compatibles" (286 or better) were still expensive. We had an XT clone (by the company that later rebranded as Acer) bought in 1987 with 512 KB RAM.

brandall10
0 replies
1d3h

Like others have said, that would only be available on what would be a very costly machine for '87.

I distinctly remember the 386sx-16 I got late 1989 came with 1 megabyte and a 40mb hard drive for just under $4k from Price Club (now Costco), which was an unusually good price for something like that at the time.

pan69
12 replies
1d21h

Writing a game, or any software, in 1987 would be painstaking compared to the luxury we have today. Back then it was normal to run DOS, and DOS could only do one thing at a time. You open your editor, write code, close your editor, run the compiler, run your program, test your program, exit the program, re-launch the editor, etc. Over time, small improvements were made to this flow, like DOS Shell and even things like DESQview that allowed for basic multitasking.

This is probably a better description (from a code point of view) of what you had to do as a programmer to write a game in the late 80s / early 90s:

https://cosmodoc.org/

bsder
6 replies
1d21h

Erm, there was a reason why Turbo Pascal (and the other Borland stuff) was such a big deal.

And that dates to 1983.

switchbak
3 replies
1d20h

TP was just so awesome, it was like a superpower compared to others waiting 10-20x as long for each build.

For me the big thing was all the latencies stacked together. Slow hard drives, slow floppies, god help you if you swapped, etc. The era was mostly waiting around for the beige machine to finally do its thing.

toast0
2 replies
1d20h

Thank goodness for modernity. Now we wait for white, black, or unpainted aluminum machines to finally do their thing. Sometimes, we never even get to see the machines. :(

M95D
1 replies
1d9h

No modernity for me, thank you! Now every time I add a component to my PC, it's a different color than the rest. It looks like a zebra, I swear.

Back then it was just "new" beige and "yellow-ish old worn" beige.

tashbarg
0 replies
4h17m

Zebras come in colors? Ours are all kinda monochrome.

fentonc
0 replies
1d18h

I spend all day writing C++ or Python, and like playing around with Turbo Pascal on a circa-1984 Kaypro 2 as a hobby machine - the projects are certainly smaller, but my edit-compile-run loop on my Kaypro is usually faster in practice (even running on a 4 MHz Z80 with 64KB of RAM compared to an 8-core 3+ GHz workstation with 64GB of RAM) than my 'modern' work. It's genuinely crazy to me how usable a 40 year old machine is for development work.

0xcde4c3db
0 replies
1d19h

Also, much like some people are Excel wizards, some people were Apple II monitor/mini-assembler wizards or MS-DOS DEBUG wizards, or whatever other thing already lived natively on the machine. If someone has strong knowledge of the machine, a handful of carefully targeted little software augmentations, and well-developed muscle memory for the commands, watching them use a debugger/monitor can be almost like watching someone use a REPL.

M95D
2 replies
1d10h

It was single tasking, but usually there was no need to close the editor to launch the compiler and test the program. IIRC, QBASIC compiled and ran the program on one of the F keys; even EDIT.COM had a subshell.

rob74
1 replies
1d9h

Turbo Pascal (and its sibling Turbo C) also had a text-mode IDE that could open multiple files, had mouse support, syntax highlighting etc. and if you ran your program, it would run it and when it was finished you were back in the IDE. You could even set breakpoints and step through your code.

SetTheorist
0 replies
1d4h

Don't forget about Turbo Basic!

It was a great language and IDE for a young self-taught programmer.

theendisney4
0 replies
13h58m

One would often design tools specifically for an application. Depending on how nuts you went with that, it could be quite luxurious. Map editors are the obvious example, but if you need to find space for a disassembler you might as well make it part of the application.

Agingcoder
0 replies
1d6h

The annoying part most people don't realize is that when your software crashed you often had to reboot the box.

deaddodo
12 replies
2d

I will say, while this is interesting and fun to see, using a trivial library like SDL adds almost nothing to the overhead and expands support to non-*nix OSes.

There is definitely something to be said of bloat but probably not in this case. You could even keep supporting Linux versions as old as this promises by using legacy 1.x SDL.

jandrese
11 replies
1d22h

"adds almost nothing to the overhead" is not always true. In the example code in the article he kept the pixbuf in the local X Server memory for low overhead and fast performance over the network. SDL always wants to send the pixmaps over the network. This is not a big deal for Minesweeper, but can be tough for action games.

deaddodo
9 replies
1d21h

You're just outlining a major flaw in the X Server protocol, not SDL. That's a situation unique to that specific system, due to its intrinsic network-oriented design, and it's specifically the issue Wayland was designed to handle; Wayland doesn't have this problem.

In addition, there are ways to code around pixbuf locality in SDL if you specifically need higher-performance X11 code on Linux.

account42
4 replies
1d4h

Being able to support efficient use over the network is a flaw now? Wayland "solves" that flaw the same way that death cures disease and suffering.

kbolino
2 replies
1d3h

Hasn't it been the case, for quite some time now, that VNC and RDP are more efficient over the network than X11 for modern graphical apps? Client-side font rendering, antialiasing, full-color graphics, alpha-blending, etc. have as far as I know neutered all of the benefits that X11 originally intended to deliver in terms of "efficient" network use.

immibis
1 replies
6h3m

It has, but only because X11 apps are programmed in ways that don't work well on slow networks. They are programmed so poorly for networks that VNC works better. You can write a network-efficient one if you want to, and it will work better than VNC. Meanwhile, all Wayland apps work the VNC way, by design.

kbolino
0 replies
2h59m

I guess I don't understand why this comes off like a bad thing. X11 has ossified badly. Web apps directly achieve the goals of network transparency in a cross-platform way. Remote desktop works better in practice with VNC and RDP and those solutions are also cross-platform. Maybe in a world without Windows and macOS, X11's architecture would have been more relevant and would have evolved more. But looking at the state of affairs today, it just looks like a half-baked solution to a problem that doesn't quite exist.

deaddodo
0 replies
1d3h

It's a flaw for performant client-specific/standalone graphical use, yes. Literally one that the Linux community fought with multiple hacks (DRI, AIGLX, etc) through the years.

It's not a flaw if you want to run a thin-client from a central machine or otherwise offer a networked interfacing system, no.

One of those use cases is far more common today than the other.

jandrese
3 replies
1d21h

The article just described how he was able to avoid this flaw in X11 by being mildly careful in how he structured his code: making sure to do the bitmap copy only once and then issuing copyrect() calls to the server to have it do all of the blitting locally. SDL generally wants to do all of the blitting on the client and then push the whole window over as a giant pixmap for every frame. At least that's what it has done when I've tried to use it.
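
For illustration, a rough Xlib sketch of that pattern (not the article's actual code, which speaks the X11 protocol over the socket directly; the sprite-sheet names and sizes here are invented). The sheet is uploaded to the server once as a Pixmap; after that, each draw is a small CopyArea request and the blit happens server-side. It assumes a common 24/32-bit TrueColor display.

  #include <X11/Xlib.h>
  #include <stdlib.h>

  /* Sketch: upload a (hypothetical) sprite sheet to the X server once,
     then draw tiles with XCopyArea so only tiny requests cross the wire.
     Assumes a 24-bit TrueColor visual; error handling is elided. */
  enum { SHEET_W = 256, SHEET_H = 64, TILE = 16 };

  int main(void) {
      Display *dpy = XOpenDisplay(NULL);
      if (!dpy) return 1;

      int screen = DefaultScreen(dpy);
      Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, screen), 0, 0,
                                       320, 200, 0, 0, BlackPixel(dpy, screen));
      GC gc = XCreateGC(dpy, win, 0, NULL);

      /* Server-side pixmap that will hold the sprite sheet. */
      Pixmap sheet = XCreatePixmap(dpy, win, SHEET_W, SHEET_H,
                                   DefaultDepth(dpy, screen));

      /* One-time upload: wrap client-side pixels in an XImage and push it. */
      char *pixels = calloc(SHEET_W * SHEET_H, 4);   /* placeholder pixel data */
      XImage *img = XCreateImage(dpy, DefaultVisual(dpy, screen),
                                 DefaultDepth(dpy, screen), ZPixmap, 0,
                                 pixels, SHEET_W, SHEET_H, 32, 0);
      XPutImage(dpy, sheet, gc, img, 0, 0, 0, 0, SHEET_W, SHEET_H);

      XMapWindow(dpy, win);

      /* Per-frame drawing: each XCopyArea is a small fixed-size request;
         the server blits from the pixmap into the window locally. */
      XCopyArea(dpy, sheet, win, gc, 2 * TILE, 0, TILE, TILE, 64, 64);
      XFlush(dpy);

      /* ... event loop elided ... */
      XDestroyImage(img);              /* also frees the pixel buffer */
      XFreePixmap(dpy, sheet);
      XCloseDisplay(dpy);
      return 0;
  }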

immibis
2 replies
1d20h

It's debatably even a flaw. Network transparency is pretty cool. This Minesweeper probably runs faster over an internet SSH tunnel to Australia than any pixel-based remote desktop protocol.

You feel it's a flaw because you only ever run applications locally. But more constraints are a side effect of more possibilities, because you have to program the lowest common denominator of all possible scenarios, so your program works in all of them.

That's how APIs work. They all have this tradeoff.

nottorp
1 replies
1d9h

Say, Wayland doesn't and will never support remote displays will it?

immibis
0 replies
1d5h

Doesn't and never will. It's all based around transferring pixel data. You can write a VNC-like Wayland proxy, which has been done (it's called waypipe), but it will never be as performant as something designed for minimal network traffic. Waypipe will never be able to blit sprites locally on the server, because Wayland clients don't do that.

kragen
0 replies
1d10h

sdl uses shared memory in the usual case, i think

metadat
6 replies
1d20h

This source code looks perfectly legible to me. In fact, for C, it's the equivalent of T-Ball.

Nicely and consistently formatted, relevant comments, sane structure, along with decent fn and var names.

What is your definition of "good, readable code?"

P.s. I've emailed the mods, but a currently dead child comment from a new account states essentially the same thing (my vouch was insufficient to rez): https://news.ycombinator.com/item?id=40742493

LorenDB
4 replies
1d20h

Look at the R_MapPlane function. Indentation in if statements is non-existent.

stoltzmann
1 replies
1d13h

I honestly don't see anything wrong with how it's done, can you please clarify? This is pretty much exactly the formatting I've always used for C.

caseyy
0 replies
1d4h

It could use more verbose variable names and comments to give the reader more context. As it stands, the bus factor is very low.

If this programmer leaves the team and their large codebase like this is left behind, it will likely become a calcified legacy monolith of code that is expensive to maintain.

Today, such code would not pass code review and possibly some automated pre-submit testing (Halstead’s metrics, Microsoft’s maintainability index, etc) in the AAA companies I worked for that cared about code maintainability.

It’s not specifically about formatting or syntax. It’s more about whether a different programmer can look and understand exactly what it does, and what cases it handles, and which it doesn’t, at first glance. And the other programmer can’t be Carmack-experienced or grey beards — it has to be the usual mid-level dude.

The code could even be written in a self-documenting code paradigm with parts extracted into small appropriately named functions. And it doesn’t need to be done to some perfectionist extreme, just a little bit more than what Carmack did. It just can’t be what we jokingly refer to in the industry as academic code — made for one author to understand and for every other reader to be impressed but not work with.

1993 was a different time, more code was academic, most of it was poorly documented. And there were good reasons not to have function call overheads. We even used to do loop unrolling because that saves a jump instruction and a few variable sets (you only increment the program counter as you go). So some of the reasons why this code is the way it is are good. But in readability, we have evolved a lot in the games industry.

So much so that Doom’s code is pretty hard to read for most programmers. I asked around at work, in a large AAA company, and the consensus is that it’s archaic. But you know, it’s still good code, it did what it had to do, I’m not bashing it.

metadat
1 replies
1d20h

What? That's a pretty severe nitpick. Especially for 30-year-old C code.

They didn't have no gofmt back then. We are spoiled today with the extreme consistency :) The skill of reading code includes reading even when it deviates from one's own preferred formatting (within reason; maliciously formatted code can be challenging, to say the least).

I must respectfully disagree about this being an issue worthy of anyone's attention, especially yours and mine.

another2another
0 replies
1d6h

They didn't have no gofmt back then.

Didn't need gofmt as plenty of IDEs and editors had automatic indentation and syntax colouring already implemented.

1993 wasn't the dark ages you know.

caseyy
0 replies
1d19h

I must admit, despite programming games for a long time commercially, that code is not very clear to me.

If the code is perfectly legible to you, can you explain how R_DrawPlanes draws the ceiling and the floor, step by step, as a practical illustration? How long did it take? It took me maybe 5 minutes to understand how it works.

I think just about every function I read every day is easier to comprehend. And I review a lot of game engine code. I make no claims I’m a fantastic programmer, of course.

freestyle24147
1 replies
1d22h

It's pretty easy to post a link to a codebase and say "this is unreadable".

What do you find unreadable about it? What would you do differently now that there have been 30 years of software development between then and now?

caseyy
0 replies
1d4h

In the games industry, we generally try to write code that can be maintained by an average programmer.

I would at least add more comments for context. Some teams have other ideas, like self-documenting paradigms with small functions and descriptive names.

If a typical programmer doesn’t at a glance understand what each line of each function does on screen, what all the variables mean, what changing any line would do, what is the usual data flowing through the function, and what are the limitations of the function, then it either can’t be maintained quickly and cheaply, or maintaining it will introduce unforeseen bugs.

But the key point is to not have a codebase that only a small core team can maintain.

If you want to see examples of the difference between this and modern in C-like code, see Unreal Engine’s source code. It will generally, at least in areas of frequent change, be much easier to read. I would expect a mid-level programmer to understand 90% of UE’s functions in 10 seconds each. And more experienced programmers usually understand most functions they’ve not seen in UE in a couple of seconds.

That’s not the case with Doom code. It took me up to 5 minutes to understand some of them. That means significantly worse readability. And I work in C++, C, C#, and other programming languages at a quite senior level in AAA games. So I don’t think this is a skill issue. It could be, of course.

LorenDB
1 replies
1d20h

Man, that could use a healthy dose of clang-format.

account42
0 replies
1d4h

How so? The formatting is quite consistent even if it is not the style you are used to.

pjmlp
0 replies
1d4h

In that time period the game would have been written in Assembly; Doom was still years away.

pan69
6 replies
1d21h

Actual? Is this not reverse engineered?

MrLeap
2 replies
1d21h

I think the "actual" in this case points to "commercial game".

axus
1 replies
1d21h

From the README.md: "Also thank to Roberto Carlos Fernandez, who many years ago made me promise I'd give him a printed copy of the sources. That copy is buried in some box in storage somewhere, and we'd rather do this than go search for it."

SilasX
0 replies
1d3h

To adapt the old saying, 3 months of reverse engineering can save you 3 hours in the storage room.

Jare
2 replies
1d21h

We lost the source code in any usable form. Even if we dug up the printed copy, scanning it would be tedious as hell. :) So we ended up reverse engineering our own game, fun times.

But yeah, "actual" in that it's a real commercial game (30k copies or so) developed and released in 1987. If memory serves right, we started in late autumn '86 and it was published around November '87.

kotojo
1 replies
23h39m

This is awesome. I listen to a video game history podcast with the founder of the Video Game History Foundation, https://gamehistory.org/, and the one thing he constantly brings up is to send him any and every person in the game industry with fun stories, weird bits and bobs of prototypes, and anything in between. If you've got the time, I know that dude would love to pick your brain!

Jare
0 replies
2h45m

I didn't know about that site! I've queued up a few of the podcast episodes already. Thank you for the reference!

0xf00ff00f
1 replies
1d20h

Wow, that's pretty cool. The game looks great!

Was it developed on an actual ZX Spectrum? I think I read that the development environment for the Spectrum port of R-Type was running on an 80286 PC; curious if this was common back in the day.

Jare
0 replies
1d11h

Yes, developed on a 100% standard Spectrum 48K with the rubber keyboard :) We had a Timex 3" double disc drive, unlike our previous (not commercial) game which we developed using regular cassette tapes.

We used HiSoft's GENS assembler. My brother reverse-engineered the microdrive code in GENS and replaced it with functions to access the Timex drive. That was a huge timesaver for us and other Spanish developers at the time. The source code and the GENS program had to fit in the 48K.

For our next Spectrum game, we used a hardware system to connect an Atari ST and develop on it. It was certainly faster and more comfortable, but the system was buggy as hell and crashed/corrupted the source almost daily.

ykonstant
0 replies
1d7h

The disassembly is great, but I also love the hand-drawn maps. I wish modern games came with thick paper manuals detailing lore, mechanics, development notes and other goodies.

theendisney4
0 replies
1d13h

It looks stunning for the time and system.

pmg101
0 replies
1d9h

Just watched a playthrough at https://youtu.be/i5QV-J3JlAY and I can definitely say that the graphics in this game would have blown 9-year-old me away in 1987! Really impressive.

musha68k
11 replies
2d1h

Great "old-school" article! Intrigued to try Odin sometime.

One interesting thing: in Odin, similarly to Zig, allocators are passed to functions that wish to allocate memory. Contrary to Zig though, Odin has a mechanism to make that less tedious (and more implicit as a result) by essentially passing the allocator as an optional last function argument.

bsder
9 replies
1d20h

The problem with "implicit allocation" is always multithreading.

An allocation library cannot serve two masters. Maximizing single thread performance is anathema to multithreaded performance.

I'd go further. If allocators don't matter, why are you using a systems programming language in the first place?

kragen
8 replies
1d11h

it sounds like implicitly passing the allocator as an extra parameter in every call would solve the problems you identify. if it's passed in a call-preserved register, it doesn't even make that implicit passing slower, because it requires zero instructions to not change the register before calling a subroutine

(when i've done similar things, it's been to pass an arena for inlined pointer-bumping allocation, a strategy widely used by runtimes that want to make allocation fast. normally this requires two registers, though chicken gets by with just one)
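
As a rough illustration of the explicit-allocator style under discussion (a generic C sketch, not Odin's or Zig's actual mechanism; all names are invented): an arena is threaded through as a parameter, allocation is a pointer bump, and everything is freed at once by resetting the arena.

  #include <stddef.h>
  #include <string.h>

  /* A pointer-bumping arena: allocation is an add, "free" is resetting the
     whole arena at a known point (e.g. end of frame). Illustrative only. */
  typedef struct {
      unsigned char *base;
      size_t         used;
      size_t         cap;
  } Arena;

  /* align must be a power of two. */
  static void *arena_alloc(Arena *a, size_t size, size_t align) {
      size_t start = (a->used + (align - 1)) & ~(align - 1);
      if (start + size > a->cap) return NULL;   /* out of space */
      a->used = start + size;
      return a->base + start;
  }

  static void arena_reset(Arena *a) { a->used = 0; }

  /* The explicit-allocator style: anything that allocates takes the arena
     as a parameter, so the caller decides where the memory comes from. */
  static char *copy_string(Arena *a, const char *s) {
      size_t n = strlen(s) + 1;
      char *out = arena_alloc(a, n, 1);
      if (out) memcpy(out, s, n);
      return out;
  }

Odin's context and Zig's explicit allocator parameter are two different ways of threading that same handle around; the implicit-register idea above just moves the threading out of the visible argument list.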

bsder
7 replies
1d10h

Sure, it does, sorta.

However, again, if you don't care about your allocators, why are you using a systems programming language? If you're willing to give up control over allocation, you are way, way better off in a managed memory (garbage collected or reference counted) language.

Systems programming is pain for gain. You give up a lot of convenience in order to have strict control over things--control of latency, control of memory, control of threading, etc. The price for that control is programming with a lot fewer abstractions and far less help from the language/compiler/interpreter to keep you from shooting yourself in the foot.

If you aren't using that "control" then you're just making life painful for yourself for no good reason.

Obviously, people can use whatever language they perfectly well want for whatever reason they want. Given how much I talk about and use C, Rust and Zig, people are always surprised that, when they ask whether they should use any of the systems languages (C/C++/Rust/Zig/etc.), my first response is always "Oh, hell, no."

kragen
6 replies
1d10h

passing an allocator as an implicit parameter doesn't give up any control over allocation; with that mechanism you can still pass a different allocator when you want (and i assume odin allows that, though i haven't tried it)

but i disagree pretty comprehensively with your comment. i don't agree that systems programming is pain, i don't agree with your definition of systems programming as programming with tight low-level control, i don't agree that tight low-level control is pain, i don't agree that abstraction is a necessary or useful thing to give up either for systems programming or to get tight low-level control (though it certainly is expedient), i don't agree that garbage collection (including reference counting) is the only way to simplify memory management, and i don't agree that either systems programming or tight low-level control requires accident-prone languages (and i'm especially surprised to see that assertion coming from an apparent rustacean)

that is, i recognize that these tradeoffs are possible. i just disagree that they're necessary

nottorp
3 replies
1d9h

Not to mention we're talking about programming games here and you generally preload your assets and don't mess with them dynamically or your performance tanks. It's not only systems programming.

kragen
2 replies
1d9h

game engines typically do a lot of dynamic allocation, even aside from loading new levels; tight control over where that allocation happens and what to do when it fails is maybe the most important reason c++ is so popular in the space

c++ is a good example of having tight low-level control without programming with a lot fewer abstractions. indeed, the abundance of abstraction is what makes c++ usually so painful

i think it's reasonable to describe game engine programming as systems programming. i mean some of it, like writing interpreters, persistence frameworks, drivers for particular devices, and netcode, is pretty obviously right in the core of systems programming, but plausibly all of it is in that wheelhouse

account42
1 replies
1d4h

tight control over where that allocation happens and what to do when it fails is maybe the most important reason c++ is so popular in the space

It's also the most important reason why the C++ standard library is so unpopular in the space. All standard C++ containers allocate implicitly because often enough that's OK even in systems programming. What matters is that you can control the allocation more finely when you want.

kragen
0 replies
1d1h

yeah, agreed. when adding the stl to the standard library was being debated in the standards committee, microsoft forced the stl to use pluggable allocators, so you can easily make your std::vector allocate on your per-frame pointer-bumping heap, but often that's a poor substitute for not allocating at all

bsder
1 replies
20h13m

that is, i recognize that these tradeoffs are possible. i just disagree that they're necessary

That's a nice theoretical position; however, the current programming languages as they exist disagree with you.

I would also point out that one of the problems in "systems programming" is that it encompasses both "can run full blown Linux" and "slightly more CPU than a potato and has no RAM". Consequently, there are VERY different lenses looking at "systems programming".

i don't agree that either systems programming or tight low-level control requires accident-prone languages (and i'm especially surprised to see that assertion coming from an apparent rustacean)

Rust is particularly poor when you can't define ownership at compile time. If you have something that you init once and then make read-only, you will be writing unsafe. If memory ownership passes between Rust and something else (say: memory between CPU and graphics card), you will be writing lots of unsafe. RPC via shared memory with a non-Rust process--prepare for pain.

Writing "unsafe Rust" is super difficult--more so than straight C/C++. If you are writing enough of it, why are you in Rust?

You have to architect your solution around Rust to make the most of it and lots of things (especially stuff at runtime) are off limits. See: Cliff Biffle from Oxide and all the things he needed to do to make their RTOS completely defined at compile time because anything at runtime just gave Rust fits.

steveklabnik
0 replies
20h2m

because anything at runtime just gave Rust fits.

This is not the main reason that hubris does things up front. It does that because it makes for a significantly more robust system design. From the docs:

We have chosen fixed system-level resource allocation rather than dynamic, because doing dynamic properly in a real-time system is hard. Yes, we are aware of work done in capability-based memory accounting, space banks, and the like.

https://hubris.oxide.computer/reference/#_pragmatism

wredue
0 replies
2d

One of Zig's core tenets is "no hidden allocations". You'll never see the Zig language supporting a default value for allocator parameters to support hiding it at first glance.

Zig does have support for empowering library users to push coded settings into libraries, and you could conceivably use this for writing this type of code. Although it's probably not worthwhile.

Or you can just straight up default your library to using a specific allocator, fuck the calling code.

Anyway. Zig has patterns to do stuff like this, but it’s probably unwieldy in large projects and you’re better off just making it a parameter.

You can also look into the std ArrayList, which provides managed and unmanaged variants for yet another way that you might write code that empowers users to set an allocator once, or set it every time.

sublinear
9 replies
1d15h

I heard some hype lately about Godot so took a look today... I'm super bummed that the wasm is 50MB minimum just to get the engine rendering a blank scene.

Seems like that could be further optimized especially for simple 2D games that don't use many features. I was impressed overall though. I hadn't looked at Godot in a long time.

Aeolun
3 replies
1d8h

If you run on web, yes. But on desktop 50mb to get a whole game engine seems pretty awesome.

account42
2 replies
1d4h

What is a "whole game engine"? Ideally an exported game only includes the parts that are actually used and those should be much smaller than 50 MB for a simple example or even for most games.

manchmalscott
0 replies
1d3h

The official “export templates” that you use need to be able to run any game you create with the stock engine, so they are fully featured. You have the option (and it’s recommended) to compile your own export templates, at which point you can specify which parts of the engine your specific game does and does not need (e.g. turn off the 3D, turn off the more advanced UI components, etc…)

Aeolun
0 replies
7h55m

It might be, but it’s extremely rare I see an app below 300mb these days.

pkphilip
2 replies
1d7h

50MB is nothing. Consider that a "small" footprint Electron app is at least 120mb.

josephg
0 replies
1d6h

Sure, but web apps are downloaded in their entirety when they're opened. A 50 MB website would be horrible. Could be expensive to host, too.

And, like Electron, the size is almost entirely unnecessary. Even in the browser you can make Quake in 13 KB:

https://js13kgames.com/entries/q1k3

account42
0 replies
1d4h

Just because many webapps are bloated doesn't make the size any less ridiculous.

pacifika
0 replies
1d4h

I think they would love to get any insight into shrinking this!

kragen
0 replies
1d10h

also you need https and some tweaks to your web server's cors parameters to permit wasm to multithread. godot is amazing tho
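
For reference, the server-side tweak is usually the pair of cross-origin isolation response headers that browsers require before SharedArrayBuffer (and therefore multithreaded wasm) is enabled, on top of serving over https; the exact config syntax depends on the web server:

  Cross-Origin-Opener-Policy: same-origin
  Cross-Origin-Embedder-Policy: require-corp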

Razengan
7 replies
1d23h

Even some games from 1984 or even earlier are amazingly complex, making you wonder how they made them in such a short time with limited tools and manpower.

space_oddity
1 replies
1d23h

The complexity and creativity of early video games from the 1980s and even earlier are truly impressive

another2another
0 replies
1d6h

Yes, what people managed in pure assembly language was really impressive.

Unsurprisingly, though, they also spent a lot of time developing time-saving tools, like macro assemblers and higher-level languages like C.

The games on the Amiga and Atari ST were probably the last heyday of that kind of development.

kragen
1 replies
1d10h

hey, are you the guy that tried to steal freenode?

mietek
0 replies
1d4h

That was a "rasengan".

shiroiushi
0 replies
1d11h

They didn't get distracted by phones, the internet, or multitasking. When you sat in front of your computer, you could only do one thing at a time on it, and you dedicated all your attention to it. You didn't have a company chat or email window popping up notifications about irrelevant stuff, you didn't get the urge to look up random things on Wikipedia, etc. You probably also got to sit in a small room without a lot of distractions too, instead of sitting in a huge open-office area next to the sales group.

ilrwbwrkhv
0 replies
1d20h

Passion + skill > Total compensation optimization + Javascript.

Razengan
0 replies
1d20h

And little prior art to draw inspiration from!

The 1980-2000 era was just a raw font of imagination for video games. Since then things have become more iterative and tend to focus on maximizing profits, though thankfully there is a lot of creativity in indie studios/solo devs being empowered by the increasing ease of modern game engines, and even some bigger studios here and there, like FromSoftware.

jandrese
5 replies
1d22h

X11 is old and crufty, but also gets out of the way. Once a few utility functions to open the window, receive events, etc have been implemented, it can be forgotten and we can focus all our attention on the game. That’s very valuable. How many libraries, frameworks and development environments can say the same?

This is my thought as well. You can even avoid some of the grotty details of this article if you use Xlib as your interface instead of going in raw over a socket. Basic Xlib is surprisingly nice to work with, albeit with the caveat that you're managing every single pixel on the screen. For something like a game where you're not using system widgets it is all you need.

Where people ran into trouble was when they tried to add the X Toolkit, which is far more opinionated and intrusive.

immibis
2 replies
1d20h

You might also consider using xcb, which is more of a simple wrapper around the X11 binary protocol, rather than Xlib which leakily tries to abstract it. The famous example (noted in XCB's documentation) is calling XInternAtom in a loop to intern many atoms. Xlib forces you to send request, wait for response, send request, wait for response. XCB lets you send all the requests, then wait for all the responses.
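
As a concrete sketch of the difference (the atom names are just a few arbitrary examples): with xcb, all the InternAtom requests are issued up front and the cookies collected, then the replies are read, so the total latency is roughly one round trip instead of one per atom.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <xcb/xcb.h>

  int main(void) {
      /* Arbitrary example atoms; a real client might intern dozens. */
      const char *names[] = { "WM_PROTOCOLS", "WM_DELETE_WINDOW", "_NET_WM_NAME" };
      enum { N = sizeof names / sizeof names[0] };

      xcb_connection_t *conn = xcb_connect(NULL, NULL);
      if (xcb_connection_has_error(conn)) return 1;

      /* Fire off all the requests first; nothing blocks here. */
      xcb_intern_atom_cookie_t cookies[N];
      for (int i = 0; i < N; i++)
          cookies[i] = xcb_intern_atom(conn, 0, strlen(names[i]), names[i]);

      /* Then collect the replies; the waiting overlaps, so the cost is
         roughly one round trip rather than N. */
      for (int i = 0; i < N; i++) {
          xcb_intern_atom_reply_t *r = xcb_intern_atom_reply(conn, cookies[i], NULL);
          if (r) {
              printf("%s = %u\n", names[i], r->atom);
              free(r);
          }
      }

      xcb_disconnect(conn);
      return 0;
  }

Xlib's XInternAtom returns the atom value directly, so each call has to complete its round trip before the next request can even be sent.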

account42
1 replies
1d4h

Xlib is definitely crusty but that example isn't really that convincing as you're not going to be interning atoms all the time but ideally only during initialization - after all the whole point of atoms is that you only pass the strings once over the protocol and then use the numeric IDs in subsequent requests.

immibis
0 replies
6h37m

Yes, but when you have 100 atoms and a 300ms round trip time (New Zealand to anywhere, or a satellite link in any part of the world) that's the difference between the application starting in 0.3 seconds or 30 seconds. Add a few more round trips for other stuff: 2 seconds or 32 seconds. Of course interning atoms isn't the only thing apps do on startup that is unnecessarily serialized. There could well be another 100 unnecessary round trips.

If you've ever actually tried using that configuration, you might notice that every part of every application suffers from this same problem. Almost all slowness of remote X11 used to be caused by stacking up round trip delays. Probably still is, though there's another cause now, which is transferring all the pixel data because apps treat it as a dumb pixel pipe.

This isn't a niche problem and it doesn't only affect application startup.

kragen
1 replies
1d11h

xlib is miserable, but most of the misery is the x11 protocol

http://www.art.net/~hopkins/Don/unix-haters/x-windows/disast... exaggerates slightly but is basically in keeping with my experience. win32 gdi is dramatically less painful. it's true that the x toolkit is far worse than xlib

if you do want to write a game where you manage every single pixel, sdl is a pretty good choice. i also wrote my own much smaller alternative called yeso: https://gitlab.com/kragen/bubbleos/blob/master/yeso/README.m...

tetris with yeso is about five pages of c: https://gitlab.com/kragen/bubbleos/blob/master/yeso/tetris.c

the stripped linux executable of tetris is 31.5k rather than 300k. it does need 10 megs of virtual memory though, but that's just because it's linked with glibc

i should do a minesweeper, eh?

krapp
0 replies
15h24m

i should do a minesweeper, eh?

Go for it. I just finished a lazy port to C and SDL. Not counting SDL and the spritesheet it's 42Kb. It's a fun weekend hack.

ferrantim
4 replies
2d6h

Different topic but this article got me lost down a rabbit hole looking for something similar for the TI86. Ah, memories...

Sohcahtoa82
3 replies
2d2h

My favorite TI graphing calculator story to tell was back in Algebra II class in high school, while studying polynomial expansion, I wrote a program on my TI-85 that would not only solve them, but also showed the work, so I literally only had to copy the exact output of the program and it looked exactly like I had done it by hand. I asked the teacher if using it would be cheating, and she said "If you know the material so well that you can write a program that actually shows the work, then you're going to ace the test anyway, so go ahead and use it, just don't share it with any of your friends."

The joke was on her, of course, because I didn't have any friends. :-(

Later I wrote a basic ray tracer for my TI-89. I even made it do 4x anti-aliasing by rendering the scene 4 times with the camera angle slightly moved and had a program that would rapidly move between the 4 rendered pics so that pixels that were dark for only some of the pictures would appear grey because of the screen's insanely slow response time. A basic "reflective sphere over a checkered plane" in that super low TI-89 resolution still took like 90 minutes and drained half the battery.

nsguy
1 replies
2d1h

I was just listening to this podcast the other day: https://99percentinvisible.org/episode/empire-of-the-sum/

It tells the story of how TI got into the calculator market and the domination it achieved in the US classrooms (+ other interesting tidbits).

pests
0 replies
1d17h

Asianometry has a good ~2-month-old video on TI that goes into its history as a chip maker, how it got into calculators and consumer products, and where it stands today.

https://youtu.be/Wu3FnasuE2s?si=cnOV7oPLc_MSYyyn

kragen
0 replies
1d10h

wonderful

wslh
2 replies
2d

I remember that around 1985 it was simpler to write basic games on the TI99/4A in TI Extended BASIC or Logo. The sprite support was an advantage over writing basic games on PCs, which had no sprite support. I remember performance similar to the Atari 2600, without using Assembler.

0xcde4c3db
1 replies
1d19h

The graphics chip of the TI99/4A (TMS9918A) was turned into a separate product line for other computer manufacturers, and became hugely influential in the 1980s. Sega used it in the SG-1000 and SC-3000, which led to enhanced clones being used in the Master System, Game Gear, and Genesis. It also became part of the MSX computer standard, which spawned another lineage from Yamaha. The overall design of the graphics hardware of the NES, Game Boy, and TurboGrafx-16 is also strikingly similar despite none of those systems being descended from anything that used the TI chips.

wslh
0 replies
1d3h

Great insight!

I always saw the TI99/4A as one of the "zillions" of "microcomputers" in the 70s/80s, but from your comment I've now learned that it was not so simple, given that Texas Instruments was one of the leaders (if I remember correctly) of the semiconductor industry at that time. Also the ColecoVision [2] used a variant of that chip. The ColecoVision had the best version of Donkey Kong [3] available in game consoles.

Reading about the TMS9918A[1] now. Thank you very much.

[1] https://en.wikipedia.org/wiki/TMS9918

[2] https://en.wikipedia.org/wiki/ColecoVision

[3] https://archive.org/details/donkey-kong-game-manual-coleco-v...

rkagerer
2 replies
1d16h

Microsoft’s official Minesweeper app has ads, pay-to-win, and is hundreds of MBs

WTF? This is a showcase of everything wrong with the company today.

zephyrfalcon
0 replies
1d10h

More generally, an example of what is wrong with the experience of using computers/internet today.

shiroiushi
0 replies
1d11h

You sound like someone who doesn't own any MSFT stock.

immibis
2 replies
1d20h

It's noteworthy that it's impossible, by design, to write a statically linked Wayland executable. You have to load graphics drivers, because there's no way to just send either pixel data or drawing commands to the display server, like you can in X11. You have to put the pixel data in a GPU buffer using a GPU driver, then send the GPU buffer reference.

kragen
0 replies
1d10h

thank you!

mrdanielricci
1 replies
1d15h

Good job!!

I did a 1:1 replica of the Windows 95 version of Minesweeper.

You can find it at https://github.com/danielricci/minesweeper

I didn’t do anything fancy in terms of reducing the size of the output file, but it was fun to try and replicate that game as best as I could.

jamesdhutton
0 replies
7h21m

Would love to try it but, knowing nothing about Java, I don't know how to run it. Could you add instructions to your github page on how to run it?

demondemidi
1 replies
1d14h

1 MB of assets? Huh?

Joker_vD
0 replies
1d9h

Yeah, the entirety of Chip's Challenge for Windows 3.1 was less than half a megabyte.

aidenn0
1 replies
1d1h

Fun fact: the Windows 3.1 minesweeper had a cheat code! Typing:

  x y z z y S-Return
would cause the top left pixel of the screen to change color depending on whether the cursor was over a safe square or a bomb. Since you could plant flags before the timer started, it was possible to get rather unrealistic times.

lagniappe
0 replies
1d1h

That's awesome :) I used to cheat by setting the match to a custom size board with a height of 999 and width of 999. The board would not end up quite that large; however, clicking a single tile would reveal all other tiles of interest immediately, allowing me to mark them at my leisure.

tedunangst
0 replies
1d16h

Since authentication entries can be large, we have to allocate - the stack is only so big. It would be unfortunate to stack overflow because a hostname is a tiny bit too long in this file.

What? How big are your hostnames?

steve1977
0 replies
1d8h

"Let’s write a video game from scratch like it’s 1987"

"We will implement this in the Odin programming language"

checks Odin homepage...

"The project started one evening in late July 2016"

spacecadet
0 replies
1d22h

I did this last year: built a zero-player pixelated space simulator using Pygame.

pjmlp
0 replies
1d4h

In 1987 the game would have been written in Assembly.

This is more like 1992.

Michael Abrash's book was published in 1990.

ngcc_hk
0 replies
1d11h

Can I say, when you said "video game" I was thinking arcade, DOS, Windows … or even Mac (1984). X Windows!!! … or, as someone said, SDL-based … and Unix curses-based.

It is legitimate, but by 1987 …

mproud
0 replies
1d5h

I don’t exactly remember the rules though so it’s a best approximation.

What?

lagniappe
0 replies
1d1h

I really appreciate the effort and attention to detail that went into this.

agumonkey
0 replies
1d22h

I'm often curious about how people organized / conceptualized development in the 60s, 70s and 80s.

Lerc
0 replies
1d18h

One of the first DOS PC programs I made was a MineSweeper clone. It was done as a special request for some friends who had machines that were not up to running Windows, but were addicted to minesweeper from school computers. It was a little weird trying to implement a game I hadn't seen myself, but they gave me very precise descriptions (I think most of them have Math PhDs now)

I did it in Turbo Pascal with BGI graphics. I remember having problems with recursively uncovering empty tiles in large levels where mines were sparse. Recursive algorithms turned out to be rather tricky in general when the stack segment is 64 KB.

I added a starting disk of various diameters which let you pick a starting location without risk of explosion, which I think was appreciated.
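
The recursion trouble described above is commonly sidestepped by doing the reveal as a flood fill with an explicit work queue instead of the call stack; a rough C sketch (the board layout and names are invented for the example, not the original Turbo Pascal code):

  #include <stdbool.h>
  #include <stdlib.h>

  /* Hypothetical board: W x H cells, adjacent_mines[] precomputed,
     revealed[] tracks what the player has uncovered. */
  enum { W = 30, H = 16 };
  static unsigned char adjacent_mines[W * H];
  static bool revealed[W * H];

  /* Uncover (x, y) and flood outward through zero-neighbour cells using an
     explicit work queue, so the fill depth never touches the call stack. */
  static void uncover(int x, int y) {
      int *queue = malloc(sizeof(int) * W * H);
      int head = 0, tail = 0;

      queue[tail++] = y * W + x;
      revealed[y * W + x] = true;

      while (head < tail) {
          int cell = queue[head++];
          if (adjacent_mines[cell] != 0)
              continue;                     /* numbered cell: stop expanding here */

          int cx = cell % W, cy = cell / W;
          for (int dy = -1; dy <= 1; dy++) {
              for (int dx = -1; dx <= 1; dx++) {
                  int nx = cx + dx, ny = cy + dy;
                  if (nx < 0 || ny < 0 || nx >= W || ny >= H) continue;
                  int n = ny * W + nx;
                  if (!revealed[n]) {
                      revealed[n] = true;   /* each cell is queued at most once */
                      queue[tail++] = n;
                  }
              }
          }
      }
      free(queue);
  }

Since each cell is enqueued at most once, the queue needs at most one slot per cell, no matter how deep the equivalent recursion would have gone.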

Isamu
0 replies
1d5h

Honestly, at that time I remember getting fed up with using a library for the graphics of a simulation I was doing over a weekend.

So I just threw out the graphics library and wrote directly to the screen memory. Lots of games did that.