
Porting my JavaScript game engine to C for no reason

senkora
23 replies
1d2h

Many Web games were created with Impact [the game engine from the article] and it even served as the basis for some commercial cross-platform titles like Cross Code, Eliot Quest and my own Nintendo Wii-U game XType Plus.

Cross Code is an excellent game. I knew that it used web tech and I was constantly amazed by how performant it was on the Nintendo Switch hardware. I would guess that this engine deserves some credit there!

phoboslab
9 replies
1d1h

To be fair, they modified Impact _a lot_. In some of their development streams[1] you can see a heavily extended Weltmeister (Impact's level editor).

Imho, that's fantastic! I love to see devs being able to adapt the engine for their particular game. Likewise, high_impact shouldn't be seen as a “feature-complete” game engine, but rather as a convenient starting point.

[1] https://youtu.be/4lZfnM9Ubeo?t=3215

pdpi
8 replies
23h56m

To be fair, they modified Impact _a lot_.

You can't polish a turd. There would've been no point in modifying the engine a bunch if you hadn't given them a useful base to work with.

make3
3 replies
23h0m

that's just crazy BS. Starting from open source code and adding specific features needed for a project is a very common strategy; it doesn't mean at all that the tool wasn't good to begin with

pdpi
1 replies
22h20m

I think we're in violent agreement.

phoboslab was downplaying their own efforts by saying that the Cross Code team customised the Impact engine a bunch. My point was that no amount of customisation can turn a bad engine into a good one (you can't polish a turd), so phoboslab definitely deserves credit for building the base engine for Cross Code.

seba_dos1
0 replies
11m

no amount of customisation can turn a bad engine into a good one (you can't polish a turd)

At a risk of being off-topic and not contributing much to this particular conversation (as I doubt it's relevant to the point you're making), I'd like to note that I often actually find it preferable to "polish a turd" than to start from scratch. It's just much easier for my mind to start putting something that already exists into shape than to stop staring at a blank screen, and in turn it can save me a lot of time even if what I end up with is basically a ship of Theseus. Something something ADHD, I guess.

However, I'm perfectly aware this approach makes little sense anywhere you have to fight to justify the need to get rid of stuff that already "works" to your higher-ups ;)

xtirpation
0 replies
22h26m

I think the point is that Impact is _not_ a turd because it could be polished.

dylan604
1 replies
20h37m

You can't polish a turd

Of course you can. Not really sure why this still is tossed about. You just get a shiny turd with a lot less stink. I've made a career of taking people's turds and turning them into things they can now monetize (think old shitty video tape content prepped to be monetized with streaming).

dgb23
0 replies
4h38m

I like this take, but you're saying something different I think, which is more along the lines of "Customers don't care about how the sausage is made".

You didn't polish a turd, you found something that could be valuable to someone and found a way to make it into a polished product, which is great.

But "You can't polish a turd" implies it's actually a turd and there's nothing valuable to be found or the necessary quality thresholds can't be met.

BadHumans
0 replies
23h52m

In game development this isn't true, for better or worse. There is a lot of sunk-cost mindset in games: we go with what we have because we have already invested the time in it, and we'll make it work by any means.

rowanG077
6 replies
23h58m

The Switch port required heroic levels of effort, though, as I recall.

0cf8612b2e1e
5 replies
23h41m

Huh. Never would have guessed. I am sure there were many “fancy” effects in the game, but during my playthrough, it all felt like it would have been achievable on a SNES.

pjmlp
3 replies
11h35m

That is to be expected from anything that isn't C, C++ or Assembly; it's also why Unity has IL2CPP.

Even Rust is going to require heroic levels of effort until console vendors decide it is an acceptable language to be made available on their devkits.

jdnxudbe
2 replies
9h25m

I am not familiar with either Rust or the Switch, so maybe you can enlighten me: why would a Rust game be challenging on the Switch?

As long as you have a good C ffi (which Rust has), shouldn't it be quite easy?

Or is the Switch more similar to Android/iPhone where you cannot (could not?) run native code directly but have to use a platform specific language like Java or Objective C?

pjmlp
0 replies
8h58m

As flohofwoe pointed out, you are limited by the console vendor SDK, the compiler toolchains, and everything required to target their OS, linker, API, ABI,...

flohofwoe
0 replies
9h4m

AFAIK some (all?) console vendors require you to use the compiler toolchain from their SDK to compile your code. So unless the console SDK includes a Rust toolchain the only way is to somehow translate the Rust code to C or C++ and compile that with the SDK compiler.

Maybe if the console SDK toolchain is Clang based it would also work to go via LLVM bitcode instead of C/C++ though.

Version467
1 replies
1d1h

Thanks for the recommendation. Looks interesting and is currently on sale on steam, so I bought it.

Modified3019
0 replies
23h17m

One thing to note is that you don't have to feel compelled to master every combat mechanic the game throws at you (which is a lot); you can just pick your favorites. A “fox with one trick vs a thousand” and all that.

I for example basically ignore the shield for the vast majority of the game, only doing some very basic usage for some bosses, but perfect counters could very well be your favorite thing.

senkora
0 replies
22h49m

Really interesting watch, thanks for sharing!

So, if I understand correctly, this is roughly what he did:

1. Wrote a transpiler to convert the subset of Javascript used by CrossCode into a dialect of Haxe.

2. Selectively modified his version of Haxe in small ways to more closely match the semantics of Javascript, to make writing the transpiler easier.

3. Selectively re-wrote complicated parts of the Javascript code into Haxe directly, so that his transpiler didn't have to handle them.

4. Transpiled <canvas> and other browser API calls into calls to his own pre-existing Kha framework for Haxe, which provided similar APIs.

5. Compiled the Haxe output into C++.

6. Compiled the C++ to native code.

Damn.

otachack
0 replies
1d1h

Agreed! While I loathe JavaScript, it was immensely impressive that a masterpiece like Cross Code came from it.

GuB-42
0 replies
21h13m

That Switch port took a lot of effort, as someone has already commented; it is absolutely not standard impact.js.

As an anecdote: everyone wanted a Switch version, but considering the technical limitations, the team replied with "Sorry but CrossCode will be coming to Switch when Hedgehags learn to fly." [1] When they finally got to do it [2], it came with an extra quest called "A switch in attitude", featuring, you guessed it, flying Hedgehags.

[1] https://www.radicalfishgames.com/?p=6581 [2] https://www.radicalfishgames.com/?p=6668

cookiengineer
11 replies
12h14m

Honestly I would never ever execute any code from this guy. He is the inventor/founder behind the coinhive crypto mining network. [1]

This guy made billions illegally [2], and maintained the biggest ransomware crypto coin network for years, by offering the tools and SDKs to fund dozens of cyber war involved agencies across the planet. [3]

I have no idea how he got away with it, because his name keeps appearing in lots of crypto trading companies and trade registries. (Not gonna post them, but you can google his name to find this evidence)

He even organized a doxxing campaign against Brian Krebs at the time, called "krebsistscheisse", via his pr0gramm platform [4] [5] [6], to somehow defend the idea that abusing users' computers for personal enrichment is a legit way of making money if you donate some low percentage of it to cancer research?!?

Sorry, but I would never trust this guy's code again. You should be careful, and audit any code he writes before you execute it.

[1] https://krebsonsecurity.com/2018/03/who-and-what-is-coinhive...

[2] 30% fee of monero/XMR went to coinhive: https://coinmarketcap.com/currencies/monero/

[2b] Schuerfstatistik on pr0gramm, where it all started: https://web.archive.org/web/20231005033135/https://pr0gramm....

[2c] Troyhunt analysis after he snatched away the coinhive TLD: https://web.archive.org/web/20240804081830/https://www.troyh...

[3] https://www.trendmicro.com/vinfo/us/security/news/cybercrime...

[3] https://krebsonsecurity.com/tag/dominic-szablewski/

[4] https://krebsonsecurity.com/2019/03/annual-protest-raises-25...

[5] (German) https://www.t-online.de/digital/aktuelles/id_83466874/tausen...

[6] https://www.heise.de/news/krebsistscheisse-Spendenwelle-an-K...

(Lots of other articles about it, and that Dominic Szablewski was the guy behind coinhive, and the original owner of pr0gramm, while still doing development work for the company that officially owns the imageboard nowadays)

thih9
3 replies
11h59m

I find this context helpful, but I wish it was written in a less personal and a more objective way.

Which of these sources is about the doxxing campaign? Or about making billions for that matter? And what kind of billions, monero or $?

cookiengineer
2 replies
11h37m

Added some more links/news because I was on the mobile before.

If you google "doxxing brian krebs pr0gramm" you will find lots of other news sources, same as for "coinhive trade volume", as it was the platform that made monero/XMR the biggest cryptojacking platform.

thih9
1 replies
11h28m

If you google "doxxing brian krebs pr0gramm" you will find lots of other news sources

I found no doxxing, I read that they organized a fundraiser, donated to cancer research, and even Brian Krebs wrote that “the response from pr0gramm members has been remarkably positive overall.”[1]

[1]: https://krebsonsecurity.com/2018/03/coinhive-expose-prompts-...

cookiengineer
0 replies
9h26m

I found no doxxing

If you think that images of brian krebs' face with "Hurensohn" (German for "son of a whore") on it are not doxxing, you must be living in a parallel world.

Not gonna post direct links to this, because of HN guidelines. See the VICE article about it, which still contains some of those images. [1]

What I'm saying is that there was an attempt to doxx Brian Krebs, and the users of the imageboard [2] and Gamb, one of the admins, was turning that shitstorm into a positive thing. [3]

[1] NSFW (kinda) https://www.vice.com/de/article/j5apm4/coinhive-die-fieseste...

[2] NSFW (as you might guess) https://pr0gramm.com/new/hurensohn%20brian%20krebs

[3] https://pr0gramm.com/user/Gamb/uploads/2455908

phoboslab
3 replies
8h46m

I don't know what your personal agenda is, but there's so much misinformation and hyperbole in your comment that I have to assume that this is personal for some reason!?

I've been meaning to write a proper post-mortem about all that, now that the dust has settled. But in the meantime, just quickly:

- I did not make billions. You're off by quite a few orders of magnitude. After taxes it was well below $500k.

- Nothing I did was illegal; that's how I got away with it.

- Coinhive was not ransomware. It did not encrypt/hide/steal data. In fact, it did not collect any data. Coinhive was a JavaScript library that you could put on your website to mine Monero.

- I did not operate it for "years". I was responsible for Coinhive for a total of 6 months.

- I did not organize a doxing campaign. There was no doxing of Brian Krebs. I had nothing to do with the response on the image board. They were angry, because Brian Krebs doxed all the wrong people and their response was kindness: donating to cancer research. In German Krebs = cancer, hence the slogan “Krebs ist scheiße” - “cancer is shit”.

- Troy Hunt did not "snatch away" the coinhive domain. I offered it to him.

In conclusion: I was naive. I had the best intentions with Coinhive. I saw it as a privacy preserving alternative for ads.

People in the beta phase (on that image board) loved the idea of leaving their browser window open for a few hours to gain access to premium features that you would otherwise have to buy. The miner was implemented on a separate page that clearly explained what was happening. The Coinhive API was expressly written with that purpose: attributing mined hashes to user IDs on your site. HN was very positive about it, too. [1]

The whole thing fell apart when website owners put the miner on their page without telling users. And further, when the script kiddies installed it on websites that they did not own. I utterly failed to prevent embedding on hacked websites and educating legitimate website owners on “the right way” to use it.

[1] https://news.ycombinator.com/item?id=15246145

cookiengineer
2 replies
6h24m

I did not make billions.

I only have access to the trade volume of coinhive's wallet addresses that were publicly known at the time and what the blockchain provides as information about that. How much money RF or SK or MM made compared to you is debatable. But as you were a shareholder of the company/companies behind it, it's reasonable to assume you've got at least a fair share of their revenue.

If you want me to pull out a copy of the financial statements, I can do so. But it's against HN's guidelines so I'm asking for your permission first to disprove your statement.

Nothing I did was illegal (...) Coinhive was not ransomware

At the time, it quickly became the 6th most common miner on the planet, used primarily (> 99% of the transaction volume) in malware.

It was well known before you created coinhive, and it was known during and after. Malpedia entries should get you started [1] [2] but I've added lots of news sources, including German media from that time frame, just for the sake of argument [3] [4] [5] [6] [7] [8]

----------

I've posted troyhunt's analysis because it demonstrates how easily this could've been prevented. A simple correlation between Referer/Domain headers or URLs and the tokens would've been enough to figure out that a threat actor from China that distributes malware very likely does not own an .edu or .gov website in the US, nor SCADA systems.

As there was a financial benefit on your side, no damage payments to any of the affected parties, and no revoked transactions from malicious actors, I'd be right to assume an unethical motivation behind it.

I did not organize a doxing campaign. There was no doxing of Brian Krebs.

As I know that you're still an admin on pr0gramm as the cha0s user, that's pretty much a useless archive link.

Nevertheless I don't think that you can say "There was no doxing of Brian Krebs" when you can search for "brian krebs hurensohn" on pr0gramm, still, today, with posts that have not been deleted, and still have his face with a big fat "Hurensohn" stamp on it. [9]

As I wrote in another comment, I also said that there are also nice admins on the imageboard like Gamb, and that they successfully turned around that doxxing attempt into something meaningful.

I don't know what your personal agenda is, but there's so much misinformation and hyperbole in your comment that I have to assume that this is personal for some reason!?

This is not personal for me, at all. But I've observed what was going on and I could not be silent about the unethical things that you built in the past.

To me, doing that destroyed all trust and good faith in you. The damage that you caused on a global scale with your product coinhive far exceeds whatever one person can make up for in a lifetime. And I think that people should know about that before they execute your code and become victims of a fraudulent coin mining scheme.

Calling this hyperbole and misinformation is kind of ridiculous, given that antivirus signatures and everything are easily discoverable with the term "coinhive". It's not like it's a secret or made up or something.

----------

[1] https://malpedia.caad.fkie.fraunhofer.de/details/win.coinmin...

[2] https://malpedia.caad.fkie.fraunhofer.de/details/win.monero_...

[3] https://cyberexperts.com/what-is-coinhive-malware/

[4] https://censys.com/de/hunting-for-threats-coinhive-cryptocur...

[5] https://www.pcrisk.de/ratgeber-zum-entfernen/8716-coinhive-v...

[6] https://www.golem.de/news/kryptomining-coinhive-skripte-warn...

[7] https://www.malwarebytes.com/blog/detections/coinhive-com

[8] https://www.coindesk.com/tag/coinhive/

[9] https://pr0gramm.com/top/brian%20krebs%20hurensohn

----------

phoboslab
1 replies
6h0m

Googling your name reveals that you like to stir up drama. Please find another venue.

cookiengineer
0 replies
5h31m

Googling your name reveals that you like to stir up drama. Please find another venue.

Kind of ironic, given that you claim to have been doxxed out of your own forum.

stuffoverflow
1 replies
7h59m

Ok so he basically made a hidden javascript based miner and tools to distribute it.

Couldn't find anything to support the claim that he would have tried to dox Krebs. Also, "maintaining the biggest ransomware crypto coin network" feels like dishonest phrasing, trying to make it sound like he had something to do with ransomware. Monero was practically never used for ransomware payments back when coinhive was active, and even today Bitcoin is by far the most used method for ransom payments. Monero was simply the most profitable coin to mine with a CPU.

That being said I agree that I wouldn't trust any software made by this guy. Even the hidden miner was obviously highly unethical and probably illegal.

cookiengineer
0 replies
7h20m

Monero was practically never used for ransomware payments back when coinhive was active

This assumption is somewhat wrong. While I agree that ransomware payments themselves weren't done via monero or coinhive, malware on the other hand (read as: installed viruses/trojans/programs that the owner of the machine didn't consent to) was using it primarily to mine crypto coins.

See [3] from my previous post:

https://www.trendmicro.com/vinfo/us/security/news/cybercrime...

Kiro
0 replies
3h50m

I think Coinhive was really cool and a fantastic idea that was ruined by rogue actors. I love the thought of mining for 20 seconds to unlock reading an article instead of getting out your credit card or even paying with crypto. Completely anonymous payment with zero overhead.

mgaunard
10 replies
1d

The history section does not feel quite accurate.

From what I recall, what killed Flash wasn't iOS, but rather the acquisition of Macromedia by Adobe.

phamilton
4 replies
23h17m

One of the final nails was the infamous Chrome 45 aka the Chromepocalypse in the video ad world.

Chrome 45, in the name of performance, defaulted to only loading flash from 3rd-party domains after a "click to load". This was bad for ads for obvious reasons, but it was much, much worse due to an implementation detail.

In order to get the page laid out properly, Chrome loaded the flash component and then suspended it after a single frame. From an ad perspective, that was enough to trigger the impression pixel, signifying that the ad had been shown and the advertiser should be billed. However, the ad was not shown and there was no way for the ad to detect it had been suspended. Just a nightmare. We (I led the effort at BrightRoll) had to shift a ton of ad auction behavior to special case Chrome 45 and it limited what kind of ads we could serve.

That was the inflection point away from flash for ads. While ad formats like VPAID supported JS/HTML5, they didn't start getting popular until after Chrome 45 was released.
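For readers unfamiliar with the term: an "impression pixel" is usually just a tiny image request whose arrival at the ad server is what gets counted (and billed) as one impression. A hypothetical sketch, with a placeholder endpoint that is not any real ad server:

```javascript
// Build the tracking URL for a 1x1 "impression pixel".
// The endpoint and parameter names are made up for illustration.
function impressionUrl(adId, now = Date.now()) {
  return "https://ads.example.com/imp?ad=" + encodeURIComponent(adId) +
         "&cb=" + now; // cache-buster so repeat impressions aren't served from cache
}

// In the browser, firing the pixel is just creating the image:
// new Image(1, 1).src = impressionUrl("banner-42");
```

This is why the "load one frame, then suspend" behavior described above was so painful: the pixel fires during that single rendered frame, so the impression gets billed even though the ad was never really shown.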

hypeatei
3 replies
21h6m

to trigger the impression pixel

I've always wondered how ad companies verify the ad is actually being displayed. Can you explain this in more detail?

I imagine it has to be somewhat "bulletproof" since money is involved.

regus
0 replies
15h46m

I worked at a company that did this.

Here is the formula for viewability:

Percent In View = Area of the Intersection of the Ad and the Viewport / Area of the Ad

You would get the area of the ad using getBoundingClientRect, and the area of the viewport using window.innerWidth and window.innerHeight.

It was not possible to do this if the ad was within a hostile iframe (cross origin iframe) so you needed to use a third party source for this information like SafeFrame.

All of this was greatly simplified when Intersection Observer was officially supported by modern browsers.
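The formula above fits in a few lines. A sketch (not code from that company) that takes the rect and viewport dimensions as plain arguments so the geometry is explicit:

```javascript
// Percent In View = area(intersection of ad and viewport) / area(ad).
// `rect` has the same shape as the object from getBoundingClientRect().
function percentInView(rect, viewportWidth, viewportHeight) {
  // Clamp the ad rectangle against the viewport on each axis.
  const ix = Math.max(0, Math.min(rect.right, viewportWidth) - Math.max(rect.left, 0));
  const iy = Math.max(0, Math.min(rect.bottom, viewportHeight) - Math.max(rect.top, 0));
  const adArea = (rect.right - rect.left) * (rect.bottom - rect.top);
  return adArea > 0 ? (ix * iy) / adArea : 0;
}

// In the browser:
// percentInView(adEl.getBoundingClientRect(), window.innerWidth, window.innerHeight);
```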

phamilton
0 replies
16h1m

I've been out of the space for about a decade, so I can't say how things have continued to evolve, but the only consistent antidote to fraud has been a trusted third party.

I spent a lot of my time back then pulling in metrics from third parties like Nielsen (who do traditional TV viewership numbers) as well as companies who specialized in "viewability". We could include that data in our real-time auction and bidders would adjust what they were willing to pay based on the reputation of the ad opportunity. At least that's the theory. In practice there was so much indirection and reselling that it was hard to say what was true.

As far as the technical answer: it's a game of cat-and-mouse, but we found some interesting metrics we could dig in on. One was that a video hidden behind another element on the page would have a drastically different rendered fps than one in the foreground. That would differ between browsers (and browser versions), but the viewability vendors would find all those quirks, measure, and analyze to give us a normalized score. For a hefty fee, of course.

dylan604
0 replies
20h30m

I imagine it has to be somewhat "bulletproof" since money is involved.

The GP comment pretty much shows how bulletproof it wasn't, isn't, and never will be. You also have bad actors in the space that will load ads under other ads so that the same page load continues to generate impressions. The space behaves as if there is no incentive to be honest.

lelandfe
3 replies
1d

Flash game sites were still huge and popular after Adobe’s purchase, so clearly it did not kill it. If you mean something like that the acquisition started the death… that’s a hard position to argue against, since it’s subjective. Perhaps you’re right.

mgaunard
1 replies
1d

Adobe had competing products mostly based on open standards. They shut down many active product lines and merged what was left into Adobe AIR, which didn't take off.

lelandfe
0 replies
23h59m

AIR, Silverlight, and co. were a weird moment in the web's history

phamilton
0 replies
23h10m

When flash started to fade for games, many developers moved over to adtech where their skills were still valued. Flash continued for a few years to power ads as well as shims around other videos that would collect data and even trigger secondary ad auctions.

andai
0 replies
23h33m

Adobe was well into the development of AS 4.0, then scrapped it after Steve Jobs' psyop.

moffkalast
9 replies
1d1h

With WASM it might actually run faster in the browser as well.

echelon
4 replies
1d1h

WASM still needs better multi-threaded support. We built a game in Bevy and it took minutes to sequentially load in all of the assets.

flohofwoe
3 replies
1d1h

You don't need multithreading to get concurrent asset streaming, a completion callback or async-await-style code will work too (after all, that's how most Javascript web games load their assets "in the background"). Also, browsers typically restrict concurrent download streams to about 6 (the exact number is entirely up to the browser though) - so you can have at most 6 asset files 'in flight'. In the end you are still limited by the user's internet bandwidth of course.
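As a sketch of the async-await approach (not code from high_impact or any particular engine): a small worker pool keeps at most N requests in flight with no threads at all, because each `await` yields back to the event loop while the downloads overlap.

```javascript
// Run `fn` over `items` with at most `limit` operations in flight.
async function mapWithConcurrency(items, fn, limit = 6) {
  const results = new Array(items.length);
  let next = 0; // safe without locks: JS is single-threaded between awaits
  async function worker() {
    while (next < items.length) {
      const i = next++;          // claim the next item
      results[i] = await fn(items[i], i);
    }
  }
  // Spawn `limit` workers; each pulls items until the list is drained.
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}

// Browser usage, matching the ~6-stream browser limit mentioned above:
// const buffers = await mapWithConcurrency(urls,
//   (url) => fetch(url).then((r) => r.arrayBuffer()), 6);
```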

echelon
2 replies
1d

None of that worked out of the box, and we also spent most of the loading time CPU bound, processing the individual assets after they arrived over the wire. That was a blocking, non-async operation.

flohofwoe
0 replies
10h12m

Then the next question is why your asset formats require such heavy processing after loading. Normally you'd convert any data into custom engine formats in an offline step in the asset pipeline so that the files can be dumped directly into memory ready for the engine (and 3D API) to be used without an expensive deserialization process.

FWIW, POSIX-style multithreading in WASM has been supported everywhere again for a while now (it was disabled after Spectre/Meltdown), but it is locked behind special COOP/COEP HTTP headers for security reasons (so you either need to be able to configure the web server to return those headers, or use a service worker to inject the headers on the client side).

andai
0 replies
23h27m

processing the individual assets after they arrived over the wire

Could this take place at compile time?

phoboslab
2 replies
1d1h

It does, but the main speedup comes from using WebGL instead of Canvas2D. Sadly, Canvas2D is still as slow as it ever was and I really wonder why.

Years back I wrote a standalone Canvas2D implementation[1] that outperforms browsers by a lot. Sure, it's missing some features (e.g. text shadows), but I can't think of any reason for browser implementations needing to be _that_ slow.

[1] https://github.com/phoboslab/Ejecta

whitehexagon
0 replies
6h52m

For things missing/hard in WebGL, is it performant enough to rely on the browser compositor to add a layer of text, or a layer of svg, over the WebGL? I have some zig/wasm stuff working on canvas2D, but rendering things to bitmaps and adding them to canvas2d seems awfully slow, especially for animated svg.

moffkalast
0 replies
1d

Ah man, I'm still looking for a general canvas drop in replacement that would render using webgl or webgpu if supported. Closest I've found so far is pixi.js, but the rendering api is vastly different and documentation spotty, so it would take some doing to port things over. Plus no antialiasing it seems.

pjmlp
0 replies
1d1h

In browser games, there are more performance gains from moving stuff to the GPU than from using WASM.

nine_k
8 replies
1d1h

Why, from C to Zig, from Zig to Rust. Compile the Rust version to WASM to finally make it runnable in the browser.

leeoniya
4 replies
1d1h

i'm actually quite curious how it would perform relative to the C version. the article shows 1000x particles, but LittleJS has demos with a couple orders of magnitude more than that at 60fps.

e.g. https://killedbyapixel.github.io/LittleJS/examples/stress/

nine_k
2 replies
23h54m

JS engines like V8 are very good at JIT and optimization based on actual profiling. If we talk about pure CPU modeling, I suspect a good JIT will soon enough produce machine code on par with best AOT compilers. (BTW the same should apply to JVM and CLR languages, and maybe even to LuaJIT to some extent.)

andai
1 replies
23h26m

From my cursory reading of v8 blogs, most of its optimizations revolve around detecting patterns in JS objects and replacing them with C++ classes.

nine_k
0 replies
22h8m

Exactly. Detecting patterns that are typical for human coders and replacing them with stuff that uses the machine efficiently is what most compilers do, even for low-level languages like C. You write a typical `for` loop, the compiler recognizes it and unrolls it, and / or replaces it with a SIMD-based version with many iterations run per clock.
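A toy illustration of the JS-side pattern being described (my example, not from the article): V8 assigns a "hidden class" (shape) to each object, and constructing every particle with the same property order keeps the update loop's property accesses monomorphic, which is the kind of code the JIT can compile down to direct field loads.

```javascript
// Always create particles with the same property order so they all
// share one hidden class / shape.
function makeParticle(x, y) {
  return { x: x, y: y, vx: 0, vy: 0 };
}

// A hot loop over uniformly-shaped objects; returns a checksum of
// positions so the result is observable.
function step(particles, dt) {
  let checksum = 0;
  for (const p of particles) {
    p.x += p.vx * dt;
    p.y += p.vy * dt;
    checksum += p.x + p.y;
  }
  return checksum;
}
```

Adding properties to some particles but not others (or in a different order) would split them across shapes and make the access sites polymorphic, which is exactly the kind of pattern the JIT can no longer specialize.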

pjmlp
0 replies
1d1h

I haven't looked into the code, but the correct way would be to move the particle engine into shader code; then the limit would be as much as the graphics card can take.

It appears that after all these years, not everyone has bought into the shader programming mentality, which is quite understandable, as only proprietary APIs have good debugging tools for them.

Defletter
2 replies
1d1h

Doesn't Zig compile to WASM too?

nine_k
0 replies
23h51m

Yes, but the point of the joke was to make the loop longer, while keeping it somehow logical. I wish I managed to insert Purescript, Elixir, Pony and ATS somehow.

flohofwoe
0 replies
1d1h

Yes, for instance this is a mixed Zig/C project (the C part is the sokol headers for the platform-glue code):

https://floooh.github.io/pacman.zig/pacman.html

The Git repo is here:

https://github.com/floooh/pacman.zig

...in this specific project, the Emscripten SDK is used for the link step (while compilation to WASM is handled by the Zig compiler, both for the Zig and C sources).

The Emscripten linker enables the 'embedded Javascript' EM_JS magic used by the C headers; it also performs additional WASM optimizations via Binaryen and creates the .html and .js shim files needed for running WASM in browsers.

It's also possible to create WASM apps running in browsers using only the Zig toolchain, but this requires solving those same problems in a different way.

namuol
6 replies
13h37m

I owe a lot of the most informative programming work I’ve done to Impact.

Impact was so ahead of its time. Proud to say I was one of the 3000 license owners. One of the best purchases I’ve ever made. The only game I’ve ever really properly finished was made in Impact.

I loved that the source code was part of the license, and even modified the engine and the editor to suit my needs.

I was so inspired that I worked on my own JS game engine (instead of finishing games - ha!) for years after. I never released it, but I learned a ton in the process and made a lot of fun gamejam games with it.

I was also inspired by Impact’s native iOS support (Ejecta), but frustrated that it didn’t run on Android (at the time at least), so I fumbled my way through writing JVM bindings for V8 and implemented a subset of WebGL to run my game engine on Android without web views.[0] I made the repo for V8 bindings public and to my surprise it ended up being used in commercial software.

I won’t bore you with the startup I tried to bootstrap for selling access to private GitHub repos, which was inspired by Impact’s business model…

Anyway, it warms my heart and makes me laugh with glee to see Impact getting an update for the “modern” web with a C port!

I’d say these are strange times for the web, but I can’t remember a time when things were anything but strange. Cheers!

[0]: https://github.com/namuol/jv8

jdnxudbe
5 replies
9h33m

I might be missing something, but isn't the original Impact a JavaScript & browser based engine, thus it should run on Android in a simple Web view just fine?

msephton
1 replies
7h0m

I guess web views on Android weren't very capable 10–15 years ago?

seba_dos1
0 replies
23m

They were quite capable, but there was no mechanism to update them without updating the whole OS, so they were usually quite outdated.

esprehn
1 replies
5h18m

Most of that repo appears to have been built in early 2013. Android WebView didn't switch to chromium until late 2013. Prior to that it was usually an older hacked up WebKit sometimes provided by the phone manufacturer with unreliable features.

tracker1
0 replies
3h45m

I remember working on a QC app that used NFC and a webview right when I think the first Pixel phone came out, basically to track QC steps while expensive equipment went through a production line. Was a somewhat painful, but interesting experience. I'm not sure if I liked it more or less than dealing with Palm/Handspring development a few years before it.

It's funny, I actually forget some of the small, one-off things I've worked on over the years.

robterrell
0 replies
1h26m

Ejecta was a project that implemented WebGL with JavaScript bindings -- basically everything you need for a web game without the DOM. It implemented enough of WebGL to run Impact-based games.

fitsumbelay
5 replies
1d

"Thoughts on Flash" may just have saved the Web platform at its hour of greatest need, ie. creeping dominance of a single piece of software.

I believe that somewhere in there was frustration with Adobe, who seemed to neglect MacOS support in favor of Windows' much larger user base, e.g. Mac versions were always behind Windows versions. Perhaps Jobs also felt that there would be no Adobe without Apple as much as the other way around, but that's speculative.

The game looks slick af btw.

golergka
3 replies
1d

Even PSP had Flash, and it was fairly decent. I wonder how much effort and money Sony put into that.

olliej
2 replies
23h33m

Flash had numerous issues. The processing power available (especially on a PSP) is more than enough to be “good”, the problem with flash _performance_ is the power usage while achieving that perf. Even on laptops flash was a significant battery life drain whenever it was running, having it on all websites would kill battery life while browsing on a phone.

thedragonline
1 replies
20h26m

Yeah it did. I'm just sad that it took down ActionScript with it. IMO this is the language Javascript should have been.

fitsumbelay
0 replies
22h57m

... and the post has me poking around with C again. I was always an ECMAScript guy, with a little Lingo in there from long ago. However, I have an itch for all things low-resource and close to the metal (but not assembly-language close), and for golfing my way towards things that encourage me to dig deeper.

zoogeny
4 replies
1d

The next logical step is to port this to WASM so that it can run in the browser.

igor_akhmetov
3 replies
1d

The engine already compiles to WASM, see TLDR.

fragmede
0 replies
18h35m

the number of layers it takes to get it to work on my phone is so high, but it's amazing.

zoogeny
0 replies
1d

I skimmed right by that. Have you done any feature comparisons against raylib?

o11c
3 replies
21h9m

high_impact is not a “library”, but rather a framework. It's an empty scaffold, that you can fill. You write your business logic inside the framework.

I normally phrase this in a much more negative way: a "framework" is simply a "library" that does not play nice with others. It's good to hear a sensible positive phrasing for once.

mmoskal
0 replies
13h15m

The way I heard it described is that you call the library while the framework calls you.

grecy
0 replies
18h59m

I studied Software Engineering, but never quite grasped the difference between a Library and a Framework until I started my first job developing with Web Objects.

It is a joy to use such a rich and well thought out Framework that really does do 99% of everything you need. Adding in your own stuff is about the easiest development I've ever done, and it just works. It was magic, and I miss using it.

01HNNWZ0MV43FF
0 replies
4h34m

I do still think it's negative.

My ideal framework is a library or set of cooperating libraries on the inside, with as little framework as possible.

e.g. Qt is a framework. Qt "calls you". But you can run the QPainter code without starting the Qt event loop or thinking too hard about QObjects. You should ideally be able to use the event loop without buying into signals and slots, though it won't be as ergonomic.

It's not always possible, it's not always worth it, but everything else being equal, I'd rather have no framework at all.

(For game engines I understand why a little framework is necessary - When you're talking about compiling to a weirdo platform like phones or consoles, the engine must also be involved in your build process and sometimes even libc shit. So you can't just make a Win32 exe that is a PlayStation game.)

muragekibicho
3 replies
1d1h

Somewhat related. Your QOI lossless file format coupled with 7Zip outperforms lossless PNG. Amazing work!

pornel
2 replies
1d1h

BMP coupled with 7Zip would outperform PNG too (probably by a bigger margin). It just boils down to gzip vs. a gzip-replacement compressor.

zX41ZdbW
0 replies
1d

I also found that BMP with ZSTD outperforms PNG while developing https://adsb.exposed/ (it streams raw RGBA over HTTP with Content-Encoding: zstd)

vanderZwan
0 replies
20h45m

Not to mention the part where adding compressors like this somewhat defeats the purpose of using a simple format like QOI (although at least zstd is faster than gzip, let alone 7zip).

But if we're modifying things like that, then they might as well make use of Nigel Tao's improved QOIR format, and replace the LZ4 compressor it uses with zstd. That's probably faster and likely compresses better than QOI.

[0] https://nigeltao.github.io/blog/2022/qoir.html

[1] https://github.com/nigeltao/qoir

jonwinstanley
3 replies
20h32m

Looks like it’s a great game engine. Why does the article state that it's near end of life?

Are there new engines that are far better?

phoboslab
2 replies
19h19m

The original JavaScript engine “Impact” from 2010 is at the end of its life; the C rewrite “high_impact” is new and will (potentially) be around for as long as we have C compilers and some graphics API.

The JavaScript engine had a lot of workarounds for things that are not necessary anymore and some things that just don't work that well with modern browsers. Off the top of my head:

- nearest neighbor scaling for pixel graphics wasn't possible, so images are scaled at load time pixel by pixel[1]. Resizing the canvas after the initial load wasn't possible with this. Reading pixels from an image was a total shit show too, when Apple decided to internally double the Canvas2D resolution for their “retina” devices, yet still reporting the un-doubled resolution[2].

- vendor prefixes EVERYWHERE. Remember those? Fun times. Impact had its own mechanism to automatically resolve the canonical name[3]

- JS had no classes, so classes are implemented using some trickery[4]

- JS had no modules, so modules are implemented using some trickery[5]

- WebAudio wasn't a thing, so Impact used <Audio> which was never meant for low latency playback or multiple channels[6] and generally was extremely buggy[7]. WebAudio was supported in later Impact versions, but it's hacked in there. WebAudioContext unlocking however is not implemented correctly, because back then most browsers didn't need unlocking and there was no "official" mechanism for it (the canonical way now is ctx.resume() in a click handler). Also, browser vendors couldn't get their shit together so Impact needed to handle loading sounds in different formats. Oh wait, Apple _still_ does not fully support Vorbis or Opus 14 years later.

- WebGL wasn't a thing, so Impact used the Canvas2d API for rendering, which is _still_ magnitudes slower than WebGL.

- Touch input wasn't standardized and mobile support in general was an afterthought.

- I made some (in hindsight) weird choices like extending Number, Array and Object. Fun fact: Function.bind and Array.indexOf weren't supported by all browsers, so Impact has polyfills for these.

- Weltmeister (the editor) is a big piece of spaghetti, because I didn't know what I was doing.

Of course all of these shortcomings are fixable. I actually have the source for “Impact2” doing all that with a completely new editor and bells and whistles. It was very close to release but I just couldn't push it over the finish line. I felt bad about this for a long time. I guess high_impact is my attempt for redemption :]

[1] https://github.com/phoboslab/Impact/blob/master/lib/impact/i...

[2] https://phoboslab.org/log/2012/09/drawing-pixels-is-hard

[3] https://github.com/phoboslab/Impact/blob/master/lib/impact/i...

[4] https://github.com/phoboslab/Impact/blob/master/lib/impact/i...

[5] https://github.com/phoboslab/Impact/blob/master/lib/impact/i...

[6] https://phoboslab.org/log/2011/03/multiple-channels-for-html...

[7] https://phoboslab.org/log/2011/03/the-state-of-html5-audio

unevencoconut
0 replies
2h17m

I loved Impact. Now finding out you were working on an Impact2?!

Any chance you're going to release it, even if it's incomplete?

broodbucket
0 replies
15h30m

I loved Impact and paid for it back in the day, though I never ended up finishing the project I was working on. Did you ever throw the source for "Impact2" up anywhere? What's missing from it being releaseable?

dgb23
2 replies
4h11m

I like the part about memory management. Arenas are so simple.

In the (toy) web server I'm writing I initially also started with arenas. However I quickly realized that I don't actually need to grow and shrink memory at all.

Currently I'm simply allocating the memory I need up front then slice it up into pieces for every module.

When we're programming, we often pretend like we could be needing arbitrary amounts of memory, but that's not necessarily the case.

Many things actually have very clear limits. For the rest one can often define them. And if you enumerate those, you know how much memory you will need.

It is fun to think about and define those limits up front. It builds a lot of confidence and promotes a healthy frugality.

TillE
0 replies
2h28m

Right, I'm always annoyed when people talk about how std::vector has terrible performance (especially for games), which it certainly does if you just start with an empty vector with no memory reserved and append a few thousand items.

But it is very frequently possible to find the actual maximum you will need, allocate that upfront, and everything's great.

two_handfuls
1 replies
1d2h

Nice writeup! I didn’t see mention of the license: it’s MIT and the code is on GitHub.

F3nd0
0 replies
4h46m

MIT/Expat, to be precise.

syockit
1 replies
11h47m

The name high_impact is a nod to a time when C was considered a high level language.

Weird, considering that JS is an even higher level language.

creesch
0 replies
11h43m

"To a time", meaning the past... :)

elfelf11
1 replies
19h45m

Would love to have a Ruby binding.

jeden
0 replies
9h45m

or Crystal ;)

(or mrubyc)

38
1 replies
19h6m

Except for SDL2, all libraries are bundled here (see the libs/ directory).

https://github.com/phoboslab/high_impact#libraries-used

yep. exactly why I don't use C anymore. the package management story is so bad/nonexistent that the typical approach is to just vendor everything. no thanks.

jraph
0 replies
9h44m

Isn't the common approach to tell people to install packages from their distribution?

uberman
0 replies
1d2h

Thanks, this was a great read.

tomcam
0 replies
20h59m

Fun idea, open source for maximum learning value, seemingly flawless execution, vanity-free and clear writeup—what a lovely contribution to the world. I feel privileged just seeing things like this.

theapache64
0 replies
20h4m

dude is an OG!

taf2
0 replies
4h40m

Now to compile with wasm so we can use it in a browser

slowhadoken
0 replies
1d1h

I used to play X-Type all the time on iOS, it’s how I discovered Impact. I love web-based games but lately I’ve been tempted to write in C or C++. Did you notice dramatic gains in optimization porting Impact from JavaScript to C?

pjmlp
0 replies
1d1h

Love the honesty of the headline.

The game looks cool.

nottorp
0 replies
1d2h

for No Reason

Out of respect for your players' battery life, perhaps :)

kirbyfan64sos
0 replies
1d1h

Had to log in to my rarely-used HN account to mention that I had played Biolab Disaster over and over again years back but lost track of it and forgot the name. Kinda wild to find it again by sheer luck!

jokoon
0 replies
22h10m

What amazes me is how modern javascript engines are able to optimize for a "hot execution path".

hoten
0 replies
23h45m

Amazing work!

FYI - in non-fullscreen mode, on my Mac / Chrome, the bottom of the viewport is cut off. So can only play in fullscreen.

hammycheesy
0 replies
17h37m

My decision to sell it was met with a lot of backlash but was successful enough to launch me into a self-sustained career.

As someone who is interested in eventually freeing myself from the corporate job and diving head-first into my side projects, I would love to hear more about this aspect.

For some reason the idea of trying to charge folks for the work I would normally do for the fun of it on the side is daunting to me, even though I know it could enable me to focus on doing the stuff I love full-time.

esschul
0 replies
21h9m

Remember I was working on an impact.js game 10 years ago. Just can't seem to find the source code! First time I hired a guy to create some graphics. Very inspiring gaming platform.

cubano
0 replies
21h36m

ohhh I love your use of UNION to create a polymorphic-type ENTITY data structure. Nice work and design.

I still love futzing around in C... It was the original language I learned and God did I struggle with it for years. Like the OP mentioned, C is awesome because it's such a concise language, but you can go as deep as you like with it.

Thanks for all your efforts and the writeup...the game has a throwback Commander Keen-type vibe to it and I loved that franchise for a minute back in Carmack's pre-3D days.

andai
0 replies
23h16m

I did 5 game jams this year, 4 of them in various WebAssembly languages (C++, Zig, Odin, Rust).

In the end I switched back to JS/TS, because I found way more benefit from minimizing layers of abstraction and translation (WASM is set up so you are forced to interface with JS to actually do anything), more than the benefits of any language or library.

(An exception might be something like Unity, due to the excellent featureset, but the IDE has become so slow it's unbearable...)

Defletter
0 replies
1d1h

As one of those 3000 licence holders, I'm happy to see a revival of Impact :) wonder how nicely it plays with Zig.