
Porffor: A from-scratch experimental ahead-of-time JS engine

obviouslynotme
45 replies
22h22m

I have thought about doing this, and I just can't get around the fact that you can't get much better performance out of plain JS. The best you could probably do is transpile the JS into V8 C++ calls.

The really cool optimizations come from compiling TypeScript, or something close to it. You could use types to get enormous gains. Anything without typing gets the default slow JS calls. Interfaces can be reduced to vtables or maybe even direct calls, possibly on structs instead of maps. You could have Int and Float types that just sit inside registers and degrade into Number when needed.

The main problem is that both TS and V8 are fast-moving, non-standard targets. You could only really do such a project with a big team. Maintaining compatibility would be a job by itself.

wk_end
29 replies
21h51m

At least without additional extensions, TypeScript would help less than you think. It just wasn’t designed for the job.

As a simple example: TypeScript doesn't distinguish between integers and floats; they're all just numbers. So all array index accesses need casts. A TypeScript designed to aid static compilation would likely have that distinction.
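A tiny illustration of what I mean (the `at` helper is hypothetical, just for demonstration):

```typescript
// Both annotations use the single `number` type, so a static compiler can't
// pick an i32 vs. f64 representation from the types alone.
const index: number = 3;    // used as an integer
const ratio: number = 0.75; // used as a float

// Hypothetical helper: because `i` may be fractional (at(xs, 1.5) typechecks),
// a compiler can't emit a raw indexed load without a truncation/check.
function at(xs: number[], i: number): number {
  return xs[Math.trunc(i)];
}
```

`at([10, 20, 30], 1)` and `at([10, 20, 30], 1.5)` both typecheck; an int/float split would rule the second one out statically.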

But the big elephant in the room is TypeScript’s structural subtyping. The nature of this makes it effectively impossible for the compiler to statically determine the physical structure of any non-primitive argument passed into a function. This gives you worse-than-JIT performance on all field access, since JITs can perform dynamic shape analysis.

munificent
16 replies
21h29m

I think the even bigger elephant in the room is that TypeScript's type system is unsound. You can have a function whose parameter type is annotated to be String and there's absolutely no guarantee that every call to that function will pass it a string.

This isn't because of `any` either. The type system itself deliberately has holes in it. So any language that uses TypeScript type annotations to generate faster/smaller code is opening itself to miscompiling code and segfaults, etc.
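A minimal sketch of one such hole, with no `any` and no casts anywhere (the type and variable names are invented):

```typescript
// No `any`, no casts: every line below typechecks, yet the Dog annotation lies.
type Dog = { bark(): string };

const dogs: Dog[] = [{ bark: () => "woof" }];
const things: object[] = dogs; // structural widening of a mutable array: allowed
things.push({});               // silently pushes a non-Dog into `dogs`

// Statically this is a Dog; at runtime it's `{}` with no `bark` method.
const oops: Dog = dogs[1];
```

Calling `oops.bark()` throws a TypeError at runtime even though the checker accepted every line, which is exactly the situation a type-directed compiler can't tolerate.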

g15jv2dp
10 replies
17h58m

I think the even bigger elephant in the room is that TypeScript's type system is unsound.

Can you name a single language that is used for high-performance software and whose type system is sound? To speed up the process, note that none of the obvious candidates have sound type systems.

mike_hearn
3 replies
10h0m

JVM bytecode is a "language" and is proven to be sound. The languages that compile to that language, on the other hand, are a different kettle of fish.

g15jv2dp
2 replies
8h54m

This is specifically about type systems. It's easy to have a sound type system when you have no type system.

Also, I'm not too familiar with JVM bytecode, but if I load an i64 into two slots and then perform floating-point addition on those slots, does the type system prevent me from compiling/executing the program?

Can you say more about "proven to be sound"? Are you talking about a sound type system?

skitter
0 replies
4h16m

The type checker is specified in Prolog and rejects the above scenario:

    instructionIsTypeSafe(fadd, Environment, _Offset, NextStackFrame, ExceptionStackFrame) :- 
        validTypeTransition(Environment, [float, float], float, StackFrame, NextStackFrame),
        exceptionStackFrame(StackFrame, ExceptionStackFrame).

Fun fact: said type system has a 'top' type that is both the top type of the type system and the type of the top half of a long or double, as those two actually occupy two slots while everything else, including references, occupies only one. Made some sense when everything was 32-bit, less so today.

MathMonkeyMan
1 replies
10h18m

Maybe OCaml, but I haven't studied it much.

g15jv2dp
0 replies
8h52m

I doubt it's been proven sound. It shows up a lot on https://counterexamples.org/, although skimming those, the issues seem to have been fixed since then.

IsTom
1 replies
4h7m

I'm a little behind the times on Haskell (haven't used it for some years) – there always were extensions that made it unsound, but the core language was pretty solid.

munificent
0 replies
3h11m

Java, C#, Scala, Haskell, and Dart are all sound as far as I know.

Soundness in all of those languages involves a mixture of compile-time and runtime checks. Most of the safety comes from the static checking, but there are a few places where the compiler defers checking to runtime and inserts checks to ensure that it's not possible to have an expression of type T successfully evaluate to a value that isn't a T.

TypeScript doesn't insert any runtime checks in the places where there are holes in the static checker, so it isn't sound. If it wasn't running on top of a JavaScript VM which is dynamically typed and inserts checks everywhere, it would be entirely possible to segfault, violate memory safety, etc.

wk_end
3 replies
20h59m

So - I know this in theory, but avoided mentioning it because I couldn’t immediately think of any persuasive examples (whereas subtype polymorphism is a core, widely used, wholly unrestricted property of the language) that didn’t involve casts or any/unknown or other things that people might make excuses for.

Do you have any examples off the top of your head?

sixbrx
1 replies
19h19m

Here's an example I constructed after reading the TS docs [1] about flow-based type inference and thinking "that can't be right...".

It yields no warnings or errors at compile time but gives a runtime error based on a wrong flow-based type inference. The crux of it is that something can be a Bird (with a "fly" function) but can also have any other members, like "swim", because of structural typing (flying is the minimum expected of a Bird). The presence of a spurious "swim" member in the bird causes tsc to infer, in a conditional that checks for a "swim" member, that the animal must be a Fish or Human, when it is not (it's just a Bird with an unrelated, non-function "swim" member).

    type Fish = { swim: () => void };
    type Bird = { fly: () => void };
    type Human = { swim?: () => void; fly?: () => void };

    function move(animal: Fish | Bird | Human) {
      if ("swim" in animal) {
        // tsc wrongly infers here that the presence of "swim" implies animal must be a Fish or Human
        onlyForFishAndHumans(animal);
      } else {
        animal; // tsc narrows the type to Bird here
      }
    }

    function onlyForFishAndHumans(animal: Fish | Human) {
      if (animal.swim) {
        animal.swim(); // Error: attempt to call "not-callable".
      }
      // (receives bird which is not a Fish or Human)
    }

    const someObj = { fly: () => {}, swim: "not-callable" };
    const bird: Bird = someObj;

    move(bird);

    // runtime error: [ERR]: animal.swim is not a function
[1] https://www.typescriptlang.org/docs/handbook/2/narrowing.htm...

oxidant
0 replies
15h22m

This narrowing is probably not the best. I'm not sure why the TS docs suggest this approach. You should really check that the member is a function to be safer, though it's still not perfect:

    if (typeof animal.swim === 'function') { ... }

curtisblaine
0 replies
20h10m

It might be useful for an interpreter, though. I believe V8 has a speculative mechanism where, if the interpreter "learns" that an array consistently contains e.g. numbers, it will optimize for numbers and start accessing the array in a more performant way. TypeScript could be used to inform the interpreter even before execution. (My supposition; I'm not an interpreter expert.)

obviouslynotme
7 replies
21h42m

Outside of really funky code, especially in code originally written in TS, you can assume the interface is the actual underlying object. You could easily flag accesses to unrecognized interface members and then degrade those objects back to plain object accesses.

wk_end
6 replies
21h6m

You’re misunderstanding me, I think.

Suppose you have some interface with fields a and c. If your function takes in an object with that interface and operates on the c field, what you want to be able to do is compile that function to access c at "the address pointed to by the pointer to the object, plus 8" (assuming 64-bit fields). Your CPU supports such addressing directly.

Because of structural subtyping, you can't do that. It's not an unrecognized member access: your caller might pass in an object with fields a, b, and c. This is entirely idiomatic. Now c is at offset 16, not 8. Because the physical layout of the object is different, you no longer have a statically known offset to the known field.
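A sketch of that in code (the interface and variable names are invented):

```typescript
interface HasAC { a: number; c: number; } // invented example interface

// Ideally this compiles to a single load at a fixed offset from `o`.
function getC(o: HasAC): number {
  return o.c;
}

const narrow = { a: 1, c: 2 };     // `c` could live at offset 8
const wide = { a: 1, b: 9, c: 2 }; // the extra field shifts `c` to offset 16

// Both calls are idiomatic and typecheck, so no single static offset works.
console.log(getC(narrow), getC(wide)); // 2 2
```

The checker is happy with both call sites; only the physical layouts differ, which is exactly what the compiler can't see statically.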

fenomas
3 replies
18h12m

Because of structural subtyping, you can’t do that

In practice v8 does exactly what you're saying can't be done, virtually all the time for any hot function. What you mean to say is that typescript type declarations alone don't give you enough information to safely do it during a static compile step. But modern JS engines, that track object maps and dynamically recompile, do what you described.

wk_end
2 replies
17h54m

I mentioned this in my original comment:

This gives you worse-than-JIT performance on all field access, since JITs can perform dynamic shape analysis.

We're talking about using types to guide static compilation. Dynamic recompilation is moot.

fenomas
1 replies
17h7m

Oh, I thought JIT in your comment meant a single compilation. Either way, having TS type guarantees would obviously make optimizing compilers like v8's stronger, right? You seem to be arguing there's no value to it, and I don't follow that.

wk_end
0 replies
14h37m

My claim is that the guarantees TS provides aren't strong enough to help a compiler produce stronger optimizations. Types don't just magically make code faster - there are specific reasons why they can make code faster, and TypeScript's type system wasn't designed around those reasons.

A compiler might be able to wring some things out of it (I'm skeptical about obviouslynotme's suggestions in a cousin comment, but they seem insistent) or suppress some checks if you're happy with a segfault when someone did a cast...but it's just not a type system like, say, C's, which is more rigid and thus gives the compiler more to work with.

obviouslynotme
1 replies
20h33m

I would bet that, especially outside of library code, 95+% of the typed objects are only interacted with using a single interface. These could be turned into structs with direct calls.

Outside of this, you can unify the types. You would take every interface used to access the object and create a new type that has all of their members. You can then either create vtables or monomorphize the calls where the object is used.

At any point that analysis cannot determine the actual underlying shape, you drop to the default any.
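In type terms, that unification step is just an intersection; a sketch with invented names:

```typescript
// Two interfaces used to access the same underlying object:
interface Walker { walk(): string }
interface Swimmer { swim(): string }

// The unified shape a compiler could lay out once as a struct (plus a
// vtable, or monomorphized call sites):
type Unified = Walker & Swimmer;

const duck: Unified = {
  walk: () => "waddle",
  swim: () => "paddle",
};
```

Anything the analysis can prove is only ever accessed as `Unified` gets the fixed layout; anything else falls back to the generic representation.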

pjmlp
0 replies
10h28m

Which is exactly the kind of optimization JIT compilers are able to perform, and AOT compilers can't do safely without PGO data. And even then, they can't re-optimize if the PGO happens to miss a critical path that breaks all the assumptions.

pjmlp
1 replies
21h18m

Such a TypeScript already exists: Static TypeScript,

https://makecode.com/language

Microsoft's AOT compiler for MakeCode, via C++.

mananaysiempre
0 replies
7h58m

The 2019 paper[1] says: “STS primitive types are treated according to JavaScript semantics. In particular, all numbers are logically IEEE 64-bit floating point, but 31-bit signed tagged integers are used where possible for performance. Implementation of operators, like addition or comparison, branch on the dynamic types of values to follow JavaScript semantics[.]”

[1]: https://www.microsoft.com/en-us/research/publication/static-...

cprecioso
1 replies
21h15m

A TypeScript designed to aid static compilation likely would have that distinction.

AssemblyScript (https://www.assemblyscript.org/) is a TypeScript dialect with that distinction

mananaysiempre
0 replies
8h45m

It’s advertised as that, and it’s a cool project, but while it’s definitely a statically typed language that reuses TypeScript syntax, it’s not clear to me just what subset of the actual TypeScript type system is supported. That’s necessarily bad—TypeScript itself is very unclear about what its type system actually is. I just think the tagline is misleading.

com2kid
4 replies
22h0m

ECMAScript 4 was an attempt to add better types to the language, which sadly failed a long time ago.

It'd be nice if TS at least allowed specifying types like integer, so that some of the newer TS-aware runtimes could take advantage of the additional info, even if the main TS-to-JS compilation just treated `const val: int` the same as `const val: number`.

I wonder if a syntax like

    const counter: Number<int>
would be acceptable.
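You can approximate this today with a "branded" type that erases to plain `number` (a sketch; `int` and `toInt` are invented names here, not proposed TS features):

```typescript
// A "branded" integer type: checked at compile time, erased to a plain
// number at runtime.
type int = number & { readonly __intBrand?: never };

// Smart constructor: the sanctioned way to produce an `int` in this sketch.
function toInt(n: number): int {
  return Math.trunc(n) as int;
}

const counter: int = toInt(42.9);
console.log(counter); // 42, and typeof counter === "number"
```

A TS-aware runtime could treat the brand as a hint while the stock compiler just sees `number`.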

obviouslynotme
1 replies
21h48m

Yeah, that is why I said TS (or something similar). TS made some decisions that made sense at the time but do not help compilation. The complexity of its typing system is another problem: I'm pretty sure it is Turing-complete. That doesn't make compilation infeasible, but it increases the complexity by a whole lot. When you add the fact that "the compiler is the spec," you really get bogged down. It would be much easier to recognize a sensible subset of TS. You could probably even have the type checker throw a WTFisThisGuyDoing flag and just immediately downgrade such code to an any.

com2kid
0 replies
20h8m

I'm pretty sure that it is Turing-complete.

Because JS code can arbitrarily transform an object's shape, any type system trying to specify what the outputs of a function can be also has to be Turing complete.

There are of course still plenty of types that TS doesn't bother trying to model, but it does try to cover even funny cases like field names going from kebab-case to camelCase.
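For instance, TS's template literal types can do that kebab-case to camelCase rewriting entirely at the type level (recursive conditional types are part of what makes the system Turing complete); `camel` below is a hypothetical value-level counterpart:

```typescript
// Type-level kebab-case to camelCase, computed entirely by the checker.
type Camel<S extends string> = S extends `${infer Head}-${infer Tail}`
  ? `${Head}${Capitalize<Camel<Tail>>}`
  : S;

// Hypothetical value-level counterpart, so the two can be checked together.
function camel(s: string): string {
  return s.replace(/-([a-z])/g, (_m, ch: string) => ch.toUpperCase());
}

// The annotation typechecks only because the checker evaluated Camel<...>.
const key: Camel<"border-top-color"> = "borderTopColor";
```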

the_imp
0 replies
20h15m

With Extractors [1] (currently at Stage 1), you could define something like this to work:

    const Integer = {
      [Symbol.customMatcher]: (value) => [Number.parseInt(value)]
    }

    const Integer(counter) = 42.56;
    // counter === 42
[1] https://github.com/tc39/proposal-extractors

THBC
0 replies
21h48m

Number is not semantically compatible with raw 64-bit integer, so you might as well wish for a native

    const counter = UInt64(42);
The current state of the art is

    const counter = BigInt.asUintN(64, 42n);

bobvarioa
4 replies
19h25m

Contributor to Porffor here! I actually disagree; there's quite a lot that can be improved in JS at compile time. There's been a lot of work on static type analysis tools for JS that can do very thorough analysis; an example that comes to mind is [TAJS](https://www.brics.dk/TAJS/), although it's somewhat old.

attractivechaos
3 replies
16h18m

there's quite a lot that can be improved in JS during compile time

I wonder how much performance gain you expect to achieve. For simple CPU-bound tasks, C/Rust/etc. are roughly three times as fast as V8, and Julia, which compiles full scripts and has good type analysis, is about twice as fast. There is not much room left. C/Rust/etc. can be much faster with SIMD, multi-threading and fine control of memory layout, but an AOT JS compiler might not gain much from these.

itsTyrion
1 replies
9h45m

Honestly, I’m fine with only some speed up compared to V8, it’s already pretty fast… My issue with desktop/mobile apps using web tech (JS) is mostly the install size and RAM hunger.

mst
0 replies
6h4m

[raises hand] I'd be fine with no speedup at all if I can get more reasonable RAM usage and an easily linkable .so out of the deal.

jitl
0 replies
2h57m

In my mind, the big room for improvement is eliminating the cost of calling from JS into other native languages. In Node/V8 you pay a memcpy when you pass or return a string from C++ land. If an ahead-of-time compiler for JS can use escape analysis or other lifetime analysis for string or byte-array data, you could make I/O, or at least writes from JavaScript to, for example, sqlite, about twice as fast.

lerpgame
0 replies
17m

Yea, I came here to say this. I was actually able to transpile a few TypeScript files from my project into assembly using GPT, just for fun, and it actually worked pretty well. If someone implemented a strict TypeScript-like linter for a subset of JavaScript and TypeScript that transpiles into AssemblyScript, I think that would work better for AOT: you could have the more critical portions of the application AOT-compiled and the non-critical parts JIT-compiled, and get the best of both worlds, or something like that. Making JS both backwards compatible and AOT sounds way too complicated.

singpolyma3
0 replies
15h36m

You can do inference and only fall back to Dynamic/any when something more specific can't be globally inferred in the program. For an optimization pass this is an option.

refulgentis
0 replies
19h36m

You say you have "thought about doing this"..."[but] you can't get much better performance", then describe the approach requiring things that are described first-thing, above the fold, on the site.

Did the site change? Or am I missing something? :)

myko
0 replies
17h47m

Dart, maybe, but it lost

syrusakbary
15 replies
22h5m

It's awesome to see how more JS runtimes try to approach Wasm. This project reminds me of Static Hermes (the JS engine from Facebook for improving the speed of React Native projects on iOS and Android).

I've spent a bit of time trying to review each, so hopefully this analysis will be useful for some readers. What are the main commonalities and differences between Static Hermes and Porffor?

  * They both aim for JS test262 [2] conformance [1]
  * Porffor supports both Native and Wasm outputs while Static Hermes is mainly focused on Native outputs for now
  * Porffor is self-hosted (Porffor is written in pure JS and can compile itself), while Static Hermes relies on LLVM
  * Porffor currently doesn't support async/promise/await while Static Hermes does (with some limitations)
  * Static Hermes is written in C++ while Porffor is mainly JS
  * They both support TypeScript (although Static Hermes does it through transpiling the TS AST to Flow, while Porffor supports it natively)
  * Static Hermes has a fallback interpreter (to support `eval` and other hard-to-compile JS scenarios), while Porffor only supports AOT compiling (although, as I commented in another thread here, it may be possible to support `eval` in Porffor as well)
In general, I'm excited to see if this project can gain some traction so we can speed up JavaScript engines on the Edge! Context: I'm Syrus, from Wasmer [3]

[1] https://github.com/facebook/hermes/discussions/1137

[2] https://github.com/tc39/test262

[3] https://wasmer.io

bobvarioa
6 replies
19h29m

Contributor for Porffor here! I think this is a great comparison, but Porffor does technically support promises, albeit synchronously. It's a similar approach to Kiesel, https://kiesel.dev/.

frutiger
5 replies
14h23m

Not sure what you mean by synchronously, but if you mean what I think you mean, then that is not correct behaviour. This is important to ensure predictability.

Eg.

    Promise.resolve().then(() => console.log("a"));
    console.log("b");
must log ["b", "a"] and not ["a", "b"].

cowsandmilk
2 replies
8h46m

JavaScript doesn’t make the guarantee you are claiming here

jitl
0 replies
2h53m

Yes, it does. Promise continuations always run on the microtask queue per the standard. I guess if someone mutates the promise prototype it's not guaranteed, but the spec does guarantee this order.
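A runnable sketch of that guaranteed ordering (`demo` is an invented name):

```typescript
// Per spec, promise continuations run on the microtask queue, after the
// currently executing synchronous code finishes.
async function demo(): Promise<string[]> {
  const order: string[] = [];
  Promise.resolve().then(() => order.push("a"));
  order.push("b");         // runs first: still synchronous code
  await Promise.resolve(); // yield; the queued "a" callback runs before we resume
  return order;            // ["b", "a"]
}
```

An engine that ran the `then` callback eagerly would resolve to `["a", "b"]` instead, which is the behaviour the spec rules out.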

fwip
0 replies
4h29m

What do you mean? Does JavaScript allow the `then` of a promise to execute before the contents of the promise?

canadahonk
1 replies
9h9m

This type of test does work as expected. The "sync" means that it does not feature a full event loop (yet) so cannot easily support async I/O or some more "advanced" use cases.

spankalee
0 replies
3h27m

There's a WASI async functions proposal I think? Are you looking at supporting that so you don't have to bring your own event loop?

wdb
2 replies
19h31m

You make it sound bad to rely on LLVM.

bangaladore
1 replies
19h0m

Yeah... It is unclear to me how not using LLVM is a good thing. You'd inherit millions of man-hours of optimization work, code gen, and general thought process.

Is there a technical reason why?

fabrice_d
0 replies
18h20m

In this case, being self-contained will help with implementing things like `eval()` and `Function()`, since Porffor can self-host. That would be much harder with an LLVM-based solution.

tmikov
1 replies
14h7m

For the record, Static Hermes fully supports compiling JS to WASM. We get it basically for free, because it is an existing LLVM backend. See https://x.com/tmikov/status/1706138872412074204 for example.

Admittedly, it is not our focus, we are focusing mainly on React Native, where WASM doesn't make sense.

The most important feature of Static Hermes is our type checker, which guarantees runtime soundness.

Porffor is very interesting, I have been watching it for some time and I am rooting for it.

syrusakbary
0 replies
1h29m

Thanks for the correction Tzvetan! Keep up the great work in Static Hermes

jonathanyc
1 replies
21h30m

Just wanted to say I really appreciated the high-quality comparison. How something compares to existing work is my #1 question whenever I read an announcement like this.

syrusakbary
0 replies
19h32m

Thanks!

canadahonk
0 replies
19h26m

Good comparison and thanks! A few minor clarifications:

- Porffor isn't fully self-hosted yet but should be possible hopefully! It does partially compile itself for builtins (eg Array.prototype.filter, Math.sin, atob, ...) though.
- As of late, Porffor does now support basic async/promise/await! Not very well yet though.

saagarjha
7 replies
21h54m

What happens when someone calls eval?

syrusakbary
4 replies
21h43m

Since Porffor can compile itself (you can run the compiler inside of Porffor), any calls to eval could be compiled to Wasm (via executing the Porffor compiler in Porffor JS engine) and executed performantly on the same JS context *

*or at least, in theory

tasty_freeze
1 replies
19h47m

I haven't used it, but reading their landing page, Porffor says their runtime is vastly smaller because it is AOT. If the compiler had to be bundled with the executable, then the size of the executable would grow much larger.

spartanatreyu
0 replies
12h45m

Yeah, but you'd only need to include Porffor in the compiled code if it uses eval.

And most devs stay away from eval for well-deserved security reasons.

concerndc1tizen
1 replies
12h23m

That wouldn't work: the Wasm spec does not allow modifying an already running program (i.e. no JIT).

AFAIK the only option is to include an interpreter.

syrusakbary
0 replies
36m

You don't need to modify an already running program, you can plug new functions into an existing Wasm program via a table, and even attach variables via globals or function arguments.

I'd recommend checking the work on making SpiderMonkey emit Wasm as a backend

david2ndaccount
0 replies
21h32m

With a string literal it works; with a dynamic string it just gives an undefined-reference error for eval.

canadahonk
0 replies
19h37m

For now, unless it is given a literal string (eg `eval('42')`) eval just won't work.

pityJuke
5 replies
20h27m

Also funded the Ladybird browser recently. Seems to like the web.

pandemic_region
4 replies
11h27m

Not familiar with this stuff at all, is Porffor a js engine that Ladybird could end up using? Or are they still writing their own?

simlevesque
0 replies
4h23m

You'd have to wait before using any website that uses JS; the AOT compilation introduces a delay. Not sure it's achievable.

meiraleal
0 replies
5h19m

If that happens I would thank defunkt so much. Great way to spend money.

giancarlostoro
0 replies
4h15m

Porffor compiles the JS to WASM, so it would be kind of a waste. Though there might be no reason the two projects cannot share some logic, like parsing the JS and such. I kind of doubt this is why it's being funded. It sounds like a useful project.

Sammi
0 replies
10h39m

Or just two separate moonshots for now.

vanderZwan
3 replies
8h1m

I unironically appreciate that it supports String.blink. It's always a good sign if the developer has a sense of humor and playfulness.

chrismorgan
2 replies
3h8m

If it’s interested in behaving as though “the ECMAScript host is a web browser”, of course it does, it’s part of the spec: https://tc39.es/ecma262/multipage/additional-ecmascript-feat.... And given how trivial it is (function() { return "<blink>" + this + "</blink>"; }), it makes a fair amount of sense to implement it even if the ECMAScript host is not a web browser, in which case it’s optional (see the top of the linked document). I wouldn’t expect it to have any association with “a sense of humor or playfulness” whatsoever.
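For what it's worth, the Annex B behavior is easy to check in Node, which implements it:

```typescript
// Annex B String.prototype.blink, as implemented by Node/V8: a trivial
// wrapper that concatenates the tags around the receiver.
const tagged: string = "porffor".blink();
console.log(tagged); // "<blink>porffor</blink>"
```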

vanderZwan
1 replies
2h48m

The reason I would argue that it does imply a sense of humor is that on any web browser that supports WASM, the <blink> tag itself has been deprecated and non-functional for ages. In fact, it doesn't even have an entry on MDN, only an indirect reference through String.blink()

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

chrismorgan
0 replies
14m

<blink>, sure, but String.prototype.blink is still part of the spec, and unlike the HTML Standard, the ECMAScript specs are much more pseudocode that you largely just copy and turn into actual code as necessary, to the point that, if as an ECMAScript host it’s playing the web browser, I’d be extremely surprised (as in, “wait, what!? This is literally the weirdest technical thing I’ve seen all year, maybe this decade”) if that one method wasn’t there. When you’re implementing specs written like this, you just do it all; you don’t—you never—pick and choose based on “that thing is obsolete and no one uses it anyway”.

nick_g
2 replies
21h52m

I'm a bit suspicious of the versioning scheme described here[0]

If some change were required which introduced a regression on some Test262 tests, it could cause the version number to regress as well. This means Porffor cannot have both a version number which increases monotonically and the ability to introduce necessary changes which cause Test262 regressions

[0] https://github.com/CanadaHonk/porffor?tab=readme-ov-file#ver...

derdi
1 replies
21h38m

Presumably the idea is that any work that causes Test262 regressions is temporary, takes place in a separate branch, and is only merged to main once the branch also contains all the necessary fixes to make the regressions go away again. A new version number would only be used once that merge happens.

canadahonk
0 replies
19h7m

Exactly. The versioning system is definitely unique and controversial, but I think it fits for a fast moving project like this, so I don't have to really consider versioning which could slow development. When it becomes more stable, I'll likely move to a more traditional semver scheme from 1.0.

CharlesW
2 replies
19h11m

What subtleties am I missing that makes "ahead-of-time JS engine" a better description than "JS-to-Wasm compiler"? (If it's mostly a framing strategy, that's cool too.)

chillfox
1 replies
18h11m

There are already projects that do JS-to-WASM by bundling a JS interpreter, so it's likely meant to make the difference from those clearer.

canadahonk
0 replies
9h11m

Yep! Also as it is technically more of an engine/runtime (sometimes) than "just" a compiler, folks in the JS space are more familiar with engine as a term :)

rvnx
1 replies
21h46m

Seems like the same idea that Facebook had with PHP, which was to transpile PHP to C++.

It was called hiphop-php; they eventually gave up on it before creating HHVM on a completely new concept.

pjmlp
0 replies
21h15m

Historically the sequence is a bit different.

Only after HHVM proved that its JIT compilation engine was faster than their HipHop AOT attempt did they decide to focus solely on HHVM going forward.

rubenfiszel
1 replies
22h44m

At windmill.dev, when users deploy their code, we use Bun build (which is similar to esbuild) to bundle their scripts and all their dependencies into a single JS file to load, which improves cold start and memory usage. We store the bundle on S3 because of the size of the bundles.

If we could bundle everything to native that would completely change the game since as good as bun's cold start is, you can't beat running straight native with a small binary.

canadahonk
0 replies
19h9m

Hey, dev here, I agree that is an interesting application which Porffor could potentially help with! Happy to chat sometime :)

mproud
1 replies
21h19m

“Purple” in Welsh

zem
0 replies
19h39m

with etymology from the Greek for purple; the English "porphyry" (a purple mineral) is probably the most common word with the same root.

giancarlostoro
1 replies
4h9m

The most interesting bit about Porffor, in my eyes, is that it lets JavaScript compete with something like Blazor (or at least lets JS stand its ground). Blazor kind of makes using any JS in your project redundant, since all your front-end logic can be done in C#. The reason I say this is that, while there are obviously JS devs, if WASM tooling in other languages grows, it will make JS feel redundant, incomplete, or outcompeted.

I wont be surprised to see a SPA framework that uses Porffor once it is more mature, or even the major ones using it as part of their tooling.

WASM is the next step after SPA's essentially.

If you have never touched Blazor, I recommend you check it out via youtube video if you don't do any C#, it is impressive. Kudos to Microsoft for it. I have had 0 need or use for JavaScript since using it.

tempodox
0 replies
3h53m

That sounds like we could finally get rid of the worst programming language to ever reach as widespread adoption as JS has today. I can't wait!

WatchDog
1 replies
18h52m

How does this compare to QuickJS, which can also compile JS to native code (with a C compiler)?

spacechild1
0 replies
10h42m

I think QuickJS only compiles to bytecode and then embeds it together with the interpreter in an executable. The JS itself is still interpreted. Others please correct me if I'm wrong.

xiaodai
0 replies
18h52m

Stop trying to retrofit garbage on garbage. Go direct to WebAsm already

userbinator
0 replies
13h21m

Porffor can compile to real native binaries without just packaging a runtime like existing solutions.

Any language that allows generating and interpreting its own code at runtime will have the "eval problem". From some other comments here, it sounds like Porffor's solution is to simply ignore it.

solumos
0 replies
22h42m

Just out of curiosity, how does the performance (compilation + runtime) compare to something like bun[0]?

[0] https://bun.sh/

ijustlovemath
0 replies
22h39m

I'd love to know if there's a way to compile NodeJS to native libraries with this! I have a process [0], but it's a bit hacky and error prone

[0] - https://github.com/ijustlovemath/jescx

brundolf
0 replies
13h17m

There's a subset of JS that's trivially compilable, it's the long tail of other stuff that's hard. But cool to see research happening on where that boundary lies and how much benefit can be had for that subset

THBC
0 replies
21h54m

This seems like an opaque supply chain attack waiting to happen.

Sytten
0 replies
14h45m

It's refreshing to see all the various JS engines that are out there for various use cases.

I have been working on providing QuickJS with a more Node-compatible API through llrt [1], for embedding into applications for plugins.

[1] https://github.com/awslabs/llrt

FpUser
0 replies
19h3m

I find this very interesting. Keep it up and bring it to production shape

Borkdude
0 replies
21h34m

I got "TodoError: no generation for ImportDeclaration!" for this script:

    import * as squint_core from 'squint-cljs/core.js';
    console.log("hello");