
Deep Learning in JavaScript

modeless
26 replies
3d17h

Someone needs to do a TypeScript compiler plugin to add multidimensional array slicing and operator overloading to the language, so these libraries can actually work the way PyTorch does.

I know operator overloading is controversial, but the way it allows automatic differentiation to work transparently through regular arithmetic expressions is very helpful. Without it these libraries will never feel like PyTorch.

JavaScript is so much faster than Python, and the package management experience is so much better, that I really think a JS library would have a chance to take market share from PyTorch in the long term if it had those features.

throwaway4aday
7 replies
3d7h

What's wrong with creating a function that does those things? It would be less surprising to people new to the library, would be self-documenting by having a name and an easily inspected declaration with named arguments, and it would be idiomatic JS. You could also have variants that are purely functional and return a new value or ones that mutate in place that you could use depending on your needs.
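The pure-vs-mutating split could look something like this sketch (function names here are made up for illustration, not from any real library):

```javascript
// Pure variant: returns a new array, inputs untouched.
function addScaled(a, b, alpha) {
  return a.map((x, i) => x + alpha * b[i]);
}

// Mutating variant: writes the result back into `a`.
function addScaledInPlace(a, b, alpha) {
  for (let i = 0; i < a.length; i++) a[i] += alpha * b[i];
  return a;
}

const a = [1, 2, 3];
const fresh = addScaled(a, [1, 1, 1], 2); // [3, 4, 5], `a` unchanged
addScaledInPlace(a, [1, 1, 1], 2);        // `a` is now [3, 4, 5]
```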

QuadmasterXLII
3 replies
3d7h

In Python, when I try a new GPU-accelerated array library, to write norm_arr = arr / lib.sqrt(lib.sum(arr**2, axis=-1, keepdims=True)) I have to read the documentation for sum to see whether they use axis or dim.

In JavaScript, to write the same thing I need to read documentation for a sum function, a broadcasted division function, and a multiply function. I can probably assume that the sqrt function behaves and is named as I would expect.
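For comparison, here's that same row normalization in plain JS, with hand-rolled helpers standing in for a library's sum/sqrt/divide functions (all names are made up for illustration):

```javascript
// Sum of squares along the last axis of a 2-D array.
function sumSquaresLastAxis(arr) {
  return arr.map(row => row.reduce((acc, x) => acc + x * x, 0));
}

// Divide each row by its L2 norm.
function normalizeRows(arr) {
  const norms = sumSquaresLastAxis(arr).map(Math.sqrt);
  return arr.map((row, i) => row.map(x => x / norms[i]));
}

const normed = normalizeRows([[3, 4], [0, 5]]);
// → [[0.6, 0.8], [0, 1]]
```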

4hg4ufxhy
1 replies
3d4h

Why can you make assumptions about operator overloads but not functions?

foolswisdom
0 replies
3d4h

Because there is nothing to make assumptions about. In the example code, both multiplication and division have a scalar on one side, there's no possible ambiguity of behavior. But there is the eternal question of terminology: do you specify dimensions by "axis" or "dim" and does your API actually use both terms in different places?

(that's what I think the GP meant, anyway).

throwaway4aday
0 replies
3d3h

If each of those operators were implemented as functions then you'd have different names for different implementations in order to avoid confusion over what type of division or multiplication they were performing. It's more verbose but that's a good thing since it prevents you from making incorrect assumptions about what's going to happen when you do a * b.

modeless
2 replies
3d2h

What's wrong is it obscures simple math expressions behind tons of dots and parentheses. The thing is that the core of deep learning algorithms is usually very simple math. It's useful to be able to translate that math directly from research papers into straightforward expressions that mirror the structure in the paper like a = b / c + d * e rather than something less similar like a = b.divide(c).add(d.multiply(e)).

throwaway4aday
0 replies
2d17h

Depending on what you learned first, dots and parentheses are a lot simpler to understand than math expressions.

DanielHB
0 replies
2d10h

You could probably build a tagged template literal, like:

a = e`${b} / ${c}`

Not ideal, but much better and without magic pre processors
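A minimal sketch of how that tag could work (the `e` function and its operator table are hypothetical, and this handles only a single binary operation):

```javascript
const ops = {
  '+': (a, b) => a + b,
  '-': (a, b) => a - b,
  '*': (a, b) => a * b,
  '/': (a, b) => a / b,
};

function e(strings, left, right) {
  const op = strings[1].trim(); // the text between the two interpolations
  if (!(op in ops)) throw new Error(`unsupported operator: ${op}`);
  return ops[op](left, right);
}

const b = 10, c = 4;
const a = e`${b} / ${c}`; // 2.5
```

A real version would need to parse longer expressions with precedence, but the entry point stays this small.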

eduardoleao052
6 replies
3d16h

Could not agree more hahaha! I tried to work around it building methods like “torch.add(a, b)” for operator overloading and “torch.at(index)” for slicing. But it’s definitely not as seamless as these features you proposed.

modeless
3 replies
3d16h

You should do it! If you actually had a solution for operator overloading you'd really stand out from the other various JS deep learning libraries. Save me from pip and conda please :)

eduardoleao052
2 replies
3d14h

I can try implementing it in the future lol It would surely be a quality of life improvement. But with the current tools, I tried my best to make the syntax as similar as possible to PyTorch’s!

throwuwu
1 replies
2d3h

If you do want to add evaluation of mathematical expressions you should check out Math.js since they provide a parser among other utilities. Please make it optional though, it would be a nightmare to debug if everything was written in strings.

https://mathjs.org/docs/expressions/parsing.html

eduardoleao052
0 replies
1d17h

Thanks for the tip, will look into it! Yes, I think it would always be better to leave a "vanilla" option available.

noduerme
1 replies
3d15h

Curious, why do you need to construct these as class instances, like operation = new Exp() ? Seems like a lot of extra overhead constructing those objects. Why not just have Exp contain static methods for forwards and backwards?

[edit] nevermind, I missed the cache step. Still not sure it wouldn't be more performant to centralize caches as plain objects somewhere rather than to call new() on every op...?

eduardoleao052
0 replies
3d14h

I centralized the entire backpropagation around the Operation objects. They store the data about the forward prop in the cache, and serve as the connections in the graphs between tensors. Each tensor has a “parent”, “child” and “operation”. These store who generated the tensor, what tensors it generated, and how it was generated (what operation). I could store the backward function inside of each tensor instead of an Operation object, but I chose the slightly more verbose option because I think it is a little more interpretable and simpler to add new operations.
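A rough sketch of the Operation-object pattern described above, using scalars for simplicity (class and field names here are illustrative, not js-pytorch's actual API):

```javascript
class Tensor {
  constructor(data, op = null, parents = []) {
    this.data = data;       // scalar, to keep the sketch small
    this.grad = 0;
    this.op = op;           // the Operation that produced this tensor
    this.parents = parents; // tensors that fed into that operation
  }
  backward(grad = 1) {
    this.grad += grad;
    if (this.op) this.op.backward(grad);
  }
}

class Mul {
  forward(a, b) {
    this.cache = [a, b]; // store inputs for the backward pass
    return new Tensor(a.data * b.data, this, [a, b]);
  }
  backward(grad) {
    const [a, b] = this.cache;
    a.backward(grad * b.data); // d(ab)/da = b
    b.backward(grad * a.data); // d(ab)/db = a
  }
}

const x = new Tensor(3), y = new Tensor(4);
const z = new Mul().forward(x, y);
z.backward(); // x.grad = 4, y.grad = 3
```

Each new op is just another class with a `forward` that fills the cache and a `backward` that pushes gradients to its parents, which matches the "more interpretable, easy to extend" trade-off described.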

goatlover
5 replies
3d4h

Would JS be faster than Python when it comes to Pytorch? For example, I seriously doubt that would be the case for Numpy, since it's a wrapper for C code, with the ability to use Fortran libraries for optimization.

throwaway4aday
0 replies
3d3h

The benefit of having it in JS is not speed but portability and access to the JS ecosystem of tools. Having the code run in the browser without needing a complex setup is a huge benefit for sharing demos. Node.js provides a way to use native code as well and it's quite commonly used https://github.com/nodejs/node-gyp so there's no reason you couldn't use those same or similar libraries in a JS implementation.

modeless
0 replies
3d2h

C running on the CPU isn't fast enough for ML. You need to run on GPUs or TPUs if you're serious.

Yes, most of the tensor operations in PyTorch do their math in native code. However, Python still does orchestration and other tasks like data loading and because it is so slow it still ends up causing a ton of overhead in many cases despite offloading most of the work. It's very common for the GPU to sit idle between kernels while Python spins. So JavaScript being faster could still be a big advantage.

jsight
0 replies
3d3h

It'd be shocking if it was. PyTorch isn't particularly slow.

fulafel
0 replies
3d3h

Very much the opposite, since this is pure JS. PyTorch uses tuned native-code and GPU components for the heavy lifting and can in some cases compile Python code using PyTorch JIT / torch.compiler / torch.fx.

dagw
0 replies
3d2h

There is often a lot of work that has to be done before and after PyTorch gets involved. For example the code I'm working on right now involves reading and parsing a bunch of files, filtering and extracting a bunch of data based on various criteria, formatting that data, passing it to a PyTorch model and then taking the results from PyTorch, validating it, reformatting it and then writing it to disk. The PyTorch part is probably as fast as it can get, but most of the overall runtime is spent doing all that other stuff and if you can speed that up then that is a clear win in many cases.

sroussey
2 replies
3d16h

Agreed. My suggestion is for Bun to do that, since they are already transforming TypeScript to JS.

But you can do it yourself -- there's a Babel plugin for the TC39 proposal:

https://github.com/tc39/proposal-operator-overloading

csjh
1 replies
3d13h

I'm glad that proposal was withdrawn; in my opinion it would've been by far the worst implementation of operator overloading in any mainstream language.

sroussey
0 replies
2d22h

I had glanced through it a long time ago. Maybe it’s time for someone to create a new (and better!) proposal.

mejutoco
1 replies
2d7h

You could use eval to expand whatever syntax you like into function calls. It could be done once at the start of the program and act like a js preprocessor.

I know eval is not kosher but this problem is also not real, so why not.
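A toy sketch of the idea: rewrite a made-up `.+.` elementwise operator into calls to a named function, then eval the result (the operator, the `add` helper, and the regex rewrite are all invented for illustration; a real preprocessor would need an actual parser):

```javascript
function add(a, b) {
  return a.map((x, i) => x + b[i]);
}

// `x .+. y` becomes `add(x, y)`.
function preprocess(src) {
  return src.replace(/(\w+)\s*\.\+\.\s*(\w+)/g, 'add($1, $2)');
}

const x = [1, 2], y = [10, 20];
const result = eval(preprocess('x .+. y')); // [11, 22]
```

Direct eval sees the enclosing scope, which is what makes this work without any registry of variables.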

eduardoleao052
0 replies
2d5h

That's an interesting idea as well, could definitely see that working in some use cases.

ralusek
10 replies
3d16h

Many people seem to be unaware of tensorflow.js, an official JS implementation of TF

https://github.com/tensorflow/tfjs

I'd love to see PyTorch in JS, but I think unless you get it running on the GPU it won't be able to do much.

ipsum2
6 replies
3d10h

tfjs is dead, looking at the commit history. The standard now is to convert PyTorch to ONNX, then use onnxruntime (https://github.com/microsoft/onnxruntime/tree/main/js/web) to run the model in the browser using WebAssembly/WebGL (and Node.js if you wanted to, but why?).

fire_lake
5 replies
3d9h

> nodejs if you wanted to, but why?

Node.js is a better backend than something like Flask.

ipsum2
4 replies
3d9h

Performance is a lot worse on NodeJS with a WebAssembly/WebGL backend versus Flask with a PyTorch/CUDA backend.

throwaway4aday
3 replies
3d3h

If you're using Node you can write whatever you want in C++ and then add a binding to call it from within your Node app. Don't need WebGL.

goatlover
1 replies
2d18h

But then you're having to write it in C++ versus just using Flask/PyTorch.

throwaway4aday
0 replies
2d17h

A lot of what you need is already written, you just need to find the right libraries and write the bindings. From my encounters with Python ML it seems like "just use Pytorch" is a bit like "simply walk into Mordor".

ipsum2
0 replies
2d21h

Yeah, but why...?

richrichie
0 replies
3d16h

With PyTorch dominating the landscape, my guess was that TF would resurrect itself through tfjs. Seems it may not.

jsight
0 replies
3d3h

I had the same thought, but it does seem that tensorflow usage is in steady decline.

eduardoleao052
0 replies
3d12h

Adding GPU support soon is absolutely my goal in the future! I think a PyTorch-based JavaScript library could be useful, as PyTorch has been way more dominant than TensorFlow recently.

treprinum
7 replies
3d18h

Do you plan to implement WebGPU acceleration to make it production-ready?

eduardoleao052
6 replies
3d18h

I am currently studying TypeScript and other JavaScript libraries to improve the library's performance, so adding GPU support is in sight for the future.

dleeftink
5 replies
3d16h

You might already be familiar, but a GPU.js backend can provide some speedups via good old WebGL -- no need for WebGPU just yet!

[0]: https://github.com/gpujs/gpu.js/

eduardoleao052
2 replies
3d12h

Thanks for the tip! I’ll definitely take a look. Adding GPU support is my next step!

eduardoleao052
0 replies
3d4h

Thank you! That’s going to help out a lot!

pzo
1 replies
3d7h

FWIW, Taichi is also quite popular in Python and seems to have a JavaScript implementation (I haven't used it though): taichi.js [0]

[0] https://github.com/AmesingFlank/taichi.js

eduardoleao052
0 replies
3d4h

Cool! Will take a look

GaggiX
5 replies
3d18h

The nn.Block should probably be renamed to nn.DecoderBlock, as it's not very clear whether it's supposed to be an encoder or a decoder block; there should also be an option to disable the attention mask. That said, very cool project.

jsight
3 replies
3d3h

If you have the option to disable the mask, isn't it then a generic nn.Block?

GaggiX
1 replies
3d1h

I'd add the ability to disable the mask in nn.MultiHeadSelfAttention, and then have an nn.DecoderBlock and an nn.EncoderBlock.
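The suggested split could be as simple as a constructor option (class shapes here are illustrative only, not js-pytorch's actual API):

```javascript
class MultiHeadSelfAttention {
  constructor({ causalMask = true } = {}) {
    this.causalMask = causalMask; // apply the triangular mask only if true
  }
}

// Decoder blocks keep the causal mask; encoder blocks turn it off.
class DecoderBlock {
  constructor() {
    this.attn = new MultiHeadSelfAttention({ causalMask: true });
  }
}

class EncoderBlock {
  constructor() {
    this.attn = new MultiHeadSelfAttention({ causalMask: false });
  }
}
```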

eduardoleao052
0 replies
3d

I think that’s it. I’ll probably add that soon

eduardoleao052
0 replies
3d2h

It would be with some simple tweaks. For instance, the current block does not support Cross-Attention, just Self-Attention.

eduardoleao052
0 replies
3d18h

You are absolutely right! Will solve that as soon as possible. Thanks for the feedback!

eduardoleao052
4 replies
3d19h

While learning Machine Learning, I've always been interested in PyTorch and its automatic backpropagation.

In this project, I tried to reimplement most of PyTorch in JavaScript. It is implemented from scratch in a well-documented, unit-tested, and interpretable way, so it can help other JavaScript learners get into Machine Learning!

Hope you enjoy!

sroussey
1 replies
3d18h

You might look into Node-API or FFI and add some native code to speed up some operations on the server.

I helped this project do that and get prebuilt binaries: https://github.com/ashvardanian/SimSIMD

I haven't looked at FFI on Node yet.

eduardoleao052
0 replies
3d18h

Thanks for the tip! I will look into it. Speed in the tensor calculations is definitely one of the largest challenges in this project.

krohling
1 replies
3d19h

This is awesome! Congrats. Clearly a lot of work and a very cool library.

eduardoleao052
0 replies
3d18h

Thank you very much for the feedback! If you have any question about it, let me know.

mhuffman
3 replies
3d18h

I like to see progress with multiple languages in the space: Julia, R, etc. It does seem, though, that Python really has a big and commanding lead. So much so that Mojo is betting their business on it.

keithalewis
1 replies
3d18h

What language is Python calling to do this? Hint: a language closer to the silicon.

datascienced
0 replies
3d17h

C++ and the CUDA APIs?

eduardoleao052
0 replies
3d18h

That’s true. I myself learned ML in python first, and I’m just now trying to implement some stuff in JavaScript. I guess it could be useful to make web integration with neural networks easier in this case, or at least as an educational package!

marviel
3 replies
3d19h

Very very cool to see!!!

So yours is SO MUCH MORE fleshed out than mine, but I have a similar (again, very fledgling) one going here that uses TypeScript:

https://github.com/Marviel/lab-grad

I think there are some really neat opportunities to increase the approachability of ML algorithms when you take advantage of the TypeScript type system to explicitly ensure you're not accidentally composing tensors of incorrect shapes.

Unfortunately, my lib above doesn't yet handle tensors; it's still at the level of individual neurons. Hopefully someday I can bring it further along, use WASM or WebGPU, etc. :)

eduardoleao052
2 replies
3d18h

Thanks for the tip! I will look into your project, for sure! I am still a beginner at TypeScript, but when I get the hang of it, I think I might refactor my package to TypeScript. Again, thanks for the tip!

pcthrowaway
1 replies
3d7h

TypeScript is something you learn progressively over years, and from what others have posted it might not be possible to refactor your package in TypeScript (some approaches to problems that are possible in JS aren't supported by TS). But type declarations, at the very least, should be possible.

eduardoleao052
0 replies
3d4h

Fair enough, I’m currently studying what would be the best option, but I think you are right!

csjh
3 replies
3d17h

Why not use a faster language that compiles to WebAssembly?

datascienced
1 replies
3d17h

Wouldn’t WebGPU be the key to unlocking speed?

csjh
0 replies
3d17h

That too, but WebGPU isn’t very well supported yet

eduardoleao052
0 replies
3d17h

Now that I've tried JavaScript, I could branch out to other languages in the future. I chose JavaScript initially for its popularity and ease of use in web browsers.

jzombie
0 replies
3d13h

I agree. This advice should not be ignored; if the OP is complaining about performance, the answer is lying here in plain sight.

eduardoleao052
0 replies
3d12h

I had read about this as a possibility, but will definitely look more into it now, thank you for the tip!

eiriklv
2 replies
3d4h

Edit: Great work! I'd love to have a nice alternative to PyTorch in JavaScript :)

Edit: formatting

Making JavaScript look like Python in this case could potentially bite you in the ass.

From the example:

    const torch = require("js-pytorch");

    // Instantiate Tensors:
    x = torch.randn([8,4,5]);
    w = torch.randn([8,5,4], requires_grad = true);
    b = torch.tensor([0.2, 0.5, 0.1, 0.0], requires_grad = true);

And:

    class Transformer extends nn.Module {
      constructor(vocab_size, hidden_size, n_timesteps, n_heads, p) {
        //.....
        this.b1 = new nn.Block(hidden_size, hidden_size, n_heads, n_timesteps, dropout_p=p);

For both `requires_grad` and `dropout_p` you wouldn't be able to change the ordering + you're creating global variables.

    /**
     * All of the arguments for this function are positional
     * and cannot be provided in a different order than defined
     */
    function performOperation(values, arg1 = false, arg2 = -1) {
        //....
    }

    /**
     * This works, but only because of the order
     */
    const result = performOperation([1, 2, 3], arg1 = true, arg2 = 10);

    /**
     * This does not work
     */
    const result = performOperation([1, 2, 3], arg2 = 10, arg1 = true);

    /**
     * What is actually happening
     */
    arg2 = 10;   // global variable
    arg1 = true; // global variable

    const result = performOperation([1, 2, 3], 10, true);

eduardoleao052
1 replies
3d4h

That's true, it's a limitation of working between these languages. I tried to mitigate it by using clear JSDoc, so that each variable pops up alongside an explanation when calling a function.

eiriklv
0 replies
3d4h

I feel you - Python has much better (more flexible) argument support than JavaScript in this case. Converting the entire set of arguments into a keyed object is usually what happens, but then it wouldn't look like PyTorch anymore.
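The keyed-object pattern mentioned above looks like this in practice; `performOperation` is the hypothetical function from the parent comment, given a toy body here:

```javascript
// Named options arrive as one destructured argument with defaults, so
// order doesn't matter and nothing leaks into global scope.
function performOperation(values, { arg1 = false, arg2 = -1 } = {}) {
  return arg1 ? values.map(v => v * arg2) : values;
}

const r1 = performOperation([1, 2, 3], { arg1: true, arg2: 10 });
const r2 = performOperation([1, 2, 3], { arg2: 10, arg1: true });
// r1 and r2 are both [10, 20, 30]
```

The cost, as noted, is that calls read as `{ requires_grad: true }` rather than PyTorch's `requires_grad=True`.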

yoaquim
1 replies
2d19h

This is great!

I was looking for something like this.

Now I just need a guide that tackles the principles of this, but from a typescript/javascript perspective.

PS: TyTorch (from TypeScript) sounds nice!

eduardoleao052
0 replies
2d17h

I plan on adding TypeScript support soon! Thanks for the feedback.

throwaway4aday
1 replies
3d7h

Wow! Thank you for doing this. It looks like a great starting point for anyone approaching deep learning from the JS ecosystem. It is very plainly written and looks like it will be a joy to learn from. Thank you for adding JSDoc comments with type hints!

Are you open to pull requests? If I have the time I'd love to contribute. I'm sure others would as well.

You should write up a short article on this, even something really simple like one of the examples in the README but with some commentary and examples of output and then post it to a few places like https://dev.to/ or maybe https://hashnode.com/ or even Medium (even though I'm not a big fan). There aren't many newer implementations of PyTorch in JS and I've been looking for one to learn from for some time so I'm sure there are a lot of other JS/TS developers out there that would be interested. Getting to the front page of HN certainly helps but having an article somewhere will help everyone after this week find it through a Google search.

Again, thanks so much for doing this work! It's really helpful to have everything spelled out in JS for those of us who haven't used Python much (I'm sure Python devs can relate when they think about JS projects).

eduardoleao052
0 replies
3d4h

Thank you so much for the feedback! I’m open to pull requests, absolutely. And about the article, that’s a great tip, I’ll definitely do that as well.

synergy20
1 replies
3d17h

Is the goal here to do everything in JavaScript, or will it also be JavaScript for the glue and C++/C/Fortran/another compiled language for the heavy lifting? Is this more of a Node-based effort, or will it work on both frontend and backend?

eduardoleao052
0 replies
3d17h

Initially I'll stick to JavaScript (plus some libraries to improve performance), but I'll likely look into that in the future.

ngcc_hk
1 replies
2d13h

One useful learning tool is the notebook. I even have a Lisp backend enabling it to be used. It would be nice if there were a notebook version that's easy to run on the web (like Colab).

eduardoleao052
0 replies
2d13h

I’m planning on creating a small article explaining the syntax and the functionalities of the Deep Learning library. I think that could be a useful learning tool, do you think it would help?

hexhowells
1 replies
3d9h

Cool project! I worked on something similar a while ago to learn about automatic differentiation: https://github.com/hexhowells/onegrad.js

Needs more layers to be really useful though.

I always thought examples using browser extensions would be neat, since they're built in JS and you only need to download the model once.

eduardoleao052
0 replies
3d4h

Really cool implementation. Thanks for the feedback as well!

Sn0wCoder
1 replies
3d12h

This is great. Code seems to have good comments / jsdoc from the bit studied, tests and all. Will take a closer look in the morning but congratulations on the release.

eduardoleao052
0 replies
3d11h

Thank you for the feedback! If you have any questions or suggestions looking it over tomorrow, I’d love to hear them!

fmiras
0 replies
2d19h

Nice work, but why is the code written in closures using the ugly `var` statement?