This seems like it might be interesting to me if I already had some understanding of neural networks. Unfortunately, I can't even complete the RNN, because there's nothing to suggest what I'm missing when I connect the dots in the only way the UI suggests I can.
I can’t figure out how to connect the dots on mobile.
Drag the little gray circle on the top or bottom of each node to another node's gray circle.
Try doing that on a moving bus!
I don’t think this works on mobile, or at least it’s not working on my iPhone.
Looks nice, but it took a while to see how to connect the dots.
Hmm I'll think about how to make the dots more obvious visually
On mobile it'd be helpful if the clickable area for the dots was larger
noted, will add to the list for v2, thanks!
Good idea. It needs a bit more text explaining what is happening and how to make the connections. I gave up because I couldn't even figure out the UI for how to click.
Adding a little gif showing how to connect nodes right now
Saw it, thank you. That did help
Also: the hitboxes for the connections -- can you make them bigger? It is pretty difficult to get the mouse to hover just right to do the drag/drop.
Will do today!
The tutorial rickrolled me! I trusted you, sabrina_ramonov!
lol what link did you click on? were you on mobile?
On the help modal, clicking the text container at the bottom links to it.
I have always found neural network diagrams like the RNN one here to be very vague and even slightly misleading. What does it mean that h_t loops onto itself? While I know that it means "take as input h_{t-1} also", the diagram itself does not illustrate the concept to the primary person looking at such a diagram, i.e. someone wanting to learn about the architecture.
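Spelling out what that loop is shorthand for might help. In the usual textbook formulation (the weight names here are just convention, not anything the game defines):

    h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h)
    y_t = W_hy h_t + b_y

So the arrow from h_t back to itself just means the same cell runs at every time step and carries h_{t-1} forward; nothing in the picture tells a newcomer that.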
I came to post the same comment. I was confused by the lack of "t+1" or "t-1" nodes, and then it took me a while to realize I had to connect the "ht" node to itself.
If something takes two inputs (e.g. an adder), then I'd expect it to have two separate connectors, not "also" plugging in the second thing.
"RNN
A simple RNN with a single hidden layer"
This applies everywhere, but especially if you are making a teaching tool... Spell out the damn acronyms/initialisms!
I shouldn't have to look at the last square, "Deep RNN", to piece together what the first square, "RNN", means.
Sure I'll do this in v2, thanks for the feedback.
Thanks for this wonderful game. The UI is neat and it helps understand the concepts well.
Thank you, I appreciate it! For v2, I'd like to add more NNs and a little explanation for each one.
UPDATE: as a quick bandaid, I added a blue HELP button in the top right corner showing how to connect dots
Thanks, that helped, my kids just used our house LLM to understand the game :D
Suggested title change: "I built a game to test your knowledge of NN architectures."
agreed, I find it pretty useful to check if I remember where LSTM stuff connects
I remember reading the original paper a while ago but always forget (pun intended) where to connect stuff
then I realized that memorizing it visually is not the best approach; it's better to think about it in this sorta loose fashion: I remember there is a forget gate; well, it forgets previous stuff, so there is probably a Hadamard product somewhere, and it probably needs some inputs and the previous hidden state... there was some [-1, 1] forcing in the candidate memory, so that probably needs tanh instead of sigmoid... and then, piece by piece, I can reconstruct it pretty closely.
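In case it helps anyone else doing the same reconstruction, here's that reasoning written out as a rough NumPy sketch of one LSTM step (the standard formulation from memory, so treat the names and details as approximate, not as what the game expects):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, W, b):
        # W holds the four weight matrices (forget, input, candidate, output),
        # each acting on the concatenated [h_{t-1}, x_t]; b holds the biases.
        z = np.concatenate([h_prev, x_t])
        f = sigmoid(W["f"] @ z + b["f"])        # forget gate: how much old memory to keep
        i = sigmoid(W["i"] @ z + b["i"])        # input gate: how much new stuff to write
        c_tilde = np.tanh(W["c"] @ z + b["c"])  # candidate memory, squashed to (-1, 1)
        c_t = f * c_prev + i * c_tilde          # Hadamard products mix old and new memory
        o = sigmoid(W["o"] @ z + b["o"])        # output gate
        h_t = o * np.tanh(c_t)                  # new hidden state
        return h_t, c_t

    # toy sizes and random weights, purely illustrative
    rng = np.random.default_rng(0)
    n_x, n_h = 3, 4
    W = {k: rng.normal(size=(n_h, n_h + n_x)) for k in "fico"}
    b = {k: np.zeros(n_h) for k in "fico"}
    h, c = np.zeros(n_h), np.zeros(n_h)
    h, c = lstm_step(rng.normal(size=n_x), h, c, W, b)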
I joined up some dots, I learnt something, and I got rick-rolled. Awesome game.
It really did trigger my OCD a bit that the deep RNN initially had the inputs at the top and the outputs at the bottom. The inputs have to be connected at the top edge, so they need to be at the bottom!! :)
Yeah the input/output is annoying me too, I'll improve it in v2!
Transformer / attention block is missing
On my todo list to add this as another level.
ANOTHER UPDATE: here's how I use multimodal ChatGPT-4o to parse the neural network architecture diagrams into the data format I need to create a new game level: https://www.youtube.com/watch?v=4GOWzuykh1c&ab_channel=Sabri...
What does "made with mushroom" mean?
Never gonna give you up...
I just got rickrolled by the tutorial. Never gonna let my guard down ever again. (or let you down)
Great idea, but without trying to start a long discussion, you might want to support batch norm before or after the activation function.
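For concreteness, the two orderings would look something like this in, say, PyTorch (toy layer sizes, purely illustrative; which one is better is exactly the long discussion I'd rather not start):

    import torch.nn as nn

    # batch norm before the activation (as in the original BN paper)
    pre_act = nn.Sequential(
        nn.Linear(128, 64),
        nn.BatchNorm1d(64),
        nn.ReLU(),
    )

    # batch norm after the activation (also common in practice)
    post_act = nn.Sequential(
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.BatchNorm1d(64),
    )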
Someone can correct me but I believe all of these networks have been rendered obsolete by Transformers?
slick! seems more designed to help you practice/review your knowledge of neural networks rather than teach you network architectures?
Love it! Is the source code available somewhere?
There are two outputs, two inputs, and three edges. The answer might not be obvious, but it's not a UI problem.
No, that's something you've inferred from your domain knowledge.
There is a set of dots labeled "xt" in blue, a set of dots labelled "ht" in purple, and a set of dots labelled "yt" in green. Additionally there's a scoreboard with "0 clicks" in blue, "3 edges remaining" in red, and "0 extra edges" in green.
With a bit of color matching I might assume "yt" maps to "extra edges," but that could be a red herring, because I don't see how "clicks" maps to "xt" or where red and purple come in.
It could also help if "RNN" had been defined, but it wasn't...
I guess "decades of clicking things" is a domain one can be knowledgeable in? Usually boxes with draggable things on the top are inputs, on the bottom are outputs.
It flows in the reverse direction of what I’d expect (out is at the top, in is at the bottom, the opposite of any visual programming or diagram I’ve ever seen). It’s also represented in a way I’ve personally never seen ANN’s drawn. I thought you had to connect the dots in the middle and thought “huh? It mustn’t work on mobile” until I read the comments and tried again. And this is with decades of clicking things domain knowledge, and a small bit of neural networks knowledge.
Hmm, my decades of clicking things lead me to assume a flow from top to bottom, so something with a connector on the bottom is a source that will output data through that connector, and something with a connector on the top is a sink that will accept input data through that connector.
I didn't encounter the issue you described with adding an already existing edge though -- the counter stays the same
The answer to the RNN is non-trivial, but perhaps the Recurrent in RNN might be of some use.
Even knowing what the R in RNN stands for requires some pre-existing knowledge of neural networks. Which isn't something that's helping _me_ learn about them, particularly.
There is a help button in the top right that shows you need to focus on the circle node connectors to "solve" the problem.
At least for the first example:
You have a blue box labeled xt with a single node connector at the top. You have a purple box labeled ht with a node connector at the top and bottom. You have a green box labeled yt with a node connector at the bottom.
The game tells you at the top you have 3 edges remaining.
Dragging a line from one node to another, releasing, and it turning green means you have placed a "correct" connection.
i.e. xt -> ht [bottom] will give you a green line.
Repeat until you have all edges solved for.
It's not spelling it out for you, but once you complete the "game" you'll at a very high level understand the moving pieces within the network, and the "flow" of data.
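If it helps, once those three edges are in place the flow of data is roughly this (a NumPy sketch with made-up sizes and random weights, not the game's actual internals):

    import numpy as np

    rng = np.random.default_rng(0)
    n_x, n_h, n_y = 3, 4, 2
    W_xh = rng.normal(size=(n_h, n_x))   # edge: xt -> ht
    W_hh = rng.normal(size=(n_h, n_h))   # edge: ht -> ht (the recurrence)
    W_hy = rng.normal(size=(n_y, n_h))   # edge: ht -> yt

    h = np.zeros(n_h)                        # initial hidden state
    for x_t in rng.normal(size=(5, n_x)):    # five time steps of input
        h = np.tanh(W_xh @ x_t + W_hh @ h)   # new h_t from x_t and the previous h
        y_t = W_hy @ h                       # output at this step

The self-loop on ht is just the W_hh @ h term: the same cell is reused at each step.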
The help didn’t exist originally: https://news.ycombinator.com/item?id=40430064
Yeah, that was one of my concerns launching this v1. So in v2 I plan to add a little explanation for each neural network, perhaps along with an animated video showing what the final architecture looks like. Thanks for the feedback.
At the moment it doesn't feel like a game but a demo or tech preview of the UI. For a game I'd expect to have rules or a goal, and then be guided through it as the complexity increases. If the goal of the game is to learn, this would be a great medium for it. Good luck with v2, I hope to remember and see it when I can more actively enjoy it.
The goal is to build the network in the fewest clicks. I agree and have noted many of the comments about adding more explanations for those unfamiliar with the particular network.
Uh, just to be clear - having an example network is one thing. But more importantly, explaining the blocks is what is being asked for.
If you play any game around building computers from logic gates, or any factory optimization game, the idea is to start with components, understand thoroughly their tiny single function, then begin to combine them in different ways.
So yeah - seeing an RNN would allow me to draw the connections, but what I want to understand (what would help me learn from this game) is what h(x) means. Before we even construct a network, we should have static inputs to play with those blocks and see what they do. Ideally, we should be asked to construct those blocks from other constituent parts (logistic functions? I dunno).
I've only been able to solve the RNN problem because out of frustration I tried every combination possible. It was 3 edges.
When I got to the second one without any stated goal or meaning, and it said "19 edges", I gave up.
I had the same experience. A better description would be a tool to test your understanding of neural network architectures, not a tool to teach you about neural network architectures.
True, but if it were also adapted to meet the promise of the current title, that would be really nice.
I am happily taking all this feedback from HN to improve the learning experience and add more graphs for v2. Thanks all!
Yeah - you aren't "learning" anything. You're guessing and checking until it lets you go. No idea what blocks do what, or why you're connecting them - which would be the basis for learning.
This would be great if it would train a toy-problem, and then of course it would only work if you built the right architecture.
I suspect that's a bug because if you connect Xt to Ht twice... it succeeds.
Edit: This no longer repros and only the correct solution works now from what I can tell.
It won't let me do that.
It'd be nice if there were more explanation to prime you about the concept so you don't simply revert to guess and check.
Hint: The R in RNN stands for Recurrent.
What NN stands for is left as an exercise for the reader.
This was very helpful.
All this game convinced me to do is to bury my head in the sand and hide from ML further.