Wow, that is a great side project, and a great README to boot. I've been meaning to work through Nand to Tetris after playing around a bit with Ben Eater's 6502 computer (https://eater.net/)
Ground up projects like this are fascinating!
It’s also neat how “ground” has been deepening. It used to mean building a mainframe from source. Then building a compiler. Now building up from logic gates.
How much deeper can you get? Building a mainframe out of Gödel numbers?
One curious idea my friends have entertained is to go one level even deeper and emulate, on the web, the very transistors that make up the NAND gates. It would certainly spell disaster for performance, but it's without a doubt interesting.
That would be fascinating!
Do you know any resources that document the transistor to logic gate translation?
In https://nandgame.com/ (mentioned elsewhere, a game version of NAND to Tetris) you start by making a NAND gate out of relays. The relays are electromechanical components, but you can choose to think of a transistor (within certain "regimes") as being directly electrically equivalent to one. (This simplification isn't appropriate for all engineering tradeoff purposes, although I don't know the details of how it fails or how we can benefit from knowing more about transistors' actual behavior.)
The electromechanical relay is a very simple device to understand, if you're willing to just believe that electromagnets produce magnetism (without understanding why the universe works according to Gauss's laws on the relationship between electric current and magnetism). It's a coil of wire where an electric current produces magnetism that physically pulls a switch open or closed.
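If it helps to make the switch analogy concrete, here's a toy switch-level model of a NAND gate: two ideal switches (relays, or NMOS transistors in this simplified view) in series between the output and ground, with a pullup supplying 1 when the pull-down path doesn't conduct. Real transistor behavior is much messier, of course.

```python
# Idealized switch-level model of a NAND gate: two series "switches"
# pull the output low only when both inputs are on; otherwise the
# pullup resistor supplies a 1.
def nand_switch_level(a, b):
    pull_down = (a == 1) and (b == 1)  # series path conducts only if both close
    return 0 if pull_down else 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand_switch_level(a, b))
```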
Like... the physics?
If not: a NAND gate is made of just two transistors in NMOS logic (four in CMOS), so if you mean emulating how the transistors should behave, I don't think it would affect performance by more than ~50%
Wow, seriously impressive. And the fact that this is the work of basically a high-schooler.
I fear for the kind of competition my kids will have just to make it to college.
This is a natural extension/expansion of the “NAND to Tetris” course on Coursera, which is free if you don’t want to be graded.
The course walks you through it all, and there is an accompanying book that you do not need to buy to finish the course.
Anyone who wants to do this and can focus on it for enough time can complete it and extend it into whatever shape they like, like this person.
It really is a good course.
Absolutely true, I'm working my way through it now; it's challenging and time consuming, totally worthwhile imo.
I primarily used the physical book to learn about the nand2tetris platform. I highly recommend it; it's an enthralling read
Liar. You used NAND gates and a clock.
...a clock which can be made from a ring oscillator, consisting of an odd number of NAND gates wired as NOT gates.
How do we know that that will converge to a single constant period of oscillation? Could you have a few different-sized square waves continue to cycle through the circuit?
(I've never built or simulated that, I'm just trying to imagine what could happen!)
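I tried sketching that question in a toy unit-delay simulation, a big idealization (real gates have analog delays and limited bandwidth, which is generally what suppresses extra wavefronts). Interestingly, both behaviors show up in the model: a staggered start settles into the fundamental period of 2N gate delays, while a degenerate all-equal start sustains a fast "multi-wave" mode forever.

```python
# Unit-delay toy model of a ring oscillator: inverter i's next output is
# the NOT of inverter i-1's previous output (the wire wraps around).
def simulate_ring(initial, steps):
    state = list(initial)
    trace = [tuple(state)]
    for _ in range(steps):
        state = [1 - state[i - 1] for i in range(len(state))]
        trace.append(tuple(state))
    return trace

# Staggered start: repeats with the fundamental period 2N (here, 6 steps)
print(simulate_ring([1, 0, 0], 6))
# All-equal start: a period-2 multi-wave mode that never decays in this model
print(simulate_ring([0, 0, 0], 2))
```

So in the idealized model, yes, multiple wavefronts can persist; it takes the physics of real gates to pick out a single period.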
Oh wow, I didn't actually know that. Thanks for the interesting trivia
NAND is popular probably because of nand2tetris, but it's worth mentioning that NOR is also a universal gate; and many early computers like the https://en.wikipedia.org/wiki/Apollo_Guidance_Computer#Logic... were entirely made of NOR gates.
I thought it was the other way around: nand2tetris used NAND because it was already popular? At least I remember hearing in university that NANDs are used for everything. I can't remember why, though (and why not NOR, for example).
That's because in NMOS logic (maybe there's a symmetric reason in TTL, but I don't know for sure) you can implement a NOR with two parallel transistors between a pullup and ground, producing a zero output if either input is high. The symmetric NAND circuit requires two transistors in series, and therefore switches more slowly.
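To the universality point: here's a quick sanity-check sketch showing the other basic gates falling out of NOR alone. These are just Boolean identities (De Morgan and friends), nothing hardware-accurate.

```python
# NOR is universal: every basic gate below is built from NOR alone.
def nor(a, b):
    return 0 if (a or b) else 1

def not_(a):      # NOT x = NOR(x, x)
    return nor(a, a)

def or_(a, b):    # OR = NOT(NOR)
    return not_(nor(a, b))

def and_(a, b):   # AND(x, y) = NOR(NOT x, NOT y), by De Morgan
    return nor(not_(a), not_(b))

def nand(a, b):   # NAND = NOT(AND)
    return not_(and_(a, b))

print([nand(a, b) for a in (0, 1) for b in (0, 1)])  # [1, 1, 1, 0]
```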
Cool project. It reminds me of a theoretical issue. As the project page says, this system is clearly Turing equivalent. Since it runs software, it even implements a _universal_ Turing machine. But the design uses only (synchronous) sequential logic [1], and Wikipedia seems to suggest that automata theory considers sequential logic equivalent only to finite state machines, not Turing machines. Isn't that clearly a major bug in automata theory?
My guess is that automata theory considers it critically important that a "Turing machine" has an infinite tape, while intuitively it instead seems relevant that it has something like a tape at all: some sort of random-access memory, even if it is finite. I think such a memory system can't be implemented with classical finite state machines, at least not with comparable time complexity for reads and writes, but it can be realized with sequential logic.
Real-world computers are equivalent to linear bounded automata, not true Turing machines, because they have finite memory. This technicality is mostly ignored because a computer with a large finite memory is a decent enough approximation to a Turing machine for practical purposes. But, for example, the halting problem is decidable for linear bounded automata — because there are only finitely many states, every computation must either halt or eventually revisit an earlier state and get stuck in a loop — so in theory it’s an important distinction.
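The cycle argument can be made concrete with a toy sketch, for a machine whose states we can explicitly enumerate and compare (this obviously doesn't scale to a real computer's astronomically large state space):

```python
# Halting is decidable for a finite-state machine: run it and watch for a
# repeated state. `step` maps state -> next state; `halted` is the stop test.
def halts(step, state, halted):
    seen = set()
    while not halted(state):
        if state in seen:   # revisited a state: it will loop forever
            return False
        seen.add(state)
        state = step(state)
    return True

# A counter mod 5 starting at 3 reaches 0 and halts:
print(halts(lambda s: (s + 1) % 5, 3, lambda s: s == 0))  # True
# A 2-cycle (0 -> 1 -> 0) never reaches 2:
print(halts(lambda s: 1 - s, 0, lambda s: s == 2))        # False
```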
It seems you didn't really read my comment, though? I was arguing that the relevant difference between Turing machines and FSMs is the memory system, not the infinite tape. It's interesting that the Wikipedia article on LBAs doesn't tell us whether they are considered equivalent to FSMs. It seems that by standard automata theory they must be, which intuitively isn't correct, since they are much more similar to Turing machines.
Seymour Cray would have loved this. Some of his computers were all NAND gates.
The supercomputers (all?) used wirewrap rather than PCBs. I heard a story once about someone coming in for a demo of a supercomputer and Cray realized there was a bug in the hardware during the demo and while the potential customers were at lunch, he rewired the machine to fix the bug.
Right. Seymour Cray said that the two big problems in supercomputing were "the thickness of the mat" (of wires on the backplane) and getting rid of the heat.
This is a Cray-I backplane.[1]
Fantastic work. NAND to Tetris helped me land my first job out of college.
How did it help?
Resume padding and conversation starter during interviews. It also filled in some gaps in knowledge.
This is amazing work. I wanted to build something similar (virtual) while I was taking the Nand2Tetris course. I'm so impressed that you actually did it. You must have a really good understanding of how computers work now.
And I was just thinking about the same thing this morning, using SVG to model the basic components. And lo and behold, somebody has done a magnitude more amazing job than what I was imagining!
Can anybody recommend challenges similar to this one?
Try emulating the Message Passing Interface. It could be a lot more challenging, though.
Curious, how many NAND gates are there in total?
I've inspected my code closely. Every clock cycle, the NAND gate is used 3,234 times :)
See also https://nandgame.com
Nice job. Now we should program it in subleq2[0] :D
[0] https://en.wikipedia.org/wiki/One-instruction_set_computer
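For anyone curious, a subleq machine is tiny to interpret. Here's a minimal sketch, using one common convention (variants differ on I/O and how to halt):

```python
# Minimal SUBLEQ ("subtract and branch if less than or equal to zero")
# interpreter. Each instruction is three memory cells: A, B, C.
def run_subleq(mem, pc=0, max_steps=10_000):
    mem = list(mem)
    for _ in range(max_steps):
        if pc < 0:                    # a negative pc halts, by convention
            break
        a, b, c = mem[pc:pc + 3]
        mem[b] -= mem[a]              # mem[B] -= mem[A]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# One instruction at address 0: mem[4] -= mem[3] (5 - 7 = -2),
# result <= 0, so branch to -1 and halt.
print(run_subleq([3, 4, -1, 7, 5]))   # [3, 4, -1, 7, -2]
```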
Doing a design for this (specifically, design a microcoded, pipelined RISC processor, from the bottom up, with nothing but NAND gates) was the main problem on the Computer Hardware quals exam at UC Berkeley in the early 1990s. We didn't have to physically build it, though, just produce the detailed design on paper.
Awesome work! Bookmarked for in-depth perusal later. As a fan of NAND to Tetris who never made it all the way through, I look forward to poking around in your project.
I could make a few college classes out of this. Well done material.
Incredible achievement! Good job.
this is fantastic! great work...
Great work! You have seen the levels of abstraction that most programmers won't throughout their careers.
Turing Complete[0] is a fun game similar to this where you create your own computer from NAND gates, including your own assembly language.
[0] https://store.steampowered.com/app/1444480/Turing_Complete/
Thank you. First principles FTW!
Would it be at all feasible to build a physical NAND-to-tetris computer? Or is it purely a virtual exercise?
There's this one that goes one step beyond that, it's built out of 40,000 discrete transistors: https://www.youtube.com/watch?v=z71h9XZbAWY
EDIT: there's more information here: https://www.megaprocessor.com/
I kind of want something midway between the FPGA version and the all-transistor version, something that just uses 7400-series chips (the standard 7400 packs four 2-input NAND gates into 14 pins; presumably there's a larger package with six). Heck, I think even something that goes ahead and uses the full panoply of basic logic chips available could be kind of cool to see.
I think Ben Eater's 8-bit computer is closer to what you want: https://eater.net/8bit/
It's been a few years since I studied it (I even built the clock module, the registers, and the ALU), but from what I remember the biggest departure from what you want is that the control logic (from decoding instructions to deciding which sub-units to activate for each one) is done with an EEPROM instead of individual logic gates, as described here: https://eater.net/8bit/control
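The idea is that the ROM acts as a giant lookup table from (opcode, microstep) to a word of control signals, replacing what would otherwise be a network of gates. A rough sketch of that structure, with made-up opcode values and signal names (not Ben Eater's actual ones):

```python
# ROM-based control logic as a lookup table: (opcode, microstep) -> the set
# of control lines to assert. All values and names here are hypothetical.
CONTROL_ROM = {
    (0b0001, 0): {"PC_OUT", "MAR_IN"},   # fetch: put the program counter on the bus
    (0b0001, 1): {"RAM_OUT", "IR_IN"},   # fetch: load the instruction register
    (0b0001, 2): {"IR_OUT", "A_IN"},     # execute: load register A from the operand
}

def control_signals(opcode, step):
    # an unprogrammed ROM entry asserts nothing
    return CONTROL_ROM.get((opcode, step), set())

print(control_signals(0b0001, 1))
```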
Slu4 has a great series of videos about exactly what you are looking for, with his Minimal 64 computer https://youtu.be/FJsnKu20ch8
61 TTL chips mentioned in this 8 minute overview https://youtu.be/3zGTsi4AYLw
Probably doable, but it takes a lot of dedication. Debugging such physical builds, in particular, is crazy.
A few nand2tetris fanatics have actually done this! And by a few, I mean quite a lot of people. Here's one such hardware project of nand2tetris: https://gitlab.com/x653/nand2tetris-fpga/
But you can Google "nand2tetris fpga" for more.