Off topic, but I did not know that Hatetris and "There Is No Antimemetics Division" have the same creator: qntm!
Tetris, er, um, I mean the use of interlacing cube blocks in random ways without intruding on existing IP, is such a deep well of variations on a theme, from the hardcore speed cults to a jelly-gummy wiggle version to my fave that adds a z-axis [1] -- it just keeps on entertaining.
> Tetris, er, um I mean
TTC has such an iron grip on Tetris that even clones which don't use the name, or anything similar to it, are still at risk of being shut down; it's ridiculous. It's possibly the only example of game mechanics being de facto copyrighted, in spite of game mechanics ostensibly not being copyrightable, thanks to some legal sleight of hand: they successfully argued that the look of Tetris is their exclusive trade dress. That look is inextricably tied to the mechanics; there's no way to make a Tetris clone which doesn't look like Tetris.
Lego tried the same strategy, but it didn't work for them. What's different for TTC?
Lego's protection was based on patents. They tried to play the "make a slight variation and extend the patent" game but were slapped down. Now anyone can legally make Lego-compatible blocks, though of course they can't label them as Lego. (Although based on the various ones I've gotten as presents, Lego never had anything to fear. The knockoffs sucked before the patent expired and they didn't get any better afterwards. It's still a terrible idea to let knockoffs mix with your real sets, and that's 0% Lego "purism" and 100% pragmatics. Maybe there's a good specific knockoff somewhere, but the odds seem poor.)
TTC protection is based around other IP constructs. Here's a good sample link from a source that seems to know what is up, which I link to for the legal analysis rather than the details of a 2009 court case: https://www.gamedeveloper.com/game-platforms/exclusive-i-tet...
Your information is outdated. Cada is at parity with Lego now; they have even introduced brick designs that Lego later copied (flip-flop Technic beams). The rest of the Chinese brands are similar, and even AliExpress mystery bricks are serviceable. Sometimes their color matches are not exact, but Lego had the reddish-brown issue, so that's also at parity.
How long does the license persist? Wikipedia says the first version came out in 1985. Or is it somehow a life of the author + century kind of deal?
IANAL, but a quick Google suggests that trade dress protection can be extended indefinitely, the only requirement is that it is still being actively used.
TTC exists solely to collect rent on Tetris so they're never going to let it slip away if they can help it.
> That look is inextricably tied to the mechanics, there's no way to make a Tetris clone which doesn't look like Tetris.
If we lived in a just world, that would make the look uncopyrightable, rather than making the mechanics copyrightable.
Do not miss the P.S. section at the end. It is very inspiring! Quote:
"We’re not researchers at all. We’re just two people who became obsessed with a problem and put their meager knowledge to use, beating rocks against rocks in different configurations until something resembling a spearhead came out. ... You, too, can do something like this. Find a problem. Become obsessed with it. Learn everything you can about it. Fall down dead ends. Give up, and then keep thinking about the problem at night. Have Eureka moments that lead you down other dead ends. ... "
Sounds to me like they are superb researchers.
(Eureka moments leading you down more dead ends is so well put, too.)
Yup, this IS research from my perspective.
Superb researchers indeed. Most would lose all enthusiasm after the riddle was "solved", yet they still found it in them to document it.
As a former researcher I can attest this is how research is done :-)
Related:
Losing the World Record in Hatetris (2023) - https://news.ycombinator.com/item?id=36558013 - July 2023 (1 comment)
Getting the World Record in Hatetris - https://news.ycombinator.com/item?id=32495842 - Aug 2022 (1 comment)
Hatetris – Tetris which always gives you the worst piece - https://news.ycombinator.com/item?id=27063894 - May 2021 (245 comments)
Hatetris - https://news.ycombinator.com/item?id=4846607 - Nov 2012 (2 comments)
Hatetris: Tetris That Hates You - https://news.ycombinator.com/item?id=1253492 - April 2010 (19 comments)
The "Losing the World Record..." article is especially worth reading.
Maybe we can have another thread about that one specifically, once enough time has gone by.
This is gold.
Reading the back-and-forth about the Hatetris world record progression has the Summoning Salt music playing in the back of my mind, and I absolutely want to see one of his YouTube expositions on the subject.
This shows how important the right title is in your news article or blog post.
The post that got 9 points and 1 comment should have been called "Hatetris has been SOLVED (infinite score)" instead of "Losing the World Record in Hatetris".
And indeed, the original Hatetris has been solved: a loop has been discovered that lets you get any score.
I also burned myself on an overambitious machine learning project in the past. I had, and still have, little practical experience, but I think I learned a common beginner lesson: existing ML architectures transfer to new problems worse than we think. The only sane way to do ML is to reproduce something that works and then make small incremental changes.
I think it's because most people's mental models for machine learning aren't great. Neural networks are not like brains or neurons; they're like a really convoluted quadratic regression calculator over an arbitrary number of variables. So if you stick to domains that this is appropriate for, you can actually spin together some pretty neat stuff. I think once one understands the XOR Problem [1], it all starts to mentally come together pretty quickly.
[1] - https://www.educative.io/answers/xor-problem-in-neural-netwo...
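The XOR problem is easy to demonstrate concretely: no single linear threshold unit can compute XOR, but two layers can. A minimal sketch in Python, with weights picked by hand for illustration (not trained):

```python
# Minimal illustration of the XOR problem: a single linear threshold
# unit cannot compute XOR, but a two-layer network can. Weights are
# hand-picked for illustration, not trained.

def step(x):
    return 1 if x > 0 else 0

def xor_net(a, b):
    # Hidden layer: h1 fires on "a OR b", h2 fires on "a AND b".
    h1 = step(a + b - 0.5)   # OR gate
    h2 = step(a + b - 1.5)   # AND gate
    # Output: OR but not AND, i.e. XOR.
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0
```

The point is that the hidden layer bends the input space so the output unit's single linear cut suffices, which no amount of tuning a single unit's weights can achieve.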
I think one of the general takeaways about ML is that, barring a few experts, it's really challenging to reason about your system. Like, yes, you might expect that a convolutional layer will behave in a specific way under ideal conditions, but the way that behavior manifests is often wildly hard to predict during the early days.
I agree that step 1 for most beginner projects should be to start with something that works and then tweak.
Very good article. I hadn't seen it before. I wonder why it never mentions SAT solvers at all. I can believe that the approach is hopeless, but a few words on the topic would still have been nice.
From Dave:
Short answer: SAT solvers are hard.
Long answer: I actually discussed it with Tim once, long after part 2 of our blog post and after the whole thing settled down. Tim was making a SAT solver based on a post about a homemade Sudoku program that got out of hand (https://t-dillon.github.io/tdoku/), and HATETRIS has a binary grid representation, so it's a logical thing to attempt. So, how would you answer the question of the longest possible game with SAT? The idea would be that you can start with a set of wells S_0, generate a new set S_1, and continue generating sets of all possible wells until you find some N for which S_N is not satisfiable; N-1 is therefore the longest game.
Suppose S_0 consists of the starting well, W_0. W_0 is a conjunction of 160 unit clauses (one Boolean variable per cell of the 16x10 well), each initially negated:
W_0 = !x_0_0 && !x_0_1 && ... && !x_15_9
Once you have that, you need some way of getting from W_0 to its possible descendants. There are 2457 possible piece positions in a standard 16x10 HATETRIS well, each of which interacts with at most four squares (fewer for the piece positions within the top four lines), and each of which can be reached at most four ways (from another piece moving down, moving right, moving left, or rotating). This puts a rough estimate of ~39,000 clauses needed for a function which converts W_0 into its children: W_0 -> W_1a || W_1b || W_1c || ... = S_1. Which isn't too bad, as far as SAT solvers go.
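For what it's worth, the ~39,000 figure is consistent with one plausible back-of-envelope reading of the numbers above (2457 positions, at most 4 squares each, at most 4 ways to reach each); a quick sketch, assuming that's the intended arithmetic:

```python
# Back-of-envelope reproduction of the ~39,000-clause estimate:
# 2457 piece positions in a 16x10 HATETRIS well, each interacting
# with at most 4 squares, each reachable at most 4 ways
# (down, left, right, rotate).
positions = 2457
squares_per_position = 4
ways_to_reach = 4

clause_estimate = positions * squares_per_position * ways_to_reach
print(clause_estimate)  # 39312, i.e. roughly 39,000
```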
The problem is that this is very similar to what our first version of the emulator did, and that version was a hundred times slower than our current version. SAT is NP-complete in the worst case, and without some huge simplification from putting it in Boolean form, it didn't seem likely to be worth the additional cost. I think there's still a possibility for some kind of solver to aid searches, e.g. "Given this specific well, you need to clear lines 5 and 6 in order to clear line 4, and you need to clear lines 7, 8, and 9 in order to clear line 6...", and I think certain properties (such as the minimum number of pieces needed to clear a given line) are computable with SAT, maybe even to the point of making a pruned-but-provably-optimal game tree search feasible.
Putting the raw emulator in SAT form is natural. Putting constraints like these in SAT form requires coming up with a new level of abstraction ourselves. Our only attempt at it was in the Mumble Mumble Graph Theory section; what we learned is that making a new level of abstraction is a lot harder, and we don't know how to do it.
But I mean, what happens if you just chuck those 39,000 clauses into Z3 and let it spin, to see what happens? Solvers use a bunch of cute backtracking heuristics to avoid exploring too much of the state space, and they keep getting better at it. In particular, SAT is sort of like the halting problem: there is a class of easily identifiable satisfiable instances and a class of easily identifiable unsatisfiable ones, and the difficult instances are a narrow band between the other two. SAT solvers are so successful because so many naturally occurring instances turn out to be easy.
I absolutely love love love seeing things like this.
I've been doing a fair bit with Monte Carlo search lately to explore different spaces, but mine has been in the context of Llama.cpp and Magic: The Gathering. [1] I'm going to compare and contrast my approaches with the approach used by the authors.
First, I want to ask a question (either to the authors, or to the general audience), about trees vs. DAGs. The author wrote:
> It was only after the initial tree implementation that we considered that there would be a lot of repeated wells: after all, in this game you can reach the same position in a number of different ways. A lot of deliberation and a re-factored codebase later, we opted for a directed acyclic graph (DAG) instead, taking identical positions and merging them into the same entry in a graph, rather than making them distinct entries on a tree. This complicated things significantly, but reduced our memory needs by an order of magnitude, at least. Refactoring from tree searches to DAG searches was more work than we’d expected to put in to the project, but was yielding promising results.
I'm curious about why this was such a large refactor. I've made this change in two different projects now, and in both cases, I was able to change from an exhaustive tree structure to what is essentially a DAG by simply searching for duplicate states and culling any branches that are equivalent. For instance, I implemented this change in Llama.cpp and got a >400% speedup -- and it was essentially a two-line change -- no drastic refactoring needed. [2]
Am I missing something here? How is a culled tree structure fundamentally different from a "true" DAG -- and more importantly -- would I stand to gain even more performance by switching to something else? I don't see what's so complicated about this, but I feel like I must be missing something.
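For concreteness, here's a toy sketch (not either project's actual code) of the distinction as I understand it: culling skips subtrees rooted at already-seen states, while a true DAG additionally lets every parent share one memoized result per state:

```python
# Toy sketch of "tree search with duplicate culling" versus a true
# DAG search. States are ints; children of state n are n+1 and n+2,
# so many paths reach the same state (like reaching the same well
# via different move orders).

def tree_search_with_culling(start, limit):
    """Skip states we've already seen: each state is expanded once."""
    seen = set()
    order = []
    stack = [start]
    while stack:
        state = stack.pop()
        if state in seen or state > limit:
            continue
        seen.add(state)
        order.append(state)
        stack.extend((state + 1, state + 2))
    return order

def dag_search(start, limit):
    """Same culling, but also memoize a per-state result so every
    parent shares the single computed value (the DAG payoff)."""
    best = {}  # state -> longest path (in moves) from state to limit

    def longest(state):
        if state > limit:
            return -1
        if state not in best:
            best[state] = 1 + max(longest(state + 1), longest(state + 2))
        return best[state]

    return longest(start)

print(len(tree_search_with_culling(0, 20)))  # 21 distinct states
print(dag_search(0, 20))                     # longest path: 20 moves
```

With pure culling each duplicate state is merely not re-expanded; with the DAG, work computed for a state (here, its longest remaining path) is reused by every parent that reaches it, which is where the extra memory and speed savings come from.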
> we discovered that the majority of our time was now spent accessing the hash of positions, since whenever a new position was explored, we had to check if it already existed. A bit of a further dig and we discovered that most of that time was spent running a hashing algorithm on the position's data for comparison. Which again, made sense…but we knew all our positions were unique among each other. We didn’t need to do any hashing, we could just use the position’s representation as a key directly, if we could find a reasonable way to encode it. So we replaced the hashing algorithm with our own custom version that encoded the position as a binary representation of position, rotation and piece type. This was much faster than having to hash a position, and we knew it guaranteed uniqueness. This doubled the speed of our move finder.
In the case of my Magic: The Gathering searcher, I am using a text summary of the game state (essentially `gameState.toString()`) -- what's on the battlefield, what's in the hand, what's in the graveyard, how much mana is available, etc -- as the key. I sort the cards in each location alphabetically, so that it doesn't matter which order they were put there -- only what's actually there. This GREATLY increased the speed of execution, but again -- it was just a simple change.
That said, the flame graph showing how much time is spent in string processing is eye-opening, and it makes me think that I should probably profile my Python code with a similar tool, because a binary representation might be worth it.
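For instance, here's a sketch of what such a binary key might look like (a hypothetical bit-packing, not the authors' actual encoding), compared with a string key:

```python
# Hypothetical packing of a piece state into one small int, versus a
# string key. For a 16x10 well: x in 0..9, y in 0..15, rotation in
# 0..3, 7 piece types. Field widths are assumptions for illustration.

def string_key(x, y, rotation, piece):
    # Readable but slow: builds and hashes a fresh string per lookup.
    return f"{x},{y},{rotation},{piece}"

def packed_key(x, y, rotation, piece):
    # Bit-pack: 4 bits x, 4 bits y, 2 bits rotation, 3 bits piece.
    # Uniqueness is guaranteed as long as each field fits its width.
    return (x << 9) | (y << 5) | (rotation << 3) | piece

# Sanity check: distinct states map to distinct keys.
keys = {packed_key(x, y, r, p)
        for x in range(10) for y in range(16)
        for r in range(4) for p in range(7)}
print(len(keys))  # 10 * 16 * 4 * 7 = 4480 distinct keys
```

A small int hashes to itself in CPython, so the dictionary lookup skips the per-character hashing work entirely.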
It's also fascinating to me that the author attempted to train an ML model to play the game. My initial thought was that an exhaustive search was the only way to approach this (because it's perhaps the only way to arrive at a max score that is provably correct), but I think I am not truly appreciating the vastness of the search space at play here, and exhaustively searching all possible inputs might just be too unimaginably large.
All that said, this read has kept me on the edge of my seat. Kudos to the developers and authors for writing this up for us -- and CONGRATULATIONS ON CAPTURING THE RECORD!!! Even though it was later surpassed, that should in no way diminish the satisfaction of the accomplishments here.
It's an absolute blast reading this. I've enjoyed it, I've learned a ton, and I'm going to try implementing some of their suggestions in my own projects!! Thank you!! It's articles like this that keep me coming back to Hacker News day after day. :)
> It's also fascinating to me that the author attempted to train an ML model to play the game. My initial thought was that an exhaustive search was the only way to approach this (because it's perhaps the only way to arrive at a max score that is provably correct), but I think I am not truly appreciating the vastness of the search space at play here, and exhaustively searching all possible inputs might just be too unimaginably large.
I think when we did the math with our best emulator it would take 100 billion years? There are, by our estimation, for a game lasting a thousand moves, 10^23 possible wells. With current computing, even if we threw all the resources of the planet at it, we'd still be a few orders of magnitude short of finishing in our lifetimes. It is a very, very, very large search space.
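As a sanity check on those figures, dividing 10^23 wells by 100 billion years implies a throughput on the order of tens of thousands of wells per second, which is at least the right shape for a fast emulator:

```python
# Sanity check of the quoted figures: if there are ~10^23 possible
# wells and exploring them would take ~100 billion years, what
# emulator throughput does that imply?
wells = 1e23
years = 100e9
seconds = years * 365.25 * 24 * 3600  # ~3.16e18 seconds

wells_per_second = wells / seconds
print(f"{wells_per_second:,.0f} wells/second")  # roughly 30,000
```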
> Searching for duplicate states and culling any branches that are equivalent.
Branch pruning and comparison are both expensive, relatively speaking. In a perfect world, you want a hashmap lookup for duplicate nodes, so deciding whether a node is new is O(1), rather than having to look at your node and all its children to know if it's a duplicate. It doesn't matter unless you're really chasing performance.
Also, this was our first serious Rust project, so implementing it was a bit more daunting than expected. In a sane codebase it's probably not a bad refactor; at the point we did it, we had six methods with a bunch of stuff jammed into them and names like `explore_tree`, so it was a challenge, made significantly easier by Rust's types, and more frustrating by the borrow checker.
That's very helpful, thank you!
I find it interesting that the author's first approach was to use a black-box neural network instead of the evidently simpler beam search. As far as I'm aware, beam search was widely considered the simple method for game optimization just a decade ago.
Sure, new methods will always replace old methods, just like CNNs replaced SIFT for image processing. However, I feel that beam search is one of those elementary methods that you'd always want to check first, similar to A* and quicksort. Even though there are fancy neural networks for game optimization, pathfinding and sorting, it's easier to get started with elementary methods because they're simple and tractable.
The main impetus was trying to show that you could do AlphaZero on a regular computer. It didn't pan out that way.
“As long as you keep thinking about the problem, even if its in short bursts every few years, you’re still making progress. And if you never finish? If all you find are side-paths and obstacles, and it turns out the entire mission was doomed from the outset? That’s okay too. Projects like this nourish us, because there’s a part of the human mind that wants nothing more than to climb the mountain, rappel into the cave, explore the unknown and grapple with it.”
In addition to the impressive technical details, this is some really beautiful writing
> For some months, we’d had a very interesting idea. The idea consisted of the words “graph theory”, which we’d occasionally gravely recite to each other and nod knowingly, with some vague gesticulations, and not much else.
Yes! This part and the cloud budget gag were also awesome. He's a great writer!
My favorite/least favorite Tetris variant is Not Tetris 2, the one that doesn't snap to a grid. It's maddening. All this guy's games are mad at you.
Reminds me of this lovely game right here: https://www.trickytowers.com/
I tried writing an AlphaZero clone to play Chess on my home PC (I only had an RTX 3070) and I failed for essentially the same reason as they mentioned: iteration time was too slow and you couldn’t tell if the model was getting any better at all after weeks of training.
I thought I’d work on it further and maybe write some blog or put some dev vlogs on YouTube but never got around to doing it. Might do it some day.
Till then I'll just post some links to my GitHub if anyone wants to check out what I was doing:
1. https://github.com/krkartikay/AlphaZero-proto
Passing this on from Dave, since he doesn't have a HN account:
He recommends trying NNUE on a CPU: https://www.chessprogramming.org/NNUE
Mostly because he hasn't seen anyone try it on the personal computer scale and would be interested to see how it pans out
The guy who wrote Hatetris, Sam Hughes, aka qntm.org, also writes some good sci-fi.
oh god I've never hated a game so quickly
Since 2005 I've played "bastet", which has the same overall idea with a slightly different algorithm: https://fph.altervista.org/prog/bastet.html
Heed my advice, do not play it.
It's horrible; this is what hell will look like.
Forever trapped waiting for a piece that will never come - by design.
The article mentions the Tetris Effect, which means you do an activity so much that it enters your dreams, thoughts, etc. This happens to me as a developer, too, especially with mind-bending stuff like miniKanren, learning Scheme for the first time, or learning Emacs. I love that it has a name.
"Hatetris", not "Hatris", which is a different game.
So after reading "Getting the World Record" I was brimming with ideas on how one could improve upon their work, only to have them squashed upon reading "Losing the World Record." All parties involved have done an excellent job breaking the game, and aside from the (really) hard questions of understanding the game, there's not much left except further exploring the parameter space.
I simmered on the latter overnight, and a few thoughts occurred to me:
- a parameter for fullness (holes + filled blocks that are below the "surface"). It might be its own parameter, or used to augment other parameters to discourage behavior that might lead to the endgame.
- reachable surface height: rather than taking the lowest height of the surface, compute the reachable height of the following piece
- an alternate definition of reachable surface height: the lowest point any of the seven pieces can reach (perhaps augmented by the number of pieces that can reach it)
https://qntm.org/ra is also great, especially if you've read Sam's work and want more. It will appeal to technical types: magic is real and has a rigorous mathematical theory behind it, which is cool as far as it goes, but he takes it in an interesting direction from there.
Ra shows the main flaw of that author, in my eyes: he is incapable of writing optimistic endings.
I despise stories that are awesome enough to make me invested, but then have the main characters (essentially) lose.
Fine Structure's ending is very optimistic, in fact. (edit: a lot of his short stories are, too.) But yeah, Ra is rather bleak. I guess it's not for everyone, but if you came in from "There Is No Antimemetics Division" it won't bother you.
Exactly: because I came from Antimemetics, it bothers me that this is his default.
Then I'm confused about why you followed qntm from Antimemetics, which is already pretty bleak, but not enough to prevent you from reading Ra, which was somehow enough to put you off of his work forever. Why did you stop exactly there, reading more from an author who did something you don't like but not reading any farther to learn that he is capable of more?
Underrated book IMO.
I enjoyed it more than The Three-Body Problem: the characters are better written, and their motivations make more sense despite having much less time for actual development.
I was really having fun trying to figure out how to make the best of an impossible to solve situation. And the SCP lore is wonderful.
I haven't read _There Is No [...]_, but I have read _Fine Structure_, another book by qntm. It's an exhilarating read with some forgivable issues, and I'd heartily recommend it to anyone interested in heady sci-fi.
I would never recommend 3 Body Problem. I found it and its sequel to be irredeemably unpleasant.
That was a fun book
Perhaps not all that off-topic: Hatetris is what happens when you subvert the normal rules and make the game play against you. Antimemetics stories are what happens when you subvert the rules of ideas and make them play against you.
I can imagine a common space of inspiration there.
I really enjoyed Fine Structure too