A big issue with the $1 recognizer is that it requires strokes to be drawn in a specific direction: to draw a circle you need to go counterclockwise, and if you go clockwise (which seems more natural to me) it gets recognized as a caret. This makes it hard to use for free drawing, where users aren't aware of the details of your implementation.
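One simple workaround (my own, not something from the paper) is to register every template under the same name twice, once as drawn and once with the point order reversed, so both stroke directions match. A rough Python sketch, where templates are just raw (x, y) point lists handed to whatever $1 implementation you're using:

    # Register each gesture in both drawing directions.
    def register(templates, name, stroke):
        templates.append((name, list(stroke)))
        templates.append((name, list(reversed(stroke))))

    templates = []
    register(templates, "circle",
             [(0, 50), (35, 35), (50, 0), (35, -35),
              (0, -50), (-35, -35), (-50, 0), (-35, 35), (0, 50)])

It doubles the number of templates to compare against, but with one sample per gesture that cost is negligible.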
IMO the RNN is overkill for this problem compared to a simple and elegant algorithm called the "$1 unistroke recognizer". It works beautifully even when trained with just a single sample of each gesture.
I hope the $1 unistroke recognizer gets more recognition, because it can be integrated into any project in an afternoon to add gesture recognition and make the UI friendlier.
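To give a sense of how small it is, here is a minimal Python sketch of the $1-style pipeline: resample each stroke to a fixed number of points, rotate to the indicative angle, scale to a reference square, translate to the origin, and then match a candidate against templates by average point-to-point distance. The names and constants are my own, and I've left out the golden-section search over rotation angles that the paper uses to fine-tune the match, so treat it as an outline rather than the reference implementation:

    import math

    N = 64          # points per resampled stroke
    SQUARE = 250.0  # side of the reference square

    def path_length(pts):
        return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

    def resample(pts, n=N):
        """Resample the stroke to n evenly spaced points along its path."""
        interval = path_length(pts) / (n - 1)
        pts = list(pts)
        out, acc, i = [pts[0]], 0.0, 1
        while i < len(pts):
            d = math.dist(pts[i - 1], pts[i])
            if acc + d >= interval:
                t = (interval - acc) / d
                q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                     pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
                out.append(q)
                pts.insert(i, q)  # the new point starts the next segment
                acc = 0.0
            else:
                acc += d
            i += 1
        while len(out) < n:       # guard against rounding shortfall
            out.append(pts[-1])
        return out

    def centroid(pts):
        return (sum(x for x, _ in pts) / len(pts),
                sum(y for _, y in pts) / len(pts))

    def rotate_by(pts, angle):
        cx, cy = centroid(pts)
        c, s = math.cos(angle), math.sin(angle)
        return [((x - cx) * c - (y - cy) * s + cx,
                 (x - cx) * s + (y - cy) * c + cy) for x, y in pts]

    def normalize(pts):
        """Resample, rotate to the indicative angle, scale, translate to origin."""
        pts = resample(pts)
        cx, cy = centroid(pts)
        pts = rotate_by(pts, -math.atan2(cy - pts[0][1], cx - pts[0][0]))
        xs, ys = [x for x, _ in pts], [y for _, y in pts]
        w, h = (max(xs) - min(xs)) or 1.0, (max(ys) - min(ys)) or 1.0
        pts = [(x * SQUARE / w, y * SQUARE / h) for x, y in pts]
        cx, cy = centroid(pts)
        return [(x - cx, y - cy) for x, y in pts]

    def distance(a, b):
        return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

    def recognize(stroke, templates):
        """templates: dict of gesture name -> already-normalized point list."""
        candidate = normalize(stroke)
        best = min(templates, key=lambda name: distance(candidate, templates[name]))
        return best, distance(candidate, templates[best])

Usage is just `templates = {"circle": normalize(sample_circle)}` and then `recognize(new_stroke, templates)`; the returned distance can be thresholded to reject gestures that match nothing well.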
It works quite reliably for Palm-style "Graffiti" text entry, as long as each letter is a single stroke. The original paper also makes a great effort to be readable and understandable.
https://depts.washington.edu/acelab/proj/dollar/index.html