Is the idea to supplant lottie?
Didn't we already do this experiment with Flash? Macromedia/Adobe made the player free and charged for the tools. In the same way, it seems that Rive wants to open-source the player and then charge for the editor. Maybe it's a bit different, since Rive seems targeted at game devs using other platforms like Unity and Unreal, presumably to embed things like cinematics. But then of course you have this (OSS) player, which implies Rive also wants to own the whole experience. Hence the reference to Flash.
It's not really my area, but best of luck to you!
Flash was great, someone creating a better Flash has been long overdue. I think Rive looks like a great attempt at a new Flash.
The Flash tooling is still around, actually. It's now called Adobe Animate and targets existing web standards.
Did that pivot ever really work?
I know Adobe renamed Flash, and I haven't talked to anyone in the cartoons space since then, but it seems like innovation/attention to the Flash toolchain died with the player.
We used to use a tool called flump to rasterize flash animations for iOS games (I ended up writing our runtime and renderer in Unity and later SpriteKit to consume that data). I bet it wouldn’t be too hard to build an exporter from Flash (the program, now Animate) to this runtime if the features are fairly compatible.
Flump was a fantastic tool in the early mobile days when there wasn’t really anything else to create quality 2D animations. I also wrote a few Flump runtimes and have since written a few more runtimes utilizing the Export as Texture Atlas feature in Adobe Animate. If the goal is to just render the rasterized output from the exported texture atlas animations it should be pretty simple to create a converter.
If the goal is to render vectors I imagine it is more complicated due to quirks in how Macromedia triangulated their vectors, but someone has done it before http://blog.bwhiting.co.uk/index.php/2014/03/17/introducing-...
That's a really great comparison, and great point that this is just the renderer. I'm happy to see it released under an MIT license, but I wonder how many OSS tools there are to build compatible content, and how easy it would be to build exporters from other formats in existing tools.
Our entire runtime and format are open-source. https://rive.app/runtimes We charge for the editor, but everything else is open.
And think of how much better Flash could have been if the player and file format had been open, and third-party tools didn't have to reverse-engineer them!
There was Scaleform, and it looks like it's been discontinued along with Flash - https://en.wikipedia.org/wiki/Scaleform_GFx
How does the rendering performance of this compare to something like Skia or Pathfinder? Note that the latter can also optionally do the paths-to-triangles conversion step using GPU compute, if the hardware supports that. There's also Vello for a more comprehensive compute-based approach to 2D rendering.
We need to do some newer benchmarking, the Rive Renderer has gotten faster since these initial tests: https://twitter.com/guidorosso/status/1595187838454140928
Can you talk a bit about text rendering? How does it compare to slug? [1] Would it be suitable for rendering text heavy applications? (like, say, a PDF viewer, a code editor, web browser, etc).
--
The Rive renderer takes an animation-first approach to all path rendering, including text. Meaning, every glyph is redrawn from raw bezier curves with full precision antialiasing, every frame. (Just like any other path.)
This optimizes the smoothness of animation, is fast enough to render fullscreen pages on top of pages of high dpi text at 120fps (on a decent GPU), and the antialiasing quality is always full precision in the (0..255) color channels.
If you want more niche text-specific features like hinting and ClearType-style subpixel rendering, those aren't supported, because they don't animate smoothly and/or don't work as well with colors other than black and white.
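The per-frame approach described above comes down to re-evaluating each glyph's bezier outlines every frame instead of caching bitmaps. A toy sketch of that evaluation step (illustrative only, not Rive's code):

```python
# Toy model of redrawing a glyph from raw bezier curves every frame:
# evaluate each cubic via De Casteljau and flatten it to a polyline
# for the rasterizer. (Illustrative sketch, not Rive's implementation.)

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic bezier at parameter t via De Casteljau subdivision."""
    lerp = lambda a, b, u: (a[0] + (b[0] - a[0]) * u, a[1] + (b[1] - a[1]) * u)
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

def flatten(p0, p1, p2, p3, steps=16):
    """Flatten one curve segment into points, as a renderer might per frame."""
    return [cubic_bezier(p0, p1, p2, p3, i / steps) for i in range(steps + 1)]

# Endpoints of the curve are interpolated exactly:
pts = flatten((0, 0), (0, 1), (1, 1), (1, 0))
assert pts[0] == (0.0, 0.0) and pts[-1] == (1.0, 0.0)
```

Because the curve is re-evaluated at full precision each frame, animated glyphs scale and deform without the snapping artifacts that cached, hinted bitmaps produce.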
I recommend trying all the options for your specific needs and finding what works best!
Looking at the GPU tech used and the committer names in that repo, I'm pretty confident this will at very least compete with both Skia and Pathfinder. I'm not super familiar with Vello.
These are solid tech choices by people who know what they're doing... definitely worth the effort to check its performance and quality out for yourself.
These GPU-first renderers all have slightly different approaches and each probably occupies a best-in-class niche for some multidimensional space of quality x performance x hardware/driver support x ease of integration. If you're not happy with one of the four, the others are all worth checking out.
I'm pretty confident this will at very least compete with both Skia and Pathfinder
Couldn't Vello be embedded/used by Skia, Cairo, et al., if those renderers wanted to use GPU compute instead of CPUs for preprocessing?
Skia does include some support for using Vello for rendering but it's just experimental, not enabled by default.
At least one of their runtime implementations I checked out was implemented on top of Skia. Looks like they support a number of possible backends!
I'm looking forward to doing careful benchmarking, as this renderer absolutely looks like it will be competitive. It turns out that is really hard to do, if you want meaningful results.
My initial take is that performance will be pretty dependent on hardware, in particular support for pixel local storage[1]. From what I've seen so far, Apple Silicon is the sweet spot, as there is hardware support for binning and sorting to tiles, and then asking for fragment shader execution to be serialized within a tile works well. On other hardware, I expect the cost of serializing those invocations to be much higher.
One reason we haven't done deep benchmarking on the Vello side is that our performance story is far from done. We know one current issue is the use of device atomics for aggregating bounding boxes. We have a prototype implementation [2] that uses monoids for segmented reduction, and that looks like a nice improvement. Additionally, we plan to do f16 math (which should be a major win especially on mobile), as well as using subgroups for various prefix sum steps (subgroups are in the process of landing in WebGPU[3]).
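The scan/reduction primitives mentioned above are easier to see in scalar form. A sketch of an exclusive prefix scan over a generic monoid (this is the CPU analog of what runs per-subgroup on the GPU; illustrative, not Vello's shader code):

```python
# Scalar model of the scan primitive discussed above. On the GPU this runs
# per subgroup/workgroup, but the key property is the same: the combine
# operation only needs to be an associative monoid, so one scan kernel
# serves sums, bounding-box unions, etc. (Sketch, not Vello's shaders.)

def exclusive_scan(items, combine, identity):
    """Exclusive prefix scan: out[i] = combine of items[0..i-1]."""
    out, acc = [], identity
    for x in items:
        out.append(acc)
        acc = combine(acc, x)
    return out, acc  # acc is the total reduction

def bbox_union(a, b):
    """Bounding boxes (minx, miny, maxx, maxy) form a monoid under union."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

INF = float("inf")
BBOX_IDENTITY = (INF, INF, -INF, -INF)
```

With ordinary addition as the combine, the same scan yields running element offsets (the bump-allocation pattern); with `bbox_union`, it aggregates bounding boxes without device atomics.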
Overall, I'm thrilled to see this released as open source, and that there's so much activity in fast GPU vector graphics rendering. I'd love to see a future in which CPU path rendering is seen as being behind the times, and this moves us closer to that future.
[1]: https://dawn.googlesource.com/dawn/+/refs/heads/main/docs/da...
So I love Rive—the product and the company. And I love open source.
But this is an MIT license for Rive's rendering abstraction layer, a subset of the Rive runtime that requires the Rive Editor to build content for.
I'm just curious about the goals for open sourcing this and what your hope and plan is for the larger community you'd like to build around it. Can you think of perhaps another project that would benefit from adopting just the renderer? Thanks!
Vector graphics do not require a special editor to be built.
Plenty of projects from games to UI toolkits need a vector graphics renderer, which is why libraries like Skia and Cairo (and now Rive's) exist.
vector graphics != SVG
Vector graphics can absolutely require specialized editors to be built. SVG editors are an example of this.
I'm well aware that vector graphics doesn't equal SVG. On the other hand, you seem to be conflating "vector graphics file formats" with "vector graphics".
"I'm just curious about the goals for open sourcing this"
I suppose: continue to make money with the proprietary editor, but have an open-source renderer to deploy and enable an ecosystem where other people target this renderer, since it might be superior. I'm definitely interested; I routinely hit the limits of 2D drawing within my 16ms frame budget (currently I use the Canvas API and Pixi).
It seems like a smart strategy of "not becoming Macromedia Flash"
a subset of the Rive runtime that requires the Rive Editor to build content for
From github[0]: This repository contains the renderer code and an example for how to interface with it directly.
I've been looking forward to this ever since they announced it. Previously they used Skia and their entire app is (and rendering output was) built with Flutter. Now Flutter has a new rendering engine via Impeller which is more optimized than Skia for Flutter specific challenges.
When I asked the Impeller team before what they thought of the Rive renderer, they said that it is great for vector graphics but Impeller needs to also deal with all of the UI related rendering challenges like displaying text properly, so they're not a one to one comparison. Hopefully with this renderer being open source now, both teams can learn from each other.
Correct me if I’m wrong, but aren’t fonts rendered from vector graphics already? TTF and OTF are vector formats, correct?
Fonts begin as vector graphics, but every single character on your screen isn't rendered as a tiny, individual vector graphic. Text rendering libraries use a lot of optimizations to cache rasterized glyphs because they're reused all over your screen.
It also gets extraordinarily complex when you have to handle text layout, kerning, languages that read in different directions, styles, and all of the other complications that come with font rendering.
TrueType fonts even support a small virtual machine to process tiny code programs related to font rendering. It's an incredibly deep topic.
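The caching idea described above can be sketched in a few lines: rasterize each glyph once per (character, size) pair and reuse the bitmap everywhere it recurs. (A toy model; real stacks like FreeType and HarfBuzz do far more, including the shaping and layout work mentioned above.)

```python
# Toy model of glyph caching: the expensive outline->bitmap step runs once
# per distinct (char, size); every later occurrence hits the cache.
# (Illustrative only -- real text stacks also handle shaping, kerning, etc.)

class GlyphCache:
    def __init__(self, rasterize):
        self._rasterize = rasterize  # expensive: outline -> bitmap
        self._cache = {}
        self.misses = 0

    def get(self, char, size):
        key = (char, size)
        if key not in self._cache:
            self.misses += 1
            self._cache[key] = self._rasterize(char, size)
        return self._cache[key]

# Rendering "hello" at one size rasterizes only the 4 distinct glyphs.
cache = GlyphCache(lambda c, s: f"<bitmap {c}@{s}>")
for c in "hello":
    cache.get(c, 12)
assert cache.misses == 4  # h, e, l, o
```

This is exactly why body text is cheap to render even though every glyph starts life as vector outlines.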
This, but see also Text Rendering Hates You https://faultlore.com/blah/text-hates-you/
Thank you for your insight!
small virtual machine to process tiny code programs
Or even WebAssembly (though more of an experiment than anything): https://github.com/harfbuzz/harfbuzz/blob/main/docs/wasm-sha...
These glyphs are indeed defined in terms of vector paths (quadratic or cubic Bezier splines depending on the font format), but it’s rarely a good idea to directly render these shapes to display text.
The performance optimizations mentioned in the other comments are not the sole reason. Another is visual quality.
Direct rendering of these vector glyphs would look good on high DPI displays and/or for very large font sizes. For normal-looking text on typical displays you won’t achieve great results without hinting: https://en.wikipedia.org/wiki/Font_hinting
Another quality related problem is subpixel rendering https://en.wikipedia.org/wiki/Subpixel_rendering Just like hinting, hard to solve while directly rendering vector shapes.
I like this a lot, but it represents a turning away from desktop technology (as in having a native editor as a desktop app) that just makes me say “no”.
You can also download a Windows or macOS binary. No idea if that's just a packaged web browser, but even if it is, if it's fast, what's the problem?
Electron's gotten pretty good at the speed thing (or processor speeds have caught up with it), but I can still feel the missing native elements and small touches that make native macOS apps a joy.
It has an open-source format, you can create animations yourself as well. Plus this renderer is literally a desktop app, isn’t it? It is as cross platform as it gets, with this project being the most optimized implementation.
Just because it's open source and I can create animations myself (by hand) doesn't mean either solves the underlying problem: that we're killing off desktop application technology.
Right. "Available for iOS, Android and web" ... if that's the future, I don't want to live in it (which is OK, because I won't be).
Have my bitter sweet upvote.
Looks kinda interesting but I am not gonna touch anything that thinks an acceptable set of drawing tools is "pen and a couple of basic shapes", I quit pulling out every single point by hand in Illustrator a decade ago and my art and my job satisfaction has been much better for it. Finding the right settings for Illustrator's pencil tool (the defaults are absolutely useless) sped up my work by at least one order of magnitude.
Using the pen tool for all your shapes is about as efficient as using Photoshop's pencil tool to set every pixel yourself, or writing your entire app in assembly.
Re:
Looks kinda interesting but I am not gonna touch anything that thinks an acceptable set of drawing tools is "pen and a couple of basic shapes", I quit pulling out every single point by hand in Illustrator a decade ago and my art and my job satisfaction has been much better for it.
How do you work differently now?
Pencil tool. The defaults are absolute trash that make it impossible to sketch with, and useless for drawing filled shapes; once you discover it has settings and play with them, it becomes absurdly fast.
Lately I've been thinking it's time to move on to Moho or Toon Boom though, I've been wanting to make stuff move, and I'm real tired of Adobe leaving all kinds of annoying broken edges unfixed for years.
FWIW Most folks are animating in rive not drawing with the reduced set of tools. In their tutorials you see them demonstrating their PSD layer support, for example.
Moving around stuff drawn in another program is boring, and gives me flashbacks to back around 2000 when I was moving around stuff at the end of a lengthy multi-person chain that started on paper. Drawing stuff in the same program I'm moving stuff around in and using its rigging/distortion tools to do inbetweening is fun. I really gotta make time to seriously dig into Moho or Toon Boom soon because I keep on missing animation lately.
cmon, this is just a renderer
They have tools but you'd have to pay for them, big bucks
yeah, I was curious about the animation program this is a renderer for
It isn’t hyperbolic to say what you can do with Rive is about to change in an earth-shattering way.
Actually, I think this is exactly what "hyperbolic" means
If not, that would be extremely concerning.
"We've studied their language enough to get a good understanding. It seems the creatures on this planet created the source of their own undoing - a UX so well antialiased, so quickly rendered, that it shattered the planet."
Well if any component of a game engine would render our planet uninhabitable, I imagine it would be the renderer.
It will literally blow your mind. It's already been responsible for several deaths from traumatic head injury.
If it's not hyperbolic, I think we ought to ban this... Call me picky, but I think breaking the only planet we live on right now is a bad idea
So does this mean someone can create a free editor I can use for basic animation that I don't have to pay $25 for the rest of my life to use?
Edit* $39 per month if you don't pay for an entire year in one go. Wtf haha, anyone actually paying this for basic solo-dev work?
We've really been spoiled by modern software competition.
Once upon a time middleware like this would have been "Call Us" with a custom 5 figure license and royalties.
Now it's not even an hour of a single developer's time cost per month and you get complaints.
Right? Also one "hack" I use quite a bit is to subscribe for when I need a tool and then unsubscribe when I don't. I use privacy.com to create virtual credit cards that can only be charged once, or a certain amount, or for a certain period of time.
Of course you'll need to backup your data, but honestly that's good practice anyway. I just search Google to ensure the service makes it easy to unsubscribe and easy to export your data.
Many projects have phases where you use certain tools extensively—no sense in continuing to pay for say an online video editor on the months you aren't, well, editing any video.
It's the market distortion created by people who work for free and expect to get stuff for free
It seems like OSS is more and more about big corps making money anyway, so that perception will probably keep shifting until we're back to normal conditions and OSS developers can earn a salary from their work.
It seems to be just the very basement level of an open source Rive-type project. You would need to build the runtime around this renderer and then of course the authoring app. It would still be quite a lift, but hey, success does start with high-performance vectors.
More likely, I see perhaps other projects with vector-rendering needs (in-game UIs, video creation tools, svg tools, etc) perhaps studying or adopting the render to make their own graphics more performant.
But to answer your question on "who pays for this?" I can raise my hand and say "I do!" and it's a great value. But just like Copilot probably isn't the best monthly investment if you're not doing quite a bit of coding, Rive probably isn't the best investment if you aren't doing quite a bit of animation.
Edit* $39 per month if you don't pay for an entire year in one go. Wtf haha, anyone actually paying this for basic solo-dev work?
Out of curiosity, how much do you expect to make for your basic solo-dev work?
Very often I find that devs' attitude when it comes to their own work is "I deserve a six figure salary/equivalent sales" but when it comes to other people's work, that flips to "software should be free". I have zero idea where the salaries are supposed to come from then.
There's a lot of active research around rendering 2D vector graphics with GPU tessellation (Raph Levien's work, for instance), so it's pretty cool that they're shipping a product with this technique.
I've never used Rive, so I'm wondering if it's strictly for making cool animations or if it can be used for building dynamic UIs (the kind you might use an immediate-mode GUI lib for)?
We have customers shipping full UI with Rive! Games have been adopting us (some cool AAA titles already in progress) and products too. https://rive.app/game-ui https://rive.app/blog/how-age-of-learning-uses-rive-to-a-b-t...
Bevy but not Godot?
Bevy is written in Rust and according to https://github.com/rive-app/rive-bevy/, the backend used for the integration uses Vello (also Rust), not the Rive renderer. Could be that integrating Vello into the C++-based Godot would be finicky. With the Rive renderer open-sourced, maybe both Rive and Godot will see an integration using the Rive renderer?
One of the things that makes Rive great for dynamic UI components is the excellent state machine deeply supported by the editor: https://help.rive.app/editor/state-machine
While I've built some fairly complex UI with Rive, one area I haven't explored is programmatically adding elements or say changing UI text based on external events.
This problem seems to generate a continuous stream of software attempting to solve it, with no definitive solution.
It's kind of strange, because there is a single objectively correct rendering for any vector graphics scene, given a pixel sampling function and a colorspace metric: the one where each output pixel's value is the closest representable color to the convolution of that function with the scene (the sampling function expressed as a function from R^2 to R, the metric over a linear color space). And it seems like it would be achievable with GPU compute and some care with error bounds on either curve tessellation or numerical computation of the exact symbolic integrals of curves.
And it also seems easy to make such a solution have a tunable level of approximation to have faster rendering.
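The "objectively correct" pixel value described above can be written out directly. With reconstruction filter $f:\mathbb{R}^2\to\mathbb{R}$, scene $S$ mapping points to linear colors, representable palette $C$, and colorspace metric $d$, pixel $(i,j)$ centered at $(x_i, y_j)$ is:

```latex
p_{ij} \;=\; \operatorname*{arg\,min}_{c \,\in\, C}\;
d\!\left(c,\; \int_{\mathbb{R}^2} f(x - x_i,\, y - y_j)\, S(x, y)\, \mathrm{d}x\, \mathrm{d}y\right)
```

The tunable-approximation idea then amounts to bounding how far a cheaper estimate of the integral may drift from this reference value.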
I think you've got a point here. 2D rendering is reasonably well defined. The problem is the overwhelming complexity of the GPU ecosystem. If you just had a decent parallel computer, it wouldn't be so hard. But instead you have to adapt your rendering logic to the particular combination of pile of hacks your GPU infrastructure supports, each having an exquisitely unique set of limitations and tradeoffs.
You'll likely enjoy our upcoming GPU-friendly stroke expansion paper - the core of it is exactly numerical methods with error bounds for the exact symbolic integrals of (certain curvature metrics) of curves. And the code is in Vello main (shader/flatten.wgsl) if you're curious and want to look at it now.
I poked at this a few times over the years, think my initial intent was to add SVG textures to Blender.
What I always found is the 'good' renderers were a small part of a massive project (totally impractical to add as a dependency) and the small, standalone ones weren't necessarily 'good'. I think I found one (nanoVG IIRC) that could have probably been beaten into submission with some hacking but that was around the same time I was winding down my involvement in Blender.
A quick peek at the code makes it seem like this is exactly what I needed those 15 years ago, and it just so happens I have some Python bindings that could easily be ported over to this, if anyone is interested.
Sounds like you should write a vector graphics library.
Man, I was literally at their booth in GDC 25 minutes ago.
Truthfully the program they have looks fantastic at empowering UI/UX designers to do more with games.
I'm not sure if everyone is aware, but the normal flow is that a UI/UX designer will spend days in Figma; give the output to a programmer and the programmer will (often) say that it's impossible to implement for X,Y reason.
What follows is a back and forth between the programmer and the designer until eventually a design exists that can be implemented.
This is compounded by the fact that there's no good UI framework in Unreal engine, at least not for AAA games.
So, while I have not done a feasibility test yet, the workflow of this tool is definitely the best I've seen and seems to integrate seamlessly with Unreal Engine and blueprints.
I'm personally really excited, because with this it's possible for a UI/UX designer to work without the aid of a programmer.
(also, game programmers tend to hate doing UI).
I've often thought that if I were a UI/UX designer, I would want to work in games. Most productivity software already has deep-rooted paradigms and abstractions, and attempting to challenge any of these results in grumbling users. In video games, however, a novel UI/UX decision is usually met with curiosity rather than irritation. Presumably this can be ascribed to the user's frame of mind when they sit down to do work versus play a game.
This would depend, I reckon, on whether one's emphasis was on user experience or user interaction.
Games are great for user experience, because gamers are, by definition, looking for an experience. They aren't there to get work done, they want to have fun. So a fun interaction paradigm, with creative visuals, sound effects, various game-related skeuomorphisms: all of that is what they want.
User interaction, on the other hand, is about keeping the software from standing between the user and what the user wants to do. A large part of this job is insisting that developers don't do surprising things, but rather, stay within the lanes of expected interactions for the platform. There's room for creativity and novelty, but the budget for those must be spent carefully.
Agree that design<->programming handoff is tedious but curious where you think UMG is lacking. I’ve found it pretty fun to implement UI in once I got the hang of it.
It's so insane they don't offer any way to export movie files. The animation IDE is decent. Nowhere near as good as Flash was. But at least let me animate in it and export movie files.
It's hilarious how so much new tech is just barely scratching the surface of what Flash offered in terms of a creative production tool. Every passive consumer fixates on Flash the plug-in and has no idea how incredible the tool was.
Their pricing page, https://rive.app/pricing suggests that they support MP4/GIF/PNG Sequence and WebM exports? Is that not what you're looking for?
I believe it's in reference to the QuickTime File Format (.mov files). The fact that it's not supported is odd, and one of the reasons I have yet to try Rive.
If it can export to MP4 or a sequence of PNGs, what stops you from using ffmpeg to convert that to QuickTime?
How feasible is it to build web games with this? I’d like to build something with this treating it like you would Pixi.js, it looks to me like it’d be as performant if not better but with the amazing advantage of having a vector asset pipeline built into the system. Using vector graphics with Pixi, especially when rendering in a web worker, is a pain in the ass.
That was my question as well some weeks ago, and my conclusion was to wait a bit longer. I could not find resources on how to bind to it from the web at all, but maybe I missed it.
"Using vector graphics with Pixi, especially when rendering in a web worker, is a pain in the ass."
Yes it is.
I currently load SVGs on the UI thread, draw them on a canvas, and then use `createImageBitmap()` to convert the canvas into an ImageBitmap, which I send back to the web worker. This method allows for scalable vector assets to be rasterized on-demand, even at high resolutions like retina, maintaining quality comparable to the browser's rasterizer. Third-party libraries didn't offer consistent or satisfactory results.
Wow. And MIT license. This is crazy impressive.
And MIT license.
For me, that’s very much the unimpressive part.
Why?
Any chance of publishing the algorithm details? The main header (pls.hpp) mentions some links to private Google Docs.
Thanks for pointing this out! We need to make these public.
For the 10,000 foot view, watch our presentation at the WebGL/WebGPU meetup at GDC!
https://github.com/rive-app/rive-renderer#install-premake5
heh, relevant: https://news.ycombinator.com/item?id=39754770 (Build System Schism: The Curse of Meta Build Systems)
Impressive. Would this work with OpenGL ES 3.0?
The GL backend has WebGL support, so looks like the answer is "yes":
https://github.com/rive-app/rive-renderer/tree/main/renderer...
This looks extremely cool and I'm excited to use it in my own personal hobby projects.
It looks like the somewhat standardised Cairo/Skia/canvas/NanoVG API (moveTo, lineTo, etc.) is provided, so it should hopefully not take much effort to learn.
(I do see lineTo here at the very least. https://github.com/rive-app/rive-renderer/blob/main/renderer... )
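The shared command pattern those APIs follow is easy to model; a generic sketch of the convention (method names mirror the Cairo/Skia/canvas style, not Rive's actual interface):

```python
# Minimal model of the shared Cairo/Skia/canvas/NanoVG-style path API:
# a path is just a recorded list of commands that a renderer later replays.
# (Generic sketch of the convention -- not Rive's actual API.)

class Path:
    def __init__(self):
        self.commands = []

    def move_to(self, x, y):
        self.commands.append(("move", x, y))
        return self

    def line_to(self, x, y):
        self.commands.append(("line", x, y))
        return self

    def close(self):
        self.commands.append(("close",))
        return self

# A triangle, built the same way in any of these libraries:
tri = Path().move_to(0, 0).line_to(1, 0).line_to(0, 1).close()
assert [c[0] for c in tri.commands] == ["move", "line", "line", "close"]
```

Since most 2D libraries converge on this vocabulary (plus curve commands like cubicTo), porting drawing code between them is usually mechanical.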
Does anybody know what that figure on the home page is called? It’s sort of like a Lissajous plot but with extra wiggles.
How does this renderer compare to Qt Quick's renderer (if it's at all a fair comparison)?
Is there technical documentation to understand the method behind the engine? Because at first sight it seems like the classic "path flattening to polygons and then triangulation" that has been used for 20+ years (at Mazatech, our first AmanithVG on OpenGL used that technique), but I'm sure there is more than that in the Rive renderer for it to be recognized as such a powerful engine.
Happy to see this!
Would love to see somebody try to re-implement some limited Canvas2D API but using Rive Renderer as a backend. Would likely be faster and more flexible in some ways (e.g. I've found that Canvas2D clipping is not anti-aliased in some browsers).
That's exciting! Congrats on the release.
I didn't know about rive but it looks like a better framer / lottie.
Especially with an OSS renderer with an eye to performance.
I tried out the Bevy integration (definitely the best implementation of ECS, anywhere); there's a working (still unmerged) fork, but it's still using Vello.
Looking forward to see more!
This is really exciting to see for me personally.
I've been pushing over the past six months or so for multiple clients, from healthcare companies with basic mobile apps to deep gaming companies / products, to adopt Rive over Lottie and other past solutions, as I think it's finally hit its stride and is "ready-for-adoption".
This was the last piece that came up in some of those discussions as a potential concern (latest renderer being "closed source" / not quite final).
Really excited to see this problem space continue to improve thanks to this decision and the work the Rive team is doing in general (drop shadows, blur, etc. are all going to be very exciting as they ship)!
But can I export video assets with this!? That's my question - I've had to resort to WebGL for a lot of this so far.
Seems so: https://rive.app/blog/rive-as-a-lottie-alternative
I'm not sure how intellectually honest that is.
The animation of the walking bird is 246kB MP4. Bigger than Rive, sure, but it essentially takes no heap or runtime, which is at least 195kB of WASM.
The walking bird animation could probably be optimized much further through automatic decisions that the encoders can take.
You shouldn't use either Lottie or Rive for static animations.
I mean, that fundamentally misses the point of vector graphics, no?
I could also argue that a screenshot of a rendered SVG icon takes no heap or runtime, which would also completely miss the point of why a designer would want vector icons. JPGs and MP4s alike are not infinitely scalable.
Yeah, that’s correct. SVG is not a good delivery format, I agree. It is an excellent intermediate format, like a ProRes encoded video is.
It’s tough. We’re chatting on a website with no graphics besides a low resolution letter Y logo. The most exciting UX development maybe ever in the history of computing is a chat interface. On mobile, it’s all scrolling through rectangles. It’s not looking good for the designers.
> “It’s not looking good for the designers.”
Only if you think that the job of a designer in the software industry is to add graphics.
By similar logic, the past hundred years have been bad for architects because there are no more stone gargoyles added to buildings.
What's a static animation?
Probably he/she meant just a video file (as opposed to computing the vectors at runtime)
I'm gonna guess they mean an animation that's the same every time you play it.
"You shouldn't use either Lottie or Rive for static animations."
I guess that depends on your use case. If you only have this animation on your website, then probably no. But if you load and run the rive engine anyway, because you build a game with it - then why not also use it for "static animations", if the result can be way sharper?
I'm excited about this. As that post explains, it's very hard to have anything beyond basic interactivity with a lottie animation. It looks like Rive provides much more facility for interactivity. Also if it works I can cancel my Adobe subscription.
I've replaced Lottie animations with Rive, and am pretty happy doing so thus far.
Seems like Duolingo apparently uses both? Or perhaps one of these is out of date.
[0] https://lottiefiles.com/case-studies/duolingo
[1] https://blog.duolingo.com/world-character-visemes/