
Show HN: Revideo – Create Videos with Code

andrewstuart
4 replies
2d23h

How is this different to / better than remotion.dev?

justusm
3 replies
2d22h

Revideo is different from Remotion.dev in a few ways:

First, we use generator functions to describe the flow of animations - every yield within the generator function corresponds to a frame in the video. As a result, our API feels quite "procedural" (animations described at the start of the function appear at the start of the video, and animations described at the end appear at the end). Remotion's React-based approach is rather declarative - it gives you a frame number and lets you describe what your video should look like as a function of that frame number. Personally, we find our "procedural" API a bit more intuitive and easier to write than the declarative approach, though we're obviously biased here.
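
As a rough illustration of the generator-function style (written against the Motion Canvas API that Revideo builds on, so Revideo's actual package names may differ slightly):

    import {makeScene2D, Txt} from '@motion-canvas/2d';
    import {createRef, waitFor} from '@motion-canvas/core';

    export default makeScene2D(function* (view) {
      const title = createRef<Txt>();
      view.add(<Txt ref={title} text="Hello world" fontSize={80} fill="white" opacity={0} />);

      // Everything yielded here happens at the start of the video...
      yield* title().opacity(1, 1); // fade in over one second
      yield* waitFor(2);            // hold for two seconds
      // ...and everything yielded later happens later in the video.
      yield* title().opacity(0, 1); // fade out over one second
    });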

Second, we render to the HTML canvas instead of the DOM. Both have advantages and disadvantages: rendering to the DOM lets you define animations using CSS, which most programmers are already familiar with. On the other hand, an advantage of the HTML canvas is that it should allow you to render entirely in the browser rather than relying on server-side rendering, since you can simply capture the current canvas using canvas.toBlob(). We have not yet implemented this for Revideo, but people in our Discord server have made good progress towards it.
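
To illustrate the in-browser idea with plain browser APIs (this is not Revideo code, just the standard canvas capture pattern):

    // Grab the current contents of a canvas as an encoded PNG frame.
    function captureFrame(canvas: HTMLCanvasElement): Promise<Blob> {
      return new Promise((resolve, reject) => {
        canvas.toBlob(
          blob => (blob ? resolve(blob) : reject(new Error('PNG encoding failed'))),
          'image/png',
        );
      });
    }

    // A browser-side renderer would advance the animation one frame at a time,
    // call captureFrame() after each step, and hand the blobs to an encoder.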

Third, we're MIT-licensed, while Remotion is not FOSS (if your company has more than three employees, you need to purchase a company license to use Remotion). As described in the post, this was one of our original motivations for building our own video editing framework while we were building video products.

kgritesh
2 replies
2d17h

I built a large Remotion-powered video generation pipeline for a major AI video company, and one of the major challenges was the cost of generating a video. One instance of Remotion generates at best 1-2 full-HD frames per second on a server, and that turns out to be quite expensive for slightly larger videos. From my benchmarking, taking screenshots was the slowest part of their pipeline. Any sense of how many frames Revideo can generate? Since you can do canvas.toBlob, I'm wondering if it's faster.

hkonsti
0 replies
2d11h

It really depends on the complexity of the scene. We're only getting started on optimizing rendering, and I'm convinced that we can get it to be significantly more performant/cheaper than Remotion. Right now we're faster in some cases and slightly slower in others. Feel free to contact me at konsti at re.video - I would love to understand your use case and see if and how we could help you.

thenorthbay
3 replies
2d21h

Interesting stuff! Which use cases do you think developers will use you most for?

There could be really interesting abstractions that people might build on top of this, like automatically creating and animating infographics, making background sounds, or video cutting and recycling. If you spin this 100x further, an entire video creation studio might emerge.

Which parts of Video Infrastructure do you want to build first? Which other higher-level parts could be built by you or users? Where could this go?

hkonsti
2 replies
2d20h

Thank you! As of now, many people use Revideo to programmatically create YouTube videos (primarily entertainment content - think AI-generated memes and YouTube Shorts) and for use cases in video marketing (think automatically generating and A/B-testing ads from a product portfolio). As the project matures, we want to enable developers to build more complex, and especially interactive, video editors using the framework.

I'm glad you raise the point of reusability and people building abstractions - what I find really exciting about code as an interface for video editing (rather than a GUI) is that it makes it much easier to create and share isolated, reusable components and functions. Especially as we are open source, I hope that many developers will build abstractions and share them on GitHub.

Out of curiosity, what do you mean by "Video Creation Studio"? Would you add a visual interface for editing to Revideo?

thenorthbay
1 replies
2d20h

Maybe not for editing per se, but for providing meta-functionality to services built on top of Revideo. Rendering, for instance, but maybe even reusable embeds, video watch analytics, download links, programmatic re-rendering with webhooks and their configuration, etc. What other functionality could you offer?

hkonsti
0 replies
2d20h

I think you already mentioned a few interesting ideas. Ultimately, we'll just build what our users want and see from there.

sansseriff
3 replies
2d20h

How does Jacob (aarthificial, creator of motion-canvas) feel about this? Will you compensate or include him in some way? I understand the license is MIT so you can do what you want. Just seems like it would be polite to maintain a good relationship with him and other motion-canvas maintainers.

hkonsti
2 replies
2d20h

We tried to reach out to Jacob but unfortunately didn't get a response from him. In general, Revideo and Motion Canvas have very different goals, and we were very reluctant to go the route of forking his work. We are always open to contributing changes back into Motion Canvas if that is wanted; so far, we haven't seen that to be the case.

supermatt
1 replies
2d7h

You mention in your README that you had to fork it in order to trigger renders programmatically instead of having to click a button. Couldn't you have just used render from the core lib directly?

https://motioncanvas.io/api/core/app/Renderer#render

justusm
0 replies
2d6h

You can use the render function from Motion Canvas to trigger headless renders (which is what we do!) without forking the project.

However, this is not the only change we made. Our goal (as stated in the README) is to enable developers to build entire video editing apps with Revideo, which encompasses a few more things than just triggering headless renders. For example, here are some of the changes we made that were quite drastic and not possible without forking:

- we export the audio of <Video/> elements in a scene. Enabling this required us to modify parts of the core package: https://github.com/redotvideo/revideo/commit/d0f72b6df68b380...

- we made video-in-video renders a lot faster by requesting video frames from a backend process rather than seeking video frames using the HTMLVideoElement API. This required us to make all draw() functions in Motion Canvas async: https://github.com/redotvideo/revideo/commit/a6e1bcdf0ca8200...

mike31fr
3 replies
2d7h

> Remotion is pretty good, but we didn’t want to rely on it as it is not FOSS (source-available only).

Noob question: how would you explain, in the simplest form, the difference between FOSS and source-available? In other words, what does Remotion not have that would make it FOSS?

justusm
1 replies
2d6h

In the simplest form: Revideo is MIT-licensed. That basically means that you can do anything you want with our code - you can modify it, fork it, build commercial products, resell it, etc., without owing us anything.

While you can look at Remotion's code on GitHub (that's what we mean by source-available), the project has a custom license that doesn't grant you permission to do anything you want with it. Remotion retains ownership of the code (i.e. you cannot simply fork it and give it away for free), and when you are a company with more than three employees, you have to pay Remotion if you want to use their code to render videos.

ramengrowth
0 replies
2d6h

This is one of the clearest explanations of the differences between the licenses I've seen, thanks!

LelouBil
0 replies
2d6h

A more restrictive license?

Remotion has a custom license that requires you to have a company license if you don't meet certain criteria for the free license.

Revideo just has the MIT license.

hubraumhugo
3 replies
2d23h

Just curious, are you the founders of https://haven.run (YC S23)? I noticed that the LinkedIn company page now redirects to Revideo.

Would you mind sharing a bit about your pivot? I always find these stories interesting!

hkonsti
2 replies
2d22h

Yes, that's us! We started out in the open source LLM space, but realized that most companies didn't have any real use cases for custom fine-tuned LLMs. Maybe we were too early, but everyone we talked to was best served by just using OpenAI. Late last year, we started looking at other areas and became fascinated with AI-generated social media content, which sent us down a rabbit hole that led us to build Revideo. It's been a very long and painful process, but we now have a lot of really cool and smart users building really useful stuff, which is something we haven't experienced with any of our previous pivots.

andrewstuart
1 replies
2d22h

Removed - conversation tone not right.

justusm
0 replies
2d22h

No worries at all - I appreciate the feedback. Out of curiosity, what makes you think this is a tarpit idea? I roughly understand the term as "ideas that seem obvious and easy, but have non-obvious problems".

Ideally, we would like Revideo to be used to build any kind of web-based video editor. A lot of video editors exist, and many of them make a lot of money. Based on my experience, building a video editor from scratch is also really hard - I would think that people would choose a framework that makes it easy to build them if it exists.

My biggest worry is more about the technical difficulty of the problem (becoming the standard way to build a category of products is probably not easy) than about this being a tarpit. I'd love to hear your opinion though!

probson
2 replies
2d10h

This looks very cool! I built a project using Remotion to bake subtitles with some effects into a video from a .srt file, but this approach looks nicer, and FOSS is amazing, so I'll have a go at porting it. Thanks!

justusm
1 replies
2d10h

Thank you! If you have any questions or run into issues, please let us know! You can find our Discord server in our repo, or you can email me at justus at re.video.

ashia
2 replies
2d23h

Looks promising - I've been using Shotstack's visual editor to create video templates but keep running into limitations. Looks like Revideo has an "editor" that allows previews but not edits? Is editing through the GUI on the roadmap?

justusm
0 replies
2d23h

Thanks for the feedback! Justus here, I am one of the co-creators of Revideo. Initially, we intended Revideo to be a developer tool and not necessarily something that could be modified through a user interface. However, we've heard this feedback many times at this point, so we're definitely considering building editing capabilities into the UI. One challenge here is to find a way to keep the UI changes and the code in sync.

andrewstuart
0 replies
2d22h

> I've been using Shotstack's visual editor to create video templates but keep running into limitations

What limitations are you hitting?

What’s your functionality wishlist?

simonbarker87
1 replies
2d21h

How does this compare to MoviePy beyond the JSX like syntax and being JS?

hkonsti
0 replies
2d21h

The fact that it is JS actually implies more than just being written in a different language: you can ship Revideo as part of your website and run it inside the browser. One of the reasons we struggled with MoviePy was that we couldn't preview our changes in real time, which made the dev experience a little tedious.

rjeli
1 replies
3d

Very cool! I assume it uses the WebCodecs VideoEncoder to encode in the browser, maybe with a wasm ffmpeg fallback? How reliable/easy to use have you found that?

hkonsti
0 replies
2d23h

Encoding the video in the browser is on the roadmap, but currently we stream the frames one by one to an ffmpeg backend process and concatenate them into a video there. This has been very reliable and fast so far. I've only just started looking at WebCodecs, but it seems to be more challenging, especially as some of the APIs aren't supported in all of the mainstream browsers, so as you said, some fallback will be necessary.
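
For anyone curious, the general shape of that frames-to-ffmpeg pattern looks roughly like this (a generic Node sketch, not Revideo's actual backend code):

    import {spawn} from 'node:child_process';

    // Start ffmpeg reading PNG frames from stdin and encoding them into an MP4.
    const ffmpeg = spawn('ffmpeg', [
      '-y',
      '-f', 'image2pipe',
      '-framerate', '30',
      '-i', '-',             // frames arrive on stdin
      '-c:v', 'libx264',
      '-pix_fmt', 'yuv420p',
      'out.mp4',
    ]);

    // For each rendered frame (e.g. a PNG captured from the canvas),
    // write its bytes to ffmpeg's stdin; end stdin once all frames are sent.
    const writeFrame = (png: Buffer) => ffmpeg.stdin.write(png);
    const finish = () => ffmpeg.stdin.end();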

rikroots
1 replies
2d9h

I love mucking around with canvases and videos, so I will certainly be checking this out!

On a selfish note, as a canvas library developer/maintainer, I do have questions around your choice of Motion Canvas: what attracted you to that library in particular (I'm assuming it's the Editor, but could be wrong)?

On a broader note, my main interest in canvas+video centers around responsive, interactive, and accessible video displays in web pages. Have you had any thoughts on how you'd like to develop Revideo to support these sorts of functionalities?

justusm
0 replies
2d8h

Glad to hear that! I would say that the choice of Motion Canvas was motivated by their API for defining animations rather than the editor - we really like the approach of using generator functions and find the "procedural" API (the time of the yield corresponds to the time in the video) quite nice to work with.

Given that our goal is mainly to let developers build their own editors, the Motion Canvas editor is not that important for us - we only use it for previewing code changes, so merely projecting the canvas without any of the remaining editor interface would also be sufficient.

I also agree that interactivity is super important. We have not yet started to work on this, but something we definitely need to make easier with the Revideo player is building drag-and-drop editing features (i.e. moving elements around on the canvas to modify their position).

popalchemist
1 replies
2d20h

Does it work with Vue/Vite? I am really hoping someone will make such a solution some day.

hkonsti
0 replies
2d20h

So far, we only have a prebuilt Player component for React, but it is definitely possible to integrate Revideo into a Vue project with a bit of hacking. I'd love to better understand your requirements and help you get started. I already did this for React, so porting it over to another framework like Vue should be straightforward - feel free to message me in our Discord server (invite link is in our repository) or email me at konsti at re.video.

pavi2410
1 replies
3d

I see that Revideo uses generator functions, which seems intuitive to me as it linearizes frame sequences with respect to time as the function yields.

How does this compare to Remotion^, which uses the "React" mental model?

^: https://remotion.dev

hkonsti
0 replies
3d

Yes, exactly! We're also big fans of the generator function model. In Remotion, you get the current frame of the animation through a React hook called useCurrentFrame(), and you then use React to build the UI based on the hook's return value. With a generator function, later parts of the function correspond to later parts of the animation, which we found makes the code a bit easier to read and write, and a little more intuitive. Ultimately it probably comes down to personal preference.
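
For comparison, a minimal Remotion component looks roughly like this (standard Remotion APIs, heavily simplified):

    import React from 'react';
    import {AbsoluteFill, interpolate, useCurrentFrame} from 'remotion';

    export const FadeIn: React.FC = () => {
      // Remotion hands you the current frame number...
      const frame = useCurrentFrame();
      // ...and you describe the scene as a function of that number.
      const opacity = interpolate(frame, [0, 30], [0, 1]); // fade in over 30 frames
      return <AbsoluteFill style={{opacity}}>Hello world</AbsoluteFill>;
    };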

mvoodarla
1 replies
2d21h

Congrats on the launch! I work at Sieve (https://www.sievedata.com/). We do a bunch of stuff with AI and video. Excited to check this out :)

hkonsti
0 replies
2d21h

Hi Mokshith, we talked about releasing a project using the Sieve API and Revideo before! This is definitely something we still want to do. I'll reach out!

mattdesl
1 replies
2d8h

Looks great. How are you encoding the video into MP4? Ffmpeg with wasm? Or WebCodecs?

I’ve struggled to find a pure client-side encoder that is as fast, lightweight and high quality (in terms of lossiness) as what I had going with mp4-h264[1]. I suspended the project out of legal concern but it seems like the patents are finally starting to run their course, and it might be worth exploring it again. I’ve been able to use it to stream massive 8k near-pixel-perfect MP4s for generative art (where quality is more important than file size), compared to WebCodecs which always left me with a too-lossy result.

[1] https://github.com/mattdesl/mp4-h264

hkonsti
0 replies
2d8h

Thanks! We're using a server-side ffmpeg process for the encoding right now, so running a server is required for rendering. We are exploring fully client-side rendering too, but I'm hesitant to use WebCodecs because of the limited compatibility with many browsers. ffmpeg.wasm does look promising, so I'll give that a closer look very soon. I'll also check out your project, thanks for sharing.

liampulles
1 replies
2d23h

How does this compare to VapourSynth or AviSynth?

hkonsti
0 replies
2d22h

Hey! I'm not really familiar with either one, so take this with a grain of salt. At first glance, it doesn't seem like these projects are designed to run in the browser. We're mainly focusing on the use case of using Revideo as part of web apps (we're building on top of the HTML canvas API, which makes it really easy to preview instantly). There might be some use cases where a lower-level implementation is more suitable.

hobofan
1 replies
2d9h

I can't be the only one that has assumed an affiliation with Retool based on the "Re-" prefix and similar logo, even though there doesn't seem to be any.

justusm
0 replies
2d9h

Justus here, I'm one of the co-creators of Revideo. I agree that there's a high similarity. We unfortunately only noticed this a few weeks after launching - our logo was inspired by the tracks you see in video editors.

Given that we are not a Retool competitor and operate in very different spaces, I would think and hope that this is not an issue for the Retool team.

franciscop
1 replies
2d9h

Looks nice! Might want to disable pixel snapping for the text resizing since it yanks a little bit (Firefox on Mac at least).

I made an experiment in a similar style a while ago, but I decided it was too difficult to keep going as a "tiny" side project so never really released anything beyond a demo that you can see here:

https://francisco.io/demo/terminal/

hkonsti
0 replies
2d8h

Thank you, I'll try to disable it! I noticed the little yanks too but didn't know where to look and therefore didn't take the time to try and get rid of it.

Just checked out the demo, nice work!

fiehtle
1 replies
3d

Is this like langchain but for video?

hkonsti
0 replies
3d

The main use case is not necessarily about AI, but about putting all sorts of assets together into a video. However, we definitely see a lot of people using AI-generated voices and images/videos for their videos, which we think is really cool. We will probably integrate these APIs more into Revideo in the future.

fanfanfly
1 replies
2d8h

Is Revideo equivalent to Pymovie, or does it have some advantages?

hkonsti
0 replies
2d8h

If you mean MoviePy, then yes, there are some advantages. As I mentioned in another comment, you can ship Revideo as part of your website and run it inside the browser. One of the reasons we struggled with MoviePy was that we couldn't preview our changes in real time, which made the developer experience a bit tedious. We also like the syntax of our generator functions better, but that is of course subjective.

epiccoleman
1 replies
2d21h

This is really cool, I love this sort of thing.

hkonsti
0 replies
2d21h

Thank you! Love to hear it :)

earlyriser
1 replies
2d23h

I have used Revideo for a personal project and I really like what you're doing.

justusm
0 replies
2d23h

Thanks a lot! Really glad to hear this feedback :)

creativenolo
1 replies
2d20h

This looks like lots of fun.

I’ve only skimmed the docs and nothing jumped out on this: would it be possible to use a 3d canvas context? For example, integrate a dynamic three.js layer/asset into the video?

hkonsti
0 replies
2d20h

We haven't played around with this yet, but it is possible. There are people working on this in the Motion Canvas community, which is the animation library we're based on, and it should be pretty straightforward to adapt this for Revideo. https://github.com/motion-canvas/examples/tree/master/exampl...

bobosha
1 replies
3d

Python support?

hkonsti
0 replies
3d

This is unfortunately not something that is currently on the roadmap. Revideo mainly runs inside the browser, which is why we built it entirely in TypeScript. You can check out moviepy (https://github.com/Zulko/moviepy) for a similar project written in Python.

andrewstuart
1 replies
2d23h

Are there commercial use cases for this?

hkonsti
0 replies
2d23h

Certainly! You can pretty much build full video editors with Revideo. There are many companies with video editing at the core of their product that could have been built with us (think veed.io, invideo.io, or the editor part of www.synthesia.io). We're thinking about building a template in this direction so people have a starting point when building these kinds of apps. In terms of video automation, we've seen people build apps for marketing and enterprise learning. One marketing tech company uses Revideo to generate video ads from their customers' product catalogues.

albert_e
1 replies
2d8h

Great idea.

When text-to-code capabilities of LLMs become more mature, libraries like these are going to create a lot of novel use cases and opportunities.

justusm
0 replies
2d8h

Thank you! A lot of people have mentioned this to us already - we did not think about the LLM use case when getting started with Revideo, but it does make a lot of sense that it can be used to build text-to-video products and fully automate video editing.

As you mentioned, letting LLMs generate high-quality Revideo code is probably not yet possible, but something I believe could make a lot of sense is using LLMs to choose parameters for your video. Video parameters can get as complex as you like in Revideo (any JSON object is accepted), so you could build a very flexible template that lets you define pretty much any video through a "universal" JSON representation - essentially a list of elements such as audio files, videos, images, and texts, along with attributes describing the timing, position, etc. of each element (sketched below).

Letting the LLM generate input variables for your template instead of the code would at least allow you to do type checking and ensure that there are no bugs in your video code. However, I still doubt that the resulting video would be of high quality - LLMs are probably not yet smart enough for this.
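
To make the "universal JSON" idea concrete, a parameter object for such a template might look like this (purely illustrative - these field names are something a template author would define themselves, not a built-in Revideo format):

    // Hypothetical "universal" video description passed to a Revideo template
    // as its parameters; none of these field names are part of Revideo itself.
    const videoParams = {
      fps: 30,
      elements: [
        {type: 'video', src: 'https://example.com/clip.mp4', start: 0, duration: 5},
        {type: 'image', src: 'https://example.com/logo.png', start: 1, duration: 4,
         position: {x: 100, y: 50}},
        {type: 'audio', src: 'https://example.com/voiceover.mp3', start: 0},
        {type: 'text', content: 'Big Summer Sale', start: 2, duration: 3,
         position: {x: 0, y: -200}},
      ],
    };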

akio10
1 replies
2d22h

Really nice - need to try it with a few hobby use cases.

justusm
0 replies
2d22h

Thank you! Would love to hear your feedback when you give it a try :)

SquidJack
1 replies
2d8h

How would I build a simple video editor, e.g. for splitting videos like that?

justusm
0 replies
2d8h

The best way to get started if you want to build a simple editor is our SaaS template. It's a simple Next.js app that embeds the Revideo player and lets you export your video with a button click: https://github.com/redotvideo/examples/tree/main/saas-templa...

To learn how you would build a template for video splitting, feel free to check out this example: https://github.com/redotvideo/examples/tree/main/stitching-v.... It shows how you can render and concatenate videos for desired timestamps (e.g. show only seconds 5 to 10 of a video).

KhoomeiK
1 replies
3d

Interesting - LangChain seemed kinda like unnecessary abstractions over natural language (since everything is just string manipulation), but with AI video, there are so many different abstractions I'd need to handle (images, puppeting, facegen, voicegen, etc.).

Seems like there might be room for a "LangChain for Video" in this space...

hkonsti
0 replies
3d

I agree! We were definitely motivated by the emergence of AI tools for video when building Revideo - as mentioned in the post, we think that a lot of video creation can be automated using AI. Currently, there are probably some higher-priority challenges related to the core rendering library that we need to solve, but we've definitely already thought about building a universal client library for common AI services that are useful for generating videos (e.g. text-to-speech, text-to-image, text-to-video).

Jayakumark
1 replies
2d6h

Does it support Lottie graphic templates?

matsemann
0 replies
2d7h

Is it possible to render and export the video in browser, preferably faster than playback speed?

The use case is a service where people can upload certain data, and I use that data to generate a video. Let's say I gave you the option to make a speed-gauge video that displays the values you input, one after another, for a second each. If you upload 60 values, that will be a one-minute video. But if you upload your speed each second for an hour, that will be an hour-long video, which should ideally not take an hour to render. Unfortunately, most browser-based tools I've seen can't render faster than playback, so the user would have to watch the whole video to actually download it.

darepublic
0 replies
2d

The thing I am dubious about with many of these AI tools is having fine control over the details.

Loiro
0 replies
2d20h

Looks like a cool tool. Will play around a bit, thanks for sharing!

LittleOtter
0 replies
2d15h

That's so cool!!! Thanks for your wonderful work!