I click on the file and "open with handbrake" and press "convert". I can't imagine how anything might be faster.
Not obvious how to make a file smaller with handbrake without a lot of trial and error
isn't that exactly the same with ffmpeg?
The handbrake GUI has a drop down list of encoding presets. I find that as an amateur, selecting one of those presets is the best way to make a file smaller.
Handbrake is supposed to make things simpler than ffmpeg, except that it does not.
I find that as an amateur, selecting one of those presets is the best way to make a file smaller.
These presets are not helpful when you want to try to make something specific like "I want to make this file fit in 3GB" which is something that an amateur typically wants to achieve.
It's not even that hard to actually do, which makes me wonder why Handbrake has never implemented this kind of thing.
Given how CD/DVD/Blu-ray have fallen out of the mainstream, I think the more typical use case is "I want the file smaller, while retaining FullHD, HD, SD quality" for archival / streaming, for which the profiles are fine.
Handbrake gives you no indication whatsoever what quality you are getting at the end of the day. Even though it's technically possible to do many things in that regard (make previews at different compression rates and let the user decide, or draw comparisons of video clips before and after compression highlighting where image detail is lost, etc...).
There's very little that is user-friendly in Handbrake.
I feel everyone saying Handbrake is fine has never used it; it's the least user-friendly app ever.
Much easier to google / llm for the right ffmpeg arguments
hahahahha
It might surprise you to learn computers are not magic...
I've used the CLI tool for a lot of my media library and it really isn't that hard; there are even wrapper scripts on GitHub that simplify the whole thing and are easier than Handbrake.
you're conflating user-friendliness with user power and capability.
An app can be very user friendly, but not give the user the power they didn't know they wanted. This is what Handbrake is - you get a nice GUI, you get to choose from a list of presets (unless you know what you're doing and customize it). Then you click go - you can even queue up more files.
Someone who is using handbrake is not going to learn the command line. How would they know how to queue up ffmpeg? Don't say batch scripts, because that's not something a normal user would use.
"I want the file smaller, while retaiming [sic] FullHD, HD, SD quality"
The problem is that "retaining X quality" is extremely vague and subjective. Also, a lot of people conflate resolution with quality (see YIFY Torrents). While yes, you need enough pixels to have reasonable quality, if the encoder set CRF to 50, 8K video won't save you.
I agree with ekianjo, it would be great if Handbrake had a 'fit to 700MB' option.
After some trial and error, I found that the output ended up roughly the original file size rather than the smaller file you'd expect from, say, a lower-resolution output.
You can do this with a two-pass encode and a calculator. Take your desired filesize and divide by the runtime of the video. Put that bitrate into the UI. (It does seem they could add this to the program.)
It would also work with a one-pass encode, but setting the bitrate that way risks wasting space in simple parts of the video and degrading quality in complex parts. Two-pass encoding takes longer but distributes bandwidth better.
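For the CLI-inclined, a rough sketch of the same idea using ffmpeg's two-pass flags (the file names and bitrates here are just placeholders; compute the video bitrate from target size divided by runtime as described above):

ffmpeg -y -i input.mkv -c:v libx264 -b:v 3712k -pass 1 -an -f null /dev/null
ffmpeg -i input.mkv -c:v libx264 -b:v 3712k -pass 2 -c:a aac -b:a 256k output.mp4

The first pass only analyzes the video and writes a log file; the second pass does the real encode using that log to distribute the bitrate.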
Melton’s transcoding tools for one
I haven't used it in a while, but Handbrake did have really useful presets for a variety of devices: it was useful because there are a lot of video compression knobs to turn (you can still do so after selecting a preset).
It seems like presets still exist.
https://handbrake.fr/docs/en/latest/technical/official-prese...
Handbrake can also rip dvds, if you give it the external libraries needed to decode them. The UI is great for finding the track with the actual movie and pulling in all the audio tracks and subtitles you want.
I have used this in the past, but I've never been able to get the subtitles to sync up properly. Typically they're copied over, but the first subtitle will start as soon as the video starts (as opposed to when the first spoken words start) and end when the video ends, with all the rest of the subtitles spaced out apparently according to the correct proportions, just at the wrong times.
I've tried fiddling around with when the subtitles should start, but because the subtitles end up stretched out so much, that makes all the later subtitles wrong. And I'm typically ripping a bunch of TV shows, so I want to do as little manual work as possible for each episode, because that all adds up quickly.
Do you know if I'm missing a library or a setting somewhere? Maybe I just need to try again with the latest version and see if it's been fixed over time.
Subtitles are an issue because even if they're "correct" from the source (as a synced track), they often have other issues; raw subtitles are still very much YMMV (Your Mileage May Vary).
If you care enough about subtitles and are converting for future reference, then it's worth using SubtitleEdit to correct | align | correct case | spell check | generate translations | etc and then merge the final (video + audio) tracks from HB with the subtitle tracks using (say) MKVToolNix.
https://nikse.dk/SubtitleEdit/
These are tools that can be streamlined and batched (with some degree of learning curve).
so I want to do as little manual work as possible for each episode
I typically transcode Audio+Video, separate out the subtitles automatically along with scripted "most common, least destructive" touchups, and then watch.
You can achieve this by dropping input file in a watched folder and having the results plopped out in a "to be viewed" folder.
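If anyone wants to replicate the watched-folder part, a rough sketch with inotifywait (this assumes inotify-tools and HandBrakeCLI are installed; the folder names and preset are just examples):

inotifywait -m -e close_write --format '%w%f' ~/watch | while read -r f; do
    HandBrakeCLI -i "$f" -o ~/to-be-viewed/"$(basename "${f%.*}").mkv" --preset "Fast 1080p30"
done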
Most of the time everything is A-OK .. when the syncing is out I correct it via SubtitleEdit and continue watching.
You can save Episode.mkv and Episode.srt together, or batch bind the srt into the mkv as you store, OR you can go to town with multiple subtitles if you're a media metadata nerd.
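The batch-bind step is a one-liner per episode with mkvmerge from MKVToolNix, something like (file names hypothetical):

mkvmerge -o Episode.muxed.mkv Episode.mkv Episode.srt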
Check out Subtitle Speech Synchronizer [1]. This uses speech recognition to listen to the audio track and makes whatever corrections are needed to the subtitles, outputting the result as a .srt file. Works great.
Thanks for that tip, and it seems to have a Posix version buildable from source, so I hope I can soon test it on Linux.
I rely on gaupol for subtitle editing.
HandBrake requires a patched version of libavcodec/libavformat libraries. Some Linux distributions like to link it to their system libavcodec/libavformat versions, and this breaks many things, DVD subtitles too. However, if you are using a version that's linked with the right libraries, it's probably a bug that should be reported.
I think the problem is linking to the “right libraries”.
I tried using the libraries from MakeMKV but found it easier to just two-step the process.
Additionally, vobcopy (with those libraries you've mentioned) is a great CLI video dvd ripper to clone the top level directory tree to disk rather than immediately transcode. I've used it many times to save old family DVD video stuff. From disk I can transcode it into modern formats with tools like Handbrake or Avidemux, depending on the recipient's needs.
I click on the file and "open with handbrake" and press "convert". I can't imagine how anything might be faster.
I use both. One thing about ffmpeg is that I prefer it when I'm being a control freak.
Basically, if I'm encoding a lot of videos, I will sometimes fail to notice that some box has been ticked in Handbrake, that will screw up my encode. In particular, Handbrake always defaults to resizing my videos, 100% of the time, and I always have to turn that off manually. If I fail to do that, it will resize my videos and I'll end up wasting an hour doing encodes, or 3-4 hours if I'm using CPU instead of GPU.
I can customize it
Which implies you actually understand all the ffmpeg opts and how they interact.
I very much doubt trial and error, or reading manpages, is "much faster" than picking a preset, ticking a box or dragging a slider in Handbrake.
I don’t know anything about them, ChatGPT figures it all out for me. I just say if it needs to lower quality or resolution, keep original, remove audio, keep subtitles etc
So you end up learning even less from that process than by using a CLI.
As long as you keep this behavior to non-critical code, it's just a little sad, because you are delegating something you could easily have learnt. For hype's sake, I guess.
But if you do use this technique in a work-related context, you are just going to produce average code that you won't be able to debug when something breaks...
Not defending intellectual incuriosity here, but in fairness ffmpeg is the antithesis of "easy to learn." I capture Laserdiscs (i.e. sampling the laser's RF output and decoding in software [0]) for fun, and use ffmpeg as part of that chain – I still barely scratch the surface of what it can do.
Do you use a Domesday Duplicator? Seems like fascinating stuff; hilarious (and impressive [0]) to see it used with VHS via VHS-Decode [1].
ffmpeg’s documentation and wiki are pretty comprehensive, though. I’d characterize it as ‘easy to learn, but hard to master’
you are delegating something you could easily have learnt
This is the only case I would ever use a text generator. If you cannot understand it, you cannot trust the output and you cannot learn it in case of doubt.
This is why it is so great for grammar and protocol, but very problematic for actual research questions.
You don't have to take the first output you get. It's as if you can play with it before you make it production. No one is suggesting putting "make the thing in a way" in a pipeline. Stop fighting strawmen.
ChatGPT is like gambling, even though with slightly better odds. It can suggest complete nonsense with seemingly great confidence.
If you have true experience in some field try to ask ChatGPT some questions and you will be shocked what nonsense it suggests, put in very nice words.
From a random blog post that a search machine brings up you can often get some clues whether the author has good understanding or just wrote down a random finding they had during trial and error. And that finding is still more on the side of error than doing it correctly.
In ChatGPT answers I don't see such hints.
You can further gauge the accuracy of a blog post if the author included links to sources. ChatGPT doesn't bother with sources because it can't trace any statement it makes to a source.
It will actively invent fake sources. Ask it to write a Wikipedia article with references and it makes up fake ones that sound like real things.
But chatgpt has no knowledge of the format of your file. Unless you provide the output of an ffprobe as part of your prompt. Like if the source is a dvd, there are all sort of ratio issues, deinterlacing, how you deal with subtitles, etc.
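Grabbing that info is a single command, roughly (file name is just a placeholder):

ffprobe -hide_banner -show_format -show_streams input.vob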
But ChatGPT said so, so it must be true!
Fun test:
Feed ChatGPT's output to a question, back as input to it, say it was from a human (1, amateur, 2, expert) and for each case, for that question, ask it if it is correct.
A ChatTuring (pronounced chattering) Test.
ffmpeg is the only program where I want to see visual no-code applied
But now I'm seeking a --chatgpt option similar to --help so I can navigate any man page.
Faster and more error-prone (and you can customize an app as well, so that's a wash)
Why do people use Handbrake? I throw up videos on Youtube, but it doesn't seem like Handbrake is something I need for that - or is it?
I am not a handbrake user, but not everyone wants to upload their vids to YouTube.
From the answers to my question, the use case primarily seems to be ripping DVDs.
My niche use-case: I have four young kids and a minivan with a DVD player for them. I've used it to rip DVDs that we own and burn just the main tracks to blank discs. That way the movie will automatically play, and play on a loop, when we insert the disc instead of going to a menu screen and requiring interaction with the remote control (which my wife and I can't do from the front seats, and my kids couldn't do in the early years). It also means that we handle the original DVDs much less often, so they don't wear out; I can always burn a new one when the DVD-Rs get scratched up.
An example is if you have a DVD as an ISO file and want to transcode that to a different format. HandBrake is more convenient for this since it handles ISOs natively, whereas with ffmpeg you will need to mount that ISO and operate on the VOBs manually.
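For comparison, the manual route on Linux looks roughly like this (a sketch; the title/VOB numbering differs per disc and the codec settings are just examples):

sudo mkdir -p /mnt/dvd && sudo mount -o loop,ro movie.iso /mnt/dvd
cat /mnt/dvd/VIDEO_TS/VTS_01_*.VOB | ffmpeg -i - -c:v libx264 -crf 20 -c:a aac movie.mkv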
Lots of platforms have a maximum file share size.
For instance, if you want to post a video to Discord, you have a ~50MB cap unless you pay like $10/mo
I use Handbrake to compress videos over that cap. I could post it to YouTube but I don't want to clutter up my account with hundreds of 20-30 second uploads.
YouTube will re-encode videos to optimize their playback experience, which means there will be generational loss in the re-encoding. YT has maximum requirements for playback; if the source file exceeds those, they will lower it.
Handbrake is useful for lots of things. My partner had to upload his dissertation defense video, and the university had strict requirements on the format. He used Handbrake to convert his video to their format of choice. It worked well for him on his aging 10-year-old MacBook Pro (HDD upgraded to an SSD years ago).
The only gripe that I have with HandbrakeCLI is that it cannot encode input piped into stdin. FFmpeg supports this and I was under the impression that handbrake uses FFmpeg under the hood.
Named pipes don't work?
Or some bash magic: `handbrake-cli -i <(cat video-file.mp4)`?
I never used handbrake-cli, only used the gui before, so I have no idea.
No need for cat…
I think it is needed in bash. In zsh you can do <(<file) but I don't think that redirect without a command works.
The cat command is meant as a placeholder for whatever other commands are used for printing out a file stream to stdout.
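For reference, the named-pipe version of the suggestion would look something like this (untested; some-producer is a placeholder for whatever writes the stream, and whether HandBrakeCLI accepts a non-seekable FIFO is exactly the open question, since it likes to scan its input first):

mkfifo /tmp/hb-in
some-producer > /tmp/hb-in &
HandBrakeCLI -i /tmp/hb-in -o out.mkv --preset "Fast 1080p30"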
HB (which surprised me when I first learned it) builds lots of components from scratch instead of just using FFmpeg (of course, it still uses the FFmpeg libraries extensively in some other parts).
It's one of the very few transcoders that isn't just a wrapper of FFMPEG (which is both an advantage and a disadvantage).
HandBrake uses some parts of FFmpeg libraries: libavformat, libavcodec, and libavfilter. But even so, it's an entire different app, the decoders, some demuxers and some filters are the same, but the way they are connected together is entirely different than FFmpeg cli app.
A while ago, even after the HB pipeline went 10-bit, a lot of the filters remained 8-bit, so it was easy to unknowingly degrade the encode by picking the wrong filter. I think most (all?) filters are 10-bit capable now. HB also had an inferior AAC encoder in the released version because FDK-AAC licensing prevented it from being bundled (though that might not be an issue now, as I've heard the bundled codec isn't that inferior anymore).
Are there any big gotchas with the current version of this nice app?
All filters have been capable of high bit depth since 1.6. I would say the AAC encoder quality is not too bad now (on macOS it can use Apple's AAC encoder, so it's not an issue there).
Anyway, the main issue is the lack of manpower, so many requested features that would be nice to have are stuck in limbo. I guess that's the same as with every open source project.
Is it possible to use Apple's AAC encoder on Windows in HB?
You can already do it manually (installing iTunes and some 3rd party open-source CLI tools), but it would be nice to incorporate it into HB.
I don't think so unfortunately:
We can't link to the Core Audio DLL at all, so it's not an option unfortunately.
Apache isn't GPL compatible. Nor is linking at run time to proprietary non-system libraries.
https://github.com/HandBrake/HandBrake/issues/191#issuecomme...
Windows has an autodiscoverable system of plugins for media playback; you don't even know what codecs you load. The Linux analog is GStreamer.
I know too little to properly contribute to the code, and unfortunately they do not accept donations, which I would have hoped would allow someone to work a little extra on the project.
For anyone disappointed that Handbrake doesn't allow you to specify a final file size and automagically figure out the rest, calculating this is straightforward.
average bitrate [kbps] = target size [kilobits] ÷ length [seconds]
Example: You have a 2h48m file that you need to be 5GB or less.
- 2h48m is 10,080s
- 5GB = 40,000,000 kb
- average bitrate = 40,000,000 kb ÷ 10,080s = 3,968 kbps
- If audio is 256 kbps, average video bitrate should be 3,712 kbps or less
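If you'd rather do this from the command line, a sketch of plugging those numbers into HandBrakeCLI (flag names may differ slightly between versions, so check --help on yours):

HandBrakeCLI -i input.mkv -o output.mkv --vb 3712 --two-pass -E av_aac -B 256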
If anyone from the Handbrake team is reading, thank you for all of your work on Handbrake. <3

This calculation only works if you encode at constant bitrate.
I usually encode at constant quality. The output size heavily depends on the input video. So I wrote a wrapper in Python that parses the HandBrakeCLI output and estimates the final size based on percent completed and the current output file size. Then I can stop early if I realize the file is gonna be huge / the output quality is so shitty that I have to bump the quality factor.
This calculation only works if you encode at constant bitrate.
This will need to be your average bitrate over the entire file, but the bitrate doesn’t have to be constant. It can (and ideally should) vary widely between static scenes and action sequences, for example.
Average bitrate would only be possible with a two-pass encoding though, I believe.
Yes! Or if you’re a masochist, lots of trial or error.
Technically still a multipass encoding if you have to manually launch the encoder multiple times with different parameters.
How’s this compare to ffmpeg?
It's a GUI for ffmpeg; they do the same things.
It's not a GUI for ffmpeg. It uses some parts and libraries from ffmpeg, but it's not a wrapper and not 1:1.
Broadly, Handbrake supports far less customization, accepts fewer source formats, and outputs fewer destination formats than FFmpeg does. This specificity allows Handbrake to do more work to understand input videos at a deeper level than FFmpeg does by default. For instance, FFmpeg includes options to concatenate, slice, superimpose, or filter multiple videos, while Handbrake only slices the input. Similarly, Handbrake includes options to select or burn in certain language subtitles tailored to the needs of TV show or anime watchers; FFmpeg defaults to copying only the first stream unless told otherwise.
As a second example, Handbrake only supports h264, h265, AV1, and a couple MPEG codecs. This means Handbrake can convert HDR video; it extracts that data and supplies it to the output encoder automatically, while FFmpeg doesn't do that for you.
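To make the stream-selection point concrete, with ffmpeg you typically need an explicit -map to keep every audio and subtitle track, roughly (a sketch; file names and settings are placeholders):

ffmpeg -i input.mkv -map 0 -c:v libx265 -crf 22 -c:a copy -c:s copy output.mkv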
Can anyone ELI5 why Handbrake says they can't implement a "target file size" option?
I use an Android video compressor that does this fairly well. But on the FR for this on the Handbrake GH one of the maintainers says it's not really feasible.
It's literally a feature of all the underlying encoders. And you can do it even with exotic ffmpeg/vapoursynth filter chains. So I can't imagine why.
TBH I would just recommend Staxrip if you are on Windows: https://github.com/staxrip/staxrip
There is a vapoursynth-based Linux equivalent, I can't remember what it's called.
Or maybe some av1an GUI. All of these things support a target file size with many more features than Handbrake.
Yeah I am not sure what the difficulty is tbh, but I know nothing about codecs and media encoding.
The FR issue is here, by the way:
https://github.com/HandBrake/HandBrake/issues/4640
Thanks for the suggestion on Staxrip, I've given it a download!
I always prefer ffmpeg to HandBrake, except when working on HDR videos.
I couldn't find a proper ffmpeg command to copy the HDR metadata from input source to output. Last time I checked it was not possible. I needed to extract the metadata manually (e.g. using MediaInfo), then pass each metadatum as an argument for ffmpeg.
Does anyone know if it's still the case?
Yeah, I think you still need to extract the metadata yourself and supply it manually.
To elaborate---and you already know this, but for the benefit of others---there are two common HDR video standards: Dolby Vision and HDR10. Both require custom support within the encoder (e.g. this is more of a libx265 thing, less of a libavformat/ffmpeg thing).
Fortunately, if your source video is HDR10, that means you can extract the global (unchanging) transfer functions and tone mapping and apply them yourself to the output metadata. FFmpeg can supply this to the encoder, but it doesn't copy it from the source to the destination by default. Here's an article that describes how to do this: https://codecalamity.com/encoding-uhd-4k-hdr10-videos-with-f...
I've been able to reencode one HDR10-encoded video from one format to another while preserving the metadata, the final command from my shell history was something like:
ffmpeg -i Movie-with-HDR.mkv -c:v libx265 -map_metadata:s:0 0:s:0 -map_metadata:g:0 0 -x265-params crf=21:master-display="G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,50)":max-cll=1000,240 Movie-output.mkv
where the `master-display` and `max-cll` settings are the color transfer functions from the first video that I had to extract from some other tool. These settings are documented in the libx265 parameters: https://x265.readthedocs.io/en/master/cli.html

The process for Dolby Vision is harder. Since that metadata is dynamic, I'm not sure how one could get it from the source, but it can be supplied to libx265 through a command line argument. Unfortunately, it's only exposed through the command line and isn't available to the API, so ffmpeg can't do this for you yet.
Other references:
- On the process of extracting the proper transfer functions and supplying them to ffmpeg: https://medium.com/@yllanos/how-to-encode-a-4k-hdr-movie-usi... and https://codecalamity.com/encoding-uhd-4k-hdr10-videos-with-f...
- A bunch of folks working together to do same: https://www.reddit.com/r/ffmpeg/comments/g3uucr/how_do_i_enc...
- On converting from dolby vision to HDR10, HLG vs PQ, https://www.reddit.com/r/ffmpeg/comments/nkxbay/how_to_conve...
- On subtleties of dolby vision: https://www.reddit.com/r/ffmpeg/comments/a32yv4/deleted_by_u...
Have you tried -movflags and use_metadata_tags?
ffmpeg -i $input_file -movflags use_metadata_tags -crf 22 $output_file
Source: https://video.stackexchange.com/a/26076

Why is there still no simple "limit video X to file size Y" feature? Handbrake thinks I care about some obscure Vimeo preset (of 50) when I really just want a 5GB video file.
Why do you want some obscure 5Gb limit? Do you have a drawer full of 5Gb USB sticks you're trying to fill up? 5Gb is clearly larger than a CD, and too small for a Blu-ray unless you're trying to put exactly 10 of them on there.
Your use case is as strange to the rest of the world as you thinking a Vimeo preset is strange to you.
Because file size limitations are still systemic on the web in thousands of services?
It brings joy that "Put that cocktail down. Your HandBrake encode is complete!" has lasted through all the years.
That's a nice touch, I love when people put some soul/ghost back in the machine a little.
I find FastFlix much better personally.
I prefer https://vidcoder.net/ on Windows, which is an alternative Handbrake UI
I used it in the past, and it used to be one of my favorite pieces of free software for macOS.
The release itself[0] would have been a better link.
It has the changelog I imagine most want to see.
0. https://github.com/HandBrake/HandBrake/releases/tag/1.7.0
Nice feature list! I’m particularly excited about:
* Improved performance on arm64 / aarch64 / Apple Silicon architectures
* Latest FFmpeg provides faster HEVC decoding, 30% faster bwdif filter
* New SVT-AV1 assembly optimizations provide up to 4x increase in performance
* Improved video conversion speed by removing unneeded frame copies for better memory efficiency
Nowadays I just ask ChatGPT to give an “ffmpeg” terminal command
Works much faster than any app and I can customize it