Our biggest GHA fees come from running on MacOS. Do you offer MacOS as a managed service (or plan to?) and how much cheaper is that than GitHub?
Congrats on the launch. Looks interesting. Quick thoughts on the landing page:
- Pricing looks awesome.
- I'm not currently the target audience because everything I'm doing right now is open source with free GitHub actions.
- I'm left wondering what the catch is / why it's cheaper and faster.
- Visual nit: lacking horizontal padding from 990px to ~1200px, a common window size on my 14" MBP.
Ubicloud is an open source cloud. Think of it as an open alternative to cloud providers, like what Linux is to proprietary operating systems. You can self-host Ubicloud or use our managed service.
I find this hard to parse, and the first few times I thought you were saying it's a Linux alternative. I just clicked through to the docs, and the "What is Ubicloud?" section is clearer because you say concretely and directly what it is rather than how I should think of it metaphorically: "infrastructure-as-a-service (IaaS) features on providers that lease bare metal instances, such as Hetzner, OVH, and AWS Bare Metal. It’s also available as a managed service."
There's some old "counterintuitive" adage I'm too lazy to look up about how the best marketing to engineers is just saying in concrete language what it is rather than the benefit it provides. In this case, I'd do both: tell me what it actually is and why that makes it cheaper/better. Also a minor note, there's a typo in that paragraph (missing space "systems.Ubicloud").
I'm left wondering what the catch is / why it's cheaper and faster.
So, while this is the first time I've heard of Ubicloud, I use GHA extensively.
And frankly, I think it's just because GitHub has a crazy markup on their Actions compute over the raw compute. Taking a quick look, it appears that their base rate is something like $0.008 per minute!! That's a figure that wouldn't look out of place as an EC2 hourly rate.
I've worked on projects where we saved significant money, and improved build times just by launching a single EC2 instance and connecting it to Actions.
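To put rough numbers on that markup: GitHub's published rate for the standard 2-vCPU Linux runner is $0.008/minute, while the EC2 figure below is an assumed ballpark for a comparable on-demand instance, so treat this as a sketch rather than a precise comparison.

```python
# Back-of-the-envelope: GitHub-hosted runner pricing vs raw EC2.
# The EC2 rate is an assumption (roughly an m5.large, 2 vCPU / 8 GB,
# on-demand in us-east-1); actual prices vary by region and over time.
gha_per_minute = 0.008              # $/min, GitHub standard Linux runner
gha_per_hour = gha_per_minute * 60  # -> $0.48/hour

ec2_per_hour = 0.096                # assumed comparable on-demand instance

markup = gha_per_hour / ec2_per_hour
print(f"GitHub Actions: ${gha_per_hour:.2f}/hour")
print(f"EC2 (assumed):  ${ec2_per_hour:.3f}/hour")
print(f"approximate markup: {markup:.0f}x")
```

Even with generous assumptions about the EC2 side, the gap is several-fold, which is roughly the savings people report from self-hosting.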
IOPS on GitHub runners are also terribly slow; you easily get a 5x to 10x improvement
We did the same, and set up GitHub actions runners on hetzner
Halved the integration test time and made the tests more reliable.
Just a catch with Hetzner vs AWS: pricing at AWS is per-second while Hetzner is per-hour, which is (very) inconvenient if you're launching ephemeral runners.
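To make the inconvenience concrete, here's a sketch with a hypothetical hourly rate: under per-hour billing, an ephemeral runner that lives five minutes still pays for the full hour.

```python
# Cost of a short-lived CI runner under per-second vs per-hour billing.
# The hourly rate is a made-up example figure, not a quote from any provider.
hourly_rate = 0.05     # $/hour
job_minutes = 5        # ephemeral runner lives only as long as the job

per_second_cost = hourly_rate * job_minutes * 60 / 3600
per_hour_cost = hourly_rate * 1  # rounded up to a full billed hour

ratio = per_hour_cost / per_second_cost
print(f"per-second billing: ${per_second_cost:.4f}")
print(f"per-hour billing:   ${per_hour_cost:.2f} ({ratio:.0f}x more)")
```

Recycling runners for the full hour they're paid for, as described in this thread, amortizes that overhead back down.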
While that's true... I am using TestFlows-GitHub-Hetzner-Runners, which recycles runners for the 1-hour lifetime; works like a charm so far
Also, the main reason for switching was performance for us, not cost.
I've heard CI/CD in general and GitHub Actions specifically is where old cloud hardware goes until it dies.
If we're doing nits, because this product looks cool, here are a couple of potential tweaks:
Imagine to do more
I'd get rid of this. I don't understand the phrase, and it sounds like fluff.
Fast runs even at this price point
I'd get rid of the "point". "Price point" isn't a synonym for "price", which I think is what's being attempted here. I'd be tempted to just have no tagline, and retitle this section "Faster than GitHub Actions". You've already said it's cheaper.
Ubicloud is an open, free, and portable cloud. Think of it as an open alternative to cloud providers, like what Linux is to proprietary operating systems. You can check out our source code in GitHub or see Ubicloud runners in action for our GitHub Actions. An open and portable cloud gives you the option to manage your own VMs and runners, should you choose to.
This is woolly. How about: Ubicloud is an open and free cloud. You can run it on the hosting provider of your choice, or bring your own hardware. Check out our source code on GitHub!
Much appreciated! I've made a few minor tweaks right now; we will do a more complete revision later on.
Fast runs even at this price
what about: 'Cheaper doesn't mean slower'? It's pithier, and (in particular for anyone reading before/without looking at the page) better IMO in its place as a subheading. Scans better. Or even 'Cheaper != slower' (again, subhead).
Nitpicking is contagious. Once one does it, then immediately others feel compelled to do it too, me included...
Their pricing page looks amazing. Not because of any funky UI stuff but because of the slight displacement of the decimal point.
I'm left wondering what the catch is / why it's cheaper and faster.
I don't think it's very expensive to run a build server.
Not while it works. But when it doesn't, you have an office full of people waiting for someone to fix that broken build server. In terms of lost productivity, this disaster is very expensive. Over the years I have suffered only three hardware disasters. One was a NAS+backup screwup, but the other two were both related to build servers...
This reminds me. It must have been 2009ish. I was at Microsoft. Part of the code I was responsible for ran on the Windows build servers. This code was triggering a kernel bug in the Windows registry. The only way I knew how to reproduce it was by building Windows. Because thousands of Windows builds happened per day, maybe 1 of them would hit it every 1-3 days.
The way I got it into a kernel debugger was to constantly run the problematic race condition inside a VM; when it detected that it hadn't hit, it would roll back the VM to a saved state of the code already running in progress... It took an overnight run to hit it on a machine in the office.
All that said, I'd still say it's not very expensive to run a build machine.
Thank you for your feedback! I've just made several edits to the text for clarity; will also fix UX bugs and do a bigger update in the upcoming weeks.
big improvement! good luck
We've been using Ubicloud builders for our Rust project [0] for several months, and it's worked very well. We've seen CI times go from 10-15 minutes to 6-7, and our bill has gone from $300/month to $30.
One counter-intuitive thing we found is that it's slow to save and restore caches, but the machines have good CPU, so for us it's been faster to disable cache entirely and just redo everything on each build.
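That trade-off boils down to simple arithmetic: a cache only pays off if restore + save overhead is smaller than the build time it saves. A sketch with hypothetical timings:

```python
# Is a CI cache worth it? All timings below are hypothetical examples.
def cache_worth_it(restore_s, save_s, build_with_cache_s, build_cold_s):
    """True if restore + cached build + save beats a cold build end-to-end."""
    return restore_s + build_with_cache_s + save_s < build_cold_s

# Fast CPUs, slow cache network (the situation described above):
fast_cpu = cache_worth_it(restore_s=120, save_s=90,
                          build_with_cache_s=60, build_cold_s=180)

# Slow build, fast cache (more typical on GitHub-hosted runners):
slow_build = cache_worth_it(restore_s=20, save_s=15,
                            build_with_cache_s=60, build_cold_s=600)

print(fast_cpu, slow_build)  # caching loses in the first case, wins in the second
```

Faster machines shrink the cold-build side of the inequality, which is exactly why caching stops paying off on beefy runners.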
so for us it's been faster to disable cache entirely and just redo everything on each build.
I wonder what the consequential "carbon footprint" of this is, at scale, across all companies and all jobs of a similar nature
Also consider the carbon footprint of using caches! Apparently it takes longer, because it has to send/recv more data, compress and decompress, and cause load on other systems...
It's really kinda impossible to judge the carbon footprint for these kinds of things, and whether it's justified. Consider the carbon footprint of all the dumb AI features rolled out at Facebook, Google, etc. Consider the carbon footprint of everyone trying to use ChatGPT now. Do you know how much power GPUs are guzzling to give each little answer? It's really huge compared to a traditional Google search! Is it worth it? ... who can judge, eh
There’s a French think tank called “The Shift Project” which actually produced a report estimating CO2 impact of data transfer a few years ago [1]!
The numbers are obviously very rough; there’s a LOT of factors to consider and which vary from one node to another. But the methodology is quite comprehensive, e.g. they factor in power consumption of the end user’s device while waiting for the data to transfer over the wire
Green software engineering is interesting to me because it seems like a good way to impact greenhouse gases _without_ requiring consumers to change anything about their lifestyle (which is the hard part of climate change…) There’s a cool presentation from Rasmus Lerdorf from around the time when PHP7 released where he estimated a 50% adoption rate of PHP7 would result in a saving of 3.5B kg CO2/year iirc, purely because of the compute efficiency gains from PHP6 -> PHP7.
I used their numbers to calculate that swapping our CI/CD clones at my day job over to shallow clones saved (maybe) 5.5 kg/week of CO2 emissions [2]. Not quite as impressive as the PHP7 figures, but I still think it’s kinda neat.
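For anyone curious how such an estimate is structured, it's essentially transfer volume times a network energy-intensity factor times a grid carbon-intensity factor. The constants below are assumptions for illustration, not the Shift Project's actual figures:

```python
# Rough CO2 estimate for CI clone traffic. Both constants are assumptions;
# real-world network energy intensity and grid carbon intensity vary a lot.
KWH_PER_GB = 0.06       # assumed energy per GB transferred
KG_CO2_PER_KWH = 0.4    # assumed grid carbon intensity

def weekly_co2_kg(gb_per_clone, clones_per_week):
    gb_total = gb_per_clone * clones_per_week
    return gb_total * KWH_PER_GB * KG_CO2_PER_KWH

full = weekly_co2_kg(gb_per_clone=1.2, clones_per_week=500)      # full clones
shallow = weekly_co2_kg(gb_per_clone=0.05, clones_per_week=500)  # shallow clones
print(f"estimated savings: {full - shallow:.1f} kg CO2/week")
```

The structure is the interesting part; the output swings by an order of magnitude depending on which factors you plug in, which is why such estimates are always rough.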
It was PHP 5.4 -> PHP 7.0. There was no 6.
One of the goals of my own hosted runner project [1] is to display the carbon cost and money cost after each run. Too many people are oblivious to the costs, and I think as an industry it would be great to at least get the data points.
[1]: https://runs-on.com
This is pretty much exactly what I wanted (yay for custom images) but the pricing makes it a non starter. Please consider a tiered monthly pricing based on build minutes.
I mean, you could also consider how much the transition to ephemeral builders and containers is costing us in general, moving from (admittedly more brittle) build machines that would keep build artifacts alive on the local hard drive as a matter of course.
One counter-intuitive thing we found is that it's slow to save and restore caches, but the machines have good CPU, so for us it's been faster to disable cache entirely and just redo everything on each build.
The link to SPDK was very interesting: https://www.ubicloud.com/blog/building-block-storage-for-clo.... I use filesystems for very high performance applications, and I've found ZFS to often be the limiting factor when compared to simpler solutions of XFS +- mdadm +- encryption.
It's a controversial point, but others have made similar findings: https://klarasystems.com/articles/virtualization-showdown-fr... : "Although I suspect this will surprise many readers, it didn’t surprise me personally—I’ve been testing guest storage performance for OpenZFS and Linux KVM for more than a decade, and zvols have performed poorly by comparison each time I’ve tested them"
OpenZFS seems to be starting to consider optimizations to perform better on modern drives (SSD, NVMe), which have very different performance profiles from what ZFS was built for (spinning rust)
In the SPDK summary they say "To make VM provisioning times go faster, we changed our host OS from ext4 to btrfs" (...) "Also, when we switched the host filesystem to btrfs, our disk performance degraded notably. Our disk throughput dropped to about one-third of what it was with ext4."
Ubicloud: the problem seems to be generic to CoW filesystems, and it's interesting that you came up with a slight variation (CoA), but have you considered the even simpler alternative of a journaling filesystem (XFS, ext4...) with overlays?
Or just UFS2 + snapshots to restore from a given state (initialized, ready for each test) then restore to this state between tests?
I think customers finding that disabling cache works better means the CoA has similar issues to CoW.
Personally, I'd have just tried using SR-IOV with a namespace per customer and called it a day instead of bringing in extra complexity, but there must be good reasons for it. I'd love to know what those reasons are.
In this case, I don't think the issue is due to filesystem performance. Someone from Ubicloud can correct me, but my understanding is that for custom runners Github still stores the cache on their side. So Ubicloud (in Europe) needs to transfer the cache from Github (in the US) on every run.
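Some quick numbers on why that round-trip can dominate (the throughput figure is an assumption; actual sustained transatlantic rates vary widely):

```python
# Time to pull a CI cache across the Atlantic on every run.
# Sustained throughput is an assumed figure for an EU <-> US HTTP transfer.
cache_gb = 2.0
throughput_mb_s = 30  # assumed sustained MB/s

transfer_s = cache_gb * 1024 / throughput_mb_s
print(f"~{transfer_s:.0f}s ({transfer_s / 60:.1f} min) each way, per run")
```

At that rate, a couple of gigabytes of cache already costs over a minute in each direction, which lines up with caching being a net loss on fast machines.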
Hi, I work for Ubicloud.
Yes, that is correct. We are also working on implementing our own caching, which should speed up cache downloads/uploads significantly.
Thanks for sharing! I've been looking at the repository and noticed that some jobs are still running on Github hosted runners. What is the point of using them and not running everything on Ubicloud?
We have one very slow job (our Rust CI, for example: https://github.com/ArroyoSystems/arroyo/actions/runs/7702793...) and a bunch of little jobs that take a few seconds (checking lints, etc.). We never bothered to switch those over because they complete quickly on github and fit within our included runner minutes.
I should say we also use BuiltJet for docker builds because of their arm support. Now that ubicloud has arm we may switch those over as well.
I love to see examples in the wild of repos having switched to a non-official GitHub Action runner. I'm slowly preparing a collection of repo timings comparing GitHub vs Buildjet/Warpbuild/Ubicloud vs my own solution RunsOn.
In your case your workflow can run in less than 5 minutes on AWS ephemeral machines, for the same price as ubicloud: https://github.com/runs-on/arroyo/actions/runs/7723361513/jo...
wipe out the block storage device attached to the VM
Does this guarantee that a subsequent job won't be able to recover the data?
If the data is also encrypted at rest (as they claim it is), then even if the raw data is recovered, it shouldn't be usable without a key leak on top.
I work at Ubicloud.
Although we have KEK and DEK code for regular VMs, it is not operative on GHA... yet. The reason has to do with a technical conflict with copy-on-write that we aim to close, not least because Ubicloud needs to grow its own copy-on-write features for block device snapshots, things we lack today.
I expect within a few months, all expired GHA vms will be cryptoshredded upon their deletion. This is already true for regular virtual machines or managed postgres machines.
because Ubicloud needs to grow its own copy-on-write features for block device snapshots, things we lack today.
Just, why? CoW (or your own CoA) is rife with performance problems. How exactly do you benefit from its use?
The GitHub image is 86GB and people want an action VM to start reasonably quickly, so a full copy for every run isn’t going to work so well.
As a side note, isn’t it nuts that GHA operates on a principle of “installing the universe” (and apparently the universe is 86GB) and updating about every week, and it’s not total chaos? I was surprised, but it seems to work.
As a side note, isn’t it nuts that GHA operates on a principle of “installing the universe” (and apparently the universe is 86GB) and updating about every week, and it’s not total chaos? I was surprised, but it seems to work.
As someone who has authored a GitHub Action to delete like 85% of the hodge-podge stuff blasted all over a default runner image, to free up more space for a Nix store: it's "nuts" indeed.
Any GitHub link for your code?
Yes, a subsequent job won't be able to recover the data.
We completely shut down the VM and remove the block device (all files associated with the block device).
I think the concern is that a subsequent allocation would have blocks from a previous allocation that could be readable.
Well, it’s the same as regular GHA runners. On GitHub provided runners you need to explicitly move data between jobs. I find that useful in preventing weird side effects in one job affecting another.
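For anyone unfamiliar with that model, a minimal sketch of the explicit hand-off between jobs on stock runners (job, step, and file names here are illustrative):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make dist
      # Nothing on this VM survives the job; artifacts must be uploaded explicitly.
      - uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/
  test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      - run: ./run-tests.sh dist/
```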
SOC2 ?
are you pushing PHI/PII through github actions?
It doesn't matter - the pipeline needs to be trusted because it has access to sensitive resources for deployment tasks, can fake test results, etc.
Even though it is a bit of a PITA to maintain self hosted runners, it is the reason we do it.
GARM can easily manage ephemeral runners for you: https://github.com/cloudbase/garm (Ephemeral runners are also more secure)
Actions have access to environment secrets. Those secrets can open the door to PII.
Sigh. Please don't call the Elastic license "open source". It's nice that the source is available, but this is not an open source license.
EDIT: per responses, it looks like this is outdated information and the project now uses AGPL!
The docs still say the Elastic license is used but looking at https://github.com/ubicloud/ubicloud/blob/main/LICENSE it looks like the project might have switched to GNU Affero General Public License v3.0 in the last day.
thank you and that's correct, just updated the docs as well.
Wow. When you guys first launched this was my biggest concern. This is absolutely awesome, thank you team!
That's wonderful news, thank you very much for switching to an actually open license!
The linked page at https://www.ubicloud.com/docs/github-actions-integration/qui... still says "Source open under the Elastic V2 license", and https://www.ubicloud.com/docs/about/pricing still says "it's open and free under the Elastic V2 license". Not sure if those were missed or if the docs just need some time to refresh from their sources.
Am I missing something, or can GitHub easily block all third-party runners if they want to?
Of course, but doesn't GitHub embrace these runners?
Depends on how you define "embrace". Services like Ubicloud violate the GitHub Terms of Service.
Can you elaborate on how and why?
I was looking for a macOS M1 runner, which is not provided by GitHub. I'm willing to pay for that, but it seems that only Linux runner types exist for now.
WarpBuild[0] provides Apple Silicon macOS runners powered by M2 Pros. Note: I'm the founder. [0] https://docs.warpbuild.com/runners#macos-m2-pro-on-arm64
Launched today: https://github.blog/changelog/2024-01-30-github-actions-intr...
Cirrus Runners has been doing it for a long time now:
Feels like there is another one of these every week.
It is relatively easy to roll out a solution to this problem and GitHub doesn't care about competition in this niche.
Might be a sweet spot. I'm thinking (a) well-understood problem (b) hardware's rented, and scales with use (c) lots of teams have CI costs high enough to be annoying, but not so high that they need authorization to change their supplier (d) github are marking up the service so hard that it's easy to compete on price.
Yes, I wonder when/if GitHub decides to drop prices at some point.
But in the meantime I’ve recently released RunsOn [1] with the same promise (10x cheaper, faster) but the whole thing runs in your AWS account.
Hi there, I'm Ozgun, one of Ubicloud's founders.
We have dozens of customers using Ubicloud runners in production today. We’re now designing our caching layer (Docker instance registry, Docker layer cache, or package cache). We wanted to put this out there for any comments.
Also, if you have any points related to the broader topic of an open and portable cloud, please pass them along!
Yes, please do what Depot does and put fast persistent disks close to builds to cache Docker layers. GitHub Actions runners, CircleCI, and all the others adding expensive network calls to manually cache layers has always been such a time sink, and I think it pushes lots of people to remove caching entirely.
Depot [0] founder here. Thanks for the mention. We're also planning on bringing a bit of a different take to GitHub Action runners that's not tied to Hetzner directly. It will be entirely open-source as well, so you can take it and run it on your own instances if you'd like. Similar to how Depot supports self-hosted builders in your own AWS account [1].
What stops GitHub from shutting down these offerings, from a legal or technical perspective? These alternatives clearly violate their terms of service:
"Additionally, regardless of whether an Action is using self-hosted runners, Actions should not be used for: the provision of a stand-alone or integrated application or service offering the Actions product or service, or any elements of the Actions product or service, for commercial purposes"
No that means you can't create a CI/CD competitor that's 'hosted' in Actions. (e.g. Install OJFordCI GitHub app, pay at ojfordci.com/sign-up, view your CI results at ojfordci.com, but actually also at github.com in the Actions tab on your repo.)
They absolutely support custom runners, it's how all these work, they don't need to stop them via ToS, they can just not allow it as an option. `runs-on: ubicloud` only works because GitHub implements it right.
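Concretely, the switch on the user's side is a one-key change in the workflow file (sketch; the step contents are illustrative):

```yaml
jobs:
  build:
    # was: runs-on: ubuntu-latest
    runs-on: ubicloud
    steps:
      - uses: actions/checkout@v4
      - run: make test
```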
I read that as them not wanting you to use Actions as part of your commercial offering.
E.g. don’t use Github as Infrastructure as a Service.
How does this compare to BuildJet?
It's cheaper, for one
given that the landing page looks almost pixel by pixel inspired by BuildJet, I'd say the answer is "very comparable"
Gitlab + Home laptop runners = free
Been using this setup for a year, very happy with it. I don't see the point of GitHub, to be honest.
In a similar vein, using Gitea + self-hosted runners (including macos) here, very happy although we did do some of the work to make the "CI stack" (contributed back to gitea and act projects) so not totally batteries included yet. One thing that helps quite a bit imho is to avoid virtualization -- our approach is to run all CI jobs in containers, not VMs. Yes this has isolation implications, and requires some futzing to get docker-in-docker and docker-in-docker-in-docker to work (shout out to the Earthly team for figuring out how to host kind/k8s inside a container), but the "runs on any computer" property of containers (vs virtualization) is powerful. Want CI on your laptop? No problem. On a Windows machine? No problem.
For what it's worth, you can also self host your runners without Ubicloud on Github. That doesn't remove other reasons to not run Gitlab, but it's also not unique. I've self hosted runners for a variety of reasons on Github with great success.
How does this pricing work? With purely spot-based AWS runners we can barely reach the 10x cost threshold compared to GitHub runners.
Edit: Oh, Hetzner. They’re really ubiquitous in cheap computing these days.
You can reach 10x cheaper with just AWS, especially on the larger runner sizes. That is even including disk costs.
I put a calculator for RunsOn at https://runs-on.com/calculator/
I mean, I did just say that, but it seems to me they can’t just resell AWS spot instances because there’d be no margin left.
This looks great and the pricing is obviously a big improvement, I will try it out for our Linux jobs.
Sure would be good to see competition for the macOS and Windows runners too; these are the ones that tend to cost us the most.
We at WarpBuild currently support mac instances (M2 Pros). Windows support will come soon.
One thing I find frustrating about GitHub Actions' runner pricing is that it's calculated by the minute. Couldn't you bill by the second instead? Maybe set a minimum of 1 minute, but after that charge by the second?
I assume this is done to cover the time that the VM reboots between jobs?
This is exactly what we do at WarpBuild, more in the interest of fairness and not passing on random costs to users.
The VM reboot times add up quickly, and that's likely the rationale. However, it gets hard when there are users running linting jobs that take ~2s but run them on 16-vCPU instances, so we kept the 1-minute floor.
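The effect of the 1-minute floor plus rounding is easy to quantify. Using GitHub's published $0.008/min rate for the standard 2-vCPU Linux runner (the job durations below are hypothetical):

```python
import math

# Per-minute billing rounds every job up to a whole minute, with a 1-minute floor.
def billed_minutes(seconds):
    return max(1, math.ceil(seconds / 60))

rate = 0.008  # $/min, GitHub standard 2-vCPU Linux runner

# A ~2s lint job is billed as a full minute: a ~30x overcharge vs per-second.
lint_cost = billed_minutes(2) * rate

# A 61s job is billed as 2 minutes: nearly 2x the per-second cost.
actual = 61 / 60 * rate
billed = billed_minutes(61) * rate
print(f"lint: ${lint_cost:.3f}; 61s job: billed ${billed:.3f} vs ${actual:.4f} actual")
```

Short jobs are where the rounding stings; for anything over a few minutes the overhead fades into the noise.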
Seems like lots of people are thinking about this. We are rolling our own, and I spent a good chunk of today writing a new scheduler. Definitely fun to play around with.
We've been happily using BuildJet [0] for over a year now.
Saved over $25k in CI costs compared to GH Actions - they're also using super powerful bare metal servers with Hetzner - we got about a 94% reduction in build time!
Absolutely chuffed; good to see more companies in the market, though.
Looks neat. I've been using https://github.com/philips-labs/terraform-aws-github-runner to save on $
Hello, can I pay with bitcoin?
Congrats on the launch! I hope Ubicloud will be even more successful than the previous project, Citus!
Coincidence maybe? GitHub just launched M1 macOS machines today. https://github.blog/changelog/2024-01-30-github-actions-intr...
Amazing, testing this out tomorrow!
I always imagine the github runners as a vast array of early model raspberry pis.
Nice intro, but I am a bit worried about not seeing SOC 2 or something similar.
I didn't see any mention of caching in the docs. This tends to be pretty important for fast builds/CI at many different points of a given workflow (dependency cache, build cache, Docker layer cache, image cache, etc.). Wondering if I missed something.
Where is your company incorporated? (e.g do you comply with the GDPR?). This should be stated on the website.
I am hearing about Ubicloud for the first time and it sounds very good. I don’t have a need for a cheaper GitHub runner now, but I’ve been dreaming about a modern, less complex open-stack alternative for some time. Also, having a cloud service combined with the open source product seems like a great fit!
Will FreeBSD be supported in the future?
Looks like it's a long-running node? Or am I wrong?
Does it have on-demand workers like those Kubernetes providers, where you use shared master nodes (free or with a very small fee) and scalable node pools on demand (charged when used)?
We've been using the Ubicloud runner for a while at PeerDB [1]. Great value, and especially the ARM runners have been helpful in getting our CI costs down. The team is really responsive and added ARM runner support within a few weeks of us requesting it.
The people buying GitHub Actions are getting at least 10K minutes included as part of their EA agreement, and they can use their own runners at zero cost. I think most enterprises are more concerned with having their bits on a cloud provider than with pricing, but that's my limited experience.
I want a vendor that can take care of this for us, but that can guarantee a private egress range or can run things within our VPC.
Several builds connect to internal resources, so running them on external nodes is suboptimal and expensive when it comes to network egress.
Congrats on the launch, certainly good to see competition.
I was reading https://www.ubicloud.com/blog/ubicloud-hosted-arm-runners-10... but it seems a bit misleading to compare an ARM workload over QEMU vs. native ARM.
What about native x86 vs native x86 or at least native x86 vs native arm?
We've been running Ubicloud for a while at Resmo. It is indeed 10x cheaper. We upped our instance sizes by 2x for slightly more performance, but it's still 5x cheaper.
The main reason is that their platform is hosted on Hetzner dedicated instances.
We support GitHub MacOS 13 runners on M2 Pros at WarpBuild [1]. They're about 25% faster and 50% cheaper per minute compared to the equivalent GitHub hosted runners.
[1] https://docs.warpbuild.com/runners#macos-m2-pro-on-arm64
Nice, I don't think you supported them last time I looked. I'll be glad to get off GH's Mac runners, which are straight garbage; it's embarrassing, or it should be for them. The sheer audacity to charge 10x the Linux runners and still be so slow... At 10x the price they are 2x+ slower than the Linux runners for doing the same thing (checking out the repo, installing dependencies, running webpack).
I'm curious what you need macOS for if you're doing a JS project. Is it Electron?
Cross platform app development using JS (TS). Quasar is the framework I use. I build my Android apps on linux but need macOS to build the iOS apps.
Thanks! Hadn't heard of Quasar, looks cool though.
That is correct. We went live with them about a week ago. Will likely introduce it to the HN friends in the next couple of days.
I agree with the stuff you've mentioned, obviously, but also empathize a bit with the GitHub folks because of all the licences and limitations Mac runners come with.
The perf for comparable operations is squarely on them though.
Any timeline for supporting repositories in personal accounts? I'm willing to create an organization and move the repository, but I won't bother if it's right around the corner. macOS ARM runners are a gamechanger given the offensively high pricing GitHub offers them at.
We're not prioritizing personal accounts at this point, one of the reasons being to limit spammy accounts. Sorry about the hassle - you sound like a genuine user.
No worries. I've already created the organization; I doubt this will impact many serious users. I look forward to seeing your new Mac support on HN as a top level submission. Since GitHub doesn't give free macOS ARM minutes at all, this should be immediately interesting to anyone running such builds.
Have you seen this announcement by GitHub regarding Apple Silicon:
https://github.blog/changelog/2024-01-30-github-actions-intr... ?
Today, GitHub is excited to announce the launch of a new M1 macOS runner! This runner is available for all plans, free in public repositories, and eligible to consume included free plan minutes in private repositories. The new runner executes Actions workflows with a 3 vCPU, 7 GB RAM, and 14 GB of storage VM, which provides the latest Mac hardware Actions has to offer. The new runner operates exclusively on macOS 14 and to use it, simply update the runs-on key in your YAML workflow file to macos-14.
I had not seen it--thank you!
Not for the foreseeable future.
Ubicloud runs on bare metal providers and they don't lease Mac hardware. Technically, we could run MacOS VMs on arm64. However, our interpretation of Apple's End User License Agreement (EULA) tells us that we can't do this.
This repo has some good references on the topic: https://github.com/kholia/OSX-KVM?tab=readme-ov-file#is-this...
Would something like https://www.macstadium.com work for your use case?
That's exactly what we had looked at but we didn't want to be in the business of maintaining our own instances. We just want to use the GH runner part.
Biggest issue with macOS is that it requires a physical Mac (not virtualization), and IIRC the licensing from Apple requires a minimum "rental" period of 24 hours or something like that
Edit: the TOS for OS X says this:
3. Leasing for Permitted Developer Services. A. Leasing. You may lease or sublease a validly licensed version of the Apple Software in its entirety to an individual or organization (each, a “Lessee”) provided that all of the following conditions are met: (i) the leased Apple Software must be used for the sole purpose of providing Permitted Developer Services and each Lessee must review and agree to be bound by the terms of this License; (ii) each lease period must be for a minimum period of twenty-four (24) consecutive hours;
What I did for work was get a Mac Mini and set up multiple self-hosted runners in tmux (very hackish, I know). The builds became faster (and cheaper) because each run no longer had to download the dependencies again.
Of course, hosting myself means I also gotta own the uptime of it..
How does Github Actions get away with this? They certainly don't bill and provide the machine for 24 hours for each CI run.