
Show HN: Open-source x64 and Arm GitHub runners

hangonhn
17 replies
1d1h

Our biggest GHA fees come from running on macOS. Do you offer macOS as a managed service (or plan to), and how much cheaper is it than GitHub?

suryao
10 replies
1d

We support GitHub MacOS 13 runners on M2 Pros at WarpBuild [1]. They're about 25% faster and 50% cheaper per minute compared to the equivalent GitHub hosted runners.

[1] https://docs.warpbuild.com/runners#macos-m2-pro-on-arm64

joshstrange
4 replies
1d

Nice, I don't think you supported them last time I looked. I'll be glad to get off GH's Mac runners, which are straight garbage; it's embarrassing, or it should be for them. The sheer audacity to charge 10x the Linux runners and still be so slow... At 10x the price they are 2x+ slower than the Linux runners for doing the same thing (checking out the repo, installing dependencies, running webpack).

mike_hearn
2 replies
23h40m

I'm curious what you need macOS for if you're doing a JS project. Is it Electron?

joshstrange
1 replies
23h15m

Cross platform app development using JS (TS). Quasar is the framework I use. I build my Android apps on linux but need macOS to build the iOS apps.

mike_hearn
0 replies
9h16m

Thanks! Hadn't heard of Quasar, looks cool though.

suryao
0 replies
1d

That is correct. We went live with them about a week ago. Will likely introduce it to the HN friends in the next couple of days.

I agree with the stuff you've mentioned, obviously, but also empathize a bit with the GitHub folks because of all the licences and limitations Mac runners come with.

The perf for comparable operations is squarely on them though.

electroly
4 replies
23h49m

Any timeline for supporting repositories in personal accounts? I'm willing to create an organization and move the repository, but I won't bother if it's right around the corner. macOS ARM runners are a gamechanger given the offensively high pricing GitHub offers them at.

suryao
1 replies
21h17m

We're not prioritizing personal accounts at this point, one of the reasons being to limit spammy accounts. Sorry about the hassle - you sound like a genuine user.

electroly
0 replies
17h48m

No worries. I've already created the organization; I doubt this will impact many serious users. I look forward to seeing your new Mac support on HN as a top level submission. Since GitHub doesn't give free macOS ARM minutes at all, this should be immediately interesting to anyone running such builds.

ayewo
1 replies
3h41m

Have you seen this announcement by GitHub regarding Apple Silicon:

https://github.blog/changelog/2024-01-30-github-actions-intr... ?

Today, GitHub is excited to announce the launch of a new M1 macOS runner! This runner is available for all plans, free in public repositories, and eligible to consume included free plan minutes in private repositories. The new runner executes Actions workflows with a 3 vCPU, 7 GB RAM, and 14 GB of storage VM, which provides the latest Mac hardware Actions has to offer. The new runner operates exclusively on macOS 14 and to use it, simply update the runs-on key in your YAML workflow file to macos-14.
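The quoted changelog amounts to a one-line edit in the workflow file; the job name and steps below are illustrative, only the `runs-on` label matters:

```yaml
jobs:
  build:
    runs-on: macos-14   # new M1 runner label from the changelog above
    steps:
      - uses: actions/checkout@v4
      - run: xcodebuild -version   # runs on Apple Silicon macOS 14
```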

electroly
0 replies
3h15m

I had not seen it--thank you!

ozgune
2 replies
1d1h

Not for the foreseeable future.

Ubicloud runs on bare metal providers and they don't lease Mac hardware. Technically, we could run MacOS VMs on arm64. However, our interpretation of Apple's End User License Agreement (EULA) tells us that we can't do this.

This repo has some good references on the topic: https://github.com/kholia/OSX-KVM?tab=readme-ov-file#is-this...

dewey
1 replies
1d

Would something like https://www.macstadium.com work for your use case?

hangonhn
0 replies
23h57m

That's exactly what we had looked at but we didn't want to be in the business of maintaining our own instances. We just want to use the GH runner part.

HideousKojima
2 replies
1d1h

Biggest issue with macOS is that it requires a physical Mac (not virtualization), and IIRC the licensing from Apple requires a minimum "rental" period of 24 hours or something like that

Edit: the TOS for OS X says this:

3. Leasing for Permitted Developer Services. A. Leasing. You may lease or sublease a validly licensed version of the Apple Software in its entirety to an individual or organization (each, a “Lessee”) provided that all of the following conditions are met: (i) the leased Apple Software must be used for the sole purpose of providing Permitted Developer Services and each Lessee must review and agree to be bound by the terms of this License; (ii) each lease period must be for a minimum period of twenty-four (24) consecutive hours;

yla92
0 replies
18h25m

What I did for work was get a Mac mini and set up multiple self-hosted runners in tmux (very hackish, I know). The builds became faster (and cheaper) because each run no longer had to download the dependencies again.

Of course, hosting myself means I also gotta own the uptime of it..
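A minimal sketch of that setup, assuming GitHub's self-hosted runner package has already been configured (`./config.sh`) in per-runner directories; the directory and session names here are hypothetical:

```shell
#!/bin/sh
# Start three pre-configured GitHub Actions runners on one Mac mini,
# each in its own detached tmux session.
for i in 1 2 3; do
  dir="$HOME/actions-runner-$i"   # each directory holds its own runner config
  tmux new-session -d -s "runner-$i" "cd '$dir' && ./run.sh"
done
tmux ls   # verify the three sessions are up
```

A launchd job or similar would still be needed to survive reboots, which is part of the uptime ownership mentioned above.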

mysteria
0 replies
18h2m

How does Github Actions get away with this? They certainly don't bill and provide the machine for 24 hours for each CI run.

rgbrgb
14 replies
1d1h

Congrats on the launch. Looks interesting. Quick thoughts on the landing page:

- Pricing looks awesome.

- I'm not currently the target audience because everything I'm doing right now is open source with free GitHub actions.

- I'm left wondering what the catch is / why it's cheaper and faster.

- Visual nit: lacking horizontal padding from 990px to ~1200px, a common window size on my 14" MBP.

Ubicloud is an open source cloud. Think of it as an open alternative to cloud providers, like what Linux is to proprietary operating systems. You can self-host Ubicloud or use our managed service.

I find this hard to parse and the first few times I thought you were saying it's a linux alternative. I just clicked to the docs and the "What is Ubicloud?" section is clearer because you say concretely and directly what it is rather than how I should think of it metaphorically: "infrastructure-as-a-service (IaaS) features on providers that lease bare metal instances, such as Hetzner, OVH, and AWS Bare Metal. It’s also available as a managed service."

There's some old "counterintuitive" adage I'm too lazy to look up about how the best marketing to engineers is just saying in concrete language what it is rather than the benefit it provides. In this case, I'd do both: tell me what it actually is and why that makes it cheaper/better. Also a minor note, there's a typo in that paragraph (missing space "systems.Ubicloud").

Arelius
4 replies
23h57m

I'm left wondering what the catch is / why it's cheaper and faster.

So, while this is the first time I've heard of ubicloud, I use gha extensively.

And frankly, I think it's just because GitHub has a crazy markup on their Actions compute over the raw compute. Taking a quick look, it appears that their base rate is like $0.008 per minute!! That's a rate that wouldn't look crazy out of line with EC2's hourly rate.

I've worked on projects where we saved significant money, and improved build times just by launching a single EC2 instance and connecting it to Actions.
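For reference, hooking a plain EC2 (or any Linux) instance into Actions is mostly GitHub's own self-hosted runner setup; the repo URL and token below are placeholders, and the runner version is just whichever release is current:

```shell
#!/bin/sh
# Download, configure, and start a self-hosted GitHub Actions runner (sketch).
# Get a real registration token from the repo's
# Settings -> Actions -> Runners -> "New self-hosted runner" page.
mkdir -p "$HOME/actions-runner" && cd "$HOME/actions-runner"
curl -o actions-runner.tar.gz -L \
  "https://github.com/actions/runner/releases/download/v2.311.0/actions-runner-linux-x64-2.311.0.tar.gz"
tar xzf actions-runner.tar.gz
./config.sh --url https://github.com/YOUR_ORG/YOUR_REPO --token YOUR_TOKEN
./run.sh   # or run as a service: sudo ./svc.sh install && sudo ./svc.sh start
```

Jobs then select the machine with `runs-on: self-hosted` in the workflow file.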

pinkgolem
3 replies
23h47m

IOPS on GitHub runners is also terribly slow; you easily get a 5x to 10x improvement

We did the same, and set up GitHub actions runners on hetzner

Halved the integration test time and made the tests more reliable.

crohr
1 replies
9h32m

Just a catch with Hetzner vs AWS: pricing at AWS is per-second while Hetzner is per-hour, which is (very) inconvenient if you're launching ephemeral runners.

pinkgolem
0 replies
2h8m

While that's true... I am using TestFlows-GitHub-Hetzner-Runners, which recycles runners for the 1-hour lifetime; works like a charm so far.

Also, the main reason for switching was performance for us, not cost.

password4321
0 replies
23h35m

I've heard CI/CD in general and GitHub Actions specifically is where old cloud hardware goes until it dies.

robertlagrant
3 replies
23h53m

If we're doing nits, because this product looks cool, here are a couple of potential tweaks:

Imagine to do more

I'd get rid of this. I don't understand the phrase, and it sounds like fluff.

Fast runs even at this price point

I'd get rid of the "point". "Price point" isn't a synonym for "price", which I think is what's being attempted here. I'd be tempted to just have no tagline, and retitle this section "Faster than GitHub Actions". You've already said it's cheaper.

Ubicloud is an open, free, and portable cloud. Think of it as an open alternative to cloud providers, like what Linux is to proprietary operating systems. You can check out our source code in GitHub or see Ubicloud runners in action for our GitHub Actions. An open and portable cloud gives you the option to manage your own VMs and runners, should you choose to.

This is woolly. How about: Ubicloud is an open and free cloud. You can run it on the hosting provider of your choice, or bring your own hardware. Check out our source code on GitHub!

umur
1 replies
22h4m

Much appreciate the nits! I've made a few minor tweaks right now, we will do a more complete revision later on.

OJFord
0 replies
5h1m

Fast runs even at this price

what about: 'Cheaper doesn't mean slower'? It's pithier, and (in particular for anyone reading before/without looking at the page) better IMO in its place as a subheading. Scans better. Or even 'Cheaper != slower' (again, subhead).

kpandit
0 replies
22h30m

Nitpicking is contagious. Once one does it, then immediately others feel compelled to do it too, me included...

Their pricing page looks amazing. Not because of any funky UI stuff but because of the slight displacement of the decimal point.

asveikau
2 replies
23h42m

I'm left wondering what the catch is / why it's cheaper and faster.

I don't think it's very expensive to run a build server.

kpandit
1 replies
22h40m

Not while it works. But when it doesn't, you have an office full of people waiting for someone to fix the broken build server. In terms of lost productivity that disaster is very expensive. Over the years I have suffered only three hardware disasters. One was a NAS+backup screwup, but the other two were both related to build servers...

asveikau
0 replies
22h22m

This reminds me. It must have been 2009ish. I was at Microsoft. Part of the code I was responsible for ran on the Windows build servers. This code was triggering a kernel bug in the Windows registry. The only way I knew how to reproduce it was by building Windows. Because thousands of Windows builds happened per day, maybe 1 of them would hit it every 1-3 days.

The way I got it in a kernel debugger was to constantly run the problematic race condition inside a VM; when it detected that it hadn't hit, it would roll back the VM to a saved state of the code already running in progress... It took an overnight run to hit it on a machine in the office.

All that said, I'd still say it's not very expensive to run a build machine.

umur
1 replies
22h11m

Thank you for your feedback! I've just made several edits to the text for clarity; will also fix UX bugs and do a bigger update in the upcoming weeks.

rgbrgb
0 replies
20h13m

big improvement! good luck

necubi
13 replies
1d1h

We've been using Ubicloud builders for our Rust project [0] for several months, and it's worked very well. We've seen CI times go from 10-15 minutes to 6-7, and our bill has gone from $300/month to $30.

One counter-intuitive thing we found is that it's slow to save and restore caches, but the machines have good CPU, so for us it's been faster to disable cache entirely and just redo everything on each build.

[0] https://github.com/ArroyoSystems/arroyo

MuffinFlavored
6 replies
1d

so for us it's been faster to disable cache entirely and just redo everything on each build.

I wonder what the consequential "carbon footprint" of this is, but at scale for all companies/all jobs of similar nature

ploxiln
2 replies
23h27m

Also consider the carbon footprint of using caches! Apparently it takes longer, because it has to send/recv more data, compress and decompress, cause load on other systems ...

It's really kinda impossible to judge the carbon footprint for these kinds of things, and whether it's justified. Consider the carbon footprint of all the dumb AI features rolled out at Facebook, Google, etc. Consider the carbon footprint of everyone trying to use ChatGPT now. Do you know how much power GPUs are guzzling to give each little answer? It's really huge, compared to a traditional Google search! Is it worth it? ... who can judge eh

sophiabits
1 replies
18h55m

There’s a French think tank called “The Shift Project” which actually produced a report estimating CO2 impact of data transfer a few years ago [1]!

The numbers are obviously very rough; there are a LOT of factors to consider, which vary from one node to another. But the methodology is quite comprehensive, e.g. they factor in power consumption of the end user’s device while waiting for the data to transfer over the wire

Green software engineering is interesting to me because it seems like a good way to impact greenhouse gases _without_ requiring consumers to change anything about their lifestyle (which is the hard part of climate change…) There’s a cool presentation from Rasmus Lerdorf from around the time when PHP7 released where he estimated a 50% adoption rate of PHP7 would result in a saving of 3.5B kg CO2/year iirc, purely because of the compute efficiency gains from PHP6 -> PHP7.

I used their numbers to calculate that swapping our CI/CD clones at my day job over to shallow clones saved (maybe) 5.5 kg/week of CO2 emissions [2]. Not quite as impressive as the PHP7 figures, but I still think it’s kinda neat.

[1] https://theshiftproject.org/en/lean-ict-2/

[2] https://sophiabits.com/blog/the-cost-of-bandwidth
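For context, the shallow-clone switch mentioned in [2] is a one-parameter change when using GitHub's actions/checkout (and is in fact its default since v2):

```yaml
steps:
  - uses: actions/checkout@v4
    with:
      fetch-depth: 1   # shallow clone: fetch only the tip commit
      # fetch-depth: 0 would fetch the full history (the expensive case)
```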

okeuro49
0 replies
4h1m

It was PHP 5.4 -> PHP 7.0. There was no 6.

crohr
1 replies
10h23m

One of the goals of my own hosted runner project [1] is to display the carbon cost and money cost after each run. Too many people are oblivious to the costs, and I think as an industry it would be great to at least have the data points.

[1]: https://runs-on.com

boundlessdreamz
0 replies
5h52m

This is pretty much exactly what I wanted (yay for custom images) but the pricing makes it a non starter. Please consider a tiered monthly pricing based on build minutes.

Arelius
0 replies
23h53m

I mean, you could also consider how much the transition to ephemeral builders and containers is costing us in general, moving away from (admittedly more brittle) build machines that would keep build artifacts alive on the local hard drive just as a matter of course.

csdvrx
2 replies
1d

One counter-intuitive thing we found is that it's slow to save and restore caches, but the machines have good CPU, so for us it's been faster to disable cache entirely and just redo everything on each build.

The link to SPDK was very interesting: https://www.ubicloud.com/blog/building-block-storage-for-clo.... I use filesystems for very high performance applications, and I've found ZFS to often be the limiting factor when compared to simpler solutions of XFS +- mdadm +- encryption.

It's a controversial point, but others have made similar findings: https://klarasystems.com/articles/virtualization-showdown-fr... : "Although I suspect this will surprise many readers, it didn’t surprise me personally—I’ve been testing guest storage performance for OpenZFS and Linux KVM for more than a decade, and zvols have performed poorly by comparison each time I’ve tested them"

OpenZFS seems to be starting to consider optimizations to perform better on modern drives (SSD, NVMe), which have very different performance profiles from what ZFS was built for (spinning rust)

In the SPDK summary they say "To make VM provisioning times go faster, we changed our host OS from ext4 to btrfs" (...) "Also, when we switched the host filesystem to btrfs, our disk performance degraded notably. Our disk throughput dropped to about one-third of what it was with ext4."

Ubicloud: the problem seems to be generic to CoW filesystems, and it's interesting that you came up with a slight variation (CoA), but have you considered the even simpler alternative of a journaling filesystem (XFS, ext4...) with overlays?

Or just UFS2 + snapshots to restore from a given state (initialized, ready for each test) then restore to this state between tests?

I think customers finding that disabling cache works better means the CoA has similar issues to CoW.

Personally, I'd have just tried using SR-IOV with a namespace per customer and called it a day instead of bringing in extra complexity, but there must be good reasons for it. I'd love to know what those reasons are.

necubi
1 replies
1d

In this case, I don't think the issue is due to filesystem performance. Someone from Ubicloud can correct me, but my understanding is that for custom runners Github still stores the cache on their side. So Ubicloud (in Europe) needs to transfer the cache from Github (in the US) on every run.

pwmtr
0 replies
1d

Hi, I work for Ubicloud.

Yes, that is correct. We are also working on implementing our own caching, which should speed up cache downloads/uploads significantly.

babanin
1 replies
1d

Thanks for sharing! I've been looking at the repository and noticed that some jobs are still running on Github hosted runners. What is the point of using them and not running everything on Ubicloud?

necubi
0 replies
23h54m

We have one very slow job (our Rust CI, for example: https://github.com/ArroyoSystems/arroyo/actions/runs/7702793...) and a bunch of little jobs that take a few seconds (checking lints, etc.). We never bothered to switch those over because they complete quickly on github and fit within our included runner minutes.

I should say we also use BuildJet for Docker builds because of their arm support. Now that Ubicloud has arm we may switch those over as well.

crohr
0 replies
9h27m

I love to see examples in the wild of repos having switched to a non-official GitHub Action runner. I'm slowly preparing a collection of repo timings comparing GitHub vs Buildjet/Warpbuild/Ubicloud vs my own solution RunsOn.

In your case your workflow can run in less than 5 minutes on AWS ephemeral machines, for the same price as ubicloud: https://github.com/runs-on/arroyo/actions/runs/7723361513/jo...

kburman
9 replies
1d1h

wipe out the block storage device attached to the VM

Does this provide guarantee that subsequent job won't be able to recover the data?

amenghra
5 replies
1d1h

If the data is also encrypted at rest (as they claim it is), then even if the raw data is recovered it shouldn't be usable without a key leak as well.

fdr
4 replies
1d

I work at Ubicloud.

Although we have KEK and DEK code for regular VMs, it is not operative on GHA... yet. The reason has to do with a technical conflict with copy-on-write that we aim to close, not least because Ubicloud needs to grow its own copy-on-write features for block device snapshots, which we lack today.

I expect that within a few months, all expired GHA VMs will be cryptoshredded upon their deletion. This is already true for regular virtual machines and managed Postgres machines.

csdvrx
3 replies
1d

because Ubicloud needs to grow its own copy-on-write features for block device snapshots, things we lack today.

Just, why? CoW (or your own CoA) is rife with performance problems. How exactly do you benefit from its use?

fdr
2 replies
1d

The GitHub image is 86GB and people want an action VM to start reasonably quickly, so a full copy for every run isn’t going to work so well.

As a side note, isn’t it nuts that GHA operates on a principle of “installing the universe” (and apparently the universe is 86GB) and updating about every week, and it’s not total chaos? I was surprised, but it seems to work.

k8svet
1 replies
23h48m

As a side note, isn’t it nuts that GHA operates on a principle of “installing the universe” (and apparently the universe is 86GB) and updating about every week, and it’s not total chaos? I was surprised, but it seems to work.

As someone who has authored a GitHub Action to delete like 85% of the hodge-podge stuff blasted all over a default runner image, to free up more space for a Nix store: it's "nuts" indeed.

maxloh
0 replies
23h18m

Any GitHub link for your code?

ozgune
1 replies
1d1h

Yes, a subsequent job won't be able to recover the data.

We completely shut down the VM and remove the block device (all files associated with the block device).

riddley
0 replies
1d

I think the concern is that a subsequent allocation would have blocks from a previous allocation that could be readable.

auguzanellato
0 replies
1d1h

Well, it’s the same as regular GHA runners. On GitHub provided runners you need to explicitly move data between jobs. I find that useful in preventing weird side effects in one job affecting another.

roboben
5 replies
1d

SOC2 ?

tkellogg
4 replies
1d

are you pushing PHI/PII through github actions?

CSDude
2 replies
23h33m

Does not matter - the pipeline needs to be trusted because it has access to sensitive resources for deployment tasks, can fake test results, etc.

slekker
1 replies
23h8m

Even though it is a bit of a PITA to maintain self hosted runners, it is the reason we do it.

Klasiaster
0 replies
1h39m

GARM can easily manage ephemeral runners for you: https://github.com/cloudbase/garm (Ephemeral runners are also more secure)

manquer
0 replies
22h38m

Actions have access to environment secrets. Those secrets can open the door to PII.

JoshTriplett
4 replies
1d1h

Sigh. Please don't call the Elastic license "open source". It's nice that the source is available, but this is not an open source license.

EDIT: per responses, it looks like this is outdated information and the project now uses AGPL!

matt_heimer
3 replies
1d1h

The docs still say the Elastic license is used but looking at https://github.com/ubicloud/ubicloud/blob/main/LICENSE it looks like the project might have switched to GNU Affero General Public License v3.0 in the last day.

umur
2 replies
1d

thank you and that's correct, just updated the docs as well.

mindwok
0 replies
20h59m

Wow. When you guys first launched this was my biggest concern. This is absolutely awesome, thank you team!

JoshTriplett
0 replies
1d

That's wonderful news, thank you very much for switching to an actually open license!

The linked page at https://www.ubicloud.com/docs/github-actions-integration/qui... still says "Source open under the Elastic V2 license", and https://www.ubicloud.com/docs/about/pricing still says "it's open and free under the Elastic V2 license". Not sure if those were missed or if the docs just need some time to refresh from their sources.

risyachka
3 replies
22h40m

Am I missing something, or can GitHub easily block all 3rd-party runners if they want to?

shepherdjerred
2 replies
21h0m

Of course, but doesn't GitHub embrace these runners?

prosim
1 replies
20h3m

Depends on how you define "embrace". Services like Ubicloud violate the GitHub Terms of Service.

LOLwierd
0 replies
13h48m

Can you elaborate on how and why?

ospider
3 replies
14h43m

I was looking for macOS M1 runner, which is not provided by GitHub. I'm willing to pay for that, but it seems that there are only Linux types now.

suryao
0 replies
13h28m

WarpBuild[0] provides Apple Silicon macOS runners powered by M2 Pros. Note: I'm the founder. [0] https://docs.warpbuild.com/runners#macos-m2-pro-on-arm64

fkorotkov
0 replies
8h12m

Cirrus Runners have been doing it for a long time now:

https://cirrus-runners.app/

gajus
3 replies
1d1h

Feels like there is another one of these every week.

risyachka
0 replies
22h38m

It is relatively easy to roll out a solution to this problem and GitHub doesn't care about competition in this niche.

flir
0 replies
1d

Might be a sweet spot. I'm thinking (a) well-understood problem (b) hardware's rented, and scales with use (c) lots of teams have CI costs high enough to be annoying, but not so high that they need authorization to change their supplier (d) github are marking up the service so hard that it's easy to compete on price.

crohr
0 replies
10h30m

Yes, I wonder when/if GitHub decides to drop prices at some point.

But in the meantime I’ve recently released RunsOn [1] with the same promise (10x cheaper, faster) but the whole thing runs in your AWS account.

[1] https://runs-on.com

ozgune
2 replies
1d1h

Hi there, I'm Ozgun, one of Ubicloud's founders.

We have dozens of customers using Ubicloud runners in production today. We’re now designing our caching layer (Docker instance registry, Docker layer cache, or package cache). We wanted to put this out there for any comments.

Also, if you have any points related to the broader topic of an open and portable cloud, please pass them along!

maxmcd
1 replies
20h0m

Yes, please do what Depot does and put fast persistent disks close to builds to cache Docker layers. GitHub Actions runners and CircleCI and all the others adding expensive network calls to manually cache layers has always been such a time sink, and I think it moves lots of people to remove caching entirely.

kylegalbraith
0 replies
12h44m

Depot [0] founder here. Thanks for the mention. We're also planning on bringing a bit of a different take on GitHub Actions runners that's not tied to Hetzner directly. It will be entirely open-source as well, so you can take it and run it on your own instances if you'd like. Similar to how Depot supports self-hosted builders in your own AWS account [1].

[0] https://depot.dev/

[1] https://depot.dev/docs/self-hosted/architecture

gitter101
2 replies
16h21m

What stops GitHub from shutting down these offerings, from a legal or technical perspective? These alternatives clearly violate their terms of service:

"Additionally, regardless of whether an Action is using self-hosted runners, Actions should not be used for: the provision of a stand-alone or integrated application or service offering the Actions product or service, or any elements of the Actions product or service, for commercial purposes"

OJFord
0 replies
4h50m

No, that means you can't create a CI/CD competitor that's 'hosted' in Actions. (e.g. install the OJFordCI GitHub app, pay at ojfordci.com/sign-up, view your CI results at ojfordci.com, but actually also at github.com in the Actions tab on your repo.)

They absolutely support custom runners; it's how all of these work. They don't need to stop them via ToS, they could just not allow it as an option. `runs-on: ubicloud` only works because GitHub implements it right.
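i.e. the integration surface is just the job's label. A sketch of what such a workflow job looks like, using the label from the comment above (the steps are illustrative):

```yaml
jobs:
  test:
    # GitHub routes the job to whichever registered runner advertises
    # this label; 'ubicloud' resolves via the provider's GitHub app.
    runs-on: ubicloud
    steps:
      - uses: actions/checkout@v4
      - run: make test
```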

Aeolun
0 replies
15h47m

I read that as them not wanting you to use Actions as part of your commercial offering.

E.g. don’t use Github as Infrastructure as a Service.

forks
2 replies
1d1h

How does this compare to BuildJet?

nateb2022
0 replies
1d

It's cheaper, for one

gajus
0 replies
1d

given that the landing page looks almost pixel by pixel inspired by BuildJet, I'd say the answer is "very comparable"

acidhue
2 replies
1d

Gitlab + Home laptop runners = free

Been using this setup for a year, very happy with it. I don't see the point of GitHub, to be honest?

dboreham
0 replies
22h39m

In a similar vein, using Gitea + self-hosted runners (including macos) here, very happy although we did do some of the work to make the "CI stack" (contributed back to gitea and act projects) so not totally batteries included yet. One thing that helps quite a bit imho is to avoid virtualization -- our approach is to run all CI jobs in containers, not VMs. Yes this has isolation implications, and requires some futzing to get docker-in-docker and docker-in-docker-in-docker to work (shout out to the Earthly team for figuring out how to host kind/k8s inside a container), but the "runs on any computer" property of containers (vs virtualization) is powerful. Want CI on your laptop? No problem. On a Windows machine? No problem.

belthesar
0 replies
1d

For what it's worth, you can also self-host your runners on GitHub without Ubicloud. That doesn't remove other reasons to run GitLab, but it's also not unique. I've self-hosted runners for a variety of reasons on GitHub with great success.

Aeolun
2 replies
15h56m

How does this pricing work? With purely spot-based AWS runners we can barely reach the 10x cost threshold compared to GitHub runners.

Edit: Oh, Hetzner. They’re really ubiquitous in cheap computing these days.

crohr
1 replies
10h28m

You can reach 10x cheaper with just AWS, especially on the larger runner sizes. That is even including disk costs.

I put a calculator for RunsOn at https://runs-on.com/calculator/

Aeolun
0 replies
7h46m

I mean, I did just say that, but it seems to me they can’t just resell AWS spot instances because there’d be no margin left.

tmpfs
1 replies
12h1m

This looks great and the pricing is obviously a big improvement, I will try it out for our Linux jobs.

Sure would be good to see competition for the macOS and Windows runners too; these are the ones that tend to cost us the most.

suryao
0 replies
6h50m

We at WarpBuild currently support mac instances (M2 Pros). Windows support will come soon.

lpgauth
1 replies
23h58m

One thing I find frustrating about GitHub Actions' runner pricing is that it's calculated on a per-minute basis. Couldn't you bill by the second instead? Maybe set a minimum of 1 min but after that charge by the second?

I assume this is done to cover the time that the VM reboots between jobs?

suryao
0 replies
21h13m

This is exactly what we do at WarpBuild, more in the interest of fairness and not passing on random costs to users.

The VM reboot times add up quickly, and that's likely the rationale. However, it gets hard when there are users running linting jobs that take ~2s but running them on 16 vCPU instances, so we kept the 1-min floor.
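A sketch of that scheme (per-second billing with a one-minute floor); the 8-cents-per-minute rate is a made-up number for illustration, not any provider's actual price:

```shell
#!/bin/sh
# billed_cost SECONDS RATE_CENTS_PER_MIN -> cost in whole cents.
# Per-second billing, but jobs shorter than 60s are billed as 60s (the floor).
billed_cost() {
  secs=$1
  rate=$2
  if [ "$secs" -lt 60 ]; then secs=60; fi
  echo $(( secs * rate / 60 ))
}

billed_cost 2 8     # 2s lint job is billed as 60s -> 8
billed_cost 150 8   # 150s build billed per second -> 20
```

Under pure per-minute billing the 150s build would round up to 3 minutes (24 cents), which is the difference being discussed.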

timvdalen
0 replies
23h50m

Seems like lots of people are thinking about this. We are rolling our own, and I spent a good chunk of today writing a new scheduler. Definitely fun to play around with.

thomasisaac
0 replies
1d

We've been happily using BuildJet [0] for over a year now.

Saved over $25k in CI costs compared to GH Actions - they're also using super powerful bare-metal servers from Hetzner - we got about a 94% reduction in build time!

Absolutely chuffed, good to see more companies on the market though.

[0] https://buildjet.com

syassami
0 replies
21h28m

Looks neat. I've been using https://github.com/philips-labs/terraform-aws-github-runner to save on $

siwatanejo
0 replies
10h50m

Hello, can I pay with bitcoin?

mastabadtomm
0 replies
22h41m

Congrats on the launch! I hope Ubicloud will be even more successful than the previous project, Citus!

low_tech_punk
0 replies
11h24m

Coincidence maybe? GitHub just launched M1 macOS machines today. https://github.blog/changelog/2024-01-30-github-actions-intr...

lijok
0 replies
22h15m

Amazing, testing this out tomorrow!

justinzollars
0 replies
1d1h

I always imagine the github runners as a vast array of early model raspberry pis.

jovezhong
0 replies
17h34m

Nice intro, but I am a bit worried not seeing SOC2 or something similar.

fierro
0 replies
23h42m

didn't see any mention of caching in the docs. This tends to be pretty important for fast build/CI at many different key points of a given workflow (dependency cache, build cache, docker layer cache, image cache, etc). Wondering if I missed something.

fermigier
0 replies
11h14m

Where is your company incorporated? (e.g do you comply with the GDPR?). This should be stated on the website.

felipemesquita
0 replies
22h57m

I am hearing about Ubicloud for the first time and it sounds very good. I don’t have a need for a cheaper GitHub runner now, but I’ve been dreaming about a modern, less complex OpenStack alternative for some time. Also, having a cloud service combined with the open source product seems like a great fit!

cynix
0 replies
20h38m

Will FreeBSD be supported in the future?

commonenemy
0 replies
22h35m

Looks like it's a long running node? Or am I wrong?

Does it have on-demand workers like those Kubernetes providers, where you use shared master nodes (free or with a very small fee) and scalable node pools on demand (charged when used)?

cauchyk
0 replies
21h25m

We've been using the Ubicloud runner for a while at PeerDB [1]. Great value, and especially the ARM runners have been helpful in getting our CI costs down. The team is really responsive and added the ARM runner support within a few weeks of us requesting it.

[1] https://github.com/PeerDB-io/peerdb

bastardoperator
0 replies
21h2m

The people buying github/actions are getting at least 10K minutes included as part of their EA agreement, and they can use their own runners for zero cost. I think most enterprises are more concerned with having their bits on a cloud provider, versus pricing, but that's my limited experience.

andag
0 replies
3h23m

I want the vendor that can take care of this for us, but that can guarantee a private egress range, or can run things within our VPC.

Several builds connect to internal resources, so running them on external nodes is suboptimal and expensive when it comes to network egress.

DavyJone
0 replies
21h38m

Congrats on the launch, certainly good to see competition.

I was reading https://www.ubicloud.com/blog/ubicloud-hosted-arm-runners-10... but it seems a bit misleading to compare an ARM workload over QEMU vs. a native ARM.

What about native x86 vs native x86 or at least native x86 vs native arm?

CSDude
0 replies
1d

We've been running Ubicloud for a while at Resmo. It is indeed 10x cheaper. We upped our instance sizes by 2x for slightly more performance, but it's still 5x cheaper.

The main reason is that their platform is hosted on Hetzner dedicated instances.