Fly.io has GPUs now

k8svet
52 replies
15h45m

Does the basic other stuff function? I am shocked at how our production usage of Fly has gone. Even basic stuff like support not being able to just... look up internal platform issues. Cryptic/non-existent error messages. I'm not impressed. It feels like it's compelling to those scared of or ignorant of Kubernetes. I thought I was over Kubernetes, but Fly makes me miss it.

parhamn
17 replies
10h43m

I was hoping to migrate to Fly.io, and during my testing I found that simple deploys would drop connections for a few seconds during the switchover. Try a `watch -n 2 curl <serviceipv4>` during a deploy to see for yourself (try any of the documented strategies, including blue-green). I wonder how many people know this?
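If you want something more quantitative than `watch`, a probe along these lines works (a sketch; the address is a placeholder, same as above):

```python
# Hit the service every 250 ms during a deploy and log every failed
# request, so you can see exactly how long the gap is.
import time
import urllib.request

URL = "http://<serviceipv4>/"  # placeholder, as above

while True:
    try:
        urllib.request.urlopen(URL, timeout=2)
    except Exception as e:
        print(f"{time.strftime('%H:%M:%S')} request failed: {e}")
    time.sleep(0.25)
```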

When I tested it, I was hoping at worst for early termination of old connections with no dropped new connections, and at best I expected them to gracefully wait for old connections to finish. But nope, just a full-downtime switchover every time. And when you think about the network topology described in their blog posts, you realize there's no way it could have been done correctly to begin with.

It's very rare for me to comment negatively on a service, but the fact that this was the case, paired with the way support acted like we were crazy when we sent video evidence of it, definitely irked me by infrastructure-company standards. Wouldn't recommend it for anything beyond toy applications now.

It feels like it's compelling to those scared of or ignorant of Kubernetes

I've written pretty large deployment systems for Kubernetes. This isn't it. There's a real space for Heroku-like deploys done properly, and no one is really doing it well (or at least not without ridiculously thin or expensive compute resources).

sofixa
12 replies
10h14m

I've written pretty large deployment systems for Kubernetes. This isn't it. There's a real space for Heroku-like deploys done properly, and no one is really doing it well (or at least not without ridiculously thin or expensive compute resources).

Have you tried Google Cloud Run (based on Knative)? I've never used it in production, but on paper it seems to fit the bill.

parhamn
4 replies
9h41m

Yeah, we're mostly hosted there now. The CPU/virtualization feels slow, but I haven't had time to confirm (we had to offload even super small ffmpeg operations).

It's in a weird place between Heroku and Lambda. If your container has a bad startup time, like one of our Python services, autoscaling can't be used because latency becomes a pain. It's also common to deploy services there that need things like health checks (unlike functions, which you assume are alive), and that implies at least one instance of sustained use if you do per-minute health checks. Their domain mapping service is also really, really bad and can take hours to issue a cert for a domain, so you have to be very careful about putting a load balancer in front of it for hostname migrations.

I don't care right now, but the fact that we're paying 5x for compute is starting to bother me a bit. An 8-core 16GB 'node' is ~$500/month ($100 on DO), assuming you don't scale to zero (which you probably won't). Plus I'm pretty sure the 8 cores reported aren't a meaty 8 cores.

But it's been pretty stable and nice to use otherwise!

jetbalsa
3 replies
3h18m

A 6C/12T dedicated server with 32GB of RAM is $65 a month with OVH.

I do get that it is a bare server, but if you deploy even just bare containers to it, you would save a good bit of money and get better performance.

doctorpangloss
2 replies
1h18m

Another interpretation is the so-called dedicated servers are too good to be true.

jrockway
0 replies
29m

It depends on what the 6 cores are. Like, I have an 8C/8T dedicated server sitting in my closet that costs $65 per the number of times you buy it. (Usually once.) The cores are not as fast as the highest-end Epyc cores, however ;)

ac29
0 replies
28m

At the $65/month level for an OVH dedicated server, you get a 6-core CPU from 2018 and a 500Mbps public network limit. Doesn't even seem like that good a deal.

There is also a $63/month option that is significantly worse.

dig1
4 replies
9h28m

I have yet to have a positive experience with Cloud Run. I have one project on it, and Cloud Run is very unpredictable with autoscaling. Sometimes it starts spinning containers up and down for no apparent reason, and after hounding Google support for months, they said it is "expected behavior". Good luck trying to debug this independently, because you don't have access to the Knative logs.

Starting containers on Cloud Run is weirdly slow, and oh boy, how expensive that thing is. I'm getting the impression that pure VMs + Nomad would be a way better option.

parhamn
1 replies
8h46m

Starting containers on Cloud Run is weirdly slow

What is this about? I assumed a highly throttled CPU or terrible disk performance. A Python process that would start in 4 seconds locally could easily take 30 seconds there.

JoshTriplett
0 replies
8h23m

Last I checked, Cloud Run isn't actually running real Linux; it's emulating Linux syscalls.

sofixa
0 replies
8h20m

I'm getting the impression that pure VMs + Nomad would be a way better option

As a long time Nomad fan (disclaimer: now I work at HashiCorp), I would certainly agree. You lose some on the maintenance side because there's stuff for you to deal with that Google could abstract for you, but the added flexibility is probably worth it.

jonatron
0 replies
6h47m

I just use AWS EC2, a load balancer, and auto scaling groups. The user_data script pulls and runs a Docker image. To deploy, I do an instance refresh, which has no downtime. The obvious downside is more configuration than with more managed services.
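For reference, kicking off that refresh is a single API call; a sketch with boto3 (the ASG name and preference values are placeholders):

```python
# Sketch: trigger a rolling instance refresh on an Auto Scaling group.
# New instances pull and run the latest Docker image via user_data.
import boto3

asg = boto3.client("autoscaling")
resp = asg.start_instance_refresh(
    AutoScalingGroupName="my-app-asg",  # placeholder
    Preferences={
        "MinHealthyPercentage": 90,  # keep most capacity serving traffic
        "InstanceWarmup": 120,       # seconds before a new instance counts as healthy
    },
)
print(resp["InstanceRefreshId"])
```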

giovannibonetti
1 replies
6h32m

I have been using Google Cloud Run in production for a few years and have had a very good experience. It has the fastest autoscaler I have ever seen, second only to FaaS, which is not a good option for client-facing web services.

davidspiess
0 replies
21m

Same experience here; we've been using it for years in production for our critical API services without issues.

asaddhamani
2 replies
10h35m

Yeah, I had a similar experience where builds were frozen for a couple of days, such that I was not able to release any updates. When I emailed their support, I got an auto-response asking me to post in the forum. Pretty much all hosts are expected to offer a ticket system, even for their unmanaged services, if it's a problem on their side. I just moved all my stuff over to Render.com; it's more expensive, but it's been reliable so far.

loloquwowndueo
1 replies
7h1m

The first (pinned) post in the fly.io forum explains it:

https://community.fly.io/t/fly-io-support-community-vs-email...

malfist
0 replies
3h30m

That forum post just says what OP said, that they will ignore all tickets from unmanaged customers. Which is a pretty shitty thing to do to your customers.

rollcat
0 replies
10h3m

Try a `watch -n 2 curl <serviceipv4>` during a deploy

You need blackbox HTTP monitoring, right now; don't ever wait for your customers to tell you that your service is down.

I use Prometheus (& Grafana), but you can also get a hosted service like Pingdom or whatever.

chachra
10 replies
14h15m

Been on it 7 months, 0 issues. Feel like you're alone on this potentially.

weird-eye-issue
7 replies
12h52m

Alone? Every thread about Fly has complaints about reliability and people complain about it on Twitter too

loloquwowndueo
2 replies
6h58m

Every thread on the Internet about any product or service has complaints.

weird-eye-issue
0 replies
6h19m

Not to this extent, it has always stood out to me in particular

weird-eye-issue
0 replies
3h2m

Actually, here is a good example: Cloudflare. Sure, people complain a ton about privacy, but I haven't seen a single complaint about the reliability of Cloudflare Workers or similar products in the dozens of threads I've seen on HN.

nixgeek
0 replies
12h20m

That hasn't been my experience with Fly, but I'm sorry to hear it seems to have been for others :(

jrockway
0 replies
19m

It's hard to tell how meaningful the reviews are. I have used AWS, GCP, DigitalOcean, and Linode throughout my career. Every single one of these, through no fault of myself or my team, messed up and caused downtime. Like, you can get most SRE types in a room to laugh if you blurt out "us-east-1", because it's known to be so unreliable. And yet, it's where every Fortune 500 puts every service; we laugh about the reliability, and it's literally powering the economy just fine.

So yes, a lot of people on HN complain about fly's reliability. fly posts to HN a lot and gives them the opportunity. Is it actually meaningful compared to the alternatives? It's very hard to tell.

jokethrowaway
0 replies
7h52m

To be fair, most hosting providers come with plenty of public complaints about downtime. The big ones do way better: the best is AWS, then GCP, and last Azure. They cost stupid money though.

DigitalOcean has been terrible for me: some regions just go down every month, and I lose thousands of requests, increasing my churn rate.

Fly.io had tons of weird issues, but it has gotten better in recent months. It's still very incomplete in terms of functionality, and figuring out how to deploy the first time is a massive pain.

My plan is to add Hetzner and load balance across DO and Hetzner with BunnyCDN.

chachra
0 replies
12h19m

OK, possibly not alone; maybe the issues happened before I started using them extensively. I've had ~no downtime that affected me in 7 months.

I do wish they had some features I need, but their support and responses are top notch. And I've lost much less hair and time than I would going full-blown AWS or another cloud provider.

heeton
0 replies
9h46m

Not alone. I've been part of two teams who evaluated Fly, hit weird reliability or stability issues, and deemed it not ready yet.

xena
9 replies
13h38m

Can you email the first two letters of my username at fly.io with more details? I'd love to find out what you've been having trouble with so I can help make the situation better any way I can. Thanks!

bongobingo1
6 replies
10h50m

Another support.flycombinator.com classic.

zmgsabst
1 replies
10h25m

Why would you care about customer problems if they don’t embarrass you in public?

/s

keeganpoppen
0 replies
10h10m

the only thing easier than them responding in this thread is someone making this comment in this thread…

rob
1 replies
3h7m

Don't worry, a random anime character is going to help you now that it's been brought to the top.

joshi4
0 replies
42m

It seems to me that your comment is personally targeting OP and I think that is quite out of line.

azinman2
1 replies
10h25m

Would you rather them be unresponsive?

lostemptations5
0 replies
9h46m

It's HN -- if the company proved responsive it might invalidate his OP and everyone who bandwagons on it.

throwaway220033
1 replies
6h15m

...as if it's one person who had issues! I thought it was just incompetence. But now it looks like theatre, just pretending.

ignoramous
0 replies
4h49m

I've been a paying Fly.io customer for 3 years now, and for the past 18 months, I've had no real issue with any of my apps. In fact, I don't even monitor our Fly.io servers any more than I monitor S3 buckets; the kind of zero devops I expect from it is already a reality.

it's one person who had issues

Issues specific to an application or one particular account have to be addressed as special cases (like any NewCloud platform, Fly.io has its own idiosyncrasies). The first step anyway is figuring out just what you're dealing with (special vs. common failure).

looks like a theatre

I have had the Fly.io CEO do customer service. Some may call it theatre, but this isn't uncommon for smaller upstarts, and it's indicative of their commitment, if anything.

pech0rin
6 replies
11h44m

Yep, they have terrible reliability and support. Couldn't deploy for 2 days once, and they actually told me to use another company. Unmanaged dbs masquerading as managed. Random downtime. I could go on, but it's not a production-ready service, and I moved off of it months ago.

biorach
4 replies
10h47m

Unmanaged dbs masquerading as managed

Are you talking about fly postgres? Because I use it and feel they've been pretty clear that it's unmanaged.

andy_ppp
3 replies
8h15m

Seriously! That's crazy. I need to set up Terraform and move to AWS before launching, I guess.

biorach
2 replies
7h18m

Seriously! That's crazy

huh? it does what it says on the tin. nothing crazy about it.

They spell out for you in detail what they offer: https://fly.io/docs/postgres/getting-started/what-you-should...

And suggest external providers if you need managed postgres: https://fly.io/docs/postgres/getting-started/what-you-should...

andy_ppp
1 replies
7h1m

I was shocked because I didn't realise it wasn't managed. Even DigitalOcean offers managed Postgres.

If you are offering a service like Fly, I personally think the database should be managed; the whole point of Fly.io is to provide abstractions that make production simpler.

Do you think the type of user who is using fly.io is interested in or capable of managing their own Postgres database? I'd rather just trust RDS or another provider.

corobo
0 replies
5h57m

Do you think the type of user who is using fly.io is interested in or capable of managing their own Postgres database?

Honestly.. kinda, yeah

At least I'm projecting my weird "I want to love you for some reason, Fly" plus my skillset onto anyone else that wants to love Fly too haha

They feel very developer/nerd/HN/tinkerer targeted

benzible
0 replies
2h56m

The header at the top of their Getting Started is "This Is Not Managed Postgres" [1]

and they have a managed offering [2] in private beta now...

Supabase now offers their excellent managed Postgres service on Fly.io infrastructure. Provisioning Supabase via flyctl ensures secure, low-latency database access from applications hosted on Fly.io.

[1] https://fly.io/docs/postgres/getting-started/what-you-should...

[2] https://fly.io/docs/reference/supabase/

morgante
2 replies
9h24m

Unfortunately this is a pretty common story. Half the people I know who adopted Fly migrated off it.

I was very excited about Fly originally, and built an entire orchestrator on top of Fly machines—until they had a multi-day outage where it took days to even get a response.

Kubernetes can be complex, but at least that complexity is (a) controllable and (b) fairly well-trodden.

loloquwowndueo
1 replies
6h56m

Fly.io is not comparable to Kubernetes. It’s a bit like comparing AWS to Terraform.

Or to clarify your comment: Kubernetes on which cloud? Amazon? Google? Linode?

jrockway
0 replies
22m

Kubernetes on AWS, GCP, and Linode are all controllable and well-trodden.

I definitely understand the comparison between Kubernetes and fly. You have a couple of apps that are totally unrelated, managed by separate teams, and you want to figure out how the two teams can avoid duplicating effort. One option is to use something like fly.io, where you get a command line you run to build your project and push the binary to a server. Another option is to self-host infrastructure like Kubernetes, and eventually get that down to one command to build and push (or have your CI system do it).

The end results that organizations are aiming for are similar: developers write the code, and then the code runs in production. Frankly, a lot of toil and human effort is spent on this task, and everyone is aiming to get it to take less effort. fly.io is an approach. Kubernetes is an approach. Terraform on AWS is an approach.

throwaway220033
0 replies
6h23m

I switched to Kamal and Hetzner. It's the sweet spot.

rmbyrro
0 replies
27m

I find it amazing how many bad vibes fly.io gets here.

It looks worse than AWS or Azure to me.

Never used the service, but based on what I hear, I'll never try...

awestroke
0 replies
11h16m

I have run several services on Fly for almost a year now, have not had any issues.

nakovet
23 replies
19h5m

About Fly but not about the GPU announcement: I wish they had an S3 replacement. They suggest a GNU Affero project, which is a dealbreaker for any business, and needing to leave Fly to store user assets was a dealbreaker for us using Fly on our next project. Sad, because I love the simplicity, the value for money, and the built-in VPN.

JoshTriplett
14 replies
18h59m

I wish they had an S3 replacement. They suggest a GNU Affero project, which is a dealbreaker for any business

AGPL does not mean you have to share everything you've built atop a service, just everything you've linked to it and any changes you've made to it. If you're accessing an S3-like service using only an HTTPS API, that isn't going to make your code subject to the AGPL.

bradfitz
6 replies
18h37m

Regardless, some companies have a blanket thou-shalt-not-use-AGPL-anything policy.

anonzzzies
3 replies
8h14m

Yep, our lawyers say not to use it, and we have to check the components and libs we use too. People are really shooting themselves in the foot with that license.

aragilar
1 replies
5h43m

You assume that people want you to use their project. For MinIO, the AGPL seems to be a way to get people into their ecosystem so they can sell exceptions. Others might want you to contribute code back.

anonzzzies
0 replies
5h39m

I have no problem with contributing back: we do that all the time on MIT/BSD projects even when we don't have to. AGPL just restricts the use cases, and (apparently) there is limited legal precedent in my region on whether we would have to give away everything that merely uses it, even if unrelated, so the lawyers (I am not a lawyer, so I cannot provide more details) say to avoid it completely. Just to be safe. And I am sure it hurts a lot of projects... There are many modern projects that are the same thing, but they don't share code because the code is AGPL.

corobo
0 replies
2h36m

Sounds more like the license is doing its job as intended, and businesses that can afford lawyers but not bespoke licenses are shooting themselves in the foot with that policy

trollian
0 replies
18h31m

Lawyercats are the worst cats.

hiharryhere
0 replies
17h52m

Some companies, including Google.

I've sold enterprise SaaS to Google, and we had to attest that we have no AGPL code servicing them. This was for a CRM-like app.

RcouF1uZ4gsC
6 replies
18h29m

AGPL does not mean you have to share everything you've built atop a service, just everything you've linked to it and any changes you've made to it. If you're accessing an S3-like service using only an HTTPS API, that isn't going to make your code subject to the AGPL.

I am not so sure about that. Otherwise, you could trivially get around the AGPL by using HTTPS services to launder your proprietary changes.

There is not enough case law to say how a case that used only HTTP services provided by AGPL software to run a proprietary service would turn out, and it is not worth betting your business on it.

xcdzvyn
4 replies
18h16m

you could trivially get around the AGPL by using HTTPS services to launder your proprietary changes.

This is a very interesting proposition that makes me reconsider my opinion of AGPL.

mbreese
3 replies
16h9m

Anything “clever” in a legal sense is a red flag for me… Computer people tend to think of the law as a black and white set of rules, but it is and it isn’t. It’s interpreted by people and “one clever trick” doesn’t sound like something I’d put a lot of faith in. Intent can matter a lot.

(Regardless of how you see the AGPL)

internetter
1 replies
15h51m

Computer people tend to think of the law as a black and white set of rules

I've never seen someone put this into words, but it makes a lot of sense. I mean, ideally computers are deterministic, whereas the law is not (by design), yet there exist many parallels between the two. For instance, the law books have strong parallels to software documentation. So it makes sense that programmers might assume the law is also mostly deterministic, even though this is false.

ozr
0 replies
15h15m

I'm an engineer with a passing interest in the law. I've frequently had to explain to otherwise smart and capable people that their one weird trick will just get them a contempt charge.

Dylan16807
0 replies
15h18m

On the other hand the AGPL itself is trying to be one clever trick in the first place, so maybe it's appropriate here.

c0balt
0 replies
18h15m

AGPL does not mean you have to share everything you've built atop a service, just everything you've linked to it and any changes you've made to it. If you're accessing an S3-like service using only an HTTPS API, that isn't going to make your code subject to the AGPL.

Correct. This is a known caveat that's also covered a bit more in the GNU article about the AGPL, which discusses Service as a Software Substitute, ref: https://www.gnu.org/licenses/why-affero-gpl.html.en

benatkin
1 replies
19h0m
candiddevmike
0 replies
18h11m

Seaweed requires a separate coordination setup which may simplify the architecture but complicates the deployment.

tptacek
0 replies
18h14m

Give us a minute.

martylamb
0 replies
18h54m

Funny you should mention that: https://news.ycombinator.com/item?id=39360870

itake
0 replies
17h56m

The dealbreaker should be their uptime and support. They deleted my database and have many uptime issues.

benhoyt
0 replies
18h57m

They're about to get an S3 replacement, called Tigris (it's a separate company but integrated into flyctl and runs on Fly.io infra): https://benhoyt.com/writings/flyio-and-tigris/

benbjohnson
0 replies
18h56m

We have a region-aware S3 replacement that's in beta right now: https://community.fly.io/t/global-caching-object-storage-on-...

xena
21 replies
18h7m

Hi, author of the post and Fly.io devrel here in case anyone has any questions. GPUs went GA yesterday; you can experiment with them to your heart's content, should the fraud algorithm machine god smile upon you. I'm mostly surprised my signal post about what the "GPUs" really are didn't land well here: https://fly.io/blog/what-are-these-gpus-really/

If anyone has any questions, fire away!

qeternity
9 replies
17h44m

I posted further down before seeing your comment. First, congrats on the launch!

But who is the target user of this service? Is this mostly just for existing fly.io customers who want to keep within the fly.io sandbox?

xena
4 replies
17h15m

Part of it is for people that want to do GPU things on their Fly.io networks. One of the big things I've done personally is Arsène (https://arsene.fly.dev), which I made a while back as an exploration of the "dead internet" theory. Every 12 hours it pokes two GPUs on Fly.io to generate article prose and key art with Mixtral (via Ollama) and an anime-tuned Stable Diffusion XL model named Kohaku-XL.

Frankly, I also see the other part of it as a way to ride the AI hype train to victory. Having powerful GPUs available to everyone makes it easy to experiment, which would open Fly.io as an option for more developers. I think "bring your own weights" is going to be a compelling story as things advance.
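For the curious, the generation side of a setup like that is basically one HTTP call per scheduled run; here's a sketch against Ollama's generate endpoint (the host and prompt are placeholders, not Arsène's actual code):

```python
# Sketch: ask an Ollama instance running on a GPU machine for prose,
# the way a twice-daily job might. Host and prompt are placeholders.
import json
import urllib.request

OLLAMA = "http://my-gpu-machine.internal:11434"  # placeholder host

req = urllib.request.Request(
    f"{OLLAMA}/api/generate",
    data=json.dumps({
        "model": "mixtral",
        "prompt": "Write a short, florid horoscope for Aries.",
        "stream": False,  # one JSON object back instead of a stream
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```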

gooseyman
1 replies
15h3m

https://en.m.wikipedia.org/wiki/Dead_Internet_theory

What have you learned from the exploration?

xena
0 replies
14h45m

Enough that I'd probably need to write a blogpost to answer some questions I have about it. The biggest one I want to do is a sentiment analysis of these horoscopes vs market results to see if they are "correct".

cosmojg
1 replies
13h36m

Interesting setup! What's the monthly cost of running Arsène on fly.io?

xena
0 replies
11h31m

Because I have secret magical powers that you probably don't, it's basically free for me. Here's the breakdown though:

The application server uses Deno and Fresh (https://fresh.deno.dev) and requires a shared-1x CPU at 512 MB of ram. That's $3.19 per month as-is. It also uses 2GB of disk volume, which would cost $0.30 per month.

As far as post generation goes: when I first set it up, it used GPT-3.5 Turbo to generate prose. That cost me a rounding error per month (maybe like $0.05?). At some point I upgraded it to GPT-4 Turbo for free-because-I-got-OpenAI-credits-on-the-drama-day reasons. The prose-quality increase wasn't significant.

With the GPU it has now, a cold load of the model plus a prose generation run takes about 1.5 minutes. If I didn't have reasons to keep that machine pinned to a GPU (involving other ridiculous ventures), it would probably use about 5 minutes per day (I increased the time to make the math easier) of GPU time with a 40 GB volume (I now use Nous Hermes Mixtral at Q5_K_M precision, so about 32 GB of weights). That's something like $6 per month for the volume, and 2.5 hours of GPU time per month is about $6.25 on an L40s.

In total it's probably something like $15.75 per month. That's a fair bit on paper, but I have certain arrangements that make it significantly cheaper for me. I could re-architect Arsène to not have to be online 24/7, but it's frankly not worth it when the big cost is the GPU time and weights volume. I don't know of a way to make that better without sacrificing model quality more than I have to.
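A quick sanity check of that arithmetic (a sketch; the L40s hourly rate is backed out of the $6.25-for-2.5-hours figure above):

```python
# Back-of-the-envelope check of the monthly cost breakdown above.
app_server = 3.19             # shared-1x CPU, 512 MB RAM
app_volume = 0.30             # 2 GB volume
gpu_volume = 6.00             # 40 GB volume for model weights
gpu_hours = (5 / 60) * 30     # ~5 min/day of GPU time -> 2.5 h/month
l40s_rate = 6.25 / gpu_hours  # ~$2.50/h, implied by the numbers above

total = app_server + app_volume + gpu_volume + gpu_hours * l40s_rate
print(f"${total:.2f}/month")  # -> $15.74, roughly the $15.75 quoted
```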

For a shitpost, though, I think it'd totally be worth it to pay that much. It's kinda hilarious, and I feel like it makes for a decent display of how bad things could get if we go full "AI replaces writers" like some people seem to want, for some reason I can't even begin to understand.

I still think it's funny that I have to explicitly tell people not to take financial advice from it, because if I didn't, they would.

tptacek
2 replies
17h0m

This isn't the target user, but the boy's been using it at the soil bacteria lab he works in to do basecalling on FAST5 data from a nanopore sequencer.

yard2010
1 replies
1h58m

Can you please elaborate?

tptacek
0 replies
1h57m

I am nowhere within a million miles smart enough to elaborate on this one.

subarctic
0 replies
17h36m

Commenters like this, for one thing: https://news.ycombinator.com/item?id=34242767

yla92
2 replies
15h33m

Not a question but the link "Lovelace L40s are coming soon (pricing TBD)" is 404.

xena
0 replies
15h17m

Uhhhh that's not ideal. I'll go edit that after dinner. Thanks!

thangngoc89
0 replies
14h21m

If it's a link to nvidia.com then it's expected to be broken. Seriously, I've never seen a valid link to nvidia.com

Nevin1901
2 replies
13h37m

How fast are cold starts, and how do you compare against other GPU providers (RunPod, Modal, etc.)?

xena
1 replies
11h23m

The slowest part is loading weights into vram in my experience. I haven't done benchmarking on that. What kind of benchmark would you like to see?

ipsum2
0 replies
10h23m

I would like to see time to first inference for typical models (llama-7b first token, SDXL 1 step, etc)

bl4kers
1 replies
16h52m

How difficult would it be to set up Folding@home on these? https://foldingathome.org

xena
0 replies
16h47m

I'm not sure; the more it uses CUDA, the easier, I bet. I don't know if it would be fiscally worth it though.

benreesman
1 replies
12h38m

I'd be fascinated to hear your thoughts on Apple hardware for inference in particular. I spend a lot of time tuning up inference to run locally for people with Apple Silicon on-prem or even on-desk, and I estimate a lot of headroom left even with all the work that's gone into e.g. GGUF.

Do you think the process-node advantage and SoC/HBM-first design will hold up long enough for the software to catch up? High-end Metal gear looks expensive until you compare it to NVIDIA with 64GB+ of reasonably high-bandwidth memory attached to dedicated FP vector units :)

One imagines that being able to move inference workloads on and off device with a platform like `fly.io` would represent a lot of degrees of freedom for edge-heavy applications.

xena
0 replies
11h24m

Well, let me put it this way. I have a MacBook with 64 GB of vram so I can experiment with making an old-fashioned x.ai clone (the meeting scheduling one, not the "woke chatgpt" one) amongst other things now. I love how Apple Silicon makes things vroomy on my laptop.

I do know that getting those working in a cloud provider setup is a "pain in the ass" (according to ex-AWS friends) so I don't personally have hope in seeing that happen in production.

However, the premise makes me laugh so much, so who knows? :)

thangngoc89
0 replies
13h19m

This is right on time. I'm evaluating "serverless" GPU services for my upcoming project. I see in the announcement that pricing is per hour. Is scaling to zero billed by the minute/second? For my workflow, medical image segmentation, one file takes about 5 minutes.

UncleOxidant
14 replies
16h2m

I don't want to deploy an app; I just want to play around with LLMs and don't want to go out and buy an expensive PC with a high-end GPU just now. Is Fly.io a good way to go? What about alternatives?

mrcwinn
3 replies
15h55m

https://ollama.com/ - Easy setup, run locally, free.

UncleOxidant
2 replies
15h52m

Yeah, but I've got an RTX1070 in my circa 2017 PC. How well is that going to work?

thangngoc89
0 replies
14h15m

It's slow but still decent since it has 8GB of RAM.

jeswin
0 replies
13h3m

You mean GTX 1070. There's no RTX 1070.

ignoramous
2 replies
14h52m
ayewo
1 replies
6h47m

Apart from the Big 3 ...

Who are the big 3 in this context?

gk1
0 replies
5h20m

OpenAI, Anthropic, Cohere

mrb
1 replies
11h6m

Use https://vast.ai and rent a machine for as long as you need (minutes, hours, days). You pick the OS image, and you get a root shell to play with. An RTX 4090 currently costs $0.50 per hour. It literally took me less than 15 minutes to sign up for the first time a few weeks ago.

For comparison, the first-time experience on Amazon EC2 is much worse. I had tried to get a GPU instance on EC2 but couldn't reserve it (cryptic error message). Then I realized that as a first-time EC2 user, my default quota simply doesn't allow any GPU instances. After contacting support and waiting 4-5 days, I eventually got a response that my quota was increased, but I still can't launch a GPU instance... apparently my quota is still zero. At this point I gave up and found vast.ai. I don't know if Amazon realizes how FRUSTRATING their useless default quotas are for first-time EC2 users.

janalsncm
0 replies
9h37m

Pretty much had the same experience with EC2 GPUs. No permission, had to contact support. Got permission a day later. I wanted to run on A100s ($30/hour, 8-GPU minimum) but they were out of them that night. I tried again the next day, same thing. So I gave up and used RunPod.io.

leourbina
1 replies
16h2m

Paperspace is a great way to go for this. You can start by just using their notebook product (similar to Colab), and you get to pick which type of machine/GPU it runs on. Once you have the code you want to run, you can rent machines on demand:

https://www.paperspace.com/notebooks

janalsncm
0 replies
9h32m

I used Paperspace for a while. Pretty cheap for mid-tier GPU access (an A6000, for example). There were a few things that annoyed me though. For one, I couldn't access free GPUs with my team account. So I ended up quitting and buying a 4090 lol.

nojs
0 replies
15h30m

I can recommend runpod.io after a few months of usage - very easy to spin up different GPU configurations for testing, and the pricing is simple and transparent. Using TheBloke's Docker images you can get most local models up and running in a few minutes.

mrkurt
0 replies
15h59m

You might actually be better off building a gaming rig and using that. The datacenter GPUs are silly expensive, because this is how NVIDIA price discriminates. The consumer gaming GPUs work really well, and you can buy them for almost as cheap as you can lease datacenter ones.

dathinab
0 replies
9h12m

The main question: do you need an A100?

Some use cases do.

If not, there are much cheaper consumer-GPU-based choices.

But then again, maybe you only use it for 1-2 hours in total, in which case the price difference might just not matter.

iambateman
13 replies
19h40m

It’s cool to see that they can handle scaling down to zero. Especially for working on experimental sites that don’t have the users to justify even modest server costs.

I would love an example of how much time a request is charged. Obviously it will vary, but is it 2 seconds, or "minimum 60 seconds per spin-up"?

mrkurt
12 replies
19h31m

We charge from the time you boot a machine until it stops. There's no enforced minimum, but in general it's difficult to get much out of a machine in less than 5 seconds. For GPU machines, depending on data size for whatever is going into GPU memory, it could need 30s of runtime to be useful.

bbkane
4 replies
19h25m

I see the Whisper transcription article. Is there an easy way to limit it to, say, $100 worth of transcription a month and then stop until the next month? I want to transcribe a bunch of speeches, but I want to spread the cost over time.

IanCal
3 replies
18h50m

Probably available elsewhere, but you could set up an account with a monthly spend limit with OpenAI and use their API until you hit errors.

$100/mo is about 10 days of speeches a month; how much data do you have?

edit - if the pricing seems reasonable, you can just limit how many minutes you send. AssemblyAI is another provider at about the same cost.
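For the rough math behind the 10-days figure (assuming OpenAI's posted $0.006/minute Whisper rate; it comes out a bit over 10 days, depending on rounding):

```python
# How much audio $100 buys at the assumed Whisper API rate.
rate_per_min = 0.006          # $/min, assumed from OpenAI's pricing page
minutes = 100 / rate_per_min  # ~16,667 minutes
print(f"~{minutes / 60:.0f} hours, ~{minutes / 60 / 24:.1f} days of audio")
```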

bbkane
2 replies
17h15m

Thanks! Maybe 50 hours of speeches. It's a hobby idea, so I'll check these out when I get some time.

xena
0 replies
11h22m

Email xe@fly.io, I'm intrigued.

IanCal
0 replies
6h22m

I can probably just run these through whisper locally for you if you want and are able to share. Email is in my bio (ignore the pricing, I'm obv not charging)

sodality2
3 replies
19h25m

How long does model loading take? Loading 19GB into a machine can't be instantaneous (especially if the model is on a network share).

loloquwowndueo
1 replies
18h51m

There are no "network shares". The typical way to store model data would be in a volume, which is basically local NVMe storage.

xena
0 replies
11h22m

Wellllllll, technically there is LSVD which would let you store model weights in S3.

God that's a horrible idea. Blog time!

carl_dr
0 replies
18h49m

It takes about 7s to load a 9GB model on Beam (they claim, and I tested it as about right); I imagine it is similar with Fly. I've not had any performance issues with Fly.

andes314
2 replies
19h26m

Do you offer some sort of keep_warm parameter that removes this latency (for a greater cost)?

mrkurt
1 replies
19h23m

You control machine lifecycles. To scale down, you just set the appropriate restart policy, then exit(0).

You can also opt to let our proxy stop machines for you, but the most granular option is to just do it in code.

So yes, kind of. You just wait before you exit.
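Concretely, a keep-warm policy can be as simple as this sketch (the idle window and port are arbitrary): track the last request, and exit cleanly once you've been idle long enough.

```python
# Sketch of scale-to-zero on a Machine: exit(0) after an idle window.
# With a suitable restart policy, a fresh machine boots on the next request.
import http.server
import os
import threading
import time

IDLE_SECONDS = 60  # arbitrary keep-warm window
last_request = time.monotonic()

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        global last_request
        last_request = time.monotonic()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

def reaper():
    # Poll for idleness; os._exit(0) works from a thread, unlike sys.exit.
    while True:
        time.sleep(5)
        if time.monotonic() - last_request > IDLE_SECONDS:
            os._exit(0)  # clean exit -> machine stops, billing stops

threading.Thread(target=reaper, daemon=True).start()
http.server.HTTPServer(("", 8080), Handler).serve_forever()
```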

Aeolun
0 replies
17h29m

So just to confirm, for these workloads, it’d start a machine when the request comes in, and then shut it down immediately after the request is finished (with some 30-60s in between I suppose)? Is there some way to keep it up if additional requests are in the queue?

Edit: Found my answer elsewhere (yes).

nextworddev
12 replies
19h10m

Somehow cheaper than AWS?

patmorgan23
4 replies
18h30m

They run their own data centers.

tptacek
3 replies
18h12m

We run our own hardware, but not our own data centers.

huydotnet
2 replies
17h20m

Is there any write-up on how Fly.io runs its infrastructure? The "not our own data centers" fact makes me a little bit interested.

tptacek
0 replies
17h16m

We should write that up! We lease space in data centers like Equinix.

rxyz
0 replies
9h3m

It's just renting space in a big server room. Every mid-to-large city has companies providing that kind of service

CGamesPlay
1 replies
18h57m

AWS is one of the most expensive infrastructure providers out there (especially anything beyond the "basic" services like EC2). And even though AWS still has some globally-notable uptime issues, "nobody ever got fired for picking AWS".

dathinab
0 replies
8h12m

From the hearsay of people who have had to work with AWS, Google Cloud, and Microsoft Azure, it seems to me that the other two are in practice worse, to the point that they would always pick AWS over them even though they hate the AWS UX.

And if it's the best of the big 3 providers, then it can't be that bad, right ..... right? /s

seabrookmx
0 replies
18h40m

They're a "real" cloud provider (with their own hardware) and not a reseller like Vercel and Netlify. So this isn't _that_ surprising. AWS economies of scale do allow them to make certain services cheap but only if they choose. A lot of time they choose to make money!

reactordev
0 replies
19h8m

AWS isn't the cheapest, so how is that a surprise? They are a business and know how to turn the right knobs to increase cash flow. GPUs for AI are one major knob right now.

dathinab
0 replies
8h18m

As a person working at a startup that used AWS for a while:

*AWS is expensive, always, except if magic.*

Where magic means very clever optimizations (often deeply affecting your project architecture/code design) that require the right amount of knowledge/insight into a very confusing UI/UX, plus enough time to evaluate all aspects. I.e., it might simply not be viable for startups, and it's expensive in its own way.

Though most cheaper alternatives have their own huge bag of issues.

Most importantly, Fly.io is its own cloud provider, not just an easier way to use AWS. While I don't know whether they have their own data centers in every region, they do have their own servers.

andersa
0 replies
18h53m

It would be absurd if it wasn't.

Sohcahtoa82
0 replies
18h12m

Genuine question...why are you surprised?

qeternity
6 replies
17h48m

Who is the target market for this? Small/unproven apps that need to run some AI model, but won't/can't use hosted offerings by the literally dozens of race-to-zero startups offering OSS models?

We run plenty of our own models and hardware, so I get wanting to have control over the metal. I'm just trying to figure out who this is targeted at.

KTibow
2 replies
13h39m

Fly is an edge network - in theory, if your GPUs are next to your servers and your servers are next to your users, your app will be very fast, as highlighted in the article. In practice this might not matter much since inference takes a long time anyway.

tptacek
0 replies
12h54m

We're really a couple things; the edge stuff was where we got started in 2020, but "fast booting VMs" is just as important to us now, and that's something that's useful whether or not you're doing edge stuff.

joshxyz
0 replies
12h46m

this is crazy; this move alone cements fly as an edge player for the next 3 / 5 / 10 years.

mrkurt
0 replies
14h54m

We have some ideas but there's no clear answer yet. Probably people building hosting platforms. Maybe not obvious hosting platforms, but hosting platforms.

dathinab
0 replies
9h45m

TL;DR: (skip to last paragraph)

- having the GPU compute in the same data center, or at least from the same cloud provider, can be a huge plus

- it's not that rare for providers we have tried out to run out of available A100 GPUs; even with large providers we had issues like that multiple times (less of an issue if you aren't locked to specific regions)

- not all providers offer a usable scale-to-zero "on demand" model; idk how well it works with Fly long term, but that could be another point

- race-to-zero startups have a tendency not to last; it's kind of by design that out of 100 of them only a very few survive

- if you are already on Fly and write a non-public tech demo that just gets evaluated a few times, their GPU offering can act as a default don't-think-much-about-it solution (though using e.g. Hugging Face services would often be more likely)

- a lot of companies can't run their own hardware for various reasons; at best they can rent a rack in another data center, but for small use cases this isn't always worth it. Similarly, there are use cases which do need A100s but only run them rarely (e.g. on weekly analytics data), potentially less than 1 h/week, in which case race-to-zero pricing might not look interesting at all

To sum up, I think there are many small reasons why some companies, not just startups, might be interested in Fly GPUs, especially if they are already on Fly. But there is no single "that's why" argument, especially if you are already deploying to another cloud.

DreamGen
0 replies
7h44m

I am not seeing any race-to-zero in the hosted offering space. Most charge multiples of what you would pay on GCP, and the public prices on GCP are already several times what you would pay as an enterprise customer.

holoduke
5 replies
19h29m

Anybody have experience with the performance? At first glance they seem quite expensive compared to, for example, Hetzner (CPU machines).

impulser_
4 replies
19h19m

I'm not sure about others, but you can get A100s with 90GB of RAM from DigitalOcean for $1.15 an hour. So about 1/3 the price.

You can even get H100s for cheaper than these prices at $2.24 an hour.

So these do seem a bit expensive, but this might be because there is high demand for them from customers and they don't have the supply.

treesciencebot
1 replies
19h1m

Just to correct the record: both the $1.15 per A100 and $2.24 per H100 require a 3-year commitment. On-demand prices are 2.5x that.

Aeolun
0 replies
17h25m

$2.24/hour pricing is for a 3-year commitment. On-demand pricing for H100 is $5.95/hour under our special promo price. $1.15/hour pricing is for a 3-year commitment.

Wow, that’s some spectacularly false advertising.

skrtskrt
0 replies
19h9m

Getting supply is super hard right now; DigitalOcean just straight-up bought Paperspace to get access to those GPUs.

The whole reason CoreWeave is on a fat growth trajectory right now is that they used their VC money to buy a ton of GPUs at the right time.

dathinab
0 replies
9h5m

The company I work for has had problems multiple times with not being able to allocate any GPUs from some larger cloud providers (with the region restrictions we have, which still include all of the EU).

(I'm not sure which of them it was; we are currently evaluating multiple providers, and I'm not really involved in that process.)

pgt
4 replies
9h28m

I was an early adopter of Fly.io. It is not production-ready. They should fix their basic features before adding new ones.

ecmascript
1 replies
6h0m

Comments like these are just sad to see on HN. They're not constructive. What are these basic features that need fixing you're speaking of, and what are the fixes required?

cschmatzler
0 replies
4h58m

Reliability and support. Having even “the entire node went down” tickets get an auto-response to “please go fuck off into the community forum” is insane. What is the community forum gonna do about your reliability issues? I can get a 4€/mo server at Hetzner and have actual people in the datacenter respond to my technical inquiries within minutes.

urduntupu
0 replies
9h9m

Unfortunately true. I also jumped the Fly.io ship after initial high excitement about their offering. Moved back to DigitalOcean's App Platform. A bit more config effort and significantly pricier, but we need stability in production. Can't have my customers calling me because of service interruptions.

throwaway220033
0 replies
8h59m

+1 - It's the most unreliable hosting service I've ever used in my life, with "nice looking" packaging. There were frequently multiple things broken at the same time; the status page would always be green while my meetings and weekends were ruined. Software can break, but Fly handles incidents with an unprofessional, immature attitude. Basically, you pay 10x more money for an unreliable service that just looks "nice". I'm paying 4x less for much better hardware with Hetzner + Kamal; it works reliably, pricing is predictable, and I don't pay 25% more for the same usage next month.

https://news.ycombinator.com/item?id=36808296

Mikejames
4 replies
14h58m

Anyone know if this is PCI passthrough for a full A100, or some fancy clever vGPU thing?

tptacek
2 replies
14h42m

Do not get me started on the fancy vGPU stuff.

mgliwka
1 replies
12m

I'll bite :-) What are your experiences with that?

tptacek
0 replies
4m

Bad.

mrkurt
0 replies
14h55m

Passthrough, yes.

riquito
3 replies
18h8m

Is there any configuration to keep the machine alive for X seconds after a request has been served, instead of scaling down to zero immediately? I couldn't find it skimming the docs.

mrkurt
1 replies
18h5m

Machines are both dumber and more powerful than you'd think. Scaling down means just exit(0) if you have the right restart policy set. So you can implement any kind of keep-warm logic you want.

Aeolun
0 replies
17h23m

Oh! I hadn't thought of it like that. That makes sense.

kylemclaren
0 replies
4h30m

you might also be looking for `kill_signal` and `kill_timeout`: https://fly.io/docs/reference/configuration/#runtime-options

niz4ts
3 replies
18h7m

As far as I know, Fly uses Firecracker for their VMs. I've been following Firecracker for a while now (even using it in a project), and it doesn't support GPUs out of the box (and there are no plans to support them [1]).

I'm curious to know how Fly figured out their own GPU support with Firecracker. In the past they had some very detailed technical posts on how they achieved certain things, so I'm hoping we'll see one on their GPU support in the future!

[1]: https://github.com/firecracker-microvm/firecracker/issues/11...

mrkurt
2 replies
18h4m

The simple spoiler is that the GPU machines use Cloud Hypervisor, not Firecracker.

niz4ts
1 replies
17h41m

Way simpler than what I was expecting! Any notes to share about Cloud Hypervisor vs Firecracker operationally? I'm assuming the bulkier Cloud Hypervisor doesn't matter much compared to the latency of most GPU workloads.

tptacek
0 replies
17h29m

They are operationally pretty much identical. In both cases, we drive them through a wrapper API server that's part of our orchestrator. Building the cloud-hypervisor wrapper took me all of about 2 hours.

ec109685
3 replies
12h47m

The recipe example, or any LLM use case, seems like a very poor way of highlighting "inference at the edge" given that the extra few hundred ms of round trip won't matter.

unraveller
1 replies
7h6m

The better use case is obviously a voice assistant at the edge, as in voice to text to search/GPT to voice-generated response. That is where ms matter, but it is also a high-abuse angle no one wants to be associated with just yet. My guess is they are going to do this in another post, and if so they should make their own Perplexity-style online GPT. For now they just wanted to see what else people can think up, by making the introduction of it boring.

ec109685
0 replies
2h4m

There are three options for inference: 1) on-device inference, 2) inference "on the edge", 3) inference in a data center.

Given Fly is deployed in Equinix data centers just like everyone else, fundamentally there isn't much difference between #2 and #3.

manishsharan
0 replies
12h27m

This. I cannot think of a business case for running LLMs on the edge. Is this a Pets.com moment for the AI industry?

Havoc
3 replies
19h21m

How fast is the spin-up/down on this scale-to-zero? If it's fast, this could be pretty interesting.

amanda99
2 replies
18h59m

I think the bigger question is how long it takes to load any meaningful model onto the GPU.

fideloper
1 replies
18h30m

that’s exactly right.

GPU-friendly base images tend to be larger (1-3GB+), so it takes time (30s-2m range) to create a new Machine (VM).

Then there's the "spin up time" of your software: downloading model files adds however long it takes to download gigabytes of model files.

Models (and pip dependencies!) can generally be “cached” if you (re)use volumes.

Attaching volumes to GPU machines dynamically created via the API takes a bit of management on your end (in that you'd need to keep track of your volumes, what region they're in, and what to do if you need more volumes than you have).
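For the model-caching piece, one common pattern is to point the download cache at the mounted volume so only the first boot pays for the download (a sketch; the mount path and model repo are placeholders):

```python
# Sketch: cache model weights on a Fly volume so later cold boots skip
# the download. Mount path and model repo are placeholders.
import os
from huggingface_hub import snapshot_download

VOLUME = "/data"  # wherever fly.toml mounts the volume (placeholder)

model_dir = snapshot_download(
    repo_id="TheBloke/Mistral-7B-v0.1-GGUF",     # placeholder model
    cache_dir=os.path.join(VOLUME, "hf-cache"),  # lands on the volume
)
print("weights at", model_dir)  # subsequent boots hit the cache
```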

dathinab
0 replies
8h58m

I know it's not common in research, and it often makes little sense there.

But at least in theory, for deployments you should generate deployment images.

I.e., no pip included in the image(!), all dependencies preloaded, unnecessary parts stripped, etc.

Models might also be bundled, but not always.

Still large images, but depending on what they are for, the same image might be reused often, so it can be cached by the provider to some degree.

unixhero
2 replies
6h7m

I use the Fly.io free tier to run uptime monitoring with Uptime Kuma. It works insanely well, and I'm a really happy camper.

rozenmd
1 replies
5h58m

What do you use to let you know Uptime Kuma went down?

unixhero
0 replies
4h6m

It doesn't

wslh
0 replies
7h39m

Interesting. We have been discussing this kind of service (offloading training) over the last several days [1] [2] [3], thinking about the opportunity to compete with top cloud services such as Google Cloud, AWS, and Azure.

[1] https://news.ycombinator.com/item?id=39353663

[2] https://news.ycombinator.com/item?id=39329764

[3] https://news.ycombinator.com/item?id=39263422

m3kw9
0 replies
15h35m

Having GPUs is news now?

isoprophlex
0 replies
11h16m

Almost half the price of Modal! Very nice!

faust201
0 replies
1h19m

The speed of light is only so fast

This is the title of one of the sections. Why? I think the IT sector needs to stop using such titles.

dvrp
0 replies
14h26m

too expensive

dcsan
0 replies
17h36m

Can Fly run Cog files like Replicate uses? It would be nice to take those pre-packaged models and run them here with the same prediction API.

Maybe because it's Replicate's, they might be hesitant to adopt it, but it does seem to make things a lot smoother. Even with Lambda Labs' Lambda Stack I still hit CUDA hell: https://github.com/replicate/cog

bugbuddy
0 replies
11h17m

This is amazing, and it shows that Nvidia should be the most valuable stock in the world. Every company, country, city, town, village, large enterprise, medium and small business, AI bro, crypto bro, gamer bro, big tech, small tech, old tech, new tech, and startup wants Nvidia GPUs. Nvidia GPUs will become the new green oil of the 21st century. I am all in, and nothing short of a margin call will change my mind.

andes314
0 replies
19h28m

Has anyone who has used Beam.Cloud compared that service to this one?

DreamGen
0 replies
7h45m

Great, more competition for price-gouging platforms like Replicate and Modal is needed. As always with these, I would be curious about the cold-start time -- are you doing anything smart about being able to start (load models into VRAM) quickly? Most platforms that I tested are completely naive in their implementation, often downloading the Docker image just-in-time instead of having it ready to be deployed on multiple machines.