
Infrastructure setup and open-source scripts to train 70B model from bare metal

thejash
13 replies
1d19h

In the span of a few months, with a small team of researchers and engineers, we trained a 70B parameter model from scratch on our own infrastructure that outperformed zero-shot GPT-4o on reasoning-related tasks. Using our cluster for high performance training meant that every component — InfiniBand, Ethernet, GPUs, and the nodes themselves — had to work perfectly. If even a single one of the over 12,000 connections was a little flaky, it could slow down the entire training run.

We're sharing open-source scripts and an end-to-end guide for infrastructure set-up that details the process of making everything work perfectly, and ensuring that it stays that way.
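
As a flavor of what those checks look like, here is a minimal sketch of a per-node InfiniBand port check. It assumes the standard ibstat tool from infiniband-diags is available; the 400 Gb/s expected rate and the parsing details are illustrative assumptions rather than details taken from the released scripts.

    # Minimal sketch: flag InfiniBand ports that are down, not linked up, or
    # running below an expected rate. Assumes the standard `ibstat` tool from
    # infiniband-diags is installed. The 400 Gb/s expected rate is an
    # illustrative assumption, not a value from the guide.
    import re
    import subprocess

    EXPECTED_RATE_GBPS = 400  # assumed per-link rate; adjust for your fabric

    def check_ib_ports():
        out = subprocess.run(["ibstat"], capture_output=True, text=True, check=True).stdout
        problems = []
        # ibstat prints one block per adapter ("CA 'mlx5_0'"), each with "Port N:" sub-blocks.
        for ca, body in re.findall(r"CA '([^']+)'\n(.*?)(?=\nCA '|\Z)", out, re.S):
            for port, pbody in re.findall(r"Port (\d+):\n(.*?)(?=\n\s*Port \d+:|\Z)", body, re.S):
                state = re.search(r"State:\s*(\S+)", pbody)
                phys = re.search(r"Physical state:\s*(\S+)", pbody)
                rate = re.search(r"Rate:\s*(\d+)", pbody)
                if not state or state.group(1) != "Active":
                    problems.append(f"{ca} port {port}: state is not Active")
                if not phys or phys.group(1) != "LinkUp":
                    problems.append(f"{ca} port {port}: link is not up")
                if rate and int(rate.group(1)) < EXPECTED_RATE_GBPS:
                    problems.append(f"{ca} port {port}: rate {rate.group(1)} Gb/s below expected {EXPECTED_RATE_GBPS}")
        return problems

    if __name__ == "__main__":
        for problem in check_ib_ports():
            print(problem)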

This is one part of a three-part toolkit on training a 70B model from scratch. The other two parts focus on evaluations and on CARBS, our hyperparameter optimizer; you can find them here: https://imbue.com/research/70b-intro/

Thoughts and questions welcome! :)

chx
8 replies
1d8h

If even a single one of the over 12,000 connections was a little flaky, it could slow down the entire training run

It's an unusual enough sentence to be remarkable, and I thought, "I've read this exact same sentence before." Indeed, this and most of the writeup seem to have appeared word for word on Twitter, LinkedIn, and Reddit. Is this just spam?

https://x.com/imbue_ai/status/1805629547473518695

https://reddit.com/r/learnmachinelearning/comments/1dobgbs/t...

https://www.linkedin.com/posts/mattboulos_training-a-70b-mod...

bottled_poe
1 replies
1d7h

lmao, I was thinking this was bullshit and you’ve cemented that position. We’ve entered the grifting stage of this AI cycle. Salut.

knowaveragejoe
0 replies
1d1h

Having listened to the person who wrote this speak at length about the subject, it is not BS or grifting.

neilv
0 replies
1d6h

I'd rather a company copy and paste the same text to multiple places -- if the alternative is that those places instead get the same information obfuscated to appear novel each time (so I'd have to read all of them to realize they're all just the same info).

lolinder
0 replies
1d4h

This is the kind of criticism that could only come from someone without much formal writing experience.

This is a very normal workflow: You write a full-length text detailing the project you worked on. You then trim it down to a summary which you share with a group of people X. You then trim it down into a different summary which you share with a group of people Y.

When you do this multiple times you unsurprisingly end up with some sentences that make it into multiple summaries because they're that important to the thesis!

(Also, the summaries on Twitter and Reddit aren't anything close to "most of the writeup"—the full text is 6000+ words!)

leothetechguy
0 replies
1d6h

The same company reports multiple times on a finding they've made through multiple social media channels? Shocking. /s

fastasucan
0 replies
1d

I don't understand your issue with this. Is it that they share their work in several places, or that they don't describe their work in a unique way every time?

exe34
0 replies
59m

I prefer this to the story about that time they went to Florence and their grandma made pizza for dinner and they got the recipe.

ac29
0 replies
1d3h

Eh, seems like legit marketing to me. Yes, they are trying to sell you something, but they are doing that by releasing non-trivial research and open source code.

vessenes
0 replies
1d2h

Loved this and the level of detail - thank you. It's the best inside look at the engineering work behind these models I've ever read.

Two things I'm curious about. First, what difference, if any, would you imagine in training a 400B parameter model? It seems that you have plenty of VRAM across the cluster, but I want to know what you think.

Second, do you think this sort of architecture is the end game for model training? It seems sooo fragile. Are there better shared training mechanisms/architectures? Are there better cluster geometries?

Thanks again - great read.

ipsum2
0 replies
1d11h

What happened to the Minecraft-like 3d world your team built? Did you guys pivot?

highfrequency
0 replies
1d4h

outperformed zero-shot GPT-4o

Cool stuff! Does this do RLHF or just pretraining? If the latter, how did you manage to beat GPT-4o?

Flumio
0 replies
1d10h

Nice. Thanks for the write-up.

john2x
6 replies
1d13h

Once the model is trained, what happens to the hardware and infrastructure?

pvg
3 replies
1d12h

It probably isn't the answer but should be - LAN party.

rvnx
2 replies
1d12h

The GPUs will be reused to mine Monero and exfiltrate money to the founders at the expense of the investors.

Oops, don't tell anyone I told you.

EDIT: Sorry, Dogecoin; thanks for the tip!

surfingdino
0 replies
1d12h

Are you suggesting training models is a cover for mining crypto? The hardware is dual-purpose...

BetaDeltaAlpha
0 replies
1d11h

Monero uses a CPU-optimized proof-of-work algorithm. Dogecoin is a better bet.

trashtester
0 replies
1d10h

Voltage Park is a cloud provider. This is no different from renting bare-metal infra from AWS, GCP, or Azure.

Except Voltage Park, being smaller, is probably more willing to provide some customized setup.

Indeed, they may even see it as a learning opportunity for when they rent similar setups to other customers.

gostsamo
0 replies
1d12h

Either training the next model or inference for the already trained one. In some cases, you might even offer it as a service.

mmastrac
2 replies
1d2h

Honest question: why is there so much PC hardware in the mix here? Why don't we have PCIe + InfiniBand backends with GPUs and a tiny orchestrating ARM controller, and just let them all coordinate with each other? Is it just "momentum" from previous designs and/or a lack of "market" for specialized GPU controllers?

ianburrell
0 replies
23h22m

Because when you have a quarter million dollars of GPUs in each machine, it is dumb to worry about a few thousand for the controlling hardware. Too risky to use something new.

Another problem is that all the hardware, drivers, and experience for GPUs are on PC. It would take a lot of work to get things running on ARM, since you would be starting from scratch. Then more work to get it stable. All to save a little on the processor.

bick_nyers
0 replies
1d1h

Are you asking why pay extra for a CPU and RAM? Not everything can be done on a GPU, for example, .png decompression.

If you really analyzed your training code and preprocessed your data substantially, you could probably get away with very lightweight CPU/RAM resources, but I think the reality is that it's such a minor contribution to the overall system cost (GPUs are expensive) that wasting development cycles on that degree of optimization isn't strictly necessary. When you're a hyperscaler you are likely chasing those fractions of a percent of cost efficiency, though.

To use my original example, you would likely want to preprocess your .png files to either .webp (multi-threaded lossless) or .jpeg (lossy), but it likely wouldn't make sense to turn them into a GPU-decompressible format: you would save on CPU cost during training but would pay more in storage (and maybe transfer) cost.

Edit: To be more clear, if the CPU work is bottlenecking training, you want to optimize that as much as possible by preprocessing your data/tweaking training scripts. What I'm discussing here is the gap between "fast enough" and "faster":

CPU is not fast enough for training < CPU is exactly fast enough for training < CPU is faster than needed for training
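
A minimal sketch of the kind of offline preprocessing pass described above, assuming Pillow is installed; the paths, formats, and quality settings are illustrative choices, not anything specific to the post:

    # Minimal sketch: convert .png images to lossless .webp (or lossy .jpeg)
    # ahead of training so the dataloader spends less CPU time on decompression.
    # Assumes Pillow is installed; paths and quality settings are illustrative.
    from pathlib import Path
    from PIL import Image

    def convert_png(src, dst_dir, lossless=True):
        img = Image.open(src)
        if lossless:
            dst = dst_dir / (src.stem + ".webp")
            img.save(dst, "WEBP", lossless=True)
        else:
            dst = dst_dir / (src.stem + ".jpg")
            img.convert("RGB").save(dst, "JPEG", quality=90)  # drop alpha for JPEG
        return dst

    if __name__ == "__main__":
        out_dir = Path("preprocessed")
        out_dir.mkdir(exist_ok=True)
        for png in Path("raw_images").glob("*.png"):
            convert_png(png, out_dir)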

alias_neo
2 replies
1d4h

This post focuses on one cluster that had 4,092 H100 GPUs spread across 511 computers, with eight GPUs to a computer

Am I right in understanding that that's over $100 million worth of GPUs?

I wonder what, when, or if any of this will be within the realm of an enthusiast with a gaming-PC budget.
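
(Rough sanity check, using an assumed street price of about $25,000-30,000 per H100 rather than any figure from the post: 4,092 x $25,000 is roughly $102 million, so "over $100 million" looks about right.)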

mandeepj
0 replies
16h43m

Am I right in understanding, that's over $100 Million worth of GPUs?

Ha! I guess most or many of the readers (who don't have that much funding) should jump to the next HN submission.

loudmax
1 replies
1d5h

This was discussed on the Latent Space podcast a few days ago: https://www.latent.space/p/llm-training-2024

That was a good episode, worth listening to for hearing justifications behind some of these decisions.

swyx
0 replies
14h36m

Thank you for listening!

I'm not used to conducting these kinds of interviews and felt out of my depth. Please suggest questions that you felt should have been asked but weren't.

weinzierl
0 replies
1d5h

How much did it cost? Overall, from nothing to the usable model files, in hardware, development hours, and ultimately electricity and cooling?

renewiltord
0 replies
1d11h

This is hella cool. Cisco has a new Nvidia collab with 800G per port. I don't recall if it was RoCE or not. The InfiniBand is accessible by the GPUs here? Beautiful.

Thank you for sharing all this. One of the more directly useful posts.

mikewarot
0 replies
22h3m

It would be quite interesting to see the same hardware used to repeat the training, but with raw Unicode, instead of tokenized training data.

I'd like to see the difference in performance on spelling and rhymes.
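
For anyone unfamiliar with the distinction, a tiny illustration (nothing to do with Imbue's actual tokenizer, which isn't described here): raw UTF-8 bytes give the model one symbol per character for plain ASCII text, so spelling structure stays visible, whereas a subword tokenizer typically collapses a whole word into one or two opaque IDs.

    # Tiny illustration: byte-level input keeps per-character structure visible,
    # while a BPE/subword tokenizer (not shown; any real vocabulary would do)
    # usually maps the same word to one or two opaque IDs.
    word = "rhyme"
    byte_ids = list(word.encode("utf-8"))
    print(byte_ids)       # [114, 104, 121, 109, 101] -- one symbol per letter
    print(len(byte_ids))  # 5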

lifeisstillgood
0 replies
1d8h

I am fascinated by the total electrical power drawn to build these models - power and cooling, I guess. Do you have any numbers on that? (The point being that Zuckerberg suggested in a podcast that the next 1 GW model was being planned - basically a data centre with a mid-sized power plant attached.)

instagib
0 replies
1d6h

4,092 H100 GPUs.

They're working on "self-coding". Do they mean no-code or minimal-code solutions, or something else?

There are quite a few articles on their website that people may also be interested in: https://imbue.com/our-work/