There are two findings I find shocking in this work:
* In existing LLMs, we can replace all parameter floating-point values representing real numbers with ternary values representing (-1, 0, 1).
* In matrix multiplications (e.g., weights by vectors), we can replace elementwise products in each dot product (a₁b₁ + a₂b₂ ...) with elementwise additions (a₁+b₁ + a₂+b₂ ...), in which signs depend on each value. See the paper for exact details.
On existing hardware, the gains in compute and memory efficiency are significant, without performance degradation (as tested by the authors).
If the proposed methods are implemented in hardware, we will see even greater gains in compute and memory efficiency.
Wow.
I'd be VERY cautious about being excited here.
My priors are like this:
1. Initial training of a neural network moves all weights around a large amount at first.
2. Later training of the network adjusts them a small amount.
3. An undertrained network will therefore look a lot like it's just figuring out "positive, negative, or 0?" for each weight during early training.
If all these things are true, then
1. Early training of an fp16 network and a bitnet with 0 added will yield roughly similar results
2. Later training will yield different / worse results, as the network gets into the 'fine tuning' part of the training.
I think the paper's stats back these priors up -- they say "this works on (3B+) large networks, but not small ones." They then imply there's something about the structure of a large network that allows a bitnet to do well. It seems more likely to me it works on large networks because they have not put the compute into 3B+ networks to get past the 'gross tuning' phase.
The networks they do have the compute to train 'fully' -- the smaller ones -- don't show the results.
Also, a quick reminder that a perplexity of 12 is really terrible. You would not want to use such a network. Hopefully I'm wrong and we can get something for free here! But I'm cautious-to-skeptical.
Update - I'm still cautious about this paper, but I had the table numbers inverted in my head while thinking about it. The paper shows better perplexity results than competing models at larger parameter sizes, so I was wrong.
I was pretty unhappy and suspicious for the same reason. Not reporting perplexity for a 70B network while reporting its efficiency means that someone did something and the result wasn't good enough to put in the paper.
According to the author, the 70B model is not fully trained.
"Is not fully trained" can also mean "we did not figure out how to reach an acceptable loss" or "training was unstable," both of which are common for ML systems.
It probably means exactly that: the model is not fully trained, because it is very expensive to train a 70B model. Not even Mamba or RWKV have a model that comes close to that size. The leeriness is just kinda silly, honestly.
Extraordinary claims require extraordinary evidence.
That's not to say that a 70B model is necessary, but surely something larger than 3B is doable, especially given that the results of the paper directly imply a significant reduction in memory requirements for training such a model.
Isn't memory use in training higher, since they maintain high precision latent weights in addition to the binarized weights used in the forward pass?
Most research universities have the resources to train a ~10B parameter model, at least.
For sure, bigger models are needed to compete with transformer LLMs (same thing for Mamba). I was just bothered by the distrust of something very reasonable, like not being able to fully train a 70B model.
One can forgive the lack of quality results for the 70B model, but apparently they trained 7B and 13B versions of their model, and don't report those either.
Intuitively I've always been a bit skeptical of quantization. Wouldn't there be a tiny loss in precision by doing this type of quantization? I could imagine the error function increasing by utilizing these types of techniques.
John Carmack pointed out (and I learned it here at HN) that what training really needs is the *sign* of each individual gradient parameter. I.e., you can quantize the gradient to -1, 0 and 1 and still have the neural network learn much of the dataset.
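For the curious, this idea shows up in the optimization literature as signSGD. A toy sketch of a sign-only update step (my own code assuming a standard PyTorch setup; not Carmack's method or any particular paper's recipe):

```python
import torch

def sign_sgd_step(model, loss_fn, batch, lr=1e-3):
    # Toy signSGD-style step: compute gradients as usual, but update each
    # parameter using only the sign (-1, 0, or +1) of its gradient.
    model.zero_grad()
    loss = loss_fn(model, batch)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * p.grad.sign()
    return loss.item()
```

Here `model`, `loss_fn`, and `batch` are placeholders; the only change from plain SGD is `p.grad.sign()` in place of `p.grad`.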
Wow! Is there a link to read up more on this?
Why isn't John Carmack working for OpenAI? Hell, why did he waste years at Meta to work on a VR headset and NOT AI? He even announced he wants to focus on AGI but he missed out on literally all the action.
he has his own AGI startup now https://dallasinnovates.com/john-carmacks-keen-technologies-...
TBH I think they won't get anywhere. Doing good game engine work... why would that translate to AGI?
Yes each weight will not be able to "learn" as much if it has less bits of precision. But the idea is that you can use more weights, and the big question is whether these low-precision weights can make the model more accurate, as a whole.
Quantization does reduce quality of the outputs. But the point is that you save enough memory doing so that you can cram a larger model into the same hardware, and this more than compensates for lost precision.
It does increase the “error” (meaning it is less likely to predict the next word when compared against a dataset) but the losses are lower than your intuition would guide you to believe.
Thank you. Your key point -- that so far all models with the proposed methods may have been only "grossly trained" -- is compelling. If I understand the authors correctly, they trained the compared models on only 100B tokens, all drawn from RedPajama, to make the comparisons apples-to-apples. That seems sensible to me, and makes replication easier, but I agree we still need to see more extensive testing, after more extensive pretraining, on larger models.
They also trained 3B with 2 trillion tokens.
You're right. Thank you for pointing that out!
Wait, are we reading the same paper? What I'm seeing is comparable accuracy to unquantized models for <4B params, and nothing reported for larger models except resource consumption.
Nope, you're right, I got the table inverted in my head. I'm updating my top comment.
Then perhaps a method emerges out of this to make training faster (but not inference) - do early training on highly quantized (even ternary) weights, and then swap out the weights for fp16 or something and fine-tune? Might save $$$ in training large models.
The 3B model had a perplexity of 9.91, less than LLaMa 1 in fp16.
Why is this so shocking? Quantization has been widely explored, driving that to its extreme (and blowing up parameter count to make up for it) just seems like a natural extension of that.
Easier said than done, of course, and very impressive that they pulled it off.
I feel like this follows naturally from having only ternary values, multiplication doesn't really bring much to the table here. It's a bit surprising that it's performing so well on existing hardware, usually multiplication hardware sees more optimization, especially for GPGPU hardware.
> Why is this so shocking? Quantization has been widely explored, driving that to its extreme (and blowing up parameter count to make up for it) just seems like a natural extension of that.
I find it shocking that we don't even need lower floating-point precision. We don't need precision at all. We only need three symbols to represent every value.
> I feel like this follows naturally from having only ternary values, multiplication doesn't really bring much to the table here. It's a bit surprising that it's performing so well on existing hardware, usually multiplication hardware sees more optimization, especially for GPGPU hardware.
I find it shocking. Consider that associative addition over ternary digits, or trits, represented by three symbols (a,b,c) has only three possible input pairs, (a,b), (a,c), or (b,c) (within each pair, order doesn't matter), and only three possible outputs, a, b, or c. Matrix multiplications could be executed via crazy-cheap tritwise operations in hardware. Maybe ternary hardware[a] will become a thing in AI?
---
[a] https://en.wikipedia.org/wiki/Ternary_computer
An integer is just a concatenation of bits. Floating point appears more complicated but from an information theory perspective it is also just a concatenation of bits. If, for the sake of argument, one replaced a 64-bit int with 64 individual bits, that's really the same amount of information, and a structure could hypothetically then either choose to recreate the original 64-bit int, or use the 64 bits more efficiently by choosing from the much larger set of possible ways to use such resources.
Trits are helpful for neural nets, though, since they really love signs and they need a 0.
So from the perspective that it's all just bits in the end the only thing that is interesting is how useful it is to arrange those bits into trits for this particular algorithm, and that the algorithm seems to be able to use things more effectively that way than with raw bits.
This may seem an absolutely bizarre zigzag, but I am reminded of Busy Beavers, because of the way they take the very small primitives of a Turing Machine, break them down to the smallest pieces, then combine them in ways that almost immediately cease to be humanly comprehensible. It's a completely different selection mechanism for what appears, but it turns out Turing Machine states can do a lot "more" than you might think simply by looking at human-designed TMs. We humans have very stereotypical design methodologies, and they have their advantages, but sometimes just letting algorithms rip can result in much better things than we could ever hope to design with the same resources.
Yes. Though here the interesting point is not so much that these structures exist, but that 'stupid' back-propagation is smart enough to find them.
You can't find busy beavers like that.
> So from the perspective that it's all just bits in the end the only thing that is interesting is how useful it is to arrange those bits into trits for this particular algorithm, and that the algorithm seems to be able to use things more effectively that way than with raw bits.
Thank you. I find many other things interesting here, including the potential implications for hardware, but otherwise, yes, I agree with you, that is interesting.
This sort of breakdown also reminds me of the explanation of why busy beavers grow faster than anything humans can ever define. Anything a human can define is a finite number of steps that can be represented by some Turing machine of size M. A Turing machine of size N > M can then use M as a subset of itself, growing faster than the Turing machine of size M. Either it is the busy beaver for size N, or it grows slower than the busy beaver for size N. Either way, the busy beaver for size N grows faster than whatever the human defined that was captured by the Turing machine of size M. This explanation was what helped me understand why busy beaver grows faster than any operator that can be formally defined (obviously you can define an operator that references busy beaver itself, but busy beaver can be considered to not be formally defined, and thus any operator defined using it isn't formally defined either).
The bit about floating point numbers just being a collection of bits interpreted in a certain way helps make sense why a bigger model doesn't need floating points at all.
The matrices (weights) are ternary.
The vectors are not.
The activations are in (-1, 1), so they're also representable by (-1, 0, 1).
This is wrong. The paper describes the activations as int8 during inference.
That being said, before-LLM-era deep learning already had low bit quantization down to 1w2f [0] working back in 2016 [1]. So it's certainly possible it would work for LLM too.
[0] 1-bit weights, 2-bit activations; though practically people deployed 2w4f instead. [1] https://arxiv.org/abs/1606.06160
If you find three symbols per weight shocking, this paper should completely blow your mind: https://arxiv.org/abs/1803.03764
I admit it did shock me when it came out.
EDIT: Embarrassingly, on the last paragraph I got the number of possible input pairs wrong:
> only three possible input pairs, (a,b), (a,c), or (b,c) (within each pair, order doesn't matter)
The correct number, ignoring order, is six pairs, because we have to include (a,a), (b,b), and (c,c).
Well I guess it's the “blowing up parameter count to make up for it” that confuses me, but maybe it's just ignorance.
Like what would be the expected factor of this blow up to make up the difference between ternary and whatever 16 bits encoding they were using?
I mean intuitively I'd expect to need ~10× the symbols to encode the same information? Are they using an order of magnitude more parameters, or is that not how it works?
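For what it's worth, the naive information-count arithmetic does land near 10x (my own back-of-the-envelope, not a number from the paper):

```python
import math

bits_per_trit = math.log2(3)         # ~1.585 bits of raw information per ternary weight
trits_per_fp16 = 16 / bits_per_trit  # ~10.1 trits to carry as many raw bits as one fp16 value
print(round(bits_per_trit, 3), round(trits_per_fp16, 2))  # 1.585 10.09
```

Whether model quality actually tracks raw information content like that is exactly the open question, though.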
With existing common quantization techniques, a 70b model quantized to 3-bit still drastically outperforms an unquantized 35b model.
Are you sure? I was under impression that 3b quantization still results in a significant degradation. Which quantization method are you talking about?
It does result in a significant degradation relative to unquantized model of the same size, but even with simple llama.cpp K-quantization, it's still worth it all the way down to 2-bit. The chart in this llama.cpp PR speaks for itself:
https://github.com/ggerganov/llama.cpp/pull/1684#issue-17396...
Oh wow, you’re right. Though it seems that they are using very small weight group sizes: either 16 or 32 (fp16 scaling factor per group). In this paper it seems there’s no weights grouping, so it’s a bit apples to oranges.
Because it's no longer a linear optimization or curve fitting problem. It becomes a voting or combinatorial problem. Which at least in my mind are two completely different areas of research.
With enough parameters, it probably starts looking continuous again. Like how in physics everything is quantised at the smallest scale but if you put enough atoms together it all smooths out and behaves "classically".
Yes, but we can simulate classical physics using mathematical shortcuts. Simulating every little atom would take a lot more work.
Based on an (admittedly rapid and indulgent) reading of the paper, it seems like they're not increasing the parameter count. Do you mind pointing out where the blowup is occurring?
No, unless I'm mistaken it's a huge impact: it means the matrix product is separable. Basically, it's an O(n²) algorithm, not O(n³): add together all the c_j = sum(a_i_j) and d_i = sum(b_i_j), and the final results are all the combinations c_j + d_i. And even then, half of that is unnecessary, because the d_i can all be pre-computed before inference since they are weights.
But I skimmed over the paper, and didn't find the part where it is explained how they replace the product with additions: from what I understand, they replace multiplications by b_i with selecting +a_i, 0, or -a_i. So the final matrix multiplication can be implemented with only additions, but it's only because the weights are 1, 0, -1 that they avoid multiplications altogether. This is really different from what the GP said (replacing a₀*b₀ + ... with a₀+b₀ + ...).
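To make the "selecting +a_i, 0, or -a_i" reading concrete, here is a toy NumPy sketch (my own code and naming, not the paper's actual kernel):

```python
import numpy as np

def ternary_matvec(W, a):
    # Toy version of y = W @ a when W's entries are restricted to {-1, 0, +1}:
    # add the activations where the weight is +1, subtract them where it is -1,
    # and skip them where it is 0 -- no multiplications needed in principle.
    y = np.zeros(W.shape[0], dtype=a.dtype)
    for i, row in enumerate(W):
        y[i] = a[row == 1].sum() - a[row == -1].sum()
    return y

# Sanity check against an ordinary matrix-vector product
rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8)).astype(np.float32)  # entries in {-1, 0, 1}
a = rng.standard_normal(8).astype(np.float32)
assert np.allclose(ternary_matvec(W, a), W @ a)
```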
Fun to see ternary weights making a comeback. This was hot back in 2016 with BinaryConnect and the TrueNorth chip from IBM Research (disclosure: I was one of the lead chip architects there).
The authors seem to have missed the history. They should at least cite BinaryConnect or straight-through estimators (not my work).
Helpful hint to authors: you can get down to 0.68 bits / weight using a similar technique, good chance this will work for LLMs too.
https://arxiv.org/abs/1606.01981
This was a passion project of mine in my last few months at IBM research :).
I am convinced there is a deep connection to understanding why backprop is unreasonably effective, and the result that you can train low-precision DNNs; for those not familiar, the technique is to compute the loss wrt the low-precision parameters (e.g., projected to ternary) but apply the gradient to a high-precision copy of the parameters (known as the straight-through estimator; a rough sketch is below). This is a biased estimator and there is no theoretical underpinning for why this should work, but in practice it works well.
My best guess is that it is encouraging the network to choose good underlying subnetworks to solve the problem, similar to Lottery Ticket Hypothesis. With ternary weights it is just about who connects to who (ie a graph), and not about the individual weight values anymore.
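For readers who haven't seen the trick, here is a minimal PyTorch sketch of the straight-through estimator described above (my own illustrative code; IIRC the actual BitNet b1.58 recipe uses an absmean-based quantizer with extra scaling):

```python
import torch

class TernarySTE(torch.autograd.Function):
    """Project weights to {-1, 0, +1} in the forward pass, but pass the
    gradient straight through to the high-precision latent weights."""

    @staticmethod
    def forward(ctx, w):
        # Illustrative ternarizer: zero out small weights, keep the sign of the rest.
        thr = 0.5 * w.abs().mean()
        return torch.where(w.abs() < thr, torch.zeros_like(w), w.sign())

    @staticmethod
    def backward(ctx, grad_out):
        # Straight-through: pretend the quantizer was the identity function.
        return grad_out

# Typical use inside a layer's forward pass:
#   w_q = TernarySTE.apply(self.weight)   # ternary weights used in the matmul
#   y = x @ w_q.t()
# The optimizer keeps updating self.weight, the high-precision latent copy.
```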
IIRC, Hamming's book "Digital Filters" (1989) has a section on FFTs with only the sign of the coefficient being used. It performed surprisingly well.
What is the sign of a complex number? Do you mean the phase?
AFAICT, both the real and imaginary components are from (-1, 0, +1) only. No single sign, but only 8 directions and the center.
You mean Fast Hadamard Transform?
They train using the Straight-Through Estimator, but it is cited in the previous BitNet paper. What happened to the TrueNorth chip? I think investing in specialized hardware for AI is a good bet.
Nice to know there is a trail to relevant citations. I missed the BitNet paper and need to catch up.
Btw TrueNorth project evolved into "NorthPole" chip by the same group, and was recently in the press. From afar NorthPole looks like an interesting design point and leverages on-chip memory (SRAM)--so it's targeting speed and efficiency at the expense of memory density (so perhaps like Groq in some respects). Tbh I haven't followed the field closely after leaving the group.
Could the reason that 3 states are more efficient than 2 states in this case be that 3 is closer to 2.718... (Euler's number) than 2 is?
Why not have some layers/nodes/systems be 2 states and have others be 3... couldn't you get arbitrarily close to Euler's number that way?
That’s really interesting to see the breadcrumb trail goes back that far.
So what are the most important insights in this paper compared to what was previously done?
I assume there’s more context to the story and it’s not just that no one thought to apply the concepts to LLM’s until now?
I don't think there is anything conceptually new in this work, other than it is applied to LLMs.
But in fairness, getting these techniques to work at scale is no small feat. In my experience quantization aware training at these low bit depths was always finicky and required a very careful hand. I'd be interested to know if it has become easier to do, now that there are so many more parameters in LLMs.
In any case full kudos to the authors and I'm glad to see people continuing this work.
As aside, I'm curious: what was it like to work at IBM research, especially as a legacy industrial research org?
You can probably apply the same techniques 'Deep neural networks are robust to weight binarization and other non-linear distortions' used to get to 0.68 bits / weight to get your ternary weights below one bit; so you can claim they are still one-bit networks.
Thank you. Others on this thread have addressed the citation-trail issues you raise. I just want to tell you how helpful I find your comment about why ternary weights ought to work at all without degrading performance:
Your guess sounds and feels right to me, even if currently there's no way to express it formally, with the rigor it deserves.
Thank you again for your comment!
They cite straight-through estimators in the previous (actually binary) BitNet work, which shares many of the same authors.
We have been experimenting with the paper(https://www.researchgate.net/publication/372834606_ON_NON-IT...).
There is a mathematical proof that binary representation is enough to capture the latent space. And in fact we don't even need to do "training" to get that representation.
The practical application we tried out for this algorithm was to create an alternate space for mpnet embeddings of Wikipedia paragraphs. Using Bit embedding we are able to represent 36 million passages of Wikipedia in 2GB.(https://gpt3experiments.substack.com/p/building-a-vector-dat...)
How is this not lossy compression?
LLMs and vector embeddings are always lossy compression, yes?
Almost always. Though you can use them in a lossless compression system, too, with a few tricks.
It kind of is!
kind of related: https://medium.com/@heinrichpeters/commentary-gzip-knn-beats...
You're talking about mapping floating-point vector representations, i.e., embeddings, computed by a pretrained LLM to binary vector representations, right? And you're talking about doing this by first having someone else's pretrained LLM compute the embeddings, right? Sorry, but that seems only minimally, tangentially related to the topic of running LLMs in ternary space. I don't see how your comment is relevant to the discussion here.
Yeah, sorry, I needed a much bigger canvas than a comment to explain. Let me try again. The example I took was to show mapping from one space to another space, and it may have just come across as not learning anything. Yes, you are right, it was someone else's pretrained LLM. But this new space learnt the latent representations of the original embedding space. Now, instead of the original embedding space, it could also have been some image representation or some audio representation. Even neural networks take input in X space and learn a representation in Y space. The paper shows that any layer of a neural network can in fact be replaced with a set of planes, that we can represent a space using those planes, and that those planes can be created in a non-iterative way. Not sure if I am being clear, but I have written a small blog post showing, for MNIST, how an NN creates the planes (https://gpt3experiments.substack.com/p/understanding-neural-...). I will write more on how, once these planes are drawn, we can use a bit representation instead of floating-point values to get similar accuracy in prediction, and then how we can draw those planes without the iterative training process.
Sounds interesting, but this is the part I would need more explanation on.
Just started reading your linked blog, I see it goes into some details there.
Will add a lot more details next week. Have been postponing it for a long time.
I find this extremely interesting. Do you share the source code of the process? any more references?
Unfortunately the source code is currently not open sourced. Some more details at (https://www.researchgate.net/publication/370980395_A_NEURAL_...), the source code is built on top of this.
The approach is used to solve other problems and papers have been published under https://www.researchgate.net/profile/K-Eswaran
We are currently trying to build a full-fledged LLM using just this approach (no LLM training etc.) and also an ASR. We should have something to share in a couple of months.
Am I missing something or is this just a linear transformation?
It says here ( https://www.researchgate.net/publication/370980395_A_NEURAL_... ) that each layer can be represented as a matrix multiplication (equation 3): Ax = s
So concatenating multiple layers could just be reduced to a single matrix multiplication?
If there is no non-linearity I don't see how this could replace neural networks, or am I missing something?
Wow, this works better than I would've thought.
First result:
There is another _shocking_ realization in this work: there are 11 types of people: those who know what binary means, those who don't, and those who say they do but actually don't.
"The era of 1-bit LLMs"
Representing { -1, 0, 1 } can't be done with 1-bit, I'm sorry -- and sad, please let's all get back to something vaguely sound and rigorous.
One trit but that's not a word anyone knows.
That used to be true yesterday…
Ternary supporters are always bitter about this
(I'll let myself out)
Something rigorous would be to actually read the paper rather than stop at the first part of its title. The authors are not claiming their LLM is 1-bit.
There are 10 types of people, those who don't know binary, those who do and those who know ternary.
Thinking out loud here. If you encode 64 weights in 2 64-bit words, you can have the bits in one word indicating +1 if they're 1, and the bits in the other word indicating -1 if they are 1. You should be able to do the "products" with a few boolean operations on these 2 words to get a pair of 64-bit words for the result. Then summing becomes a matter of using a count-of-1's instruction on each word and subtracting the "negative" count from the positive. If AVX instructions can do this too, it seems like the equivalent of 10-100 TOPS might be possible on a multi-core CPU.
Yes. More generally, this will enable implementation via crazy-cheap bit-wise ops in binary hardware, and possibly, maybe, via crazy-cheap trit-wise ops in ternary hardware that manipulates ternary digits, or trits. Note that any binary op over trits has only nine possible (trit, trit) input pairs and only three possible trit outputs. Maybe ternary hardware for AI will become a thing?
Fleshing out my thought above. Suppose we want to multiply A*B = C, where each operand is stored as 2 separate bits Ap and An (Ap = 1 if A = +1, while An = 1 if A = -1). We can do a product with:
Cp = (Ap & Bp) | (An & Bn)
Cn = (An & Bp) | (Ap & Bn)
So 64 products in 6 instructions, or 256 in 6 instructions with AVX2, or 512 in 6 instructions using AVX-512. If you can execute 2 instructions at a time on different words, this becomes 1024 "products" in 6 cycles, or between 0.5 and 1 TOP per core.
The summing still involves using popcount on the positive and negative bits - I doubt AVX supports that, but it's still a fast way to "sum" individual bits. I don't see custom hardware for this as a short term thing - they need to prove out the quantization concept more first.
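Same trick in plain Python, with arbitrary-width ints standing in for the 64-bit words (just a toy sanity check of the logic above, not an AVX kernel):

```python
def pack(values):
    # Pack a list of ternary values into two bitplanes: p has a bit set where
    # the value is +1, n has a bit set where the value is -1.
    p = n = 0
    for i, v in enumerate(values):
        if v == +1:
            p |= 1 << i
        elif v == -1:
            n |= 1 << i
    return p, n

def packed_dot(Ap, An, Bp, Bn):
    Cp = (Ap & Bp) | (An & Bn)   # positions where the elementwise product is +1
    Cn = (An & Bp) | (Ap & Bn)   # positions where the elementwise product is -1
    # "Summing" is two popcounts and a subtraction.
    return bin(Cp).count("1") - bin(Cn).count("1")

a = [1, -1, 0, 1, -1, 0, 1, 1]
b = [1, 1, -1, 0, -1, 1, -1, 1]
assert packed_dot(*pack(a), *pack(b)) == sum(x * y for x, y in zip(a, b))
```

Note this assumes both operands are ternary, which matches the framing above; per the discussion elsewhere in the thread, the paper's activations are int8, so the fully-ternary version is a simplification.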
> I don't see custom hardware for this as a short term thing - they need to prove out the quantization concept more first.
Yes, I agree. This still needs to be more extensively tested.
The authors reported perplexity only for small models, up to 3B weights. On the other hand, they reported throughput for the 70B model, but not its performance (perplexity, end-to-end tasks). A very unfortunate omission. Overall, the paper is rather poorly written.
If I understand the authors correctly, they trained the compared models on only 100B tokens, all drawn from RedPajama, to make the comparisons apples-to-apples. That's sensible. It allows for easier replication of the results. Otherwise, I agree with you that more extensive testing, after more extensive pretraining, at larger model sizes, is still necessary.
towards the end of the paper they mentioned training on 2T tokens.
You're right. Thank you for pointing that out.
It seems like the AI space is slowly coming back around to the old Thinking Machines CM-1 architecture. It's not too often in computing where you see ideas a full 40 years ahead of their time make it into production.
Memristors any moment now
I'm holding out for Josephson junctions
IIUC the main issue with the CM-1 architecture was feeding the processor cluster with data. That required a heftier front end system than was practical/affordable at the time. With modern CPUs and memory subsystems the GPUs can be saturated pretty easily. So going back to huge clusters of super narrow cores won't starve them for work.
This will be big for FPGAs - adders are extremely cheap compared to multipliers and other DSP blocks.
Multipliers for eg 8 bit or 4 bit floating point values should also be pretty cheap? (I assume multipliers have a cost that grows quadratically with the number of bits?)
You use DSPs for that. Effinix has direct bfloat16 support in their FPGAs. The real game changer is using the carry chain with your LUT-based adders. Assuming 16 LUTs, you could be getting 11 teraops out of a Ti180 using a few watts. Of course that is just a theoretical number, but I could imagine using four FPGAs for speech recognition and synthesis and vision-based LLMs operating in real time.
.. And if the paper is _true_, of course, then indeed this sort of compounding quantum leap in efficiency due to representational change starts to get towards the Black Mirror / SciFi foundational mythology level of acceleration. Wild (if true!)
Slight tangent: in physics a quantum leap is the smallest possible change.
I'm also curious about the potential speed gains in automatic differentiation, as there are way less branches to 'go up'. Or am I wrong here?
They actually use a relu to represent the model weights. But I'm not convinced that this can't be avoided. We do gradient boosted decision tree training without this trick.
I think you need more evidence than this paper (which is very short and light on actual numbers) to be this shocked.
For example, most of the plots in the paper are actually of throughput, memory, etc. all performance characteristics that are better on the ternary version. Which, of course.
The only things that contain perplexities are Tables 1 and 2. There, they compare "BitNet b1.58 to our reproduced FP16 LLaMA LLM in various sizes" on the RedPajama data set. The first thing to note is that the perplexities are very high: they're all at least ~9.9; compare that, for example, with quantized LLaMA on wikitext-2, which is at 6.15 (https://www.xzh.me/2023/09/a-perplexity-benchmark-of-llamacp...). Maybe RedPajama is a lot harder than wikitext-2, but that's a big gap.
I think probably their benchmark (their "reproduced FP16 LLaMA LLM") is just not very good. They didn't invest much in training their baseline and so they handily beat it.
Thank you. I think the paper as it is provides enough evidence to support the claims. If I understand the authors correctly, they trained the compared models on only 100B tokens, all drawn from RedPajama, to make the comparisons apples-to-apples. That's sensible. It allows for easier replication of the results. Otherwise, I agree with you that more extensive testing, after more extensive pretraining, is still necessary.
Considering how much faster additions are processed, and how a particular silicon chip could be optimized for this very specific case, all parts added together could perhaps show a >100x speed-up vs current systems.
I must concur, "wow".
For hardware, 2-argument ternary additions and multiplications should be very close in terms of the tiny circuit required for either.
If you are doing ternary calculations on 32/16-bit hardware, then the additions would be simpler.
Did they actually show absence of performance degradation?
I think it's conspicuous that Table 1 and Table 2 in the paper, which show perplexity and accuracy results respectively, are only for small model sizes, whereas Figure 2, Figure 3 (latency, memory, energy consumption) and Table 3 (throughput) all show larger model sizes. So it seems like they had every opportunity to show the perplexity/accuracy comparisons at the larger model sizes, but did not include them.
Others have already made the same point in this thread. See my response here: https://news.ycombinator.com/item?id=39539508
It's not too surprising, honestly! I've poked around with similar in the past and am of a perspective that ternary is a very good thing for a lot of neural networks.
Training CIFAR-10 speedily w/ ternary weights on an fp16 interface (using fp16 buffers, and norm params unchanged): https://gist.github.com/tysam-code/a43c0fab332e50163b74141bc...
Ternary networks have been used since 2015. There are hundreds of papers. They all require full QAT (training from scratch). Not sure why you’re shocked.
It all seems too good to be true but your comment helped me develop a mental model for how this could work.
The most inspiring aspect to me here is just realizing how much potential low-hanging fruit there is in this space! What other seemingly naïve optimizations are there to try out?
does that mean we can do integer instead of floating point math for some parts of the training? that seems like a really big win
It almost seems too good to be true
Aren’t you over complicating it a bit here? A dot product between a vector of activations (a₁, a₂, …) and a vector of ternary weights (b₁, b₂, …) can of course be computed as the sum of all activations for which the weight is 1, minus the sum of all activations for which the weight is -1.
It can’t however be computed as (a₁+b₁ + a₂+b₂ ...). You must have gotten that wrong.
I haven't been keeping tabs, but this seems very much like the RIP / Achlioptas version of the Johnson-Lindenstrauss lemma.
Perhaps the rest of the JL lemma promise applies as well - compressing the number of parameters by a few orders of magnitude as well.
Question is whether you can train in this domain or whether you need increased precision to properly represent gradients.
If we could train in this domain it would be an even bigger game changer.
Conversely, this also implies our current model sizes can still embed a ton more “understanding”
I am not startled at all. Dense vector representations are pretty silly, they can’t really be the road to knowledge representation.
In undergrad, some of us math majors would joke that there's really only three quantities: 0, 1, infinity.
So, do we need the -1, and/or would a 2.32 bit (5 state, or 6 with +/-0) LLM perform better than a 1.58 bit LLM?