
Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory

HybridCurve
57 replies
21h42m

As someone who has a deeper knowledge of programming rather than math, I find the mathematical notation here to be harder to understand than the code (even in a programming language I do not know).

Does anyone here with a stronger mathematical background find it easier to understand the math as written than the source code?

layer8
12 replies
21h26m

Mathematical notation is more concise, which may take some getting used to. One reason is that it is optimized for handwriting. Handwriting program code would be very tedious, so you can see why mathematical notation is the way it is.

Apart from that, there is no “the code” equivalent. Mathematical notation is for stating mathematical facts or propositions. That’s different from the purpose of the code you would write to implement deep-learning algorithms.

tnecniv
8 replies
17h57m

The last part was a big hurdle for me as an early undergrad. I was a fairly strong programmer toward the end of high school, and was trying to think of math as programming. That worked for the fairly algorithmic high school material and I got good grades, but it meant I was awful at writing proofs. I also went through a phase where I used all the logical notation and manipulation rules I could in order to make proofs more algorithmic to me, but that didn’t work well for me and produced some downright unreadable results.

Mathematical notation is really a shorthand for words, like you’d read text. The equals sign is literally short for equals. The added benefit, as others have pointed out, is that a good notation can sometimes be clearer than words because it makes certain conclusions almost obvious. You’ve done the hard part in finding a notation that captures exactly the idea to be demonstrated in its encoding, and the result is a very clean manipulation of your notation.

HybridCurve
5 replies
17h5m

This is essentially my problem. I started writing programs at a young age and was introduced (unknowingly) to many more advanced mathematical concepts from that perspective rather than through pure mathematics. What was it that helped break this paradigm for you?

j2kun
3 replies
15h1m

I wrote a book: https://pimbook.org

You might find it useful for your situation. The PDF is pay-what-you-want if you don't feel like paying for it.

HybridCurve
2 replies
14h1m

Ah, I think I remember bookmarking this when it was posted before. You really don't have to go very far in computing to find a frontier where most everything is described in pure mathematics, and so it becomes a substantial barrier for undiversified autodidacts in the field. The math in these areas can often be quite advanced and difficult to approach without the proper background, so I appreciate anyone who has taken the time to make it less formidable to others.

light_hue_1
1 replies
10h17m

I would suggest something like https://ocw.mit.edu/courses/6-042j-mathematics-for-computer-... instead of that book.

I appreciate that some may find the book useful, but I personally don't agree with the presentation. There are too many conceptual errors in the book that you need to unlearn to make progress. For example, the book describes R^2 as a "pair" of real numbers. This is very much untrue and that kind of thinking will lead you even further astray.

I say this as someone with a math/cs degree and PhD having taught these topics to hundreds of students.

woolion
0 replies
7h39m

For example, the book describes R^2 as a "pair" of real numbers.

I naturally auto-corrected this to "(the set of) pairs of real numbers". If that's the case, then I don't see how this differs from the actual definition. What is the conceptual error? Is it the missing 'set of'?

tnecniv
0 replies
16h3m

Really trial and error and grinding through proofs. Working through Linear Algebra Done Right was a big a-ha moment for me. Since I was self-studying over the summer (to remedy my poor linear algebra skills), I was very rigorous in making sure I understood every line of the proofs in the text and trying to mimic his style in the exercises.

In hindsight, I think the issue was that trying to map everything to programming is a bad idea, and I was doing it because programming was the best tool in my tool chest. It was a real “when all you have is a hammer, everything looks like a nail” issue for me.

bawolff
1 replies
16h36m

I don't know that has anything to do with programming.

Arithmetic and writing proofs are very different skills. There is going to be a gap for everyone.

tnecniv
0 replies
15h48m

Yeah I know it’s a common challenge. I think it took me a bit longer than some of my peers because I was trying to force it to be like something I knew instead of meeting it on its own terms.

When all you have is a hammer and all that

Koshkin
2 replies
17h55m

Mathematical notation is for stating mathematical facts or propositions.

And as such it is way too often abused. The original, and most useful, purpose of mathematical notation is to enable calculation, i.e., in a general sense, to make it possible to obtain results by manipulating symbols according to certain rules.

layer8
1 replies
17h51m

I see the steps of a calculation as stating a sequence of mathematical facts, so that’s just an instance of the general definition.

Koshkin
0 replies
17h41m

Sure, but the whole point is to avoid the need to do that! Manipulating symbols is the way to automate reasoning, i.e. to get to a result while completely ignoring said "facts." Using the symbols to merely "state the facts" is abuse (of the reader, mostly).

outrun86
11 replies
20h16m

I’m just wrapping up a PhD in ML. The notation here is unnecessarily complex IMO. Notation can make things easier, or it can make things more difficult, depending on a number of factors.

angra_mainyu
10 replies
20h6m

Really? Coming from physics (B.Sc only) the notation is refreshingly familiar and straightforward. My topology and analysis classes were basically like this.

In fact, this pdf is literally the resource I've been searching for as many others are far too ambiguous and handwavey focusing more on libraries and APIs than what's going on behind the scenes.

If only there were a similar one for microeconomics and macroeconomics, I'd have my curiosity satiated.

youainti
4 replies
19h59m

As a PhD econ student: the mathematics just comes down to solving constrained optimization problems. Figuring out what to consider as an optimand, and what the associated constraints are, is the real kicker.
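To make that concrete, the canonical consumer problem (a standard textbook formulation, not tied to any particular book) is:

```latex
\max_{x \in \mathbb{R}^n_{\ge 0}} \; u(x)
\quad \text{subject to} \quad p \cdot x \le w,
```

where $u$ is a utility function, $p$ a price vector, and $w$ wealth. The mechanics (Lagrangian, first-order conditions) are routine; as the comment says, deciding what $u$ and the constraints should be is where the economics lives.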

angra_mainyu
2 replies
19h7m

If you're referring to micro/macro, I meant more like a mathematical introduction to the models.

I recall giving Mankiw a try and wished I could just find a physics-style textbook as I found it way too wordy.

zwaps
0 replies
12h0m

Most economists (who write these sorts of textbooks) have some sort of math background. The push to find the most general "math" setting has been an ongoing topic since the 50's, so you can probably find what you are looking for. It's not part of undergraduate textbooks since adding generality gives better proofs but often adds "not that much" insight. Nevertheless, the standard micro/macro models are just applications of optimization theory (lattice theory typically for micro, dynamical systems for macro). Game theory (especially mechanism design) is a bit of a different topic, but I suppose that's not what you are looking for.

E.g., micro models are just constrained optimization based on the idea of representing preference relations over abstract sets with continuous functions. So obviously, the math is then very simple. This is considered a feature. You can also use more complex math, which helps with certain proofs (especially existence and representation).

You could grab some higher level math for econ textbooks, which typically include the models as examples, where you skip over the math.

For example, for micro, you can get the following: https://press.princeton.edu/books/hardcover/9780691118673/an... I think it treats the typical micro model (up to oligopoly models) via the first 50 or so pages while explaining set theory, lattices, monotone comparative statics with Tarski/Topkis etc.

QuesnayJr
0 replies
7h43m

Debreu's Theory of Value

tnecniv
0 replies
19h53m

It depends on what you’re doing. That is accurate for, say, describing the training of a neural network, but if you want to prove something about generalization, for example (which the book at least touches on from my skimming), you’ll need other techniques as well

outrun86
3 replies
18h35m

Bishop’s Pattern Recognition and Machine Learning is one example that has tremendous depth and much clearer notation. Deep Learning by Goodfellow et al. is another example, albeit with less depth than Bishop.

I’m glad you’re enjoying the book. The approach is ideal for a very small subset of the ML population, no doubt that was their intention. I’m just weighing in that it’s entirely possible to cover this material with rigour yet much simpler notation. Even as someone who could parse this I’d go with other options.

jeffhwang
2 replies
16h48m

Thanks for highlighting Bishop to me! I've self-taught through various resources esp. Goodfellow et al 2016. It's taken me a number of years to rebuild my math knowledge so that I feel comfortable with Goodfellow's treatment and look forward to learning from the Bishop book. Fwiw, I've found the math notation in the Goodfellow textbook to be among the best I've ever seen in terms of consistency and clarity. Some other books I enjoy, for example, do not seem to make any typographic indication of whether an object is a vector, scalar, or other. :(

p1esk
0 replies
12h58m

FYI, Bishop just released an updated DL book: https://www.bishopbook.com/

HybridCurve
0 replies
15h24m

I appreciated the notation in the Goodfellow book as well; it was easy enough for me to follow without a strong mathematics background. I'll agree with others, however, that this text is focused on a different audience and purpose.

t_mann
0 replies
18h12m

Re your question on economics books, I think Advanced Macroeconomics by David Romer could fit your bill. It goes a lot into why the math is the way it is (arguably more interesting, like another poster said). Modern macroeconomics is also built on microeconomics, and to that extent it's covered in the book, so you're sort of getting two-for-one here.

WhitneyLand
7 replies
19h38m

Use ChatGpt.

Screenshot the math, crop it down to the equation, paste into the chat window.

It can explain everything about it, what each symbol means, and how it applies to the subject.

It’s an amazing accelerator for learning math. There’s no more getting stuck.

I think it’s underrated because people hear “LLMs aren’t good at math”. They are not good at certain kinds of problem solving (yet), but GPT4 is a fantastic conversational tutor.

godelski
6 replies
15h48m

Don't suggest this. While I agree it can be helpful, the problem is that if you're a novice you won't be able to distinguish hallucinations, which in my experience are fairly common, especially as you get into advanced topics. If you have good math rigor then it's extremely helpful, because often things are hard to search for exactly, but it's a potential trap for novices. If you have no better resource, then I can't blame anyone; just give a warning to take care.

WhitneyLand
2 replies
13h32m

That’s kind of like telling people not to go online because you can’t believe everything you read on the Internet.

What proportion of the problems you’ve encountered were with the free version vs premium? It’s a huge difference and the topic here is GPT4.

Also, since it is fairly common for you, are there any real-world examples you can share?

godelski
1 replies
10h28m

That’s kind of like telling people not to go online because you can’t believe everything you read on the Internet.

Uhhh... it's like telling people to trust SO over reddit, especially a subreddit known to lie.

What proportion of the problems you’ve encountered were with the free version vs premium? It’s a huge difference and the topic here is GPT4.

Both. Can we stop doing this? This is a fairly well established principle with tons of papers written about it, especially around math. Just search arxiv, there's a new one at least every week

WhitneyLand
0 replies
9h47m

I’ll take that as it happens so infrequently with GPT4 you have no illustrative prompts that can be shared.

There have not been tons of papers written about this.

You seem to be conflating papers about GPT4 as a solver with it as a math tutor. It’s a completely different problem space.

CamperBob2
2 replies
15h2m

It works better than you think, as long as you use GPT 4. See my answer to the other person (https://news.ycombinator.com/item?id=38837646).

A lot of negativity comes from people who goofed around with 3.X for a while, came away unimpressed, muttered something under their breath about stochastic parrots or Markov chains that sounded profound (at least to them), and never bothered to look any further. 4 is different. 4 is starting to get a bit scary.

The real pedagogical value comes when you try to reconcile what it tells you about the equations with the equations themselves. Ask for clarification when something seems wrong, and there is an excellent chance it will catch its own mistakes.

godelski
1 replies
10h16m

That answer isn't very compelling as it is one of the most well known equations in ML. There are some very minor errors but nothing that changes the overall meaning. But you even seem to agree with me in your followup: don't rely on it, but use it. I'm only slightly stronger than you.

And stop all this 3.5 vs 4 nonsense. We all know 4 is much better. But there's plenty of literature that shows its limits, especially around memorization. You also don't understand stochastic parrots, but in fairness, it seems most people don't. LLMs start from compression algorithms and they are that at their core. This doesn't mean a copy machine, despite the NYT article, but it also doesn't mean a thinking machine, like the baby-AGI people claim. The truth is in between, but we can't have a real conversation because hype primed us to bundle people into two camps and make us all true believers.

Just please stop gaslighting people when they say they have run into issues. The machine is sensitive to prompts, so that can be a key difference, or sometimes they might just see mistakes you don't. It's not an oracle, so don't treat it like one. And don't confuse this criticism as saying LLMs suck, because I use them almost every day and love them. I just don't get why we can't be realistic about their limits, and can only believe they are either a golden goose or a pile of shit. They are, again, neither.

CamperBob2
0 replies
2h22m

You also don't understand stochastic parrots

You have a parrot that can paint original pictures, compose original songs and essays, and translate math into both English and program code?

I would like to buy your parrot. I'll keep it in my Chinese room. There used to be a guy in there, but he ran away screaming something about a basilisk.

ceh123
5 replies
19h46m

As someone that’s in the later stages of a PhD in math, given the title starts with “Mathematical Introduction…”, the notation feels pretty reasonable for someone with a background in math.

Sure I might want some slight changes to the notation I found skimming through on my phone, but everything they define and the notation they choose feels pretty familiar and I understand why they did what they did.

Mirroring what someone else said, this is exactly the kind of intro I’ve been looking for for deep learning.

godelski
4 replies
15h38m

Is it fair to call something an introduction if it uses math from upper-division undergrad math courses, such as measure theory? My opinion is that it is context driven, e.g. Introduction to Differential Geometry or Introduction to Homotopy Theory. But here I don't think you can look at the title and infer prerequisites that are in the right ballpark. I'd wager few people outside math, and some physics students, are familiar with Galerkin methods at the undergraduate level (maybe a handful of engineers). I don't think most outside math and physics even learn PDEs (my engineering friends mostly didn't, and my uni's CS program doesn't even require DE).

WhitneyLand
3 replies
13h5m

What percent of LLM knowledge requires proficiency in anything you mentioned?

From what I’ve seen it’s a small percentage, and there’s no reason for most people to be put off by it.

Everyone come on in, the water is fine.

godelski
2 replies
10h31m

Between 0% and idk 70%? depending on what you're doing.

WhitneyLand
1 replies
3h57m

Looking at the theory as a whole it’s a very small minority.

I’m trying to think if it’s 0 percent outside of backprop…

Arguably high school math gets you quite a bit of understanding. After that in descending order I’d guess Linear Algebra, Statistics/Probability, Basic Calculus, Partial Derivatives…

In other words it’s not all or nothing. The easiest stuff gets you a lot of bang for your buck.

godelski
0 replies
5m

Are you a researcher in ML? What is your focus? I'm in image synthesis/explicit density modeling.

tnecniv
3 replies
19h36m

So this is a book written by applied mathematicians for applied mathematicians (they state in the preface it’s for scientists, but some theoretical scientists and engineers are essentially applied mathematicians). As a result, both the topics and the presentation are biased towards those types of people. For example, I’ve never seen a practitioner worry about the existence and uniqueness conditions for their gradient-based optimization algorithm in deep learning. However, that’s the kind of result those people do care about, and academic papers are written on the topic. The title does say that this is a book on the theoretical underpinnings of the subject, so I am not surprised that it is written this way. People also don’t necessarily read these books cover-to-cover, but drill into the few chapters that use techniques relevant to what they themselves are researching. There was a similarly verbose monograph I used to use in my research, but only about 20-30 pages had the meat I was interested in.

This kind of book is more verbose than my liking both in terms of rigor and content. For example, they include Gronwall’s inequality as a lemma and prove it. The version that they use is a bit more general than the one I normally see, but Gronwall’s inequality is a very standard tool in analyzing ODEs and I have rigorous control theory books that state it without proof to avoid clutter (they do provide a reference to a proof). A lot of this verbosity comes about when your standard of proof is high and the assumptions you make are small.
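For reference, a common integral form of Gronwall's inequality (a standard version; the book's, as noted, is more general) reads: if $u$ is continuous and nonnegative, $\beta \ge 0$, $\alpha$ constant, and

```latex
u(t) \le \alpha + \int_0^t \beta(s)\, u(s)\, \mathrm{d}s \quad \text{for all } t \in [0, T],
```

then

```latex
u(t) \le \alpha \, \exp\!\left(\int_0^t \beta(s)\, \mathrm{d}s\right) \quad \text{for all } t \in [0, T].
```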

aurareturn
2 replies
11h44m

Are there any books you recommend for deep learning that are written for developers who don't use math every day?

I suppose the goal would be to understand deep learning so that we know enough of what's going on but not to get stuck in math concepts that we probably don't know and won't use.

stefr-
1 replies
9h6m

I am/was in this scenario. I'm sure there are other resources out there aimed specifically at developers, but a book I'm reading now is "Deep Learning From Scratch" by Seth Weidman. He takes a different approach, explaining each concept in three distinct ways: mathematically, with diagrams, and in code.

I like this approach because it allows me to connect the math to the problem, whereas otherwise I wouldn't have made that connection.

In the book, you're slowly creating a DL framework, as the title says, from scratch. He also has all the code on GitHub: https://github.com/SethHWeidman/DLFS_code

I think if you are truly trying to understand deep learning, you will never get to avoid the math, because that's really what it is at its core: a couple of (non-linear) functions chained together (obvious gross oversimplification).
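To make that "chained functions" picture concrete, here is a minimal sketch (illustrative only, not from any of the books mentioned) of a two-layer network as a composition of affine maps and a nonlinearity:

```python
import numpy as np

# A tiny network: f(x) = W2 @ relu(W1 @ x + b1) + b2,
# i.e. affine map -> nonlinearity -> affine map.
def relu(z):
    return np.maximum(z, 0.0)

def forward(x, W1, b1, W2, b2):
    """Compose the chain: affine, then ReLU, then affine."""
    return W2 @ relu(W1 @ x + b1) + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # 4 hidden units -> 2 outputs

y = forward(rng.normal(size=3), W1, b1, W2, b2)
print(y.shape)  # (2,)
```

Deeper networks just repeat the affine/nonlinearity pair more times; everything else (training, regularization) is machinery around this composition.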

lr1970
0 replies
6h27m

The last commit in the repo of "Deep Learning from Scratch" was 5 years ago. It is hopelessly outdated. The field is changing very fast.

ivancho
3 replies
18h0m

I have a strong mathematical background, and I found the notation completely insane. Right out of the gate in chapter 1 we get a definition that has subscript indices in the subscript index and a summation with subscripts in the superscript, and then composed into a giant function chain. Later we get four levels deep in subscripts, invent at least 3 new infix operators, define 30 new symbols from 3 different alphabets, and we're barely at page 100 out of 600. I have no idea who is supposed to follow and digest this.

chongli
2 replies
15h21m

I’m not sure what specialization of math you studied, but using superscripts for indices is pretty common where you’re dealing with multi-dimensional objects. I used it in a lot of the courses in my degree.

tsimionescu
0 replies
8h40m

They are not complaining about superscripts for indices, but about having subscripts in those superscripts. Basically like x² but where the ² has a subscript of its own. That is very dense and graphically hard to follow as notations go.

ivancho
0 replies
12h42m

I have no problem with superscripts. Here are a couple of examples of what I am talking about:

  \left(\Psi_{L} \circ \mathcal{A}_{l_{L}, l_{L-1}}^{\theta, \sum_{k=1}^{L-1} l_{k}\left(l_{k-1}+1\right)} \circ\right. & \Psi_{L-1} \circ \mathcal{A}_{l_{L-1}, l_{L-2}}^{\theta, \sum_{k=1}^{L-2} l_{k}\left(l_{k-1}+1\right)} \circ \ldots \\
  & \left.\ldots \circ \Psi_{2} \circ \mathcal{A}_{l_{2}, l_{1}}^{\theta, l_{1}\left(l_{0}+1\right)} \circ \Psi_{1} \circ \mathcal{A}_{l_{1}, l_{0}}^{\theta, 0}\right)
and

  x_{\mathcal{L}(\Psi)+k-1} & =\mathfrak{M}_{a \mathbb{1}_{(0, L)}(\mathcal{L}(\Psi)+k-1)+\mathrm{id}_{\mathbb{R}} \mathbb{1}_{\{L\}}(\mathcal{L}(\Psi)+k-1), \mathbb{D}_{k}(\Phi)}\left(\mathcal{W}_{k, \Phi} x_{\mathcal{L}(\Psi)+k-2}+\mathcal{B}_{k, \Phi}\right)
and sure, I can figure it out, but you have to agree there are some readability issues

aabajian
2 replies
21h28m

All three authors are PhDs or PhD candidates in mathematics. The notation is extremely dense. I'm curious who their target audience of "students and scientists" is for this book.

tnecniv
0 replies
17h29m

Likely graduate students with a very theoretical interest. Some theoretically-oriented scientists and engineers are also basically applied mathematicians. It is presumably targeted at people that want to further develop the theoretical aspects of learning, as opposed to applied practitioners

angra_mainyu
0 replies
20h2m

I had a bunch of classes in undergrad (physics) that had basically the same notation and style.

strangedejavu2
1 replies
20h50m

It's not too difficult to understand, but this introduction isn't written with pedagogy in mind IMO

HybridCurve
0 replies
16h59m

This is probably the most succinct explanation, and as an experienced perl developer, I admire your brevity.

joshuanapoli
1 replies
21h33m

Mathematical notation usually has a problem with preferring single-letter names. We usually prefer to avoid highly abbreviated identifier names in software, because they make the program harder to read. But they’re common in Math, and I think that it makes for a lot of work jumping back and forth to remind oneself what each symbol means when trying to make sense of a statement.

tsimionescu
0 replies
8h33m

I think the main difference is that in programming you typically use names from your domain, like "request" or "student". But math objects are all very abstract, they don't denote any domain. For example, if I have a triangle and I want to name its vertexes so I can refer to them later, what would be a good name? Should I call them vertexA, vertexB, and vertexC just so it's not a single letter?
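As a hypothetical sketch of that point (names and formula chosen purely for illustration): once the objects are abstract points, single letters read exactly like the math, and longer names add length rather than meaning:

```python
# Shoelace formula for the area of a triangle with vertices a, b, c,
# each given as an (x, y) pair. Single letters mirror the math here.
def area(a, b, c):
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    return 0.5 * abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay))

# Renaming a, b, c to vertex_a, vertex_b, vertex_c would not add meaning:
# the points have no domain role beyond "a vertex".
print(area((0, 0), (1, 0), (0, 1)))  # 0.5
```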

spi
0 replies
19h31m

Sharing my experience here. My background is in math (Ph.D. and a couple of postdoc years) before I switched to being a practitioner in deep learning. This year I taught a class at university (as an invited prof) on deep learning, for students doing a masters in math and statistics (but with some programming knowledge, too).

I tried to present concepts in as mathematically accurate a way as reasonably possible, and in the end I cut through a lot of math, in part to avoid the heavy notation that seems to be present in this book (and in part to make sure students could use what they learnt in industry). My actual classes had way more code than formulas.

If you want to write everything very accurately, things get messy quickly. Finding a good notation for new concepts in math is very hard, something sometimes done only by bright minds, even though afterwards everybody recognizes it was “clear” (think of Einstein notation, Feynman diagrams, etc., or even just matrix notation, which Gauss was unaware of). If you just take domain A and write it in notation from domain B, it’s hard to get something useful (translating quantum mechanics to math with C*-algebras and co. was a big endeavour, still an open research field to some extent).

So I’ll disagree with some of the comments below and claim that the effort of writing this book was huge but probably scarcely useful. Anyone who can comfortably read these equations probably won’t need them (if you know what an affine transformation is, you hardly need to see all its ijkl indices written down explicitly for a 4-dimensional tensor), and the others will just be scared off. There might be a middle ground where it helps some, but at least I haven’t encountered such people…

conformist
0 replies
21h18m

Yes, it's easier for mathematicians, because a lot of background knowledge and intuition is encoded in mathematical conventions (eg "C(R)" for continuous functions on the reals etc...). Note that this is probably a book for mathematicians.

andrepd
0 replies
20h27m

Obligatory hn comment on any math-related topic: "notation bad"

Please be more original.

dachworker
14 replies
22h51m

Is anyone using any of this math? My guess is no. At best it provides "moral support" for deep learning researchers who want to feel reassured that what they are attempting to do is not impossible.

Glad to be proven wrong, though.

godelski
9 replies
22h15m

There's something I tell my students. You don't need math to make good models, but you do need to know math to know why your models are wrong.

So yes, math is needed. If you don't have math you're going to hoodwink yourself into thinking you can get to AGI by scale alone. You'll just use transformers everywhere because that's what everyone else does, and you'll be confused about the differences between activation functions. You'll make models, and models that work, but there's a big difference between building working models and knowing where to expect your models to fail and understanding their limitations.

I feel a lot of people just look at test set results and take that to mean the model isn't overfitting (not to mention tuning hyperparameters based on test set results).
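The discipline being described can be sketched as follows (a hypothetical toy: synthetic data and a stand-in linear "model", not any specific framework): hyperparameters are chosen on a validation split, and the test split is consulted exactly once at the end.

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 5)), rng.integers(0, 2, size=1000)

# Three disjoint splits: fit on train, tune on val, report on test.
X_train, y_train = X[:600], y[:600]
X_val, y_val = X[600:800], y[600:800]
X_test, y_test = X[800:], y[800:]

def fit_and_score(ridge, X_fit, y_fit, X_eval, y_eval):
    # Stand-in model: ridge-regularized linear scorer, thresholded at 0.
    w = np.linalg.solve(X_fit.T @ X_fit + ridge * np.eye(5),
                        X_fit.T @ (y_fit - 0.5))
    return np.mean((X_eval @ w > 0) == y_eval)

# Model selection consults only the validation set...
best = max([0.1, 1.0, 10.0],
           key=lambda r: fit_and_score(r, X_train, y_train, X_val, y_val))

# ...and the test set is used once, for the final report.
print(fit_and_score(best, X_train, y_train, X_test, y_test))
```

Reusing the test set inside the `max(...)` selection loop is exactly the "tuning on the test set" failure mode described above.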

joe_the_user
6 replies
21h15m

If you don't have math you're going to hoodwink yourself into thinking you can get to AGI by scale alone.

There are very smart people who think we can get to AGI by scale alone - they call that the "the scaling hypothesis", in fact. I think they're wrong but I thought they knew a fair amount of math.

What math would you use to describe the limitations of deep learning? My impression is there aren't any exact theorems that describe either its limits or its behavior/possibilities; there are just suggestive theorems and constructions combined with heuristics.

godelski
5 replies
19h17m

"the scaling hypothesis"

Oh boy, don't get me started.... I first off should say that by no means do I think any of these people (at least those publishing) are dumb. You can also be a genius in one direction and a fucking idiot in another, and that's okay. Certainly describes me haha (well, less on the genius side and more on the functioning idiot side, so take everything I say with a grain of salt).

Don't get me wrong, scale is incredibly important and is certainly the reason for our recent advancements. But scale taking us to AGI seems fairly naive to me. The idea has a few clear assumptions baked in. First is that the data can accurately explain all phenomena if the machine is capable of sufficient imputation. I just don't even know how to tackle this one because it is so well established as false in the statistics literature.

Another is that RLHF is enough for alignment. I like to say that RLHF is like Justice Stewart's definition of porn: I know it when I see it. This is certainly a useful tool, but we shouldn't be naive about its limitations. Just go on any reddit discussion of what constitutes NSFW and you'll find tons of disagreement, or even the HN discussion of "Is This A Vehicle"[0]. Those comments are just beautiful, and crazygringo (top comment) demonstrates this all perfectly.

There's a powerful inference and imputation game going hand in hand, and this is the issue. There needs to be more time spent thinking about one's brain and questioning assumptions we've made. As you advance, details become more and more important. We get tricked because you can often get away without nuance in the beginning of studying something, but with sufficient expertise nuance ends up dominating the discussion, and you might see that naivety doesn't take a step in the right direction but rather can take you a step in the wrong direction (though often moving is more important than direction). I'll reference Judea Pearl and Ilya on this one[1]. Pearl is absolutely correct, even if not conveyed well (it is Twitter after all). His book will give a good understanding of this though.

What math would you use to describe the limitations of deep learning?

This is hard, because there isn't as much research in it as there is in demonstrations. I wouldn't go as far as saying that there's no work, but it is just far less popular and advancements are slower. Some optimal transport people really get into this stuff as well as people that work on Normalizing Flows. Aapo Hyvarinen is a really good person to read and you'll find foundations for many things like diffusion in his works that far predate the boom. I'd also really suggest looking at Max Welling and any/all of his students. If you go down that path you'll find many more people but this is a good place to enter that network.

But honestly, the best math to get started on to learn this stuff isn't "ML math". It's statistics, probability, measure theory, topology, linear algebra, and many specialized domains within these. I'd even go as far as to say that category theory and set theory are very useful. It's all the math you learn for a lot of other things, but you need the correct lens. There is a problem in math education where we're often either far too application focused or too abstract focused, and we forget to be generalists with that deeper understanding[2]. But this is a lot, and I'm not sure of a good single resource that pulls it all together in a way that is good for introductions (this book certainly covers many of the things I'd mention, but it is not introductory). After all, things are simpler after they are understood.

I've written a lot and feel like I may have not given a sufficient answer. There's a lot to say and it is hard to convey in general language to general audiences. But I think I have given enough to find the path you're asking about but just wouldn't suggest you're going to get a complete answer in a comment, unfortunately (maybe someone is a better communicator than me)

[0] https://news.ycombinator.com/item?id=36453856

[1] https://twitter.com/yudapearl/status/1735211875191910550

[2] I think the theory-focused people do often understand this more, but that's usually after going through the gauntlet, and it likely isn't even seen by them along that journey, especially prior to the point where many people stop. Certainly Terry Tao understands that math is just models and that something like "the wave equation" isn't specifically about waves and is far more general. You'll also find a lot of breakthroughs where the key ingredient is taking something from one domain and shoving it into another. Patchwork is often needed, but sometimes it gets more generalized (or they derive a generalization, then show that the two are specific instances of that general form).

armoredtech
2 replies
13h25m

ML researchers saying they need "category theory" sounds like a way to try to convince mathematicians that their work is cool. You absolutely do not need category theory.

Math is just models? Lol!

calebkaiser
1 replies
5h39m

The parent didn't say category theory is necessary for conducting ML research, just that it could be useful. This point isn't particularly controversial. If you're interested in this niche of the field, I find Tai-Danae Bradley's work to be pretty cool! She has a site: https://www.math3ma.com/

armoredtech
0 replies
46m

Thanks for the reply. I'm glad my comment is no longer flagged.

What do you mean that "this point isn't particularly controversial?" If you just mean that "X may be useful", then of course. But the particular X matters, and "could be useful" is much different than "is useful".

People who like category theory want it everywhere. I don't know your mathematical background, but spend any time in a math department, or even classes, and you'll find people ready to explain any topic in the language of CT.

They may be useful, but it has to be justified. It's clear in some mathematical contexts, but definitely not in ML (let alone analysis).

ML has a problem in that no one knows why certain methods work. Just look at something like batch normalization: I can think of at least 3 different "explanations" of why it works.

ML people want explanations, and mathematicians need work. Category theorists therefore have work. But I don't think you should mistake this for an explanation. You just get a nicer, "cleaner" way to present concepts.

borissk
1 replies
17h11m

Very interesting. Are any of your lectures available online?

godelski
0 replies
15h59m

I'm trying not to dox myself so I can be more open on HN (though there are more concerns in the modern era...). You can find some harsh words against some ML community practices in my history, and I think it is easy to get misinterpreted as calling people dumb, or to have academic criticism confused with a judgment of utility (I criticize LLMs and diffusion a lot because I like them, not the other way around). So yes and no. But the lectures I have aren't recorded and public (Zoom for my uni; I'm ABD in my PhD). My lecture slides and programs should be publicly visible though, but I don't go into this with them because I've been specifically asked not to teach this way :/ In all fairness, our ML course only has Calc 1 as a pre-req, and CS students aren't required to take Lin Alg (most do though, but first courses are never really that great ime) or differential equations. TBH, to get into this stuff you kinda need some metric theory. If you actually poke through this paper you'll find that it comes up very quickly, and this is common in the optimal transport community. But I think if you get into metric theory a lot of this will make sense pretty quickly. So if you can, maybe start with Shao's Mathematical Statistics?

light_hue_1
1 replies
22h8m

Oh sure. I say the same to my students.

But the particular spin on this book makes it look to non-experts that this is the math you need to do something useful with deep learning. And that's just not true.

Certainly you need to understand what you're optimizing, how your optimizer works, what your objective function is doing, etc. But the vast majority of people don't need to know about theoretical approximation results for problems that they will never actually encounter in real life. For example, I have never used anything like "6.1.3 Lyapunov-type stability for GD optimization" in a decade of ML research. I'm sure people do! But not on the kinds of problems I work on.

Just look at the comments here. People are complaining about the lack of context, but that's fine for the audience the book is aimed at; the complaints are coming from the average HN reader, who isn't that audience.

I think it would be better if the authors chose a different title. As it stands, non-experts will be attracted and then be put off, and experts will think the book is likely to be too generic.

godelski
0 replies
21h53m

Yeah, I would have a very hard time recommending this book too. It is absurdly math heavy. I'm not sure I've even seen another book this math dense before, and I've read some pretty dense books aimed at review audiences. So I'm not even sure what audience this book is aimed at. Citations? And I fully agree that the title doesn't fit whoever that audience is.

nerdponx
0 replies
22h43m

Describing it as "moral support" really sells it short.

Imagine computer science without sorting algorithms, search algorithms, etc that have been proven correct and have known proven properties. This math serves the same purpose as CS theory.

So yes, if you're just fitting a model from a library like Keras, you're not really "using" the math. If you're working with data sets below a certain size, problems below a certain level of complexity, and models that have been deployed for many years and have well studied properties, you can do a lot with only a cursory understanding of the math, much like you can write perfectly functional web apps in Python or Java without really understanding how the language runtime works at a deep level.

But if you don't actually know how it works, you're going to get stuck pretty badly if you encounter a situation that isn't already baked into a library.

If you want to see what happens when you don't know the underlying math, look at the current generation of "data science" graduates, who don't know their math or statistics fundamentals. There are plenty of issues on the hiring side of course, but ultimately the reason those kids aren't getting jobs is that they don't actually know what they're doing, because they were never forced to learn this stuff.

nephanth
0 replies
22h36m

According to the abstract it covers different ANN architectures, optimization algorithms, probably backpropagation... so, um, yes? That is stuff anyone in machine learning uses every day.

fastneutron
0 replies
22h13m

In the latter part of the book that covers PINNs and other PDE methods, it helps to frame these using the same kind of functional analysis that is used to develop more traditional numerical methods. In this case, it provides a way for practitioners to verify the physical consistency between the various methods.

danielmarkbruce
0 replies
22h28m

Some people like to think and communicate in dense math notation. So, yes.

HighFreqAsuka
14 replies
21h39m

I've seen quite a few of these books attempting to explain deep learning from a mathematical perspective and it always surprises me. Deep learning is clearly an empirical science for the time being, and there is very little theoretical work that has been impactful enough that I would think to include it in a book. Of the books like this I've seen, this one seems like actively the worst. A significant amount of space is dedicated to proving lemmas that provide no additional understanding and are only loosely related to deep learning. And a significant chunk of the code I see is just plotting code, which I don't even understand why you'd include. I'm confident that very few people will ever read significant chunks of this.

I think the best textbooks are still Deep Learning by Goodfellow et al. and the more modern Understanding Deep Learning (https://udlbook.github.io/udlbook/).

thehappyfellow
9 replies
21h13m

This book is not aimed at practitioners but I don’t think that means it deserves to be called „actively the worst one”.

Even though the frontier of deep learning is very much empirical, there’s interesting work trying to understand why the techniques work, not only which ones do.

I’m sorry but saying proofs are not a good method for gaining understanding is ridiculous. Of course it’s not great for everyone but a book titled „Mathematical Introduction to x” is obviously for people with some mathematical training. For that kind of audience lemmas and their proof are natural way of building understanding.

HighFreqAsuka
8 replies
20h42m

Just read the section on ResNets (Section 1.5) and tell me if you think that's the best way to explain ResNets to literally anyone. Tell me if, from that description, you take away that the reason skip connections improve performance is that they improve gradient flow in very deep networks.

p1esk
7 replies
20h15m

the reason skip connections improve performance is that they improve gradient flow in very deep networks.

Can you prove this statement?

c7b
4 replies
18h1m

Neither do the authors in the book, and I'd argue that after (only) reading the book, the reader wouldn't be equipped to attempt this either (see my other post in this thread), so I think the parent poster has a point.

HighFreqAsuka
3 replies
16h58m

Yes, I have a very good point in fact. But the above comment purposely chooses not to argue with it, because it's easier to ignore it entirely and argue something else.

p1esk
2 replies
12h46m

The problem is you presented something as a fact while it’s just a guess. Some people guess it’s an improved gradient flow, others guess it’s a smoother loss surface, someone else guesses it’s a shortcut for early layer information to reach later layers, etc. We don’t actually know why resnets work so well.

HighFreqAsuka
1 replies
5h30m

The point of that comment doesn't have anything to do with how ResNets actually work. You missed the actual point.

We don’t actually know why resnets work so well.

Yes actually we do. We know, from the literature, that very deep neural networks suffered from vanishing gradients in their early layers in the same way traditional RNNs did. We know that was the motivation for introducing skip connections, which gives us a hypothesis we can test. We can measure, using the test I described, the differences in the size of gradients in the early layers with and without skip connections. We can do this across many different problems for additional statistical power. We can analyze the linear case and see that the repeated matmuls should lead to small gradients if their singular values are small. To ignore all of this and say that, well, we don't have a general proof that satisfies a mathematician so I guess we just don't know, is silly.
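That linear-case argument is easy to demonstrate in a few lines (a toy numpy sketch, not a claim about real networks; all sizes here are made up): backprop through a deep linear net multiplies the gradient by a product of weight matrices, and if each has spectral norm below 1, submultiplicativity forces the product to shrink geometrically.

```python
import numpy as np

rng = np.random.default_rng(0)
d, depth = 64, 30

# Random weight matrices rescaled so every singular value is below 1
# (spectral norm ~0.91). Backprop through a deep linear net multiplies
# the gradient by exactly such a product of matrices.
Ws = []
for _ in range(depth):
    W = rng.normal(size=(d, d))
    W /= 1.1 * np.linalg.norm(W, 2)  # ord=2 is the largest singular value
    Ws.append(W)

P = np.eye(d)
for W in Ws:
    P = W @ P

# Submultiplicativity alone guarantees ||P|| <= (1/1.1)**30 ~ 0.057,
# and in practice it is far smaller: the early-layer gradient vanishes.
print(np.linalg.norm(P, 2))
```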

p1esk
0 replies
1h29m

You're doing it again - presenting guesses as facts. Why would a ResNet (a batch-normalized network using ReLU activations) suffer from the vanishing gradient problem? Does it? Have you actually done the experiment you described? I have, and I didn't see gradients vanish. Sometimes gradients exploded, likely from a bad weight initialization (to be clear, that's a guess), and sometimes they didn't, but even when they didn't the networks never converged. The best we can do is to say: "skip connections seem to help training deep networks, and we have a few guesses as to why, none of which is very convincing".

We know, from the literature

Let's look at the literature:

1. Training Very Deep Neural Networks: Rethinking the Role of Skip Connections: https://orbilu.uni.lu/bitstream/10993/47494/1/OyedotunAl%20I... they're making a hypothesis that skip connections might help prevent transformation of activations into singular matrices, which in turn could lead to unstable gradients (or not, it's a guess).

2. Improving the Trainability of Deep Neural Networks through Layerwise Batch-Entropy Regularization: https://openreview.net/pdf?id=LJohl5DnZf they are making some hypothesis about an optimal information flow through the network, and that a particular form of regularization helps improve this flow (no skip connections are needed).

3. Deep Learning without Shortcuts: Shaping the Kernel with Tailored Rectifiers https://arxiv.org/abs/2203.08120: focus on initial conditions and propose better activation functions.

Clearly the issues are a bit more complicated than the vanishing gradients problem, and each of these papers offer a different explanation of why skip connections help.

It's similar to people building a bridge in the 15th century: there was empirical evidence and intuition about how bridges should be built, but very little theory explaining that evidence or intuition. Your statements are like "next time we should make the support columns thicker so that the bridge doesn't collapse", when in reality it collapsed due to resonant oscillations induced by people marching on it in unison. Thicker columns will probably help, but they do nothing to improve understanding of the issue. They are just a guess.

That's why we need mathematicians looking at it, and attempting to formalize at least parts of the empirical evidence, so that someone, some day, will develop a compelling theory.

HighFreqAsuka
1 replies
20h12m

Empirically yes, I can consider a very deep fully-connected network, measure the gradients in each layer with and without skip connections, and compare. I can do this across multiple seeds and run a statistical test on the deltas.
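A toy version of that experiment, in plain numpy with a deep linear network (everything here, including the function name, sizes, and scales, is made up for illustration):

```python
import numpy as np

def first_layer_grad_norm(depth=50, width=32, skip=False, seed=0):
    """Norm of dL/dW_1 for L = 0.5 * ||h_depth||^2 in a deep linear net,
    with or without an identity skip connection around each layer."""
    rng = np.random.default_rng(seed)
    # Scale chosen so each plain layer contracts: ||W h|| ~ 0.5 * ||h||.
    Ws = [rng.normal(scale=0.5 / np.sqrt(width), size=(width, width))
          for _ in range(depth)]
    h = rng.normal(size=width)

    inputs = []  # layer inputs, saved for backprop
    for W in Ws:
        inputs.append(h)
        h = W @ h + (h if skip else 0.0)

    g = h  # dL/dh_depth for the squared-norm loss
    grad_W1 = None
    for W, inp in zip(reversed(Ws), reversed(inputs)):
        grad_W1 = np.outer(g, inp)          # dL/dW for this layer
        g = W.T @ g + (g if skip else 0.0)  # propagate to the layer below
    return np.linalg.norm(grad_W1)

plain = first_layer_grad_norm(skip=False)
with_skip = first_layer_grad_norm(skip=True)
print(plain, with_skip)  # the plain net's first-layer gradient is vanishingly small
```

Running this over several seeds and taking the deltas is exactly the kind of statistical test described above.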

psyklic
0 replies
18h28m

Empirical studies are only useful until the system is mathematically understood. For example, I can construct transformer circuits where the skip connection (provably) purely adds noise.

I can also prove in particular cases the MLP's sole purpose is to remove the noise added from the skip connection.

danielmarkbruce
2 replies
21h23m

UDL has some dense math notation in it.

Math isn't just about proofs. It's a way to communicate. There are several different ways to communicate how a neural net functions. One is with pictures. One is with some code. One is with words. One is with some quite dense math notation.

n3ur0n
0 replies
21h10m

I would say UDL should be very accessible to any undergrad from a strong program.

I would not call the notation ‘dense’; rather, it's ‘abused’ notation. Once you have seen the abused notation enough times, it just makes sense. Aka "mathematical maturity" in the ML space.

My views on this have changed. As a first-year PhD student in ML I got annoyed by the shorthand. Now, as someone with a PhD, I get it: it's just too cumbersome to write out exactly what you mean, so you write as if for peers +/- a level.

HighFreqAsuka
0 replies
21h19m

I agree with that, I think UDL uses the necessary amount of math to communicate the ideas correctly. That is obviously a good thing. What it does not do is pretend to be presenting a mathematical theory of deep learning. Basically UDL is exactly how I think current textbooks should be presented.

blauditore
0 replies
21h30m

I think the mathematical background starts making sense once you get a good understanding of the topic, and people then make the wrong assumption that understanding the math will help with learning the overall topic, but that's usually pretty hard.

Rather than trying to form an intuition based on the theory, it's often easier to understand the technicalities after getting an intuition. This is generally true in the exact sciences, especially mathematics. That's why examples are helpful.

ottaborra
5 replies
20h41m

This makes me wonder: is deep learning as a field an empirical science purely because everyone is afraid of the math? It has the richness of modern-day physics, but for some reason most of the practitioners seem to want to keep thinking of it as the wild west.

HighFreqAsuka
3 replies
20h36m

No, there are many very mathematically inclined deep learning researchers. It's an empirical science because the mathematical tools we possess are not sufficient to describe the phenomena we observe and make predictions under one unified theory. Being an empirical science does not mean that the field is a "wild west". Deep learning models are subjectable to repeatable controlled experiments, from which you can improve your understanding of what will happen in most cases. Good practitioners know this.

ottaborra
1 replies
19h59m

The main point you're making is fair

The only gripe I have is:

Being an empirical science does not mean that the field is a "wild west"

I think what you meant to say is: "Being an empirical science does not <b>necessarily</b> mean that the field is a \"wild west\""

you clearly haven't seen the social sciences

Good practitioners know this

sure?

Edit: Removed unnecessary portions that wouldn't have continued the conversation in any meaningful way

bawolff
0 replies
16h40m

I think the necessarily is clearly implied from context.

trhway
0 replies
19h39m

It's an empirical science because the mathematical tools we possess are not sufficient to describe the phenomena we observe and make predictions under one unified theory.

To me, deep learning is actually itself a [long-awaited] tool (with well-established, and simple at that, math underneath: gradient-based optimization, vector space representation and compression) for making good progress toward a mathematical foundation for the empirical science of cognition.

In the '90s there were works showing, for example, that the Gabors in the first layer of the biological visual cortex are optimal for the feature-based image recognition we have. And as it happens, in DL visual NNs the convolution kernels in the first layers also converge to Gabor-like filters. I see [signs of] similar convergence in the other layers (and all those semantically meaningful vector operations in the embedding space of LLMs are also very telling). Proving optimality or anything similar is much harder there, yet to me those "repeatable controlled experiments" (i.e. stable convergence) provide a strong indication that it will be the case (since something does drive that convergence, and when there is such a drive in a dynamic system, you naturally end up asymptotically ("attracted") near something either fixed or periodic), and that would be a (or even "the") mathematical foundation for understanding cognition (divergence from real biological cognition, i.e. the emergence of a completely different, yet comparable, type of cognition, would also be a great result, if not an even greater one).
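For the curious, the Gabor-like kernels in question are easy to generate (a minimal numpy sketch; the function name and parameter choices are made up for illustration):

```python
import numpy as np

def gabor_kernel(size=15, wavelength=5.0, theta=0.0, sigma=3.0):
    """2D Gabor filter: a sinusoidal grating under a Gaussian envelope,
    the oriented edge detector that both V1 simple cells and first-layer
    conv kernels tend to resemble."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)  # axis across the grating
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * x_rot / wavelength)

# A small oriented filter bank, qualitatively like a trained first conv layer.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
```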

tnecniv
0 replies
19h28m

A little bit of A and B. You can do a lot with very little math beyond linear algebra, calculus, and undergraduate probability, and that knowledge is mainly there to provide intuition and formalize the problem you're solving a bit. You can also churn out results (including very impressive ones) without doing any math.

A result of the above is that people are empirically demonstrating new problems and solving them very quickly — much more quickly than people can come up with theoretical results explaining why they work. The theory is harder to come by for a few reasons, but many of the successful examples of deep learning don’t fit nicely into older frameworks from, e.g., statistics and optimal control, to explain them well.

two_in_one
3 replies
15h2m

It's hard to call it comprehensive. Transformers get one page; a picture would be nice. There's no "prompt engineering", no "double deep"; in fact the words "prompt" and "double" aren't used at all. "Recognition" is used only once outside of the bibliography, just as a reference. Looks like theory will not catch up with practice any time soon. With the looming singularity, that's a bit worrying.

tnecniv
0 replies
12h2m

Right in the title it explains it’s a book on theory. “Prompt engineering” doesn’t really fit in any theoretical framework I’m aware of and, while I also like graphics, most theory publications are light on them. You might be looking for a different kind of book, which is fine, but I think the content matches the title.

Also, a book on theory is going to lag quite a bit in terms of topics. The general process is that people discover something new and interesting empirically and publish articles on it. Other people develop theory explaining why that things work and publish articles on it. Once the theory gets crystallized, big ideas get distilled into a book.

nephanth
0 replies
2h22m

It's an introductory book though? I don't think it aims at being comprehensive

That said, i do agree that more on transformers would be nice since they're becoming quite central in every field of machine learning.

Prompt engineering is extremely new, vastly empirical, and theory on it is still only beginning (though I do remember seeing some nice papers go by). It would probably be a mistake to include it in an introductory book.

I have never heard of "double deep", what is that?

fourier456
0 replies
14h52m

This just isn’t really a good book. One sign; it’s an arXiv book.

montyanderson
3 replies
15h27m

for those who want some maths-heavy stuff for deep learning, check out françois fleuret's book https://fleuret.org/francois/lbdl.html. the pdf is free but the print is so cute.

CamperBob2
1 replies
15h24m

Has anyone figured out a way to print the Fleuret book on A4 paper? Every other page ends up upside down when I've tried it, which is problematic with a duplexer.

ajb
0 replies
3h10m

You can probably use pdftk to rotate every other page somehow.
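Something along these lines should do it, assuming pdftk's `shuffle` operation with even/odd page qualifiers and rotation suffixes (untested here; filenames are placeholders):

```shell
# Interleave the odd pages as-is with the even pages rotated 180 degrees:
# "Aodd" selects pages 1,3,5,...; "Aevensouth" selects 2,4,6,... rotated "south".
pdftk A=input.pdf shuffle Aodd Aevensouth output fixed.pdf
```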

3abiton
0 replies
10h17m

What makes it stand out compared to OP post?

c7b
3 replies
18h51m

Seems like a good collection of standard ML techniques, introduced with a fairly unified mathematical notation and quite a few proofs. Quite the Herculean effort (600 pages!). It just seems to me like they're putting the emphasis on the stuff that is more straightforward to formalize rather than the stuff that would be interesting to understand.

Look eg at the SGD chapter. I picked this because I think optimization is one of the areas where mathematicians actually can and do make impactful contributions to ML. But then look at the chapter in the book: most of the proofs are fairly elementary (like bias-variance decompositions or Jensen inequalities), some more interesting theorems (on convergence) are cited from the literature and do not build on the lemmata, and the sub-chapters on the actually interesting methods like ADAM are completely free of proofs or theory. It seems to me that after reading the chapter, a reader will have a good understanding of modern SGD methods and how we got there, but they won't necessarily be much wiser about why those methods work, other than having a good intuition confirmed by numerical experiments. If that's the outcome, then I wonder what all the fuss of proving the basic stuff was for. Wouldn't it be more useful to dedicate the space to convergence proofs for ADAM (which do exist) rather than showing lots of stuff like E(XY) = E(X)E(Y) for independent random variables?
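For scale, the ADAM update itself is only a few lines; a minimal numpy sketch (not the book's code, and the learning rate here is tuned to the toy problem rather than the usual default):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update: exponential moving averages of the gradient (m)
    and its elementwise square (v), bias-corrected for the zero init."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)  # t counts from 1
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = ||x||^2. The convergence *rate* of exactly this loop is
# the kind of question the chapter's elementary lemmas stop short of.
x = np.array([5.0, -3.0])
m = v = np.zeros_like(x)
for t in range(1, 1001):
    x, m, v = adam_step(x, 2 * x, m, v, t)
print(x)  # near the minimizer at the origin
```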

That's just one chapter, and I may not be doing them full justice here, although I did read through a few others as well. I first got this impression from the ANN chapter, which is rife with long proofs of rather basic and uninteresting results, and from the physics-informed neural networks chapter (which I actually find really nice, although it suffers a bit from the same problem as the SGD chapter). I don't want to be too critical here: it is nice in general to move towards a more rigorous and unified exposition of ML methods, and their approach should extend to the more technical results as well. I'm just questioning where they drew the line of what to include and what not.

modeless
2 replies
10h51m

they won't necessarily be much wiser about why those methods work, other than having a good intuition confirmed by numerical experiments.

This is the state of the field as a whole, isn't it?

Wouldn't it be more useful to dedicate the space to convergence proofs for ADAM (which do exist)

Convergence proofs don't really explain why Adam tends to work better than other methods.

It's hard to blame them for not being able to explain things that, currently, nobody understands. But I guess it kind of undermines the idea of a theory-heavy approach to teaching if the theory we have can't predict the things that are actually important.

go_elmo
0 replies
8h45m

*Can't predict the important things _yet_

c7b
0 replies
7h53m

ADAM is known to have better convergence bounds than other methods. Theoretical bounds may not explain the full story of why a method works well, but it is how mathematicians reason about it. I'm only blaming them for not sharing those relevant parts of what we already know.

My bigger pain point even is how they choose to allocate their space: the theorem statements for the most relevant results are missing, the proofs for the more interesting theorems are just citations, while the proofs for basic and arguably tangentially relevant lemmata from eg probability theory take up pages and pages.

axpy906
3 replies
22h9m

This is in TensorFlow. I'd rather see a numpy version or something along those lines, so that students can better understand what each step looks like in code.

I concur on the comments noting lack of explanation for the notation/lemmas/proof.

godelski
0 replies
21h50m

I second this. Numpy would be the way to go, so students can switch to JAX or PyTorch trivially. Or they could use a mix: start with numpy, build the layers from scratch, then hand over to the abstraction. Pyro would be really good for this too.

_giorgio_
0 replies
21h46m

Tensorflow? LOL what is this, the year 2010?

CamperBob2
0 replies
20h52m

Most of the examples I saw used PyTorch. (Which is still a step or two removed from the actual machinery, of course.)

reqo
2 replies
22h44m

Is it common to publish books directly to ArXiv, especially books that have just been released?

godelski
1 replies
22h32m

It's not too uncommon to see books available online from an official location, at least for math and CS textbooks.

nerdponx
0 replies
21h49m

Normally I just see it on the author's website.

runsWphotons
1 replies
19h38m

I like this book and everyone complaining about the math and math notation is a silly goose.

programjames
0 replies
19h20m

Oh my, that was the first thing I came here to complain about. The typesetting is pretty awful; it makes the book very difficult to read.

sealWithIt
0 replies
16h18m

Looks like exactly what I was looking for :p

sage76
0 replies
1h36m

Do people really finish these books cover to cover?

I have been working on Bishop's PRML, and it is extremely time consuming to really finish the book, and do all the exercises.

I saw a blog from a guy who did the same, and it took him 1500+ hours.

Not a single person in my masters program finished any of these books. They just did the courses and googled whatever else was necessary.

majikaja
0 replies
16h9m

I don't think the content of the comments in this thread is limited to ML. I think there is lot of applied math research out there (almost all of it?) that hardly anyone outside of academia actually reads.

I think there's some useful stuff but my impression is that research papers are mostly dead ends so I stick to graduate textbooks. Maybe other people have other approaches? I'm not a math researcher so I don't need to be at the cutting edge.

godelski
0 replies
22h26m

First time I've seen one of these books where I wished there were more words and less math. Usually it's quite the opposite. But this book seems written as if they wanted to avoid natural language at all costs.