As someone whose knowledge of programming runs deeper than my knowledge of math, I find the mathematical notation here harder to understand than the code (even in a programming language I do not know).
Does anyone here with a stronger mathematical background find the math as written easier to understand than the source code?
Mathematical notation is more concise, which may take some getting used to. One reason is that it is optimized for handwriting. Handwriting program code would be very tedious, so you can see why mathematical notation is the way it is.
Apart from that, there is no “the code” equivalent. Mathematical notation is for stating mathematical facts or propositions. That’s different from the purpose of the code you would write to implement deep-learning algorithms.
The last part was a big hurdle for me as an early undergrad. I was a fairly strong programmer toward the end of high school, and was trying to think of math as programming. That worked for the fairly algorithmic high school stuff and I got good grades, but it meant I was awful at writing proofs. I also went through a phase where I used as much formal logical notation, and the rules for manipulating it, as possible in order to make proofs feel more algorithmic to me, but that didn't work well for me and produced some downright unreadable results.
Mathematical notation is really a shorthand for words, like you’d read text. The equals sign is literally short for equals. The added benefit, as others have pointed out, is that a good notation can sometimes be clearer than words because it makes certain conclusions almost obvious. You’ve done the hard part in finding a notation that captures exactly the idea to be demonstrated in its encoding, and the result is a very clean manipulation of your notation.
This is essentially my problem. I started writing programs at a young age and was introduced (unknowingly) to many more advanced mathematical concepts from that perspective rather than through pure mathematics. What was it that helped break this paradigm for you?
I wrote a book: https://pimbook.org
You might find it useful for your situation. The PDF is pay-what-you-want if you don't feel like paying for it.
Ah, I think I remember bookmarking this when it was posted before. You really don't have to go very far in computing to find a frontier where almost everything is described in pure mathematics, and so it becomes a substantial barrier for undiversified autodidacts in the field. The math in these areas can often be quite advanced and difficult to approach without the proper background, so I appreciate anyone who has taken the time to make it less formidable for others.
I would suggest something like https://ocw.mit.edu/courses/6-042j-mathematics-for-computer-... instead of that book.
I appreciate that some may find the book useful, but I personally don't agree with the presentation. There are too many conceptual errors in the book that you need to unlearn to make progress. For example, the book describes R^2 as a "pair" of real numbers. This is very much untrue and that kind of thinking will lead you even further astray.
I say this as someone with a math/cs degree and PhD having taught these topics to hundreds of students.
I naturally auto-corrected this to "(the set of) pairs of real numbers". If that's the case, then I don't see how this differs from the actual definition. What is the conceptual error? Is it the missing 'set of'?
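For concreteness, the definition I have in mind is the standard one:

    \mathbb{R}^2 = \mathbb{R} \times \mathbb{R} = \{ (x_1, x_2) : x_1, x_2 \in \mathbb{R} \}

i.e. the set of all ordered pairs of real numbers.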
Really trial and error and grinding through proofs. Working through Linear Algebra Done Right was a big a-ha moment for me. Since I was self-studying over the summer (to remedy my poor linear algebra skills), I was very rigorous in making sure I understood every line of the proofs in the text and trying to mimic his style in the exercises.
In hindsight, I think the issue was that trying to map everything to programming is a bad idea, and I was doing it because programming was the best tool in my tool chest. It was a real "when all you have is a hammer, everything looks like a nail" issue for me.
I don't know that that has anything to do with programming.
Arithmetic and writing proofs are very different skills. There is going to be a gap for everyone.
Yeah I know it’s a common challenge. I think it took me a bit longer than some of my peers because I was trying to force it to be like something I knew instead of meeting it on its own terms.
When all you have is a hammer and all that
And as such it is way too often abused. Because the (original, and the most useful) purpose of mathematical notation is to enable calculation, i.e., in a general sense, to make it possible to obtain results by manipulating symbols according to certain rules.
I see the steps of a calculation as stating a sequence of mathematical facts, so that’s just an instance of the general definition.
Sure, but the whole point is to avoid the need to do that! Manipulating symbols is the way to automate reasoning, i.e. to get to a result while completely ignoring said "facts." Using the symbols to merely "state the facts" is abuse (of the reader, mostly).
I’m just wrapping up a PhD in ML. The notation here is unnecessarily complex IMO. Notation can make things easier, or it can make things more difficult, depending on a number of factors.
Really? Coming from physics (B.Sc only) the notation is refreshingly familiar and straightforward. My topology and analysis classes were basically like this.
In fact, this PDF is literally the resource I've been searching for, as many others are far too ambiguous and handwavey, focusing more on libraries and APIs than on what's going on behind the scenes.
If only there were a similar one for microeconomics and macroeconomics, I'd have my curiosity satiated.
As a PhD econ student, the mathematics just comes down to solving constrained optimization problems. Figuring out what to consider as an optimand and the associated constraints is the real kicker.
If you're referring to micro/macro, I meant more like a mathematical introduction to the models.
I recall giving Mankiw a try and wished I could just find a physics-style textbook as I found it way too wordy.
Most economists (who write these sorts of textbooks) have some sort of math background. The push to find the most general "math" setting has been an ongoing topic since the '50s, so you can probably find what you are looking for. It's not part of undergraduate textbooks since adding generality gives better proofs but often adds "not that much" insight. Nevertheless, the standard micro/macro models are just applications of optimization theory (lattice theory typically for micro, dynamical systems for macro). Game theory (especially mechanism design) is a bit of a different topic, but I suppose that's not what you are looking for.
E.g., micro models are just constrained optimization based on the idea of representing preference relations over abstract sets with continuous functions. So obviously, the math is then very simple. This is considered a feature. You can also use more complex math, which helps with certain proofs (especially existence and representation).
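To make that concrete, the canonical consumer problem is just (a minimal sketch of the standard textbook setup; the notation here is mine):

    \max_{x \in \mathbb{R}^n_{+}} u(x) \quad \text{s.t.} \quad p \cdot x \le w

with Lagrangian L(x, \lambda) = u(x) + \lambda (w - p \cdot x) and, at an interior optimum, first-order conditions \partial u / \partial x_i = \lambda p_i for each good i. All the economic content is in choosing u and the constraint; the optimization itself is routine.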
You could grab some higher level math for econ textbooks, which typically include the models as examples, where you skip over the math.
For example, for micro, you can get the following: https://press.princeton.edu/books/hardcover/9780691118673/an... I think it treats the typical micro model (up to oligopoly models) via the first 50 or so pages while explaining set theory, lattices, monotone comparative statics with Tarski/Topkis etc.
Debreu's Theory of Value
It depends on what you’re doing. That is accurate for, say, describing the training of a neural network, but if you want to prove something about generalization, for example (which the book at least touches on from my skimming), you’ll need other techniques as well
Bishop’s Pattern Recognition and Machine Learning is one example that has tremendous depth and much clearer notation. Deep Learning by Goodfellow et al. is another example, albeit with less depth than Bishop.
I’m glad you’re enjoying the book. The approach is ideal for a very small subset of the ML population, no doubt that was their intention. I’m just weighing in that it’s entirely possible to cover this material with rigour yet much simpler notation. Even as someone who could parse this I’d go with other options.
Thanks for highlighting Bishop to me! I've self-taught through various resources esp. Goodfellow et al 2016. It's taken me a number of years to rebuild my math knowledge so that I feel comfortable with Goodfellow's treatment and look forward to learning from the Bishop book. Fwiw, I've found the math notation in the Goodfellow textbook to be among the best I've ever seen in terms of consistency and clarity. Some other books I enjoy, for example, do not seem to make any typographic indication of whether an object is a vector, scalar, or other. :(
FYI, Bishop just released an updated DL book: https://www.bishopbook.com/
I appreciated the notation in the Goodfellow book as well; it was easy enough for me to follow without a strong mathematics background. I'll agree with others, however, that this text is focused on a different audience and purpose.
Re your question on economics books, I think Advanced Macroeconomics by David Romer could fit your bill. It goes a lot into why the math is the way it is (arguably more interesting, like another poster said). Modern macroeconomics is also built on microeconomics, and to that extent it's covered in the book, so you're sort of getting two-for-one here.
Use ChatGPT.
Screenshot the math, crop it down to the equation, paste into the chat window.
It can explain everything about it, what each symbol means, and how it applies to the subject.
It’s an amazing accelerator for learning math. There’s no more getting stuck.
I think it's underrated because people hear "LLMs aren't good at math". They are not good at certain kinds of problem solving (yet), but GPT4 is a fantastic conversational tutor.
Don't suggest this. While I agree it can be helpful, the problem is that if you're a novice you won't be able to distinguish hallucinations, which in my experience are fairly common, especially on more advanced topics. If you have good mathematical rigor it's extremely helpful, because often things are hard to search for precisely, but it's a potential trap for novices. But if you have no better resource, then I can't blame anyone; just be warned to take care.
That’s kind of like telling people not to go online because you can’t believe everything you read on the Internet.
What proportion of the problems you’ve encountered were with the free version vs premium? It’s a huge difference and the topic here is GPT4.
Also since it is fairly common for you are there any real world examples you can share?
Uhhh... it's like telling people to trust SO over reddit, especially a subreddit known to lie.
Both. Can we stop doing this? This is a fairly well established principle with tons of papers written about it, especially around math. Just search arxiv, there's a new one at least every week
I’ll take that as it happens so infrequently with GPT4 you have no illustrative prompts that can be shared.
There have not been tons of papers written about this.
You seem to be conflating papers about GPT4 as a solver with it as a math tutor. It’s a completely different problem space.
It works better than you think, as long as you use GPT 4. See my answer to the other person (https://news.ycombinator.com/item?id=38837646).
A lot of negativity comes from people who goofed around with 3.X for a while, came away unimpressed, muttered something under their breath about stochastic parrots or Markov chains that sounded profound (at least to them), and never bothered to look any further. 4 is different. 4 is starting to get a bit scary.
The real pedagogical value comes when you try to reconcile what it tells you about the equations with the equations themselves. Ask for clarification when something seems wrong, and there is an excellent chance it will catch its own mistakes.
That answer isn't very compelling, as it is one of the most well-known equations in ML. There are some very minor errors, but nothing that changes the overall meaning. You even seem to agree with me in your follow-up: don't rely on it, but use it. My position is only slightly stronger than yours.
And stop all this 3.5 vs 4 nonsense. We all know 4 is much better, but there's plenty of literature that shows its limits, especially around memorization. You also don't understand stochastic parrots, but in fairness, it seems like most people don't. LLMs start from compression algorithms and they are that at their core. That doesn't mean it's a copy machine, despite the NYT article, but it also doesn't mean it's a thinking machine like the baby-AGI people claim. The truth is in between, but we can't have a real conversation because hype primed us to bundle people into two camps and turn us all into true believers. Just please stop gaslighting people when they say they have run into issues. The machine is sensitive to prompts, so that can be a key difference, or sometimes they might just see mistakes you don't. It's not an oracle, so don't treat it like one. And don't confuse this criticism with saying LLMs suck, because I use them almost every day and love them. I just don't get why we can't be realistic about their limits and can only believe they're either a golden goose or a pile of shit. It's, again, neither.
You also don't understand stochastic parrots
You have a parrot that can paint original pictures, compose original songs and essays, and translate math into both English and program code?
I would like to buy your parrot. I'll keep it in my Chinese room. There used to be a guy in there, but he ran away screaming something about a basilisk.
As someone that’s in the later stages of a PhD in math, given the title starts with “Mathematical Introduction…”, the notation feels pretty reasonable for someone with a background in math.
Sure I might want some slight changes to the notation I found skimming through on my phone, but everything they define and the notation they choose feels pretty familiar and I understand why they did what they did.
Mirroring what someone else said, this is exactly the kind of intro I’ve been looking for for deep learning.
Is it fair to call something an introduction if it uses math from the upper-division undergrad math curriculum, such as metric theory? My opinion is that it is context driven, e.g. Introduction to Differential Geometry or Introduction to Homotopy Theory. But I don't think you can look at this title and infer prerequisites that are within the ballpark. I'd wager few people outside math and some physics students are familiar with Galerkin methods at the undergraduate level (maybe a handful of engineers). I don't think most outside math and physics even learn PDEs (my engineering friends mostly didn't, and my uni's CS program doesn't even require DE).
What percent of LLM knowledge requires proficiency in anything you mentioned?
From what I’ve seen it’s a small percentage, and there’s no reason for most people to be put off by it.
Everyone come on in, the water is fine.
Between 0% and idk 70%? depending on what you're doing.
Looking at the theory as a whole it’s a very small minority.
I’m trying to think if it’s 0 percent outside of backprop…
Arguably high school math gets you quite a bit of understanding. After that in descending order I’d guess Linear Algebra, Statistics/Probability, Basic Calculus, Partial Derivatives…
In other words it’s not all or nothing. The easiest stuff gets you a lot of bang for your buck.
Are you a researcher in ML? What is your focus? I'm in image synthesis/explicit density modeling.
So this is a book written by applied mathematicians for applied mathematicians (they state in the preface it's for scientists, but some theoretical scientists and engineers are essentially applied mathematicians). As a result, both the topics and the presentation are biased toward those types of people. For example, I've never seen a practitioner worry about the existence and uniqueness conditions for their gradient-based optimization algorithm in deep learning. However, that's the kind of result those people do care about, and academic papers are written on the topic. The title does say that this is a book on the theoretical underpinnings of the subject, so I am not surprised that it is written this way. People also don't necessarily read these books cover to cover, but drill into the few chapters that use techniques relevant to what they themselves are researching. There was a similarly verbose monograph I used to use in my research, but only about 20-30 pages had the meat I was interested in.
This kind of book is more verbose than I'd like, both in terms of rigor and content. For example, they include Gronwall's inequality as a lemma and prove it. The version they use is a bit more general than the one I normally see, but Gronwall's inequality is a very standard tool in analyzing ODEs, and I have rigorous control theory books that state it without proof to avoid clutter (they do provide a reference to a proof). A lot of this verbosity comes about when your standard of proof is high and the assumptions you make are small.
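For anyone curious, the simple differential form I normally see (stated loosely, assuming mild regularity on beta) is: if u is differentiable and u'(t) \le \beta(t) u(t) for t \ge a, then

    u(t) \le u(a) \exp\!\left( \int_a^t \beta(s) \, ds \right),

which is what lets you turn a differential inequality along an ODE trajectory into an explicit growth bound.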
Are there any books you recommend for deep learning that are written for developers who don't use math every day?
I suppose the goal would be to understand deep learning so that we know enough of what's going on but not to get stuck in math concepts that we probably don't know and won't use.
I am/was in this scenario. I'm sure there are other resources out there specifically aimed at developers, but a book I'm reading now is "Deep Learning From Scratch" by Seth Weidman. He takes a different approach, explaining each concept in three distinct ways: mathematically, with diagrams, and with code.
I like this approach because it allows me to connect the math to the problem, whereas otherwise I wouldn't have made that connection.
In the book, you're slowly creating a DL framework, as the title says, from scratch. He also has all the code on GitHub: https://github.com/SethHWeidman/DLFS_code
I think if you are truly trying to understand deep learning, you will never get to avoid the math, because that's really what it is at its core: a couple of (non-linear) functions chained together (an obvious gross oversimplification).
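To illustrate (a toy numpy sketch in the spirit of that oversimplification; the names here are mine, not from any particular book):

    import numpy as np

    def layer(W, b, activation):
        # an affine map followed by a pointwise non-linearity
        return lambda x: activation(W @ x + b)

    relu = lambda z: np.maximum(z, 0.0)

    # a "network" is just function composition: f = f2 . f1
    f1 = layer(np.random.randn(4, 3), np.zeros(4), relu)
    f2 = layer(np.random.randn(2, 4), np.zeros(2), lambda z: z)
    f = lambda x: f2(f1(x))

    print(f(np.array([1.0, -2.0, 0.5])))  # forward pass on a toy input

Training is then "just" adjusting W and b in each layer to push the composed function toward the data, which is where the calculus comes in.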
The last commit in the repo of "Deep Learning from Scratch" was 5 years ago. It is hopelessly outdated. The field is changing very fast.
I have a strong mathematical background, and I found the notation completely insane. Right out of the gate in chapter 1 we get a definition that has subscript indices in the subscript index and a summation with subscripts in the superscript, and then composed in a giant function chain. Later we get to 4-level subscripts deep, invent at least 3 new infix operators, define 30 new symbols from 3 different alphabets and we're barely at page 100 out of 600. I have no idea who is supposed to follow and digest this
I’m not sure what specialization of math you studied, but using superscripts for indices is pretty common where you’re dealing with multi-dimensional objects. I used it in a lot of the courses in my degree.
They are not complaining about superscripts for indices, but about having subscripts in those superscripts. Basically like x² but where the ² has a subscript of its own. That is very dense and graphically hard to follow, as notations go.
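A made-up example of the flavour (mine, not copied from the book): something like

    \sum_{j=1}^{l_{k-1}} w^{(k)}_{i,j} \, x^{(k-1)}_j

where even the upper limit of the sum, l_{k-1}, carries its own subscript, and every symbol wears both a superscript and a subscript at once.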
I have no problem with superscripts. Here are a couple of examples of what I am talking about:
[equation screenshots omitted] ...and sure, I can figure it out, but you have to agree there are some readability issues.

All three authors are PhDs or PhD candidates in mathematics. The notation is extremely dense. I'm curious who their target audience of "students and scientists" is for this book.
Likely graduate students with a very theoretical interest. Some theoretically-oriented scientists and engineers are also basically applied mathematicians. It is presumably targeted at people that want to further develop the theoretical aspects of learning, as opposed to applied practitioners
I had a bunch of classes in undergrad (physics) that had basically the same notation and style.
It's not too difficult to understand, but this introduction isn't written with pedagogy in mind IMO
This is probably the most succinct explanation, and as an experienced Perl developer, I admire your brevity.
Mathematical notation usually has a problem with preferring single-letter names. We usually prefer to avoid highly abbreviated identifier names in software, because they make the program harder to read. But they’re common in Math, and I think that it makes for a lot of work jumping back and forth to remind oneself what each symbol means when trying to make sense of a statement.
I think the main difference is that in programming you typically use names from your domain, like "request" or "student". But math objects are all very abstract, they don't denote any domain. For example, if I have a triangle and I want to name its vertexes so I can refer to them later, what would be a good name? Should I call them vertexA, vertexB, and vertexC just so it's not a single letter?
Sharing my experience here. My background is in math (Ph.D. and a couple of postdoc years) before I switched to being a practitioner in deep learning. This year I taught a university class (as an invited prof) on deep learning for students doing a masters in math and statistics (but with some programming knowledge, too).
I tried to present concepts in as reasonably accurate a mathematical way as possible, and in the end I cut through a lot of math, in part to avoid the heavy notation which seems to be present in this book (and in part to make sure students could apply what they learnt in industry). My actual classes had way more code than formulas.
If you want to write everything very accurately, things get messy quickly. Finding a good notation for new concepts in math is very hard, something that sometimes only gets done by bright minds, even though afterwards everybody recognizes it was "clear" (think of Einstein notation, Feynman diagrams, etc., or even just matrix notation, which Gauss was unaware of). If you just take domain A and write it in notation from domain B, it's hard to get something useful (translating quantum mechanics into math with C*-algebras and co. was a big endeavour, and is still an open research field to some extent).
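As a small illustration of what a good notation buys you (a standard example, nothing specific to this book): Einstein's summation convention lets you write

    y^i = A^i{}_j x^j \quad\text{meaning}\quad y^i = \sum_j A^i{}_j x^j,

so the sum over the repeated index is left implicit and long chains of linear maps stay readable.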
So I'll disagree with some of the comments below and claim that the effort of writing this book was huge but probably scarcely useful. Those who can comfortably read these equations probably won't need them (if you know what an affine transformation is, you hardly need to see all its ijkl indices written down explicitly for a 4-dimensional tensor), and the others will just be scared off. There might be a middle ground where it helps someone, but at least I haven't encountered such people…
Yes, it's easier for mathematicians, because a lot of background knowledge and intuition is encoded in mathematical conventions (eg "C(R)" for continuous functions on the reals etc...). Note that this is probably a book for mathematicians.
Obligatory hn comment on any math-related topic: "notation bad"
Please be more original.