I skipped around in the book a bit and found it interesting. I’d consider encouraging my kids to learn calculus this way.
However, I am curious about the first paragraph of the preface:
Julia is an open-source programming language with an easy to learn syntax that is well suited for this task.
Why is Julia better suited than any other language?
Julia has a bunch of little niceties for mathematics (a quick demo follows the list):
* Prefixing a variable with a scalar literal implicitly multiplies, as in standard math notation (e.g. 3x evaluates to 6 if x is 2).
* Lots of Unicode support. Some of it feels almost gratuitous (∈ and ∉ are operators that work as you would expect), but it's pretty nice to have π predefined (it's even an irrational datatype) and to use √ as an operator (e.g. √2 is a valid expression and evaluates to a float). It's not just that Julia supports these constructions; it also provides a convenient way to get them to appear.
* This is a little less relevant for calculus, but vectors and matrices are first-class types in Julia. Entering and visually parsing matrices is so much easier in Julia than in Python.
* Transposition is a single-character operator ('). Dot product can be done with the dot operator (m ⋅ n). A\b works as it does in MATLAB.
* Julia supports broadcasting. It also has comprehensions, but with broadcasting I personally find much less need for comprehensions.
* Rationals are built-in with very simple syntax (1//2).
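A quick demo of most of these at the REPL (standard Julia, nothing book-specific):

```julia
x = 2
3x               # 6       -- literal coefficient multiplies implicitly
√2               # 1.4142… -- typed as \sqrt<TAB>; evaluates to a Float64
π                # π = 3.1415926535897... (an Irrational; converted on demand)
2 ∈ [1, 2, 3]    # true
1//2 + 1//3      # 5//6    -- exact rational arithmetic

A = [1 2; 3 4]   # 2×2 Matrix{Int64}; rows separated by ;
A'               # transpose (adjoint) via a single quote
A \ [1, 2]       # solve A*x = b, as in MATLAB
sin.(A)          # broadcasting: apply sin elementwise
```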
I quite like and use Julia but wish there was a language mixing the best aspects of Julia and Swift (which I think can be done without many compromises, i.e. it would be a better language overall).
Some things I don't like about Julia:
- `array.mean().round(digits=2)` is more readable than `round(mean(array), digits=2)`
- Poor support for OOP (no, pure FP is not the optimal programming approach).
`round(mean(array), digits=2)` is how mathematicians have thought for hundreds of years now. They do so because it's indeed easier to parse, and philosophically it correctly represents what is happening mathematically.
`array.mean().round(digits=2)` is how a subset of software engineers have thought about computation in the last couple of decades.
Technology should whenever possible adopt already existing conventions, because technology is created to serve the user, and not vice versa.
"Because tradition" is not a good argument. We can and should strive for better.
The modern OOP-derived style of `array.mean().round(digits=2)` is just as exact in its specification, and arguably more semantically precise as well, since it organizes the meaningful bits to sit together.
Also I think you don't account for how much mathematical notation has changed over the years. The `round(mean(array), digits=2)` structure is relatively recent, early 20th century I believe, and the result of active reforms.
I agree. Math notation has changed in response to problems mathematicians have faced, and as their thinking has evolved. It should not change because tool builders for mathematicians - when the tool builders in most cases are not even professional mathematicians - decide some other notation is better.
`array.mean().round(digits=2)` might be good (it is not) when you are operating on one object. As soon as you have a binary operation (say multiplication), this notation completely breaks down, because the operation is often symmetric, or "near" symmetric: neither operand is privileged, so you end up having to write something as ridiculous as x.product(y).
But more importantly, (common/dominant) mathematics is functional in nature. It's decidedly not object oriented. Given a matrix M, doing something like M.eigenvalues() is vomit-inducing because eigenvalues is not a property of a matrix M. For matrices, it's a map from a set of linear transformations (of which M is merely one item) onto a set of numbers. It is its own thing, separate from M.
Now you're gatekeeping on who is a real mathematician?
I don't think this subthread is going to be productive and I'm bowing out.
I'm not the GP, but I'm sorry you felt gatekept. I don't think GP conveyed "real mathematicians should do FP", but rather something else, so I'll try my best to share what I understood from that comment. Hope this makes you feel better.
Even though the notation changed over the years, the paradigm of "numbers as immutable entities and pure functions" has been the dominant way that math is presented, compared to something like "encapsulated objects that interact with each other via sending messages".
I don't think this has to be this way, and I do think that "real math" can also be laid out assuming principles of OOP. However, I do suspect it's the way it is because the laws of nature are unchanging, in contrast to the logic of a business application.
Because Julia is a tool with the target audience of mathematicians and scientists, I think it's a sensible decision to embrace the usual way of thinking, as opposed to presenting a relatively different way of thinking which steepens the learning curve. Not because data and functions are fundamentally better than OOP, but because it's more pragmatic for the target audience.
I understood the other poster correctly. This view of mathematics is as limiting and ignorant as the other poster’s.
The entire field of theoretical computer science, to which functional programming and type theory are closely tied, is a branch of mathematics. The Church-Turing thesis, which gave birth to our field, equates the two at a very fundamental level. Questions about type theory, programming language design, and programmer ergonomics are fundamentally about math and applied math.
Maybe you and the other poster have in mind specific fields of math, but then you need to make claims for why those fields are sufficiently different as to be exempt from applicability of any of the advances in notation observed in other fields.
Your implicit assumption that you can divide computer science into a different bucket from “real math” is incorrect, and gatekeeping.
As I said though, I don’t think this is a profitable debate to have here on HN.
Okay, I see what you mean.
Yes, because this is about Julia, I assumed we were talking about the specific fields of math that happen to be commonly used in mathematical and scientific computing, such as the ones learned in university math and science courses.
It is regrettable that we had a miscommunication. I agree with you that computer science belongs in the same bucket as "real math". The thing is, in the context of Julia it is not easy to read "math" as "math - not only the domains Julia is concerned with, but really all fields of math, including theoretical computer science". At least thanks to your comment, I see what you mean more clearly, and I think it'll help some other potential readers as well.
I'm curious to know specifically about the specific advances of notation observed in other fields. By this you mean dot notation for method application? I'm unsure if `a.method(b).anotherMethod(c)` is more advanced than `a |> method(b) |> anotherMethod(c)` (Edit: or `(anotherMethod(c) . method(b))(a)`) notation-wise.
I'd like to add another point worth mentioning. When I read the other poster's post, I was thinking about the semantics of OOP, specifically encapsulated objects and messages. I may have misinterpreted the other poster, but I hope it at least makes my other messages a bit clearer. (I think it's a little unwieldy to use matrices that encapsulate their state for linear algebra, for example.)
In programming it is typical to have an “eig” function, or something of the sort, which returns the eigenvalues of a matrix. eig(M) seems no less vomit-inducing than M.eigenvalues(). Usually when mathematicians want eigenvalues they write something like “Given a matrix M with eigenvalues \lambda” and the eigenvalues just show up.
In Julia:
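```julia
# presumably the snippet here was the stdlib call:
using LinearAlgebra
eigvals(M)   # eigenvalues of M
```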
Which is much the same.

Sure, my point is that even the math-y programming languages and libraries don’t map very well to how mathematicians seem to describe eigenvalues.
Personally I think of them as a characteristic of the matrix (but I just write numerical software, I’m not a mathematician) and implement them as the result of a process. So I don’t really see any reason to favor either syntax or call one more mathematical.
I used to miss "OOP" in Julia. The funny thing about the dot notation is that it makes the limitations of single dispatch seem natural. Once you get used to multiple dispatch and its benefits, the dot notation seems limiting.
`x.plus(y)` dispatches based on the (runtime) type of x (polymorphism, i.e. single dispatch), and might call different functions based on the (compile-time) type of y (function overloading).
In Julia, it would be `plus(x, y)`, and it can dispatch based on the runtime type of both x and y (or determine the correct method to call at compile time already, if the types of x and y can be determined then).
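A minimal sketch of what that buys you (hypothetical `collide` function, not from the thread):

```julia
abstract type Shape end
struct Circle <: Shape end
struct Square <: Shape end

# every combination dispatches on the runtime types of BOTH arguments;
# no argument is privileged as "the receiver"
collide(::Circle, ::Circle) = "circle/circle"
collide(::Circle, ::Square) = "circle/square"
collide(::Square, ::Circle) = "square/circle"
collide(::Square, ::Square) = "square/square"

collide(Circle(), Square())   # "circle/square"
```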
Virtual method dispatch isn’t always a good thing though. It’s a tradeoff and typically you really want one or the other. C++ makes both available for that reason, although C++’s syntax is horrible.
It's multiple dispatch though, not virtual dispatch. If I understand correctly, virtual dispatch means the method is picked at runtime. In Julia, the compiler is able to pick the correct method at compile time, even when dispatching on all arguments of the function.
This is correct, but it's important to realize that compile time continues during runtime.
Julia's execution model is quite distinctive, and it does take a thorough understanding of how it plays out to get native-code speeds out of it.
As a person who respects many programming paradigms including OOP and FP, I beg to disagree. Specifically, I'm assuming you're talking about OOP the paradigm, not only the dot notation, for which the pipe operator is a viable alternative.
I would have agreed with you if I were developing business applications. However, for the domain of math and science, every piece of material that I have seen assumes the paradigm of numbers as immutable entities and pure functions, not the paradigm of mutable encapsulated objects and interacting messages. To a mathematician or scientist, the semantics of a `Vector` (as data) and a `mean` function that takes the `Vector` as input should be more natural than a black-box `Vector` object that responds to a `mean` message.
If I am not using named arguments, I find myself using the pipe operator a lot. I also find it more readable:

    array |> mean |> round

For scenarios with named arguments there is a slightly less clean workaround:

    array |> mean |> x -> round(x, digits=2)
`array |> mean |> round(; digits=2)` should also work and is a bit more concise.
This doesn't work, at least for me on 1.10. `round(; digits=2)` would have to produce a function. But Julia doesn't have automatic currying, so that only happens if someone has implemented a special-case version (e.g. (==)(3) returns a partial function that tests for equality with 3.)
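A quick REPL illustration:

```julia
# `==` has a one-argument method returning a Base.Fix2 callable...
is_three = ==(3)
is_three(3)                # true
# ...but `round` has no such curried method, hence the anonymous function:
[1.234, 5.678] .|> (x -> round(x; digits=2))   # [1.23, 5.68]
```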
Well, Julia also has a pipeline operator built in so:
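```julia
using Statistics   # for mean

# the original snippet didn't survive; presumably something along these lines:
array |> mean |> x -> round(x, digits=2)
```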
will do what you want and is arguably more readable than dot notation.

Not supporting OOP is very much intentional. It has structs and multiple dispatch (which has proven to be insanely useful in Julia's domain of scientific computing), so about the only thing you lose is the namespacing that dot syntax facilitates.
That's not more readable, you're just switching a few characters.
Any language that has Uniform Function Call Syntax (UFCS) [1] lets you replace nested function calls with linear chains like that. It's clearly easier to read, and much easier to write, especially when you consider programmers' tooling: you can type the name of a symbol and a '.' and the IDE shows you the available functions, and with UFCS this also works for free functions that take the symbol's type as the first argument. That could work for the pipe operator as well, of course, which is why I said the two things are basically equivalent, just with slightly different notation.
[1] https://en.wikipedia.org/wiki/Uniform_Function_Call_Syntax
Julia is not really a pure functional programming language, though it does support that style if one desires.
It's actually a "multiple dispatch" language... thinking of OOP as single dispatch, Julia is conceptually OOP on steroids -- but to benefit from the full expressiveness, the programming style looks different from the typical OOP syntax/patterns one is used to.
Having spent my PhD writing Julia, coming back to object-oriented programming felt like programming with a hand tied behind my back. Many operations just make more sense dispatching on multiple arguments rather than just one, and cramming methods into a class felt unnatural.
You can overload your own types to get a lot of those things: overloading `getproperty` handles the former case (proper OOP support is harder, but you can sort of emulate it with traits).
Of course it's not built in, which I understand is annoying if that's your preferred coding style. I personally am sad that really good traits aren't encoded in the language.
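For instance, a sketch of the `getproperty` approach (the `Samples` type here is made up for illustration):

```julia
struct Samples
    data::Vector{Float64}
end

# expose a computed "property" through dot syntax
function Base.getproperty(s::Samples, name::Symbol)
    # use getfield internally to avoid infinite recursion
    name === :mean && return sum(getfield(s, :data)) / length(getfield(s, :data))
    return getfield(s, name)   # fall back to the real fields
end

s = Samples([1.0, 2.0, 3.0])
s.mean   # 2.0
s.data   # underlying vector, still accessible
```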
F#.
For example: https://diffsharp.github.io/index.html
But why would it be better than Mathematica, for example, for calculus?
It's very hard to argue anything is "better" than Wolfram/Mathematica when it comes to symbolic stuff; after all, most theoretical physicists use Mathematica for their professional work.
But for (entry-level) learning and possibly pivoting to applications, Julia is delightful to use and can transition into some symbolic and numerical work. Besides, it's free and open source.
They do not.
Surely, many do? The ones I interacted with did, for example. I'd venture that the preferred CAS varies with schools and countries.
Some do, most don't.
Unless you're working on undergraduate integrals, it's pretty useless for advanced stuff without writing your own library. At that point you may as well write it in a language that's more performant and cheaper.
I agree that only some physicists use Mathematica. But I haven't really seen it being used for calculus. Maybe some differential equations.
But mostly for symbolic algebraic manipulation. I used it during my PhD to work with groups. Instead of having to calculate stuff by hand, you can just ask Mathematica to do it. Also, lots of stuff with tensors in GR is so easy to do in Mathematica.
I think it's fair to say that most math/physics people use mathematica from time to time, but largely for different things than they use other programming languages for. It's very good as a CAS, but it's a pretty bad programming language for things that don't have analytical solutions.
It isn't. Mathematica is very much a niche product in academia.
The people who would get most out of it are students, but for some god forsaken reason universities don't support them.
I was in a pilot class with Mathematica back in 2006, and the reviews of the class were _all_ 5 stars; students on average got 10% higher marks in all other subjects they took that year.
They didn't run the course again.
Sagemath is now equally good if a teacher defines a DSL for the students to use in a class.
Sagemath seems much more appropriate; it is better not to bind students to proprietary tools.
The only "problem" with sagemath is that it is based on Python. The rationale is that Python is easy to start using and widely known. This is the usual "make it easy for newcomers" trap.
For the mathematical constructs we care about in symbolic programming, I have found Python's syntax and Sage's menagerie of objects awful to use. Initially you feel comfortable, but when you want to do some real work, it gets horribly in the way. The Wolfram language, a LISP variant, is less familiar and harder for a newbie to learn but it is vastly superior for actual work.
Wolfram is not a lisp, even under the hood it is its own thing.
Thanks, let me correct myself:
The only "problem" with sagemath is that it is based on Python. The rationale is that Python is easy to start using and widely known. This is the usual "make it easy for newcomers" trap.
For the mathematical constructs we care about in symbolic programming, I have found Python's syntax and Sage's menagerie of objects awful to use. Initially you feel comfortable, but when you want to do some real work, it gets horribly in the way. The Wolfram language, not a LISP variant, is less familiar and harder for a newbie to learn but it is vastly superior for actual work.
Hard disagree. Mathematica's symbolic dexterity makes abstract reasoning (with equations/expressions) very easy. Think of it as the companion tool for anyone doing pages of algebra that would go into a paper, or form the backend for some code.
The numerical capabilities of Mathematica are passable but nothing fancy. Once you have the math figured out, you might even want to reimplement "in a language that's more performant and cheaper." But I haven't seen anything come close to Mathematica for convenience of symbolic reasoning -- not just as a technology (lisp is pretty good) but as a ready-for-use product.
TFA covers that question:
TL;DR: they want the student to actually do some of the work that Mathematica magics away.
I like Julia and Python, however when I'm playing with doing math, or using paper, I like APL or J.
The above example in J:
    i. 3 3
    0 1 2
    3 4 5
    6 7 8

    >: i. 3 3
    1 2 3
    4 5 6
    7 8 9
J is 0-indexed. ">:" is the increment operator. APL allows you to set a parameter to do 1-indexed vs. 0-indexed.
APL:
    3 3⍴⍳9
    1 2 3
    4 5 6
    7 8 9
There is a Calculus text for learning it with J:
https://www.jsoftware.com/books/pdf/calculus.pdf
Interesting, I was thinking of sharpening my maths skills by programming. Have you tried something like Lean, Agda, etc.? If so, how do they fare against APL or J for doing maths?
I honestly thought APL was extinct.
Because it is a language specifically targeted for doing numerical analysis.
Python, which is it's main "competitor" has a notoriously poor syntax for mathematical operations with miniscule standard library support for mathematics, a very limited type system and abysmal runtime performance. Julia addresses all of these issues and gives a relatively simple to read language, which often closely resembles mathematical notation and has quite decent performance.
"its" not "it's"
numpy has similar syntax to MATLAB and Julia
Python's type system is pretty rich and allows overloading all operators
I would argue that default element-wise operations and 0-based indexing can lead to a lot of cognitive overhead vs. the "more math-y" notation of MATLAB or Julia. And overloaded operators like block-matrix notation or linear solves are much closer to the written mathematics.
The type system in Julia is perhaps even better than MATLAB for some linear algebra operations (a Vector is always a Vector, a scalar is always a scalar). But the Python ecosystem is often far superior to both, and MATLAB has toolboxes which don't have equivalents in either other language. Julia has some uniquely strong package ecosystems (autodiff to some extent, differential equations, optimization), but far fewer.
I disagree on the zero-based indexing complaint. Indeed, the fact that Julia indexes from 1 is the sole reason I will never use an otherwise great language. I can’t comprehend how people came to the conclusion this was a good idea.
Julia, R, Mathematica, Matlab, Fortran, Smalltalk, Lua...
Julia being a mostly functional non-systems-programming language, where memory addressing is usually in the background, I think you'd find that you rarely use numeric-literal indexing and that you were getting upset over nothing.
The typical looping over i from 1 to length(a) is not idiomatic in Julia, where looping over eachindex() or pairs() abstracts away any concern over index type (OffsetArray can have an arbitrary first index). There's firstindex() and lastindex(); bracket indexing syntax has begin and end special keywords. Multidimensional arrays have axes().
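A small sketch of that idiomatic style (plain Base Julia):

```julia
a = [10, 20, 30]

for (i, x) in pairs(a)       # index and value, whatever the index base is
    println(i, " => ", x)
end

for i in eachindex(a)        # valid indices; works unchanged for OffsetArray
    a[i] += 1
end

a[begin], a[end]             # first and last element, no numeric literals
```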
"But what about modular arithmetic to calculate 1-based indexes?"
Julia has mod1(), fld1(), etc., which also provide a hint to anyone reading the code that it might be computing things to be used as index-like objects (rather than pure arithmetic or offset computation).
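For example:

```julia
n = 5
mod(7, n)    # 2 -- 0-based wraparound, results in 0:4
mod1(7, n)   # 2 -- 1-based wraparound, results in 1:5
mod(5, n)    # 0 -- not a valid 1-based index
mod1(5, n)   # 5 -- stays a valid 1-based index
```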
The strict way to handle indexing, I suppose, would be to have distinct 1-indexed (e.g. Ind1) and 0-indexed (e.g. Ind0) Int-based types, and refuse to accept regular Ints as indexes. That would eliminate ambiguity, at the cost of Int-to-Ind[01] conversions all over the place (if one insists on using lots of numeric literals as indexes). And how would one abstract away modular arithmetic differences between the Ind0 and Ind1 types? By redefining regular mod and div for Ind1? That would be hella confusing, so that abstraction probably isn't viable.
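A rough sketch of what that strict scheme might look like (entirely hypothetical; these types don't exist in Julia):

```julia
# hypothetical 1-based and 0-based index wrapper types
struct Ind1; i::Int; end
struct Ind0; i::Int; end

Base.getindex(a::Vector, n::Ind1) = a[n.i]       # 1-based: pass through
Base.getindex(a::Vector, n::Ind0) = a[n.i + 1]   # 0-based: shift by one

v = [10, 20, 30]
v[Ind1(1)], v[Ind0(0)]   # both return 10
```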
I would be happy if the unsigned integer UInt indexed from 0, whereas signed integers indexed from 1. Also, a module-level parameter expressing a preference for whether a literal integer is interpreted as Int or UInt would alleviate many usability issues there.
It seems that a lot of programmers have trouble distinguishing between indexing and offsets. This is probably due to the legacy of the C language, which conflates the two, possibly for performance and simplicity reasons. The same goes for case sensitivity in programming languages, again a legacy of C: are Print, print and PRINT really different functions, and is that really what you intended?

Anyway, indexing is enumerating elements: it naturally starts at 1 for the first element and goes to Count for the last. Offsets are a pointer-arithmetic artifact where the first element is at an offset of 0 for most, but not all, data structures. Then you get "modern" languages like Python which adopt indexing from 0 (no offsets, since there are no pointers) and ranges where the start is inclusive and the end is exclusive... very natural behavior there. Oh, and let's make it case sensitive, so you never know if it's Print, print or PRINT... or count, Count and COUNT until you run for 20 minutes and then crash.
I'm confused that you're bringing case sensitivity into the discussion. Early systems could not be case sensitive simply because they didn't have a large enough character set.
The comment about crashing due to spelling error really should be addressed by some form of static analysis, not by making name lookups more relaxed, in my opinion.
If you look at a textbook for numerical analysis likely the algorithms will be 1-indexed. Julia is a language primarily for writing down mathematics, it would be quite silly not to use mathematical conventions.
You think mathematicians had it all wrong for centuries?
Incidentally, I find it amusing that in Europe, the ground floor is zero, while in the US, it's one. (A friend of mine arrived at college in the US and was told that her room was on the first floor. She asked whether there was a lift as her suitcase was quite heavy...)
What would you say does the Python ecosystem have that the others are missing? The obvious thing is pytorch/tensorflow/.... I can't think of much else though.
Julia has Flux, which is great. To create a neural network with Flux, you write the forward operation however you want and just auto differentiate the whole thing automatically and get a trainable net.
It is extremely flexible and intuitive.
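A minimal sketch of that workflow (recent Flux versions; the API has shifted a bit across releases):

```julia
using Flux

# write the forward pass however you like; Flux differentiates the whole thing
model = Chain(Dense(2 => 16, relu), Dense(16 => 1))
x = rand(Float32, 2, 32)    # 32 samples, 2 features each
y = rand(Float32, 1, 32)

opt = Flux.setup(Adam(0.01), model)
grads = Flux.gradient(m -> Flux.Losses.mse(m(x), y), model)
Flux.update!(opt, model, grads[1])   # one training step
```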
What Julia lacks is everything which isn't numerical analysis though.
I agree that Flux.jl is very cool but you can't use it to train a SOTA LLM, can you? The Julia ecosystem does not have the resources to even attempt building a viable alternative to Pytorch.
I'm still curious what concrete things you mean that Julia is missing. Maybe I'm so stuck doing numerical analysis that I can't think of anything else...
Basically everything not related to numerical analysis. The problem isn't that there isn't an option which works, but that those options aren't developed and mature enough to actually use for a commercial project.
E.g. look at GUI libraries: the native ones are just wrappers around other libraries and poorly documented. There is a single web framework, but is it backed well enough that you would bet your entire company on it being continually supported?
If you are doing anything which isn't related to numerical analysis, Julia isn't your first choice, it also isn't your second or third choice. And I think that is fine, clearly Julia wants to be a good at a particular thing and has achieved that.
Pytorch and tensorflow are pretty big. Their absence implies all state-of-the-art research code is not written in Julia, so it is a non-starter for any neural-network-based project.
That is just total nonsense. You can of course do neural-network research without pytorch and tensorflow. Julia is especially good here, with Flux being the most flexible neural-network library in existence.
Pytorch and tensorflow are important and a very good reason to use python or C++, but their absence matters more for industry; of course your research might need them, but it might very well not.
I mean, zero based indexing isn't exclusive to computing. Plenty of times in physics or math we do x_0 for the start of a sequence vs x_1.
Although, fair, our textbook had the definition of a sequence being a mapping of elements to ℤ⁺, which excludes zero.
Numpy has absolutely awful syntax compared to Julia or MATLAB. It is fundamentally limited by Python's syntax, which was never meant to support the rich operations you can do in Julia.

I have done quite a bit in both languages, and it is extremely clear which language was designed for numerical analysis and which wasn't.
The language barely has support for type annotations. Duck typing is actually a very bad thing for doing numerical analysis, since you actually care a lot about the data types you are operating on.
Python's type system is so utterly inadequate that you need an external library to have things like proper float types.
You're just repeating that it's bad without any argument.
Numpy's syntax is the same as MATLAB's.
Both MATLAB and Julia also use dynamic typing, though it is true Julia does have support for ad-hoc static typing through annotations in a way that is stricter than Python's.
Numpy does provide all the traditional floating-point types.
That is just plain false. Can you try reading the numpy documentation? They have an article about exactly that topic and even say that numpy's syntax is more verbose.
If you read that documentation, you would also realize that numpy has a different approach to arrays than both MATLAB and Julia.
Here: https://numpy.org/doc/stable/user/numpy-for-matlab-users.htm...
"MATLAB’s scripting language was created for linear algebra so the syntax for some array manipulations is more compact than NumPy’s."
The only difference is that MATLAB has more operators like matrix multiplication in addition to element-wise multiply, but that applies to Julia too.
No, that is not the only difference. I literally linked you an article in the numpy documentation which goes over dozens of differences, large and small. I encourage you to read the section about the different numpy types; those concepts don't exist in MATLAB, where everything is an n-dimensional array and matrices as such don't exist as a special object.
There are also many differences between Julia and MATLAB, although Julia obviously borrowed a lot from MATLAB, for the obvious reason that MATLAB had a similar design goal for its language.

That doesn't change the fact that Python will never fully overcome not having been designed for numerical analysis, while both MATLAB and Julia were (although those two also had somewhat different design objectives themselves, especially when it comes to the role of functions).
All operators, you say? Which of the following? How about these ones? Or these? These? There's a few more, don't worry.

I'm just guessing, but:
1. Open Source.
2. Easy to learn. (kind of)
3. Robust plotting and visualization capabilities, which are integral (I f*cking swear no pun intended) to understanding calculus, the foundational purpose of calculus being to find the area under the curve. (something I really wish they told me day one of Pre-Calc)
4. This one is just a vague feeling, but the fact that Python's most robust Symbolic Regression package, SymPy, relies on running Julia in the background to do all the real work suggests to me that Julia is somehow just superior when it comes to formulas as opposed to just calculations. IDK how, though.
1-3 apply exactly the same to Python, and 4 is just false: SymPy is written in Python; it has nothing to do with Julia.
They're thinking of PySR but you are right on all points!
I also clicked around and felt the formatting and progression of the book was a bit confusing, but found some of the Julia features intriguing. (like “postfixing” allowing the same pencil notation of f’ and f’’ et al)
In my opinion, and experience, the best “calculus book” is Learn Physics with Functional Programming which only relies on libraries for plotting, and uses Haskell rather than Julia.
https://www.lpfp.io/
Julia is known as a “programming language for math” and was designed with that conceit steering a lot of its development.
Explicitly it supports a lot of mathematical notation that matches handwritten or latex symbols.
Implicitly they may be referencing the simplified (read: Pythonic) syntax, combined with broad interoperability (this tutorial uses SymPy for a lot of the heavy lifting), lots of built-in parallel computing primitives, and its use of JIT compilation allowing for fast iteration/exploration.
Thank you for reminding me of LPFP!
Julia inherited a lot of MATLAB DNA, like linear algebra in the standard library and 1-based indices. In addition it has Unicode support that actually gets used, and it can intentionally look like pseudocode in many cases.
Multiple dispatch is the right abstraction for programming mathematics. It provides both the flexibility required to create scientific software in layers (science has a LOT more function overloading than any other field) and the information required to compile to fast machine code.
This is a good overview:
https://computationalthinking.mit.edu/Spring21/
The very short story is that it's a language somewhat similar to Python (and, likewise, easy-to-use), but with much richer syntax for expressing mathematics directly.
It also has richer notebooks. The key property is that in Python notebooks you run cells, whereas Julia notebooks (e.g. Pluto) track dependencies between cells. If I change x, either as a number or via a slider, all the dependent things update. You can define a plot, add sliders, and it just works.
(Also: I'm not an expert in Julia; most of my work is in Python and JavaScript. I'm sure there are other reasons as well, but the two above come out very clearly in similar courses)