Cute. All kidding aside, though, functional programming is worth the effort to learn, and it doesn't actually take 15 years. The payoff is at the end of the article:
"It’s quite natural to program in Haskell by building a declarative model of your domain data, writing pure functions over that data, and interacting with the real world at the program’s boundaries. That’s my favorite way to work, Haskell or not."
Haskell can be intimidating, though, so I would recommend F# for most beginners. It supports OOP and doesn't require every single function to be pure, so the learning curve is less intense, but you end up absorbing the same lesson as above.
Yes - the value of functional programming isn't that working in OCaml, or F#, or Haskell is 10x as productive as other languages, but that it can teach you worthwhile lessons about designing software that apply equally to imperative languages.
Modelling the business domain, reasoning about and managing side effects, avoiding common imperative bugs: these are all valuable skills to develop.
F# is a great language to learn, and very approachable. The worst part about it is interacting with antiquated .NET APIs. (I can't believe the state that .NET support for common serialization formats is still in...)
This is not true in my personal experience.
As has been famously said (paraphrased): Functional programming makes tough problems easy and easy problems tough.
In other words the value of functional programming depends on your domain.
That needs a qualifier: it can make easy problems tough if you're not familiar with how to solve them in a functional context.
A big part of that is because smart people have already solved the tough problems and made them available as language features or libraries.
Not really, certain problems are just inherently harder to express in a purely functional way than they are in an imperative way (and the reverse is just as true). For example, computing a histogram is much simpler in imperative terms (keep an array of histogram values, go through the original list, add 1 to the array element corresponding to the current element in this list) than in a purely functional style, especially if you need a somewhat efficient implementation.
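For contrast, a minimal Haskell sketch (mine, not the commenter's) that folds the list into a Data.Map: concise, but each update is an O(log n) operation on a persistent tree rather than an O(1) bump of a mutable array slot, which is exactly the efficiency caveat being made.

    import Data.List (foldl')
    import qualified Data.Map.Strict as Map

    -- Fold the input into a map of counts; insertWith (+) bumps the
    -- count for a key, starting at 1 the first time the key appears.
    histogram :: Ord a => [a] -> Map.Map a Int
    histogram = foldl' (\m x -> Map.insertWith (+) x 1 m) Map.empty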
My favorite example of this is implementing quicksort. It's significantly easier in C than it is in Haskell.
Oh please, what's so hard about
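    -- (the classic naive "quicksort", reconstructed from the replies below)
    quicksort :: Ord a => [a] -> [a]
    quicksort []     = []
    quicksort (x:xs) = quicksort lesser ++ [x] ++ quicksort greater
      where lesser  = [a | a <- xs, a < x]
            greater = [a | a <- xs, a >= x]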
:) Folks, take that with a big ol' /s; you would never want to actually use that algorithm. But the real deal isn't all that awful: https://mmhaskell.com/blog/2019/5/13/quicksort-with-haskell
Well, I'd say using an entirely different collection type than the rest of language (STArray instead of [a]) is already a big complication. It also ends up being more than double the size of the Java code. And, as the author admits, it's actually even slower than the original not-quicksort implementation above, because it actually has to make a copy of the original list, and then return a copy of the mutated array.
So, one of the best sorting algorithms ever devised is not actually usable to sort a [a] in Haskell... I maintain this is a good example of making an easy problem tough.
Haskell uses many different collection types, just like any other language. Why not?
Sure, but it could also not do that, if callers are happy to provide a mutable array, just like any other language ...
Indeed! One of the best algorithms for sorting a mutable array can't be used on an immutable data type, just like any other language ...
None of this invalidates your original claim that "It's significantly easier in C than it is in Haskell" of course.
quicksort2 has a very reasonable constraint of Array, it's the helpers that use the STArray implementation. I suspect it wouldn't be hard to port to MArray, though I don't know that it would perform any better (maybe avoiding some copies since MArray is actually usable outside of runST). I also suspect the overhead of the copies pays off with larger lists given the lack of space leaks compared to the naive algorithm. Some benchmarks of different-sized lists would have been nice.
I'm not a cheerleader for Haskell, it's not the most appropriate language for every job (certainly not for quicksort). But some folks suddenly become hyper-optimizing assembly programmers whenever anyone has the thought of porting a sql crud app to another language... Horses for courses and all that.
I know you're being sarcastic, but even in this linked-list quicksort, it could be made more efficient. By defining lesser and greater separately, you're traversing the list twice. You could compute them at the same time by changing those last two lines to this:
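(Presumably something along these lines, using partition from Data.List to split in a single pass:)

      where (lesser, greater) = partition (< x) xs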
I just learned about the existence of this function today, and wonder why I had never seen it used before. https://hackage.haskell.org/package/base-4.20.0.1/docs/Data-...
Heh, I've used some version of partition in my TS/JS code ever since underscore.js. But "Haskell Quicksort" is basically a code meme now, and I didn't think to optimize it (though I think the original was a one-liner using list comprehensions)
Arguably the issue with "quicksort in Haskell" is not that it's "hard" to implement, but rather it defeats the whole purpose of using a "purely functional" language.
The pragmatic way to look at Haskell is not that it's purely functional, but rather that you could write imperative code (er, Monads) if you wanted, and that gives a "functional by default" environment, whereas most imperative languages default to mutable objects etc. that are not friendly to functional-style programming.
But then more modern languages are catching on with immutable-by-default variables etc., so in the end the differences between newer languages may not be that great after all...
Absolutely! Any beginner can readily combine the catamorphisms and anamorphisms in `recursion-schemes`, or use the ready-made hylomorphisms for common tasks such as setting a value in a data structure. What could be simpler? /s
https://wiki.haskell.org/Zygohistomorphic_prepromorphisms
They have played us for absolute fools.
All problems are easy if you are familiar with how to solve them. Unfortunately, finding out how to solve them is part of the problem, and that can be unusually hard in the case of functional programming. Like solving something with recursion instead of loops + state. There is a reason cookbooks use loops, not recursion.
How do you “Hello World” in a functional language? Doesn’t it have side effects?
The string "Hello World" evaluates to itself, what else do you need?
Edit: Eh, I thought it was a fun quip.
I laughed
Never forget PHP's "Hello World":
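(The joke: PHP echoes anything outside <?php ?> tags verbatim, so a file containing nothing but the text below is a complete program that prints it.)

    Hello World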
Yes, and AFAIK, you're pretty much free to cause side-effects in functional languages; it's just a bit awkward and somewhat discouraged. It's kind of like how C still has goto, yet it's still a structured programming language.
Even in Haskell, which tries to control side-effects more, it's not hard; it's just that it's stuck with an "IO" annotation. Any function that calls an IO function also becomes an IO function; it's infectious.
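A minimal sketch of that infectiousness:

    -- putStrLn is an IO action, so greet must be typed IO (),
    -- and so must anything that calls greet, all the way up to main.
    greet :: IO ()
    greet = putStrLn "Hello World"

    main :: IO ()
    main = greet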
There's some real confusion about what "functional" means, because it depends on who's speaking. Writing _exclusively_ pure functions is a Haskell thing. A much looser definition of the functional style would be to say that you mostly write pure functions over immutable data, but when you have to actually do a side effect you write a procedure that talks to the outside world and go on with your day. If you go digging in this site's archives from about ten years ago, I recall numerous debates about what constituted functional programming, and whether the latter counts at all. But if we _are_ talking about Haskell, the answer to your question is obviously "Monads."
It has an effect. Whether it's a "side effect" depends on how we've defined that.
One way of viewing Haskell is that you are lazily constructing the single composite "effect on the world" called main.
An action like putStrLn "Hello World" is then a value representing an effect, but it only actually happens when it becomes a part of main. Threads complicate this but don't completely destroy the model.
What easy problems are tough in F#? I’ve been using it for writing random scripts and as a Python replacement.
Writing recursive descent parsers in F# is a lot of fun with ADTs and pattern matching.
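For a taste of the shape (a toy sketch in Haskell, the thread's other example language; the grammar and names are invented):

    import Data.Char (isDigit)

    data Expr = Num Int | Add Expr Expr
      deriving Show

    -- Recursive descent: each rule consumes tokens and returns the
    -- parsed value together with the tokens it didn't consume.
    parseExpr :: [String] -> Maybe (Expr, [String])
    parseExpr ts = do
      (lhs, rest) <- parseNum ts
      case rest of
        ("+" : rest') -> do
          (rhs, rest'') <- parseExpr rest'
          Just (Add lhs rhs, rest'')
        _ -> Just (lhs, rest)

    parseNum :: [String] -> Maybe (Expr, [String])
    parseNum (t : rest) | not (null t), all isDigit t = Just (Num (read t), rest)
    parseNum _ = Nothing

    -- parseExpr ["1","+","2"] == Just (Add (Num 1) (Num 2), [])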
It's an overgeneralisation.
If you try and turn F# into Haskell at home you may run into that problem.
F# is a functional-first language, so if an object-oriented or procedural solution is the right call, the option is right there when you need it.
Maybe my phrasing is not clear - I meant that these languages are indeed not significantly more productive.
But (and I agree with the GP) they are. They are overwhelmingly more productive, in a way that you often can't even compare quantitatively.
They are also a lot less productive. It depends entirely on what you are doing.
By what measure? Haskell can be a huge productivity multiplier. The standard library is built upon many powerful, unifying and consistent mathematical abstractions. For example, there is almost no boilerplate to write for any traversal, mapping, error handling etc. The average Pythonista simply has no idea what they are missing. But Haskell doesn't have anywhere near the third party ecosystem of Python, so is less productive by some measures.
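A small illustration of that lack of boilerplate (my example, not the commenter's): traverse combines mapping and error handling in one line, for any traversable container.

    import Text.Read (readMaybe)

    -- Parse every string; the result is Nothing if any single parse fails.
    parseAll :: [String] -> Maybe [Int]
    parseAll = traverse readMaybe

    -- parseAll ["1","2","3"] == Just [1,2,3]
    -- parseAll ["1","oops"]  == Nothing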
It’s only tough to change your way of thinking. Most people making the switch find it tough because they are trying to find imperative techniques to do something in a functional way and struggling because they can’t find an if else statement or a for loop. But if you were never taught to think in terms of conditional branching or looping indexes you’ll save a lot of time.
Exactly. I've long held the sentiment that pure functional programming is easier than imperative programming. The only hard part is switching from an imperative mindset.
And because of mutual recursion, that means that tough is easy (and easy tough). In other words, if we call the class of tough problems T and easy problems NT, we have T==NT, given FP.
So you’re saying that it does make you 10x as productive?
Hot take of the day: you learn that with imperative programming just as well.
I familiarized myself with FP to the point of writing Scheme and Haskell around 15 years ago. Read the classics, understood advanced typing, lambda calculus and so on. The best "FP" I'm using nowadays is closures, currying in the form of func.bind(this[, first]), and map/filter, all of which are absolutely learnable by means of closures, which are useful but which I could live without. Sometimes not having these makes you write the effing code instead of fiddling with its forms for hours.
Still waiting for the returns from the arcane FP-like code I produced earlier. I can recognize and understand none of my projects in this style that I bothered to save in VCS. Imperative code reads like prose; I have some of it still in production since 2010.
These FP talks are disguised elitism imo (not necessarily bad faith). Beta reduction and monadic transformers sound so cool, but that’s it job-wise.
In theory, you could pick up your one language, say, Java, and through the course of a normal career learn everything necessary to program in that language in the best possible way.
In practice, it's a pretty well-known phenomenon experienced by many skilled programmers that being forced into different styles by different languages results in learning things that you would only have learned very slowly if you had stuck only to your original language. To be concrete about the "very slowly", I'm talking time frames of your entire career, if not your entire life and beyond. It would be a long time programming in Java before you discover the concept of something like "pure functions" as a distinct sort of function, a desirable sort of function, and one that you might want to organize your programming style around.
Of course, having heard of the concept already, we'd all like to fancy ourselves smart enough to figure it out in less than, say, three decades. But we're just lying to ourselves when we do that. Even the smartest of us is not as smart as all of us. You are not independently capable of rediscovering everything all the various programming communities have discovered over decades. If you want to know what even the smartest of us can do on their own without reference to decades of experience of others, you can look into the state of computer programming in more-or-less the 1980s, 90s if you're feeling generous. I think we've learned a lot since then, and the delta between the modern programmer and a 1980s programmer certainly isn't in their IQ or anything similar, it is in their increased collective community experience.
By getting out into systems that force us to learn different paradigms, and into communities that have learned how to use them, we draw on the experience of others and cover far more ground than we could ever have covered on our own, or in the context of a single language where we can settle into a local-optimum comfort zone. Jumping out of your original community is kind of an annealing process for your programming skills.
"The best “fp” I’m using nowadays is closures, currying in the form of func.bind(this[, first]) and map/filter."
That is really not the lesson about software design that FP teaches, and blindly carrying those principles into imperative programming is at times a quite negative value, as your experience bears out. FP has more to say about purity of functions, the utility of composition of small parts, the flexibility of composition with small parts, ways to wrap parts of the program that can't be handled that way, and providing an existence proof that despite what an imperative programmer might think it is in fact possible to program this way at a system architecture level. I actually agree 100% that anyone whose takeaway from FP is "we should use map everywhere because they're better than for loops and anyone who uses for loops is a Bad Programmer" missed the forest for the leaves, and I choose that modification of the standard metaphor carefully. I consider my programming style highly influenced by my time in functional programming land and you'd need to do a very careful search to find a "map" in my code. That's not what it's about. I'm not surprised when imperative code is messed up by translating that into it.
"In theory, you could pick up your one language, say, Java, and through the course of a normal career learn everything necessary to program in that language in the best possible way."
OK, then you know about currying, immutable data structures, map/reduce/filter, &c.
Because Java has had that since way back when. No real closures, I think, but that doesn't matter much, because the anonymous functions do what you want pretty much all the time, and you could probably invent your own closures if you really wanted something else.
> OK, then you know about currying, immutable data structures, map/reduce/filter, &c.
It's not a certainty you learn about those things from Java, depends on your team/manager/codebase. None of that is enforced in Java the way it is in fp. Plus, none of it is really core or native to Java, it was added on later.
That's how we got essays back in the 2000s like "The Perils of Java Schools", "Can Your Language Do This", and "Beating the Averages".
https://www.joelonsoftware.com/2005/12/29/the-perils-of-java...
https://www.joelonsoftware.com/2006/08/01/can-your-programmi...
https://paulgraham.com/avg.html
The constraint here is "the best possible way".
That might be, probably is. But the representation of FP gets mostly done by those who are only halfway there, creating an impression that it is a better way of programming overall, when it’s just a mixed bag of approaches dictated by a set of esoteric languages (from business pov). The worst part is that it doesn’t translate verbatim to any non-fringe language and creates a mess in it, due to adopted inertia. At least that is my experience with FP “recruitment”.
I wish I skipped this FP tour completely and instead learned how to structure my programs directly. Could save me a year or five. Maybe there’s no better way, but in practice clear explanations are always better than these arcane teachings where you repeat something until you get it by yourself.
> FP has more to say about purity of functions, the utility of composition of small parts, the flexibility of composition with small parts, ways to wrap parts of the program that can't be handled that way, and providing an existence proof that despite what an imperative programmer might think it is in fact possible to program this way at a system architecture level.
Adding to that, in my case it also made me realize that deterministic elimination of entire classes of errors in large, complex code bases, in a systematic rather than ad-hoc way, is actually possible. Prior to discovering fp, and particularly Haskell's type system, I spent much effort trying to do that with a combination of TDD and increasingly elaborate try/catch/throw error handling. Discovering Haskell's compiler, type system, and monadic quarantining of effects obsoleted all that effort and was a huge eye-opener for me. And a nice side-effect is easy, reliable refactorability. Being able to apply those concepts to imperative and other programming paradigms is where the real value in fp is, imho. Programmers still wrangling with the Tarpit [1] need to take a look if they haven't already.
[1]:https://news.ycombinator.com/item?id=34954126
They may be disguised mathematics. People are into math because it is neat / elegant / cool. So they study it regardless of whether it has a practical use or not.
Mathematics is just a kind of programming. And vice versa.
Programs fundamentally have state, while mathematical equations do not. Math is at its core declarative, while programming is essentially imperative, at least under the hood.
Some programmers have serious math envy. This can be good if they are self-aware about it and keep it in check, because it makes them better programmers. Otherwise they can be a pain to work with. Seniors should be people who have dealt with this aspect of their own talent, not juniors who are promoted in spite of it, or because of it.
I commonly implement things in an imperative style as a quick hack, then if it gets use I translate it into a more functional style. It kind of just happens as I clean it up and refactor during revisits.
It might be a matter of taste, but I enjoy code built with functional abstractions that allow neat composable data flows and some caches loitering around. I find it also helps when adding UI. Sometimes performance could be better with mutation, but when I'm at that point I've already spent much more time tuning the thing with caches.
Ocaml definitely doesn’t make you more productive
I have not used OCaml, but presumably Jane Street thinks OCaml makes their coders more productive.
I've looked at their code and I would not wish it on anyone to work there; it's on par with a large Java codebase at a banking institution.
I wouldn't even say antiquated; modern .NET APIs can suck to work with too. The entire ecosystem is written for C# and ASP.NET Core, and everything else feels second-class.
I love F#, but working with the C# elements of the language drove me away.
Which elements were the source of pain?
Writing web applications and back-ends in F#, as far as I'm aware, is straightforward and a joy because of integration with the rest of ecosystem.
That's interesting, because as someone who knows neither C# nor Java, I find F#'s OOP more intimidating than OCaml.
Also interesting that when FP is mentioned, Hindley-Milner is implicitly understood to be part of FP too even though it doesn’t have to be. Clojure emphasizes immutability and FP but with dynamic typing and everything that comes with that.
Is "emphasizes" just another word for second-class support?
C++ emphasizes the importance of memory safety.
I don't know what your personal definition of "second-class support" is, but what it means is that it's explicitly supported by the language.
C++ explicitly supports memory-safe programming. You can choose whether you want to mess around with raw pointer arithmetic.
What safeguards does the language actually put in place?
I don't think you know what you're talking about. Managing object ownership through systems like smart pointers is not memory safety. Applications that use smart pointers still suffer from memory issues, and it's possible to adopt object ownership systems that still use raw pointers, such as Qt's object ownership system.
Right. I sound just like someone talking about how "a language which emphasizes immutability" is an OK replacement for a language with pure functions.
The world is much less black and white than you’d like to see it.
Functions in Haskell, including in the Prelude, can throw exceptions that are not reflected in the type signature of the function. That is an effect that makes seemingly pure functions impure.
You can’t judge a language from a list of buzzwords. You need to look at how it is used in practice.
No, bottom, or _|_, is an inhabitant of every lifted type. An exception is bottom. So the / function is still pure even though it can throw a divide-by-zero exception.
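A small sketch of the point (using div, which does throw on zero):

    -- bad is well-typed and "pure" in Haskell's sense: it denotes
    -- bottom (_|_), and the exception only surfaces if the value is forced.
    bad :: Int
    bad = 1 `div` 0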
Does it make it type-safe though? In dynamic languages type errors also result in exceptions.
Immutability is definitely first class in clojure, but you can work with mutable structures when you need to.
This sounds like memory safety in C++.
https://www.infoworld.com/article/3714401/c-plus-plus-creato...
This seems like a meaningless criticism when it comes to immutability in Clojure. You can have mutability in Haskell too. That doesn’t make it as unsafe as memory management in C++.
Haskell enforces this via a type system.
What safeguards around mutability does Clojure have?
If I import a method 'foo()', is there any kind of contract, notation ... anything which could suggest whether it mutates or not?
Very nearly the entire language and standard library operate on immutable values only. Immutable values are the default, and you will use them for the vast majority of logic. You must do so, unless you very specifically opt to use dedicated reference types, at which point you’ll still need to produce intermediate immutable values to interact with that vast majority of the standard library.
And…
Functions which mutate state are almost always suffixed !. They will typically fail if you pass them immutable values; they only operate on reference types, which have to be dereferenced (typically with the prefix @) to access their state.
Doesn't the "O" in OCaml stand for "Object", though? I think you could pick up either F# or OCaml just as easily.
The nuances of OOP in F# can be ignored by beginners, so I really wouldn’t let yourself be intimidated coming from Clojure.
[0] https://ocaml.org/docs/objects
OCaml classes and objects are (ironically) rarely used and generally discouraged. There are some cases where they’re practically required, such as GUI and FFI (js_of_ocaml). But otherwise, most code does encapsulation and abstraction using modules and functor modules (which are more like Haskell and Rust typeclasses than traditional OOP classes).
I don’t know much about F#, but last time I used it most of its standard library was in C# and .NET, so F# code would interact with objects and classes a lot. AFAIK F# also doesn’t have functor modules, so even without the dependence on C# code, you still can’t avoid classes and objects like you can with OCaml (e.g. you can’t write a generic collection module like `List` or `Set` without functors, it would have to be a collection of a specific type or a class).
F# uses .NET's generics, so the statement regarding List/Set is completely incorrect (all base collections are generic).
F# has "generics" just like Python and PHP now "have types".
It's not a yes/no feature.
Give F# a try. It has, and always had, true generics.
https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref...
FTFM: It's not a true/false feature.
Alright. Humor me, what is the issue with F# generics as compared to other languages with generics? Which implementation (that is productively useful) is a "true" one?
I think you misread their claim - they said that generic list/set would have to be classes, not modules (generic modules are a specific thing in OCaml and aren't the same as a module of generic classes).
I was a college dropout and self-taught bash and python programmer, and quite some time ago I read about Haskell, decided to teach myself to use it, and then realized I had absolutely no idea what programming actually was, and basically spent the next 15 years teaching myself computer science, category theory, abstract algebra and so on, so that I could finally understand Haskell code.
I still don't understand Haskell, but it did help me learn Rust when I decided to learn that. And I think I could explain what a monad is.
edit: It's a data structure that implements flat map. Hope this saves someone else a few years of their life.
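In Haskell terms that "flat map" is (>>=), which for lists is concatMap; a minimal sketch:

    -- Each element of the first list is mapped to a whole list,
    -- and the results are flattened: that's flat map.
    pairs :: [(Int, Int)]
    pairs = [1, 2] >>= \x -> [10, 20] >>= \y -> [(x, y)]
    -- == [(1,10),(1,20),(2,10),(2,20)]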
It's not you. Haskell has very bad syntax. It's not hard to understand if you rewrite the same things in something saner. Haskell was developed by people who enjoy one-liners and don't really need to write practical programs.
Another aspect of Haskell is that it was written by people who were so misguided as to think that mathematical formulas are somehow superior to typical imperative languages, with meaningful variable names, predictable interpretations of sequences of instructions, etc. They, instead, made it all bespoke. Every operation has its own syntax, and variables are typically named as they would be in math formulas (e.g. x and x'). This makes no sense and is, in fact, very harmful when writing real-world programs, but because, by and large, Haskell never rises to the task of writing real-world programs, it doesn't deter the adepts from using it.
You knew Paul Hudak, Simon Peyton Jones, Phil Wadler, etc? Were they thinking about the benefits of mathematical formulas over program counters and procedural keywords when designing Haskell?
I was under the impression from the History of Haskell [0] that they were interested in unifying research into lazy evaluation of functional programming languages.
Gosh, what am I doing with my life? I must have made up all those programs I wrote on my stream, the ones I use to maintain my website, and all the boring line-of-business code I write at work. /s
In all seriousness, Haskell has its warts, but being impractical isn't one of them. To some purists the committee has been overly pragmatic with the design of the language. As far as functional programming languages go it's pretty hairy. You have "pure" functions in the base libraries that can throw runtime exceptions when given the wrong values for their arguments (i.e. the infamous head function). Bottom, a special kind of null value, is a member of every type. There exist functions to escape the type system entirely that are used with some frequency to make things work. The committee has gone back more than once to reshape the type-class hierarchy, much to the chagrin of the community of maintainers who had to manually patch old code or risk having it no longer compile on new versions of the base libraries. These are all hairy, pragmatic trade-offs the language and ecosystem designers and maintainers have had to make... because people write software using this language to solve problems they have, and they have to maintain these systems.
[0] https://www.microsoft.com/en-us/research/wp-content/uploads/...
You don't need to know the author personally to appreciate the result of their work...
I would ask the same question, but unironically. No, you didn't make up those programs of course. I didn't claim that real-world programs are impossible to write in Haskell. I claimed that Haskell is a bad tool for writing real-world programs. People make sub-optimal decisions all the time. That's just human nature... choosing Haskell for any program that would require debugging, long-term maintenance, cross-platform UI, or plenty of other desirable properties is just a very bad choice. But people like you do it anyways!
Why? -- there are plenty of possible answers. If I wanted to look for the flattering answers, I'd say that a lot of experienced and talented programmers like Haskell. So, choosing to write in Haskell for that reason isn't such a bad idea. But, if I wanted to judge Haskell on its engineering rather than social merits: it has very little to bring to the table.
Strange. In my experience Haskell is the best language for long-term maintainability! I can actually come back to code I've written years ago and understand what it does. I've never experience that with another language.
This is not really what maintainability means to me. To me, it just means you have a good memory.
To be able to maintain something you need to be able to transfer the ownership of the piece of code to someone else. You need to be able to amend the code easily to extend or to remove functionality. It means that the code can be easily split into pieces that can be given to multiple developers to work simultaneously towards a common goal.
Haskell scores poorly on all of those points.
It's very hard to transfer ownership because Haskell programs have too much bespoke syntax that anyone but the original author will not be familiar with. Haskell programs are very hard to understand by executing them because there's no chance of a good step debugger due to the language being "lazy". Haskell programs are always "full of surprises" because they allow a lot of flexibility in the interpretation of the same symbols. The problems C++ programmers complain about when faced with operators overloading are the kind of things Haskell programmers call "Tuesday".
"Pure" functions are a lot harder to modify to add functionality. Typically, such modifications would require swiping changes across the entire codebase. One may argue that this preserves "single responsibility" constraint, but in practice it means a lot more work.
Objects and modules were advertised as a solution to code modularity -- another necessary quality for maintainability. While Haskell has modules, it doesn't really have smaller units of encapsulation other than functions. If two programmers are tasked to work on the same module, they aren't going to have a good time.
I don't know what to tell you. I've been successfully using Haskell in production for over ten years, and it's the most maintainable language I've used for a variety of reasons, including the one given in my message above.
Interesting. I work at a company with over a hundred people committing to a Haskell code-base that runs our entire back-end. As far as I can tell we're continuously shipping code several times a day, every day, with very low impact on our service level indicators.
When we do have production issues they're rarely attributed to Haskell. I can think of one incident in 3 years where a space leak was caught in production. It was straightforward to address and fix before it impacted customers.
In practice, Haskell has been amazing to work with. I just rewrote over a thousand lines of code and shipped that to production without incident. I've never had that go off so easily in any other language I've worked with (and I've been around for over twenty years and worked in everything from C and C++ to Python and JS).
I think we can drop the "real-world," qualifier as it is unlikely there is an "imaginary-world" that we write programs for.
What engineering merits did you have in mind?
I think academia is typically what's meant as an imaginary world, along with play that may be in reference to no world at all.
Just to give a different pov I find Haskell very intuitive, and particularly I find that code written by other people is very easy to understand (compared to Java or TypeScript at least).
And by the way, x and x' are totally fine names for a value of a very generic type (or even a very specific type, depending on the circumstances), as long as the types and the functions are decently named. I mean, what else would you call the arguments of

    splitAt :: Eq a => a -> [a] -> [[a]]

? There is no need for anything more complex than

    splitAt x xs = ...
Those don't seem to be names of parameters, but rather of types. It's missing parameter names entirely.
I spent a good 2 minutes looking at that signature trying to figure it out (and I've read some Haskell tutorials so I'm at least familiar with the syntax). This would've helped:
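(For instance, a signature that spells out the role of the first argument; my guess at the missing snippet:)

    -- splitAt separator list
    splitAt :: Eq a => a -> [a] -> [[a]]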
`sep`, `separator`, `delim`, `delimiter` would've been good names. The rest of the definition is at the end, to see it as a whole:
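(The original snippet is missing; a plausible sketch of such a definition:)

    -- Split a list into chunks at every occurrence of a separator.
    -- (Shadows the Prelude's splitAt.)
    splitAt :: Eq a => a -> [a] -> [[a]]
    splitAt sep xs = case break (== sep) xs of
      (chunk, [])       -> [chunk]
      (chunk, _ : rest) -> chunk : splitAt sep rest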
To clarify, I assumed that by using the constraint `Eq a` and the name splitAt there was no need for extra clarification in the names of the parameters, but apparently I was wrong.

That's what (some) other people do. None of that stops you writing Haskell in whatever style you want, with meaningful variable names, curly braces and semicolons!
Unfortunately, writing isn't even half the battle. Before you start writing, you need to read a lot. And Haskell code is, in general, atrocious. It always feels like there was a normal way to do something, but the author decided to choose the most convoluted way they could imagine to accomplish the same thing, for the sake of a bizarre fashion sense.
I think that's a good starting definition for programmers, but still could cause confusion when you run into something like IO in Haskell. IO isn't really a data structure, and it's hard to fit the "flat map" concept to it.
If you want you can still keep this point of view, by saying that IO is conceptually a data structure that builds a description of what the program does. In this point of view it follows that there is another, impure program that interprets the IO data structure and actually performs the computations
(Of course in practice IO isn't implemented like this, because it would be too slow)
(But in every other language, like Javascript or Python, you can define IO as a data structure. Or even in Haskell itself, you can define for eg. a free monad that gets interpreted later, and it can be made to be just as good as IO itself, though typically people make it less powerful than IO)
However note that every other "computational" monad (like the list monad or the Cont monad) actually is a data structure, even though they describe effects just like IO does. This is because IO is the only possible exception to the "monads are data structures" thing in Haskell (if you don't subscribe to the above view), because Haskell doesn't let you define types that aren't data structures
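A minimal sketch of that view (the types and names here are invented for illustration): the program is an ordinary data structure, and an impure interpreter at the boundary performs it.

    -- A tiny command language, represented as plain data:
    data Prog a
      = Done a
      | PutLine String (Prog a)
      | GetLine (String -> Prog a)

    -- A pure value describing an interaction:
    hello :: Prog ()
    hello = GetLine (\name -> PutLine ("Hello " ++ name) (Done ()))

    -- The impure interpreter lives at the program's boundary:
    run :: Prog a -> IO a
    run (Done a)      = pure a
    run (PutLine s k) = putStrLn s >> run k
    run (GetLine k)   = getLine >>= run . k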
The only issue with this point of view is that you now need to say what flatMap means for things that are not shaped like arrays. Eg. does it make intuitive sense to flatMap a tree? (A retort is that it must make sense, whatever you call this operation; and flattening a tree means to turn a tree of trees into an one-level tree)
I feel like functional programming is pretty trivial. It's pure programming that is very difficult.
They're often conflated because Haskell is pure and functional and probably the most talked about heavily functional language.
I certainly didn't know for ages that impure functional languages like OCaml existed.
Is Haskell pure?
It has exceptions
You can divide by zero
It has unsafe IO primitives
You're right: "pure" is not a well-defined concept. The well-defined concept that describes Haskell's benefits in this regard is "referential transparency". That means that this code
    let x = someExpression
    in  f x x

(i.e. defining a variable x and then using it some number of times) is equivalent to

    f someExpression someExpression

Seen in the opposite direction (transforming the bottom code into the top code), this means that extracting repeated code is always a valid thing to do. It's not valid in most other languages, and certainly in no mainstream ones.

Well, technically that isn't true if you use, for example, unsafePerformIO in the definition of x. Referential transparency is still a spectrum, just like purity. Haskell is much closer to purity than the vast majority of languages out there.
Also, even if Haskell were perfectly pure, the fact that it uses lazy evaluation is far more important to actually being able to make use of referential transparency. In a strict language you will still see a massive performance difference when replacing the first version with the second, in the most common cases.
Ah, well, regardless of whether it holds in Haskell, referential transparency is a well-defined concept. Purity is not a well-defined concept (at least as far as I know. Please share a formal definition if you have one!). That's primarily what I'm trying to say.
But I also disagree with your point about unsafePerformIO. In practice, nothing in Haskell violates referential transparency in a significant way. Who knows why? It's an emergent phenomenon that in principle need not have occurred, but in practice it did. Debug.Trace and similar are about the only things that technically violate referential transparency (and they are extremely tame).
Yes, I agree with that.
It seems that someone did come up with a formal definition to try to capture the concept [0], though I haven't looked into the details to see whether it really matches the colloquial use of the terms "pure" and "impure" functional programming. In short, the formal definition they came up with is that a language is pure if the result of a valid program is the same under any parameter passing strategy.
I should note that I agree that, in practice, Haskell is almost always pure and/or referentially transparent. I was just pointing out that technically the GP was correct that it's not perfectly 100% so.
[0] https://www.cambridge.org/core/journals/journal-of-functiona...
Sabry's definition of "pure" fails to satisfy me for two reasons:
1. It assumes that the language is "a conservative extension of the simply typed λ-calculus". That's rather high-powered yet also really restrictive! Haskell doesn't satisfy that requirement. It also assumes the language has functions. Expression languages (i.e. ones without functions) are perfectly reasonable languages and it makes sense to ask whether they are pure.
2. It assumes that a language putatively has multiple evaluation orders (which I suppose is a consequence of the assumption "It is a conservative extension of the simply typed λ-calculus"). Haskell doesn't have multiple evaluation orders. It has one! (notwithstanding you can influence it with seq/!)
If you unpick the essence of what Sabry's really saying you find you can translate it into the Haskell world through imposing two conditions:
C1. soundness of the β-axiom (a.k.a. referential transparency) (this plays the role of Sabry's condition that call by need and call by value have the same result).
C2. That lazy application

    f x

gives the same result as strict application

    f $! x

whenever the latter terminates. (This plays the role of Sabry's condition that call by name and call by value have the same result.) I omitted this condition from my original. I probably shouldn't have, because technically it's required, but it risks getting into the weeds of strictness versus laziness.

So Sabry's definition of "pure" is a long-winded and restricted way of saying something that can be much more conveniently expressed by C1 and C2. If you disagree with me, please demonstrate a property of purity that can't be deduced from C1 and C2!
OK, fine, but I also said the GP was correct! I am keen to point out, however, that exceptions (including division by zero) do not violate referential transparency (and if someone thinks they violate "purity" that may be a sign that "purity" is ill-defined).
I feel like exceptions were added as a mix of "look, we can do that too" and "maybe if so many functions return optional values then it is going to be too much of a pain to use".
In hindsight I think few would now regret not having added them in the first place.
I strongly believe that there is a point in the PL design space that makes optionals everywhere usable. Maybe Haskell can still be the language that delivers this.
To plug my own solution, my effect system Bluefin makes exceptions visible in the type, well-scoped, and also freely composable with all other effects:
https://hackage.haskell.org/package/bluefin-0.0.3.0/docs/Blu...
It is pure in the same way that Rust is memory safe. That is to say, there are a tiny number of exceptions/escape hatches, but they are not meant to be the norm. Everyday programming doesn't involve them.
Exceptions aren't impure anyway.
Exceptions define an effect. Code with exceptions isn't actually pure, in the sense that its return type doesn't fully describe what the code does: it doesn't just map an input to an output; there is something else going on.
In some pure functional languages, the pure fragment doesn't have exceptions, and you add exceptions as an effect (or as a monad)
(If you reify the effect with a type like Either or Result, then the code becomes pure: but that's just a monad like said above)
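A small sketch of that reification:

    -- The possible failure is now visible in the return type,
    -- so the function is an ordinary pure mapping.
    safeDiv :: Int -> Int -> Either String Int
    safeDiv _ 0 = Left "divide by zero"
    safeDiv x y = Right (x `div` y)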
Anyway I really like the take that Haskell is pure in the same sense that Rust is safe
Haskell has impure constructs but sets you up to define referentially transparent abstractions with them, and this is actually valued by the Haskell community
To be tongue in cheek then it also has the side effect of heating the CPU.
Sir have you heard of GADTs
But why do we need Haskell for this?
Realistically we don't, but it's very rare to meet a programmer who understands these distinctions who isn't also a great functional programmer.
This is my experience after spending five years as a Haskell programmer and managing a Haskell team for several years and now moving back to the c++ world to play with AI.
I know lots of good C++ programmers working on cutting-edge stuff, real experts in their field, but they sometimes still don't have a clear understanding of how to model data.
That is my opinion. It's probably highly contentious.
I've actually had to fire a technically exceptional Haskell programmer because of the damage they did to our C# codebase (and arguably moreso, the team). Sometimes it's not a matter of talent or skill, but culture fit.
In my experience FP-aligned people on non-FP projects tend to be more likely to overengineer, more prone to argue in favor of the Great Rewrite For No Reason Except Aesthetics, and more likely to abuse "lesser" programmers when they put up PRs. They suck as team players on teams that are not made of language nerds. I am not just talking about the one person here who I fired, this is a legit pattern I've noticed over at least a half dozen people.
Conversely, they are exactly the right people to deploy when you have really tough, self-contained problems to solve that you wouldn't trust the normal Java 9-5ers to tackle.
No matter how they do it, you can always rewrite their working code in a more maintainable language later once it's working, and make it integrate well with the rest of your stack. :D
This is my experience, too. Some of the worst code I've seen came from a Haskell guy who first built his own (reactive?) concurrency framework and then implemented the actual functionality in completely unidiomatic and undocumented Java.
Some people don’t understand that the „best solution“ is not necessarily equal to the most beautiful abstraction they can think of.
There, fixed it for you.
But I have to be fair, whenever I see a demand for "idio(ma)tic code" I know that this is a place to avoid, no matter if they are imperatively or functionally inclined.
Your story matches my experience, but it always makes me think, why did this person want to work with you in the first place?
A great Haskell programmer (generally speaking) is going to be a culture misfit in any Java, C#, golang, etc shop. I know because I've been that miserable bastard who loves functional programming working with Java devs who don't know anything about FP and couldn't care less. To be clear I'm not saying you can't find a compatible Java shop (I actually did find a startup with a lot of Java devs who appreciated FP and used much of it in Java, and that was pretty great honestly), just that the odds are highly against you.
My biggest advice to people who like FP is: find a job in a language like Clojure, Elixir, Scala, etc. There are a lot more jobs than you'd think. But if you can't, Ruby and JavaScript/TypeScript can be pretty close, depending on where you go. Talk to existing devs and see how they feel about FP in general before you join, though!
I mean sure. Realistically, I'm certain I would do that if I were working on a typical code base, which is why I'm in an extremely niche field where that sort of thing is valued. From my extremely biased perspective, these are the 'hard' problems that need solving, versus the general run of the mill operational things. That probably sounds pretentious, but it takes skills for both.
Because for some reason there are no pure strict-by-default languages around.
I think Idris is the best example of a pure strict-by-default language (that also supports totality checking, I believe).
Yes, it does. It's also dependently typed.
PureScript is an example.
Elm is one example of such. However, it's also an illustration of why these languages are rare. With a strict semantics there's an almost unbearable temptation to add library functions with side effects. Elm only avoided this fate by giving its BDFL strict control over which packages could access the JS FFI. But that upset a lot of people.
All of them (especially the newer ones) are, except Haskell (and some other, nowadays either obsolete or really obscure, languages).
Idris (2), PureScript, Elm, Unison, Roc, Lean (4), Koka, Flix (and some other I've forgotten about).
I would recommend neither of those.
Haskell has very bad syntax and extensive backing from Microsoft (IIRC the guy who writes the compiler is a Microsoft Research employee).
F# is a straight-up Microsoft language.
It doesn't matter what other benefits it has. Just don't touch anything created by that company, and you will have one fewer regret in your life.
But, if you still want a language from that category: SML or Erlang would be my pick.
SPJ has left MSR and is now at Epic Games, working on a new PL. However, even while he was at MSR, MS didn't really have a say in how Haskell was developed.
Well, MS didn't have to do anything. It's enough that they have (or had) the opportunity to do something.
There isn't an Overmind in MS that in a creepy voice tells you to spawn more overlords. Less than that, there doesn't need to be a written document that tells you to give money to MS or your data etc. There's just a general accepted understanding among the people who run that company that ends justify the means. And by "ends" they mean them and their investors getting rich.
If the Haskell compiler could've been turned into a money-making machine, and it only required killing off half of all Haskell programmers, MS would be working overtime on the plan to hide the bodies, but they'd never even consider the possibility of the killing being bad... (metaphorically speaking, hopefully)
Do you also suspect homicidal money making motives behind Z3, Lean, and F*? It seems more likely to me that they just want some useful knowledge out of these projects that they can integrate into a product that actually sells.
I have the misfortune to know personally some of the mid-to-high-level execs from MS. What I write is based on the experience of working with these people. And I don't even know what Z3, Lean, or F* are, so pardon my ignorance, but I cannot answer your question.
What's wrong with Haskell's syntax? I think it's generally pretty nice though can be excessively terse at times.
From the point of view of writing a parser, Haskell's whitespace syntax seems like a hack. So, the grammar is defined with braces and semicolons, and to implement significant whitespace, the lexer inserts opening braces and semicolons at the start of each line according to some layout rules. That's not the hacky part; what makes it a hack is that to insert closing braces, the lexer inserts a closing brace when the parser signals an error. You can read about it here [0].
Also, on an aesthetic level, I think a lot of infix operators are kind of ugly. Examples include (<$>), ($), and (<*>). I think Haskell has too many infix operators. This is probably a result of allowing user-definable operators. I do like how you can turn functions into infix operators using backticks, though (e.g. "f x y" can be written as "x `f` y").
[0]: https://amelia.how/posts/parsing-layout.html
* Significant whitespace, with very convoluted rules around it.
* There's no pattern or regularity to how infix/prefix/suffix operators are used, which makes splitting program text into self-contained sub-programs virtually impossible if you don't know the exact behavior, including the precedence, of each operator.
* There's a tradition of exceptionally bad names for variables, inherited from the realm of mathematical formulas. In mathematics, it's desirable to give variables names devoid of everyday meaning to emphasize the generic nature of the idea being expressed. This works in the context of very short formulas, but breaks down entirely in the context of programs, which are usually many orders of magnitude bigger than even the largest formula you've ever seen. There, having meaningful names is a life vest.
* It's impossible to make a good debugger for Haskell because of the language being "lazy". Debuggers are essential tools that help programmers in understanding the behavior of their programs. Haskell programmers are forced to rely on their imagination when explaining to themselves how their program works.
* Excessive flexibility. For example, a Haskell programmer may decide to overload string literals (or any literals, for that matter). This is orders of magnitude worse than e.g. overloading operators in C++, which is criticized for defying the expectations of the reader.
One of these points would've been enough for me to make the experience of working with a language unpleasant. All of them combined is a lot more than unpleasant.
:-)
For the benefit(s) that you list, which are the best learning resources for F#?
https://fsharpforfunandprofit.com/
In modern Fortran, functions should be pure (although the language does not require this), and procedures that mutate arguments are made subroutines (which do not have return values).
Note that Fortran's interpretation of the term "pure" bizarrely allows a "pure" subprogram to depend on mutable state elsewhere (in a host, a module, or a COMMON block). So Fortran's "pure" functions aren't referentially transparent.
(F'2023 added a stronger form of "pure" and calls it "simple", but it didn't strengthen the places where a "pure" procedure should be required to be "simple", such as DO CONCURRENT, so being "simple" will be its own reward, if any compiler actually implements it. And a "simple" function's result value can still depend on a mutable pointer target.)
I really wanted to like F#, and I kinda do, but it has a number of quirks, compiler issues and cracks in the design that are getting worse:
First off, the compiler is single-pass. All your definitions have to be in order. This even extends to type hints - it can't use clues to the right to deduce the type of an expression on the left. This is supposedly for perf reasons, but the compiler can become extremely slow because the inference engine has to work so hard - slower than GHC for sure.
Speaking of slowness, Haskell is surprisingly fast. Idiomatic Haskell can be within 50% the perf of C, since its laziness and purity unlock powerful optimizations. F# is eager and the compiler doesn't do anything fancy. Perf often makes you reach for mutable state and imperative structure, which is disappointing.
The OOP paradigm feels bolted on. Should you use classes or modules? Pure functions or members? It depends, what mood are you in? Unfortunately only member functions support overloads, and overloads are useful for some SFINAE-type patterns with `inline` functions, so they get a bit overused.
`ref` structs, which are vital for zero-copy and efficient immutable data, have very primitive support; even C# is ahead on this.
Very limited support for implicit conversions, no support for type classes and no function overloading leaves F# with nothing like a numeric tower you'd have in Lisp, and makes building something like Numpy clunky.
I use C# at work, and I love Haskell, so I really wanted to love F#. But it just doesn't get the love it needs from MS, and some design decisions aren't aging well - particularly as C# itself evolves in directions that are tricky for F#'s aging compiler to support.
Or Elixir! Quite easy to grasp as well.
I would suggest Scala as FP for beginners. It doesn't force you to write pure functions, and it's really beginner-friendly to start with.
Having never done F# or Haskell, doesn't that start getting into the territory of languages that encourage functional programming, like Ruby or JavaScript (modern JavaScript)?
I feel the same pay-off - but arrived at that point via Clojure. Immutable-first, aim for purity, ability to drop out of it when necessary.
As stringent as you need it to be (static vs. dynamic types vs. specs), as flexible as you want it to be.