
I learned Haskell in just 15 years

munchler
133 replies
2d13h

Cute. All kidding aside, though, functional programming is worth the effort to learn, and it doesn't actually take 15 years. The payoff is at the end of the article:

"It’s quite natural to program in Haskell by building a declarative model of your domain data, writing pure functions over that data, and interacting with the real world at the program’s boundaries. That’s my favorite way to work, Haskell or not."

Haskell can be intimidating, though, so I would recommend F# for most beginners. It supports OOP and doesn't require every single function to be pure, so the learning curve is less intense, but you end up absorbing the same lesson as above.

initplus
47 replies
2d13h

Yes - the value of functional programming isn't that working in OCaml, or F#, or Haskell is 10x as productive as other languages, but that it can teach you worthwhile lessons about designing software that apply equally to imperative languages.

Modelling the business domain, reasoning about and managing side effects, avoiding common imperative bugs: these are all valuable skills to develop.

F# is a great language to learn, and very approachable. The worst part about it is interacting with antiquated .NET APIs. (I can't believe the state that .NET support for common serialization formats is still in...)

sidkshatriya
29 replies
2d13h

Yes - the value of functional programming isn't that working in OCaml, or F#, or Haskell is 10x as productive as other languages.

This is not true in my personal experience.

As has been famously said (paraphrased): Functional programming makes tough problems easy and easy problems tough.

In other words the value of functional programming depends on your domain.

antonvs
11 replies
2d12h

easy problems tough.

That needs a qualifier: it can make easy problems tough if you're not familiar with how to solve them in a functional context.

A big part of that is because smart people have already solved the tough problems and made them available as language features or libraries.

simiones
7 replies
2d5h

Not really, certain problems are just inherently harder to express in a purely functional way than they are in an imperative way (and the reverse is just as true). For example, computing a histogram is much simpler in imperative terms (keep an array of histogram values, go through the original list, add 1 to the array element corresponding to the current element in this list) than in a purely functional style, especially if you need a somewhat efficient implementation.
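For comparison, here is a hedged one-pass functional sketch using a strict map from `containers` (illustrative names); note that it still leans on a library rather than a plain array, so the point about imperative simplicity stands:

```haskell
import qualified Data.Map.Strict as M

-- One-pass functional histogram: fold the list into a strict map,
-- adding 1 for each occurrence of a key.
histogram :: Ord a => [a] -> M.Map a Int
histogram = M.fromListWith (+) . map (\x -> (x, 1))
```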

My favorite example of this is implementing quicksort. It's significantly easier in C than it is in Haskell.

chuckadams
6 replies
2d4h

My favorite example of this is implementing quicksort. It's significantly easier in C than it is in Haskell.

Oh please, what's so hard about

    qsort :: Ord a => [a] -> [a]
    qsort []     = []
    qsort (p:xs) = qsort lesser ++ [p] ++ qsort greater
        where
            lesser  = filter (< p) xs
            greater = filter (>= p) xs
:)

folks, take that with a big ol /s, you would never want to actually use that algorithm. But the real deal isn't all that awful: https://mmhaskell.com/blog/2019/5/13/quicksort-with-haskell

simiones
2 replies
2d4h

Well, I'd say using an entirely different collection type than the rest of the language (STArray instead of [a]) is already a big complication. It also ends up being more than double the size of the Java code. And, as the author admits, it's actually even slower than the original not-quicksort implementation above, because it has to make a copy of the original list and then return a copy of the mutated array.

So, one of the best sorting algorithms ever devised is not actually usable to sort a [a] in Haskell... I maintain this is a good example of making an easy problem tough.
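For concreteness, this is roughly the shape of the mutable-array version; a hedged sketch with hypothetical helper names (Lomuto partition for brevity), not the linked article's exact code:

```haskell
import Control.Monad (when)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STArray, getElems, newListArray, readArray, writeArray)

-- Copy the list into a mutable array, sort in place, copy back out.
-- The two copies are exactly the overhead being complained about.
qsortST :: Ord a => [a] -> [a]
qsortST xs = runST $ do
  arr <- newListArray (0, length xs - 1) xs
  sortRange arr 0 (length xs - 1)
  getElems arr

sortRange :: Ord a => STArray s Int a -> Int -> Int -> ST s ()
sortRange arr lo hi = when (lo < hi) $ do
  p <- partitionLomuto arr lo hi
  sortRange arr lo (p - 1)
  sortRange arr (p + 1) hi

-- Lomuto partition: pivot is the last element; returns its final index.
partitionLomuto :: Ord a => STArray s Int a -> Int -> Int -> ST s Int
partitionLomuto arr lo hi = do
  pivot <- readArray arr hi
  let go i j
        | j >= hi   = swap arr i hi >> return i
        | otherwise = do
            x <- readArray arr j
            if x < pivot
              then swap arr i j >> go (i + 1) (j + 1)
              else go i (j + 1)
  go lo lo

swap :: STArray s Int a -> Int -> Int -> ST s ()
swap arr i j = do
  a <- readArray arr i
  b <- readArray arr j
  writeArray arr i b
  writeArray arr j a
```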

tome
0 replies
2d4h

I'd say using an entirely different collection type than the rest of the language (STArray instead of [a]) is already a big complication

Haskell uses many different collection types, just like any other language. Why not?

it's actually even slower than the original not-quicksort implementation above, because it actually has to make a copy of the original list, and then return a copy of the mutated array.

Sure, but it could also not do that, if callers are happy to provide a mutable array, just like any other language ...

one of the best sorting algorithms ever devised is not actually usable to sort a [a] in Haskell

Indeed! One of the best algorithms for sorting a mutable array can't be used on an immutable data type, just like any other language ...

None of this invalidates your original claim that "It's significantly easier in C than it is in Haskell" of course.

chuckadams
0 replies
2d3h

quicksort2 has a very reasonable constraint of Array, it's the helpers that use the STArray implementation. I suspect it wouldn't be hard to port to MArray, though I don't know that it would perform any better (maybe avoiding some copies since MArray is actually usable outside of runST). I also suspect the overhead of the copies pays off with larger lists given the lack of space leaks compared to the naive algorithm. Some benchmarks of different-sized lists would have been nice.

I'm not a cheerleader for Haskell, it's not the most appropriate language for every job (certainly not for quicksort). But some folks suddenly become hyper-optimizing assembly programmers whenever anyone has the thought of porting a sql crud app to another language... Horses for courses and all that.

trealira
1 replies
2d3h

I know you're being sarcastic, but even in this linked-list quicksort, it could be made more efficient. By defining lesser and greater separately, you're traversing the list twice. You could compute them at the same time by changing those last two lines to this:

  where
    (lesser, greater) = partition (< p) xs
I just learned about the existence of this function today, and wonder why I had never seen it used before.

https://hackage.haskell.org/package/base-4.20.0.1/docs/Data-...
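Put together with the earlier definition, the whole single-pass version would read:

```haskell
import Data.List (partition)

-- Same naive quicksort as above, but splitting the list in one traversal.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort lesser ++ [p] ++ qsort greater
  where
    (lesser, greater) = partition (< p) xs
```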

chuckadams
0 replies
2d2h

Heh, I've used some version of partition in my TS/JS code ever since underscore.js. But "Haskell Quicksort" is basically a code meme now, and I didn't think to optimize it (though I think the original was a one-liner using list comprehensions)

hnfong
0 replies
2d4h

Arguably the issue with "quicksort in Haskell" is not that it's "hard" to implement, but rather it defeats the whole purpose of using a "purely functional" language.

The pragmatic way to look at Haskell is not that it's purely functional, but rather, you could write <del>imperative code</del> Monads if you wanted, and that gives a "functional by default" environment, whereas most imperative languages default to mutable objects etc that are not friendly to functional-style programming.

But then more modern languages are catching up, with immutable-by-default variables etc., so in the end the differences between newer languages may not be that great after all...

jiggawatts
1 replies
2d10h

Absolutely! Any beginner can readily combine the catamorphisms and anamorphisms in `recursion-schemes`, or use the ready-made hylomorphisms for common tasks such as setting a value in a data structure. What could be simpler? /s

https://wiki.haskell.org/Zygohistomorphic_prepromorphisms

amoss
0 replies
2d

They have played us for absolute fools.

cubefox
0 replies
2d9h

That needs a qualifier: it can make easy problems tough if you're not familiar with how to solve them in a functional context.

All problems are easy if you are familiar with how to solve them. Unfortunately, finding out how to solve them is part of the problem, and that can be unusually hard in the case of functional programming - like solving something with recursion instead of loops + state. There is a reason cookbooks use loops, not recursion.

pyuser583
6 replies
2d3h

How do you “Hello World” in a functional language? Doesn’t it have side effects?

cess11
2 replies
2d3h

The string "Hello World" evaluates to itself, what else do you need?

Edit: Eh, I thought it was a fun quip.

pyuser583
0 replies
1d23h

I laughed

NateEag
0 replies
1d20h

Never forget PHP's "Hello World":

    Hello, world!

trealira
0 replies
2d3h

Yes, and AFAIK, you're pretty much free to cause side-effects in functional languages; it's just a bit awkward and somewhat discouraged. It's kind of like how C still has goto, yet it's still a structured programming language.

Even in Haskell, which tries to control side-effects more, it's not hard; it's just that it's stuck with an "IO" annotation. Any function that calls an IO function also becomes an IO function; it's infectious.

  main :: IO ()
  main = putStrLn "hello, world"

sevensor
0 replies
2d3h

There's some real confusion about what "functional" means, because it depends on who's speaking. Writing _exclusively_ pure functions is a Haskell thing. A much looser definition of the functional style would be to say that you mostly write pure functions over immutable data, but when you have to actually do a side effect you write a procedure that talks to the outside world and go on with your day. If you go digging in this site's archives from about ten years ago, I recall numerous debates about what constituted functional programming, and whether the latter counts at all. But if we _are_ talking about Haskell, the answer to your question is obviously "Monads."

dllthomas
0 replies
2d2h

It has an effect. Whether it's a "side effect" depends on how we've defined that.

One way of viewing Haskell is that you are lazily constructing the single composite "effect on the world" called main.

    helloWorld :: IO ()
then is a value representing an effect, but it only actually happens when it becomes a part of main.

Threads complicate this but don't completely destroy the model.
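A small sketch of that model (hypothetical names; it counts with an IORef rather than printing, so the result is observable):

```haskell
import Data.IORef (modifyIORef', newIORef, readIORef)

-- IO actions are ordinary values; none of them runs until they are
-- sequenced into a larger action that actually gets executed.
effectsDemo :: IO Int
effectsDemo = do
  ref <- newIORef (0 :: Int)
  let bump    = modifyIORef' ref (+ 1)  -- a value describing an effect
      effects = [bump, bump, bump]      -- still nothing has happened
  sequence_ effects                     -- here the effects actually occur
  readIORef ref                         -- returns 3
```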

williamcotton
2 replies
2d6h

What easy problems are tough in F#? I’ve been using it for writing random scripts and as a Python replacement.

z500
0 replies
2d6h

Writing recursive descent parsers in F# is a lot of fun with ADTs and pattern matching.

DimmieMan
0 replies
1d11h

It's an overgeneralisation.

If you try and turn F# into Haskell at home you may run into that problem.

F# is a functional-first language, so if an object-oriented or procedural solution is the right call, the option's right there when you need it.

initplus
2 replies
2d12h

Maybe my phrasing is not clear - I meant that these languages are indeed not significantly more productive.

marcosdumay
0 replies
2d2h

But (and I agree with the GP) they are. They are overwhelmingly more productive, in a way that you often can't even compare quantitatively.

They are also a lot less productive. It depends entirely on what you are doing.

grumpyprole
0 replies
2d

By what measure? Haskell can be a huge productivity multiplier. The standard library is built upon many powerful, unifying and consistent mathematical abstractions. For example, there is almost no boilerplate to write for any traversal, mapping, error handling etc. The average Pythonista simply has no idea what they are missing. But Haskell doesn't have anywhere near the third party ecosystem of Python, so is less productive by some measures.

chrischen
1 replies
2d3h

It’s only tough to change your way of thinking. Most people making the switch find it tough because they are trying to find imperative techniques to do something in a functional way and struggling because they can’t find an if else statement or a for loop. But if you were never taught to think in terms of conditional branching or looping indexes you’ll save a lot of time.
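For instance, the canonical accumulator loop becomes a fold:

```haskell
import Data.List (foldl')

-- The imperative "for (i = 0; i < n; i++) total += xs[i]"
-- as a strict left fold, with no index or mutation in sight.
total :: [Int] -> Int
total = foldl' (+) 0
```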

theCodeStig
0 replies
8h43m

Exactly. I've long held the sentiment that pure functional programming is easier than imperative programming. The only hard part is switching from an imperative mindset.

kreyenborgi
0 replies
2d12h

makes tough problems easy and easy problems tough

And because of mutual recursion, that means that tough is easy (and easy tough). In other words, if we call the class of tough problems T and easy problems NT, we have T==NT, given FP.

NinoScript
0 replies
2d12h

So you’re saying that it does make you 10x as productive?

wruza
11 replies
2d10h

Hot take of the day: you learn that with imperative programming just as well.

I familiarized myself with fp to the point of writing scheme and haskell around 15 years ago. Read the classics, understood advanced typing, lambda calculus and so on. The best “fp” I’m using nowadays is closures, currying in the form of func.bind(this[, first]) and map/filter. Which all are absolutely learnable by the means of closures, which are useful but I can live without. Sometimes not having these makes you write effing code instead of fiddling with its forms for hours.

Still waiting for the returns from the arcane FP-like code I produced earlier. I can recognize and understand none of my projects in this style that I bothered to save in VCS. Imperative code reads like prose; I have some of it still in production since 2010.

These FP talks are disguised elitism imo (not necessarily bad faith). Beta reduction and monadic transformers sound so cool, but that’s it job-wise.

jerf
5 replies
2d4h

In theory, you could pick up your one language, say, Java, and through the course of a normal career learn everything necessary to program in that language in the best possible way.

In practice, it's a pretty well-known phenomenon experienced by many skilled programmers that being forced into different styles by different languages results in learning things that you would only have learned very slowly if you had stuck only to your original language. To be concrete about the "very slowly", I'm talking time frames of your entire career, if not your entire life and beyond. It would be a long time programming in Java before you discover the concept of something like "pure functions" as a distinct sort of function, a desirable sort of function, and one that you might want organize your programming style around.

Of course, having heard of the concept already, we'd all like to fancy ourselves smart enough to figure it out in less than, say, three decades. But we're just lying to ourselves when we do that. Even the smartest of us is not as smart as all of us. You are not independently capable of rediscovering everything all the various programming communities have discovered over decades. If you want to know what even the smartest of us can do on their own without reference to decades of experience of others, you can look into the state of computer programming in more-or-less the 1980s, 90s if you're feeling generous. I think we've learned a lot since then, and the delta between the modern programmer and a 1980s programmer certainly isn't in their IQ or anything similar, it is in their increased collective community experience.

By getting out into systems that force us to learn different paradigms, and into communities that have learned how to use them, we draw on the experience of others and cover far more ground than we could ever have covered on our own, or in the context of a single language where we can settle into a local-optimum comfort zone. Jumping out of your original community is kind of an annealing process for our programming skills.

"The best “fp” I’m using nowadays is closures, currying in the form of func.bind(this[, first]) and map/filter."

That is really not the lesson about software design that FP teaches, and blindly carrying those principles into imperative programming is at times a quite negative value, as your experience bears out. FP has more to say about purity of functions, the utility of composition of small parts, the flexibility of composition with small parts, ways to wrap parts of the program that can't be handled that way, and providing an existence proof that despite what an imperative programmer might think it is in fact possible to program this way at a system architecture level. I actually agree 100% that anyone whose takeaway from FP is "we should use map everywhere because they're better than for loops and anyone who uses for loops is a Bad Programmer" missed the forest for the leaves, and I choose that modification of the standard metaphor carefully. I consider my programming style highly influenced by my time in functional programming land and you'd need to do a very careful search to find a "map" in my code. That's not what it's about. I'm not surprised when imperative code is messed up by translating that into it.

cess11
2 replies
2d3h

"In theory, you could pick up your one language, say, Java, and through the course of a normal career learn everything necessary to program in that language in the best possible way."

OK, then you know about currying, immutable data structures, map/reduce/filter, &c.

Because Java has had that since way back when. No real closures, I think, but that doesn't matter much because the anonymous functions do what you want pretty much all the time, and you could probably invent your own closures if you really want something else.

SkyMarshal
1 replies
1d22h

> OK, then you know about currying, immutable data structures, map/reduce/filter, &c.

It's not a certainty you learn about those things from Java, depends on your team/manager/codebase. None of that is enforced in Java the way it is in fp. Plus, none of it is really core or native to Java, it was added on later.

That's how we got essays back in the 2000s like "The Perils of Java Schools", "Can Your Language Do This", and "Beating the Averages".

https://www.joelonsoftware.com/2005/12/29/the-perils-of-java...

https://www.joelonsoftware.com/2006/08/01/can-your-programmi...

https://paulgraham.com/avg.html

cess11
0 replies
1d22h

The constraint here is "the best possible way".

wruza
0 replies
2d3h

That might be, probably is. But the representation of FP gets mostly done by those who are only halfway there, creating an impression that it is a better way of programming overall, when it’s just a mixed bag of approaches dictated by a set of esoteric languages (from business pov). The worst part is that it doesn’t translate verbatim to any non-fringe language and creates a mess in it, due to adopted inertia. At least that is my experience with FP “recruitment”.

I wish I skipped this FP tour completely and instead learned how to structure my programs directly. Could save me a year or five. Maybe there’s no better way, but in practice clear explanations are always better than these arcane teachings where you repeat something until you get it by yourself.

SkyMarshal
0 replies
2d

> FP has more to say about purity of functions, the utility of composition of small parts, the flexibility of composition with small parts, ways to wrap parts of the program that can't be handled that way, and providing an existence proof that despite what an imperative programmer might think it is in fact possible to program this way at a system architecture level.

Adding to that, in my case it also made me realize that deterministic elimination of entire classes of errors in large, complex code bases, in a systematic rather than ad-hoc way, is actually possible. Prior to discovering fp, and particularly Haskell's type system, I spent much effort trying to do that with a combination of TDD and increasingly elaborate try/catch/throw error handling. Discovering Haskell's compiler, type system, and monadic quarantining of effects obsoleted all that effort and was a huge eye-opener for me. A nice side effect is easy, reliable refactorability. Being able to apply those concepts to imperative and other programming paradigms is where the real value in fp is, imho. Programmers still wrangling with the Tarpit [1] need to take a look if they haven't already.

[1]:https://news.ycombinator.com/item?id=34954126

cubefox
3 replies
2d9h

These FP talks are disguised elitism imo (not necessarily bad faith). Beta reduction and monadic transformers sound so cool, but that’s it job-wise.

They may be disguised mathematics. People are into math because it is neat / elegant / cool. So they study it regardless of whether it has a practical use or not.

dboreham
1 replies
2d4h

Mathematics is just a kind of programming. And vice versa.

cubefox
0 replies
2d1h

Programs fundamentally have state, while mathematical equations do not. Math is at its core declarative, while programming is essentially imperative, at least under the hood.

photonthug
0 replies
2d6h

Some programmers have serious math envy. This can be good if they are self-aware about it and keep it in check, because it makes them better programmers. Otherwise they can be a pain to work with. Seniors should be people who have dealt with this aspect of their own talent, not juniors who are promoted in spite of, or because of, it.

cess11
0 replies
2d3h

I commonly implement things in an imperative style as a quick hack, then if it gets use I translate it into a more functional style. It kind of just happens as I clean it up and refactor during revisits.

It might be a matter of taste, but I enjoy code built with functional abstractions that allow neat composable data flows and some caches loitering around. I find it also helps when adding UI. Sometimes performance could be better with mutation, but when I'm at that point I've already spent much more time tuning the thing with caches.

baby
2 replies
2d3h

Ocaml definitely doesn’t make you more productive

Bostonian
1 replies
2d1h

I have not used OCaml, but presumably Jane Street thinks OCaml makes their coders more productive.

baby
0 replies
1d

I've looked at their code and I would not wish it on anyone to work there; it's on par with a large Java codebase at a banking institution.

DimmieMan
1 replies
1d11h

I wouldn't even say antiquated; modern .NET APIs can suck to work with too. The entire ecosystem is written for C# and ASP.NET Core, and everything else feels second class.

I love F#; the working-with-C# elements of the language drove me away.

neonsunset
0 replies
1d11h

Which elements were the source of pain?

Writing web applications and back-ends in F#, as far as I'm aware, is straightforward and a joy because of integration with the rest of ecosystem.

nequo
21 replies
2d12h

That’s interesting because F#’s OOP, as someone who knows neither C# nor Java, makes it more intimidating to me than OCaml.

Also interesting that when FP is mentioned, Hindley-Milner is implicitly understood to be part of FP too even though it doesn’t have to be. Clojure emphasizes immutability and FP but with dynamic typing and everything that comes with that.

mrkeen
12 replies
2d12h

Clojure emphasizes immutability

Is "emphasizes" just another word for second-class support?

C++ emphasizes the importance of memory safety.

chipdart
6 replies
2d8h

Is "emphasizes" just another word for second-class support?

I don't know what your personal definition of "second-class support" is, but what it means is that it's explicitly supported by the language.

mrkeen
5 replies
2d4h

C++ explicitly supports memory-safe programming. You can choose whether you want to mess around with raw pointer arithmetic.

What safeguards does the language actually put in-place?

chipdart
4 replies
2d

C++ explicitly supports memory-safe programming. You can choose whether you want to mess around with raw pointer arithmetic.

I don't think you know what you're talking about. Managing object ownership through systems like smart pointers is not memory safety. Applications that use smart pointers still suffer from memory issues, and it's possible to adopt object ownership systems that still use raw pointers.

mrkeen
3 replies
1d23h

I don't think you know what you're talking about.

Right. I sound just like someone talking about how "a language which emphasizes immutability" is an OK replacement for a language with pure functions.

nequo
2 replies
1d22h

The world is much less black and white than you’d like to see it.

Functions in Haskell including Prelude can throw exceptions which is not reflected in the type signature of the function. That is an effect that makes seemingly pure functions impure.

You can’t judge a language from a list of buzzwords. You need to look at how it is used in practice.

massysett
1 replies
1d21h

Functions in Haskell including Prelude can throw exceptions which is not reflected in the type signature of the function. That is an effect that makes seemingly pure functions impure.

No, bottom, or _|_, is an inhabitant of every lifted type. An exception is bottom. So the / function is still pure even though it can throw a divide-by-zero exception.
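A small illustration of that point (illustrative names, assuming GHC's base library): bottom is a legal inhabitant of a pure type, and it only "happens" when forced.

```haskell
import Control.Exception (ArithException, evaluate, try)

-- A pure value whose denotation is bottom:
boom :: Int
boom = 1 `div` 0

-- Laziness means bottom only bites when forced; this is just 42:
fstIsFine :: Int
fstIsFine = fst (42, boom)

-- Forcing it in IO lets us observe the exception as a value.
forceBoom :: IO (Either ArithException Int)
forceBoom = try (evaluate boom)
```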

Nemerie
0 replies
23h1m

Does it make it type-safe though? In dynamic languages type errors also result in exceptions.

y1n0
4 replies
2d12h

Immutability is definitely first class in clojure, but you can work with mutable structures when you need to.

nequo
2 replies
2d5h

This seems like a meaningless criticism when it comes to immutability in Clojure. You can have mutability in Haskell too. That doesn’t make it as unsafe as memory management in C++.

mrkeen
1 replies
2d4h

You can have mutability in Haskell too.

Haskell enforces this via a type system.

What safeguards around mutability does Clojure have?

If I import a method 'foo()', is there any kind of contract, notation ... anything which could suggest whether it mutates or not?

eyelidlessness
0 replies
2d2h

What safeguards around mutability does Clojure have?

Very nearly the entire language and standard library operate on immutable values only. Immutable values are the default, and you will use them for the vast majority of logic. You must do so, unless you very specifically opt to use dedicated reference types, at which point you’ll still need to produce intermediate immutable values to interact with that vast majority of the standard library.

And…

is there any kind of contract, notation ... anything which could suggest whether it mutates or not?

Functions which mutate state are almost always suffixed !. They will typically fail if you pass them immutable values; they only operate on reference types, which have to be dereferenced (typically with the prefix @) to access their state.

munchler
7 replies
2d12h

Doesn't the "O" in OCaml stand for "Object", though? I think you could pick up either F# or OCaml just as easily.

The nuances of OOP in F# can be ignored by beginners, so don't let yourself be intimidated coming from Clojure.

[0] https://ocaml.org/docs/objects

armchairhacker
6 replies
2d10h

OCaml classes and objects are (ironically) rarely used and generally discouraged. There are some cases where they’re practically required, such as GUI and FFI (js_of_ocaml). But otherwise, most code does encapsulation and abstraction using modules and functor modules (which are more like Haskell and Rust typeclasses than traditional OOP classes).

I don’t know much about F#, but last time I used it most of its standard library was in C# and .NET, so F# code would interact with objects and classes a lot. AFAIK F# also doesn’t have functor modules, so even without the dependence on C# code, you still can’t avoid classes and objects like you can with OCaml (e.g. you can’t write a generic collection module like `List` or `Set` without functors, it would have to be a collection of a specific type or a class).

neonsunset
5 replies
2d6h

F# uses .NET's generics, so the statement regarding List/Set is completely incorrect (all base collections are generic).

mrkeen
3 replies
1d23h

F# has "generics" just like Python and PHP now "have types".

It's not a yes/no feature.

mrkeen
1 replies
1d11h

It's not a yes/no feature.

FTFM: It's not a true/false feature.

neonsunset
0 replies
1d10h

Alright. Humor me, what is the issue with F# generics as compared to other languages with generics? Which implementation (that is productively useful) is a "true" one?

penteract
0 replies
2d6h

I think you misread their claim - they said that generic list/set would have to be classes, not modules (generic modules are a specific thing in OCaml and aren't the same as a module of generic classes).

empath75
16 replies
2d4h

I was a college dropout and self-taught Bash and Python programmer. Quite some time ago, I read about Haskell and decided to teach myself to use it, then realized I had absolutely no idea what programming actually was, and basically spent the next 15 years teaching myself computer science, category theory, abstract algebra and so on, so that I could finally understand Haskell code.

I still don't understand Haskell, but it did help me learn Rust when I decided to learn that. And I think I could explain what a monad is.

edit: It's a data structure that implements flat map. Hope this saves someone else a few years of their life.
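That definition can be checked against the list and Maybe instances; a quick sketch with illustrative names:

```haskell
-- For lists, (>>=) is exactly concatMap, i.e. flat map:
pairsViaBind :: [Int]
pairsViaBind = [1, 2, 3] >>= \x -> [x, x * 10]
-- same as concatMap (\x -> [x, x * 10]) [1, 2, 3]

-- For Maybe, it flat-maps over zero-or-one elements, short-circuiting:
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

chained :: Maybe Int
chained = safeDiv 100 5 >>= safeDiv 60   -- Just 3

failed :: Maybe Int
failed = safeDiv 100 0 >>= safeDiv 60    -- Nothing
```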

crabbone
13 replies
2d2h

I still don't understand Haskell

It's not you. Haskell has very bad syntax. It's not hard to understand if you rewrite the same things in something saner. Haskell was developed by people who enjoy one-liners and don't really need to write practical programs.

Another aspect of Haskell is that it was written by people who were so misguided as to think that mathematical formulas are somehow superior to typical imperative languages, with meaningful variable names, predictable interpretations of sequences of instructions, etc. They, instead, made it all bespoke. Every operation has its own syntax, and variables are typically named as they would be in math formulas (e.g. x and x'). This makes no sense and is, in fact, very harmful when writing real-world programs, but because, by and large, Haskell never rises to the task of writing real-world programs, it doesn't deter the adepts from using it.

agentultra
7 replies
2d

You knew Paul Hudak, Simon Peyton Jones, Phil Wadler, etc? Were they thinking about the benefits of mathematical formulas over program counters and procedural keywords when designing Haskell?

I was under the impression from the History of Haskell [0] that they were interested in unifying research into lazy evaluation of functional programming languages.

This makes no sense, and is, in fact, very harmful when writing real-world programs,

Gosh, what am I doing with my life? I must have made up all those programs I wrote on my stream, the ones I use to maintain my website, and all the boring line-of-business code I write at work. /s

In all seriousness, Haskell has its warts, but being impractical isn't one of them. To some purists the committee has been overly pragmatic with the design of the language. As far as functional programming languages go it's pretty hairy. You have "pure" functions in the base libraries that can throw runtime exceptions when given the wrong values for their arguments (i.e. the infamous head function). Bottom, a special kind of null value, is a member of every type. There exist functions to escape the type system entirely that are used with some frequency to make things work. The committee has gone back more than once to reshape the type-class hierarchy, much to the chagrin of the community of maintainers who had to manually patch old code or risk having it no longer compile on new versions of the base libraries. These are all hairy, pragmatic trade-offs the language and ecosystem designers and maintainers have had to make... because people write software using this language to solve problems they have, and they have to maintain these systems.

[0] https://www.microsoft.com/en-us/research/wp-content/uploads/...

crabbone
6 replies
1d4h

You don't need to know the author personally to appreciate the result of their work...

Gosh, what am I doing with my life?

I would ask the same question, but unironically. No, you didn't make up those programs of course. I didn't claim that real-world programs are impossible to write in Haskell. I claimed that Haskell is a bad tool for writing real-world programs. People make sub-optimal decisions all the time. That's just human nature... choosing Haskell for any program that would require debugging, long-term maintenance, cross-platform UI, or plenty of other desirable properties is just a very bad choice. But people like you do it anyways!

Why? -- there are plenty of possible answers. If I wanted to look for the flattering answers, I'd say that a lot of experienced and talented programmers like Haskell. So, choosing to write in Haskell for that reason isn't such a bad idea. But, if I wanted to judge Haskell on its engineering rather than social merits: it has very little to bring to the table.

tome
3 replies
1d4h

Strange. In my experience Haskell is the best language for long-term maintainability! I can actually come back to code I've written years ago and understand what it does. I've never experienced that with another language.

crabbone
2 replies
23h35m

This is not really what maintainability means to me. To me, it just means you have a good memory.

To be able to maintain something you need to be able to transfer the ownership of the piece of code to someone else. You need to be able to amend the code easily to extend or to remove functionality. It means that the code can be easily split into pieces that can be given to multiple developers to work simultaneously towards a common goal.

Haskell scores poorly on any of those points.

It's very hard to transfer ownership because Haskell programs have too much bespoke syntax that anyone but the original author will not be familiar with. Haskell programs are very hard to understand by executing them because there's no chance of a good step debugger due to the language being "lazy". Haskell programs are always "full of surprises" because they allow a lot of flexibility in the interpretation of the same symbols. The problems C++ programmers complain about when faced with operator overloading are the kind of things Haskell programmers call "Tuesday".

"Pure" functions are a lot harder to modify to add functionality. Typically, such modifications require sweeping changes across the entire codebase. One may argue that this preserves the "single responsibility" constraint, but in practice it means a lot more work.

Objects and modules were advertised as a solution to code modularity -- another necessary quality for maintainability. While Haskell has modules, it doesn't really have smaller units of encapsulation other than functions. If two programmers are tasked to work on the same module, they aren't going to have a good time.

tome
0 replies
21h6m

I don't know what to tell you. I've been successfully using Haskell in production for over ten years, and it's the most maintainable language I've used for a variety of reasons, including the one given in my message above.

agentultra
0 replies
22h33m

To be able to maintain something you need to be able to transfer the ownership of the piece of code to someone else. You need to be able to amend the code easily to extend or to remove functionality. It means that the code can be easily split into pieces that can be given to multiple developers to work simultaneously towards a common goal.

Haskell scores poorly on any of those points

Interesting. I work at a company with over a hundred people committing to a Haskell code-base that runs our entire back-end. As far as I can tell we're continuously shipping code several times a day, every day, with very low impact on our service level indicators.

When we do have production issues they're rarely attributed to Haskell. I can think of one incident in 3 years where a space leak was caught in production. It was straightforward to address and fix before it impacted customers.

In practice, Haskell has been amazing to work with. I just rewrote over a thousand lines of code and shipped that to production without incident. I've never had that go so smoothly in any other language I've worked with (and I've been around for over twenty years and worked in everything from C, C++, and Python to JS).

agentultra
1 replies
1d2h

I think we can drop the "real-world," qualifier as it is unlikely there is an "imaginary-world" that we write programs for.

What engineering merits did you have in mind?

dllthomas
0 replies
1d

it is unlikely there is an "imaginary-world" that we write programs for

I think academia is typically what's meant as an imaginary world, along with play that may be in reference to no world at all.

jiiam
2 replies
1d20h

Just to give a different pov I find Haskell very intuitive, and particularly I find that code written by other people is very easy to understand (compared to Java or TypeScript at least).

And by the way x and x' are totally fine names for a value of a very generic type (or even a very specific type depending on the circumstances), as long as the types and the functions are decently named. I mean, how else would you call the arguments of

splitAt :: Eq a => a -> [a] -> [[a]]

?

There is no need for anything more complex than

splitAt x xs = ...

whytevuhuni
1 replies
1d9h

I mean, how else would you call the arguments of

    splitAt :: Eq a => a -> [a] -> [[a]]

Those don't seem to be names of parameters, but rather of types. It's missing parameter names entirely.

I spent a good 2 minutes looking at that signature trying to figure it out (and I've read some Haskell tutorials so I'm at least familiar with the syntax). This would've helped:

    def split_by<T>(separator: T, list: List[T]) -> List[List[T]]
`sep`, `separator`, `delim`, `delimiter` would've been good names.

jiiam
0 replies
1d5h

Those don't seem to be names of parameters, but rather of types. It's missing parameter names entirely.

The rest of the definition is at the end, to see it as a whole:

  splitAt :: Eq a => a -> [a] -> [[a]]

  splitAt x xs = ...
To clarify, I assumed that by using the constraint `Eq a` and the name splitAt there was no need for extra clarification in the names of the parameters, but apparently I was wrong.

tome
1 replies
2d2h

That's what (some) other people do. None of that stops you writing Haskell in whatever style you want, with meaningful variable names, curly braces and semicolons!

crabbone
0 replies
2d1h

Unfortunately, writing isn't even half the battle. Before you start writing, you need to read a lot. And Haskell code is, in general, atrocious. It always feels like there was a normal way to do something, but the author decided to choose the most convoluted way they could imagine to accomplish the same thing, for the sake of a bizarre fashion sense.

bspammer
1 replies
2d3h

I think that's a good starting definition for programmers, but still could cause confusion when you run into something like IO in Haskell. IO isn't really a data structure, and it's hard to fit the "flat map" concept to it.

nextaccountic
0 replies
1d5h

If you want you can still keep this point of view, by saying that IO is conceptually a data structure that builds a description of what the program does. In this point of view it follows that there is another, impure program that interprets the IO data structure and actually performs the computations

(Of course in practice IO isn't implemented like this, because it would be too slow)

(But in every other language, like Javascript or Python, you can define IO as a data structure. Or even in Haskell itself, you can define for eg. a free monad that gets interpreted later, and it can be made to be just as good as IO itself, though typically people make it less powerful than IO)

However note that every other "computational" monad (like the list monad or the Cont monad) actually is a data structure, even though they describe effects just like IO does. This is because IO is the only possible exception to the "monads are data structures" thing in Haskell (if you don't subscribe to the above view), because Haskell doesn't let you define types that aren't data structures

The only issue with this point of view is that you now need to say what flatMap means for things that are not shaped like arrays. Eg. does it make intuitive sense to flatMap a tree? (A retort is that it must make sense, whatever you call this operation; and flattening a tree means to turn a tree of trees into an one-level tree)
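
A minimal sketch of that "IO as a data structure" view (the names Prog, greet, run, and runPure are invented for illustration, not from the comment): the program is plain data describing effects, and an interpreter at the edge actually performs them. A pure interpreter is included to show that the description can also be run without any I/O at all.

```haskell
-- A program is a value: a description of effects, not their execution.
data Prog a
  = Done a
  | PutLine String (Prog a)
  | GetLine (String -> Prog a)

-- The pure part: building a description.
greet :: Prog ()
greet =
  PutLine "What is your name?" $
    GetLine $ \name ->
      PutLine ("Hello, " ++ name) (Done ())

-- The impure part: an interpreter that performs the effects.
run :: Prog a -> IO a
run (Done a)      = pure a
run (PutLine s k) = putStrLn s >> run k
run (GetLine k)   = getLine >>= run . k

-- A pure interpreter: feeds canned input, collects the output lines.
runPure :: [String] -> Prog a -> ([String], a)
runPure _      (Done a)      = ([], a)
runPure ins    (PutLine s k) = let (out, a) = runPure ins k in (s : out, a)
runPure (i:is) (GetLine k)   = runPure is (k i)
runPure []     (GetLine k)   = runPure [] (k "")
```

This is essentially the free-monad idea mentioned above, stripped of the Monad instance to keep the sketch short.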

IshKebab
14 replies
2d11h

I feel like functional programming is pretty trivial. It's pure programming that is very difficult.

They're often conflated because Haskell is pure and functional and probably the most talked about heavily functional language.

For ages I certainly didn't know that impure functional languages like OCaml even existed.

fire_lake
12 replies
2d11h

Is Haskell pure?

It has exceptions

You can divide by zero

It has unsafe IO primitives

tome
4 replies
2d10h

You're right: "pure" is not a well-defined concept. The well-defined concept that describes Haskell's benefits in this regard is "referential transparency". That means that this code

    let x = <definition of x>
    in ... x ... x ... 
(i.e. defining a variable x and then using it some number of times) is equivalent to

    ... <definition of x> ... <definition of x> ...
Seen in the opposite direction (transforming the bottom code to the top code) this means that extracting repeated code is always a valid thing to do. It's not valid in most other languages, and certainly no mainstream ones.
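
A tiny illustration of that equivalence (the names inlined and extracted are made up for this sketch): both definitions denote the same value, so rewriting in either direction is always valid in Haskell, whereas in a language with side effects the two forms could differ.

```haskell
-- The repeated expression...
inlined :: Int
inlined = sum [1..10] + sum [1..10]

-- ...can always be extracted into a let binding without changing
-- the result, by referential transparency.
extracted :: Int
extracted = let x = sum [1..10] in x + x
```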

simiones
3 replies
2d8h

Well, technically that isn't true if you use, for example, unsafePerformIO in the definition of x. Referential transparency is still a spectrum, just like purity. Haskell is much closer to purity than the vast majority of languages out there.

Also, even if Haskell were perfectly pure, the fact that it uses lazy evaluation is far more important to actually being able to make use of referential transparency. In a strict language you will still see a massive performance difference if replacing the first version with the second, in the most common cases.

tome
2 replies
2d7h

technically that isn't true if you use, for example, unsafePerformIO in the definition of x

Ah, well, regardless of whether it holds in Haskell, referential transparency is a well-defined concept. Purity is not a well-defined concept (at least as far as I know. Please share a formal definition if you have one!). That's primarily what I'm trying to say.

But I also disagree with your point about unsafePerformIO. In practice, nothing in Haskell violates referential transparency in a significant way. Who knows why? It's an emergent phenomenon that in principle need not have occurred, but in practice it did. Debug.Trace and similar are about the only things that technically violate referential transparency (and they are extremely tame).

the fact that it uses lazy evaluation is far more important to actually being able to make use of referential transparency

Yes, I agree with that.

simiones
1 replies
2d5h

It seems that someone did come up with a formal definition to try to capture the concept [0], though I haven't looked into the details to see whether it really matches the colloquial use of the terms "pure" and "impure" functional programming. In short, the formal definition they came up with is that a language is pure if the result of a valid program is the same under any parameter passing strategy.

I should note that I agree that, in practice, Haskell is almost always pure and/or referentially transparent. I was just pointing out that technically the GP was correct that it's not perfectly 100% so.

[0] https://www.cambridge.org/core/journals/journal-of-functiona...

tome
0 replies
2d4h

Sabry's definition of "pure" fails to satisfy me for two reasons:

1. It assumes that the language is "a conservative extension of the simply typed λ-calculus". That's rather high-powered yet also really restrictive! Haskell doesn't satisfy that requirement. It also assumes the language has functions. Expression languages (i.e. ones without functions) are perfectly reasonable languages and it makes sense to ask whether they are pure.

2. It assumes that a language putatively has multiple evaluation orders (which I suppose is a consequence of the assumption "It is a conservative extension of the simply typed λ-calculus"). Haskell doesn't have multiple evaluation orders. It has one! (notwithstanding you can influence it with seq/!)

If you unpick the essence of what Sabry's really saying you find you can translate it into the Haskell world through imposing two conditions:

C1. soundness of the β-axiom (a.k.a. referential transparency) (this plays the role of Sabry's condition that call by need and call by value have the same result).

C2. That

    let x = <definition of x> in ...
gives the same result as

    let !x = <definition of x> in ...
whenever the latter terminates. (This plays the role of Sabry's condition that call by name and call by value have the same result.) I omitted this condition from my original. I probably shouldn't have because technically it's required, but it risks getting into the weeds of strictness versus laziness.

So Sabry's definition of "pure" is a long-winded and restricted way saying something that can be much more conveniently expressed by C1 and C2. If you disagree with me, please demonstrate a property of purity that can't be deduced from C1 and C2!

I should note that I agree that, in practice, Haskell is almost always pure and/or referentially transparent. I was just pointing out that technically the GP was correct that it's not perfectly 100% so.

OK, fine, but I also said the GP was correct! I am keen to point out, however, that exceptions (including division by zero) do not violate referential transparency (and if someone thinks they violate "purity" that may be a sign that "purity" is ill-defined).

afiori
2 replies
2d9h

I feel like exceptions were added as a mix of "look, we can do that too" and "maybe if so many functions return optional values then it is going to be too much of a pain to use".

In hindsight I think few would now regret not having added them in the first place.

fire_lake
1 replies
2d7h

maybe if so many functions return optional values then it is going to be too much of a pain to use

I strongly believe that there is a point in the PL design space that makes optionals everywhere usable. Maybe Haskell can still be the language that delivers this.

IshKebab
2 replies
1d19h

It is pure in the same way that Rust is memory safe. That is to say, there are a tiny number of exceptions/escape hatches, but they are not meant to be the norm. Everyday programming doesn't involve them.

Exceptions aren't impure anyway.

nextaccountic
0 replies
1d4h

Exceptions define an effect. Code with exceptions isn't actually pure, in the sense that its return type doesn't fully describe what the code does, so it doesn't just map an input to an output: there is something else going on

In some pure functional languages, the pure fragment doesn't have exceptions, and you add exceptions as an effect (or as a monad)

(If you reify the effect with a type like Either or Result, then the code becomes pure: but that's just a monad like said above)
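
A sketch of that reification (safeDiv is a made-up name for illustration): the failure stops being an exception and becomes part of the return type, so the function is an ordinary pure mapping from inputs to outputs.

```haskell
-- Instead of letting `div` throw on a zero divisor, reify the
-- failure in the type: callers must handle the Left case.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv x y = Right (x `div` y)
```

Either's Monad instance then lets you chain such computations, which is the "just a monad" point made above.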

nextaccountic
0 replies
1d4h

Anyway I really like the take that Haskell is pure in the same sense that Rust is safe

Haskell has impure constructs but sets you up to define referentially transparent abstractions with them, and this is actually valued by the Haskell community

afiori
0 replies
2d9h

It has unsafe IO primitives

To be tongue in cheek then it also has the side effect of heating the CPU.

baby
0 replies
2d3h

Sir have you heard of GADTs

richrichie
12 replies
2d12h

But why do we need Haskell for this?

anon291
5 replies
2d12h

Realistically we don't, but it's very rare to meet a programmer who understands these distinctions that's not also a great functional programmer.

This is my experience after spending five years as a Haskell programmer and managing a Haskell team for several years and now moving back to the c++ world to play with AI.

I know lots of good c++ programmers working on cutting edge stuff, real experts in their field, but they sometimes still don't have a clear way to understand how to model data

That is my opinion. It's probably highly contentious.

cornel_io
4 replies
2d10h

I've actually had to fire a technically exceptional Haskell programmer because of the damage they did to our C# codebase (and arguably more so, to the team). Sometimes it's not a matter of talent or skill, but culture fit.

In my experience FP-aligned people on non-FP projects tend to be more likely to overengineer, more prone to argue in favor of the Great Rewrite For No Reason Except Aesthetics, and more likely to abuse "lesser" programmers when they put up PRs. They suck as team players on teams that are not made of language nerds. I am not just talking about the one person here who I fired, this is a legit pattern I've noticed over at least a half dozen people.

Conversely, they are exactly the right people to deploy when you have really tough, self-contained problems to solve that you wouldn't trust the normal Java 9-5ers to tackle.

No matter how they do it, you can always rewrite their working code in a more maintainable language later once it's working, and make it integrate well with the rest of your stack. :D

MrBuddyCasino
1 replies
2d8h

This is my experience, too. Some of the worst code I've seen was from a Haskell guy who first built his own (reactive?) concurrency framework and then implemented the actual functionality in completely unidiomatic and undocumented Java.

Some people don't understand that the "best solution" is not necessarily equal to the most beautiful abstraction they can think of.

auggierose
0 replies
2d6h

Some of the best code I've seen

There, fixed it for you.

But I have to be fair, whenever I see a demand for "idio(ma)tic code" I know that this is a place to avoid, no matter if they are imperatively or functionally inclined.

freedomben
0 replies
2d1h

Your story matches my experience, but it always makes me think, why did this person want to work with you in the first place?

A great Haskell programmer (generally speaking) is going to be a culture misfit in any Java, C#, golang, etc shop. I know because I've been that miserable bastard who loves functional programming working with Java devs who don't know anything about FP and couldn't care less. To be clear I'm not saying you can't find a compatible Java shop (I actually did find a startup with a lot of Java devs who appreciated FP and used much of it in Java, and that was pretty great honestly), just that the odds are highly against you.

My biggest advice to people who like FP is: Find a job in a language like Clojure, Elixir, Scala, etc. There are a lot more jobs than you'd think. But if you can't, Ruby and Javascript/Typescript can be pretty close depending on where you go. Talk to existing devs and see how feel about FP in general before you join though!

anon291
0 replies
2d2h

I mean sure. Realistically, I'm certain I would do that if I were working on a typical code base, which is why I'm in an extremely niche field where that sort of thing is valued. From my extremely biased perspective, these are the 'hard' problems that need solving, versus the general run of the mill operational things. That probably sounds pretentious, but it takes skills for both.

Joker_vD
5 replies
2d7h

Because for some reason there are no pure strict-by-default languages around.

simiones
1 replies
2d5h

I think Idris is the best example of a pure strict-by-default language (that also supports totality checking, I believe).

ReleaseCandidat
0 replies
2d3h

that also supports totality checking, I believe

Yes, it does. It's also dependently typed.

tome
0 replies
2d7h

PureScript is an example.

foldr
0 replies
2d7h

Elm is one example of such. However, it's also an illustration of why these languages are rare. With a strict semantics there's an almost unbearable temptation to add library functions with side effects. Elm only avoided this fate by giving its BDFL strict control over which packages could access the JS FFI. But that upset a lot of people.

ReleaseCandidat
0 replies
2d3h

All of them (especially newer ones) are, except Haskell (and some other, nowadays either obsolete or really obscure, languages).

Idris (2), PureScript, Elm, Unison, Roc, Lean (4), Koka, Flix (and some others I've forgotten about).

crabbone
8 replies
2d2h

I would recommend neither of those.

Haskell has very bad syntax (with extensive backing from Microsoft; IIRC the guy who writes the compiler is a Microsoft Research employee).

F# is a straight-up Microsoft language.

It doesn't matter what other benefits it has. Just don't touch anything created by that company, and you will have one fewer regret in your life.

But, if you still want a language from that category: SML or Erlang would be my pick.

square_usual
3 replies
2d

SPJ has left MSR and is now at Epic games, working on a new PL. However, even while he was at MSR, MS didn't really have a say in how Haskell was developed.

crabbone
2 replies
1d21h

Well, MS didn't have to do anything. It's enough that they have (or had) the opportunity to do something.

There isn't an Overmind in MS that in a creepy voice tells you to spawn more overlords. Less than that, there doesn't need to be a written document that tells you to give money to MS or your data etc. There's just a general accepted understanding among the people who run that company that ends justify the means. And by "ends" they mean them and their investors getting rich.

If the Haskell compiler could've been turned into a money-making machine, and it only required killing off half of all Haskell programmers, MS would be working overtime on the plan to hide the bodies, but they'd never even consider the possibility of the killing being bad... (metaphorically speaking, hopefully)

nequo
1 replies
1d13h

Do you also suspect homicidal money making motives behind Z3, Lean, and F*? It seems more likely to me that they just want some useful knowledge out of these projects that they can integrate into a product that actually sells.

crabbone
0 replies
1d4h

I have the misfortune of knowing personally some of the mid-to-high-level execs from MS. What I write is based on the experience of working with these people. And I don't even know what Z3, Lean, or F* are, so, pardon my ignorance, but I cannot answer your question.

bmoxb
2 replies
1d18h

What's wrong with Haskell's syntax? I think it's generally pretty nice though can be excessively terse at times.

trealira
0 replies
1d2h

From the point of view of writing a parser, Haskell's whitespace syntax seems like a hack. So, the grammar is defined with braces and semicolons, and to implement significant whitespace, the lexer inserts opening braces and semicolons at the start of each line according to some layout rules. That's not the hacky part; what makes it a hack is that to insert closing braces, the lexer inserts a closing brace when the parser signals an error. You can read about it here [0].

Also, on an aesthetic level, I think a lot of infix operators are kind of ugly. Examples include (<$>), ($), and (<*>). I think Haskell has too many infix operators. This is probably a result of allowing user-definable operators. I do like how you can turn functions into infix operators using backticks, though (e.g. "f x y" can be written as "x `f` y").

[0]: https://amelia.how/posts/parsing-layout.html

crabbone
0 replies
1d4h

* Significant white space, and the rules around whitespace are very convoluted.

* There's no pattern or regularity to how infix / prefix / suffix operators are used, which makes splitting program text into self-contained sub-programs virtually impossible if you don't know the exact behavior, including the priority, of each operator.

* There's a tradition of exceptionally bad names for variables, inherited from the realm of mathematical formulas. In mathematics, it's desirable to give variables names devoid of everyday meaning to emphasize the generic nature of the idea being expressed. This works in the context of very short formulas, but breaks entirely in the context of programs, which are usually many orders of magnitude bigger than even the largest formula you've ever seen. There, having meaningful names is a life vest.

* It's impossible to make a good debugger for Haskell because of the language being "lazy". Debuggers are essential tools that help programmers in understanding the behavior of their programs. Haskell programmers are forced to rely on their imagination when explaining to themselves how their program works.

* Excessive flexibility. For example, a Haskell programmer may decide to overload string literals (or any literals for that matter). This is orders of magnitude worse than e.g. overloading operators in C++, which is criticized for defying the expectations of the reader.

One of these points would've been enough for me to make the experience of working with a language unpleasant. All of them combined is a lot more than unpleasant.

fransje26
0 replies
2d1h

Just don't touch anything created by that company, and you will have one fewer regrets in your life.

:-)

__rito__
1 replies
2d10h

For the benefit(s) that you list, which are the best learning resources for F#?

Bostonian
1 replies
2d1h

In modern Fortran, functions should be pure (although the language does not require this), and procedures that mutate arguments are made subroutines (which do not have return values).

pklausler
0 replies
1d23h

Note that Fortran's interpretation of the term "pure" bizarrely allows a "pure" subprogram to depend on mutable state elsewhere (in a host, a module, or a COMMON block). So Fortran's "pure" functions aren't referentially transparent.

(F'2023 added a stronger form of "pure" and calls it "simple", but it didn't strengthen the places where a "pure" procedure should be required to be "simple", such as DO CONCURRENT, so being "simple" will be its own reward, if any compiler actually implements it. And a "simple" function's result value can still depend on a mutable pointer target.)

sterlind
0 replies
1d21h

I really wanted to like F#, and I kinda do, but it has a number of quirks, compiler issues and cracks in the design that are getting worse:

First off, the compiler is single-pass. All your definitions have to be in order. This even extends to type hints - it can't use clues to the right to deduce the type of an expression on the left. This is supposedly for perf reasons, but the compiler can become extremely slow because the inference engine has to work so hard - slower than GHC for sure.

Speaking of slowness, Haskell is surprisingly fast. Idiomatic Haskell can be within 50% the perf of C, since its laziness and purity unlock powerful optimizations. F# is eager and the compiler doesn't do anything fancy. Perf often makes you reach for mutable state and imperative structure, which is disappointing.

The OOP paradigm feels bolted on. Should you use classes or modules? Pure functions or members? It depends, what mood are you in? Unfortunately only member functions support overloads, and overloads are useful for some SFINAE-type patterns with `inline` functions, so they get a bit overused.

`ref` structs, which are vital for zero-copy and efficient immutable data, have very primitive support; even C# is ahead on this.

Very limited support for implicit conversions, no support for type classes and no function overloading leaves F# with nothing like a numeric tower you'd have in Lisp, and makes building something like Numpy clunky.

I use C# at work, and I love Haskell, so I really wanted to love F#. But it just doesn't get the love it needs from MS, and some design decisions aren't aging well - particularly as C# itself evolves in directions that are tricky for F#'s aging compiler to support.

realPtolemy
0 replies
2d11h

Or Elixir! Quite easy to grasp as well.

dejvid123
0 replies
2d8h

I would suggest Scala as FP for beginners. It doesn't force you to write pure functions. And it's really beginner-friendly to start with.

czhu12
0 replies
2d2h

doesn't require every single function to be pure

having never done F# or Haskell, doesn't that start getting into the territory of languages that encourage functional programming, like Ruby or JavaScript (modern JavaScript)?

beders
0 replies
2d12h

I feel the same pay-off - but arrived at that point via Clojure. Immutable-first, aim for purity, ability to drop out of it when necessary.

As stringent as you need it to be (static vs. dynamic types vs. specs), as flexible as you want it to be.

TrackerFF
51 replies
2d12h

What's the benefit of learning a PURE functional programming language, opposed to just using a language which has adapted the best bits and pieces from the functional programming paradigm?

Given that you want to write code that sees "real world" use, and is used to handle data and events from the real world. To me, sometimes the line between optimized code and intellectual curiosity blurs.

elbear
17 replies
2d11h

What's the benefit?

You start to see functions as self-contained things, as lego blocks. All the logic of the function is there in the function. It only works on values it receives as inputs (it can't read global variables). It only outputs its results (it doesn't assign them to some other global variable that you have to track down).

This makes your code modular. You can add a function in a chain of functions, if you want to perform an extra transformation. Or, you can replace a function with a different one, if you want to change something about the logic.
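
A small sketch of that lego-block style (all names here are invented for illustration): each step only reads its argument and returns a result, so steps can be added, removed, or swapped freely in the chain.

```haskell
import Data.Char (toUpper)
import Data.List (sort)

-- Each step only reads its input and returns its output.
normalize :: [String] -> [String]
normalize = map (map toUpper)

dedupe :: [String] -> [String]
dedupe []     = []
dedupe (x:xs) = x : dedupe (filter (/= x) xs)

-- The pipeline is just composition; inserting the `sort` step
-- (or removing it, or swapping in another step) is a one-line edit.
pipeline :: [String] -> [String]
pipeline = sort . dedupe . normalize
```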

Dylan16807
16 replies
2d11h

Is there a benefit if you're already familiar with writing functions like that? Is it wrong for me to expect that most programmers are already familiar with functions that only use their inputs, but treat that style as significantly more optional?

I wrote "pure functions" for a minute there, but that's not the same: a function that only uses its inputs can modify an object, while a pure function would have to return a new object. But, similarly, I bet that a lot more people know about pure functions than have any working knowledge of Haskell.

elbear
11 replies
2d10h

It seems you only focused on one of the conditions I mentioned.

You have to follow both rules: the one about inputs and the one about outputs.

This is like a contract. If you enforce it throughout your program, you gain some guarantees about your program as a whole.

Dylan16807
10 replies
2d10h

I was looking at both rules, and specifically I was using the long version where you said "it doesn't assign them to some other global variable that you have to track down". If you pass in a mutable object then that's not "some other global variable".

If I interpret "It only outputs its results" in a very strict way, that still allows having output and in/out parameters, the latter of which can break purity.

Though you can break purity with just inputs:

  define f(o): return o.x
  let a = {x=1}
  f(a)
  a.x = 2
  f(a)
If you meant to describe pure functions then that's fine, that's why I addressed pure functions too, but I don't think your original description was a description of pure functions.

elbear
9 replies
2d9h

So, another definition of a pure function is that, for a particular input, it will always return the same output.

Your example respects the rule:

    f({x=1}) == 1
    f({x=2}) == 2
But it's true that the two rules I gave are not enough to make a function pure, because I didn't say anything about I/O. So, a function that follows the rules about inputs and outputs could still do I/O and change its outputs based on that.

Starting from the question that gave birth to this whole thread: "What's the benefit of learning a PURE functional programming language..."

The other benefit is that such a language forces you to be explicit about I/O. It does it in such a way that even functions that do I/O are pure. The good part is that, if you use it long enough, it can teach you the discipline to be explicit about I/O and you can use this discipline in other languages.

For example, this is how I see this principles being used in Python:

https://elbear.com/functional-programming-principles-you-can...
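
A minimal Haskell sketch of that discipline (the file name and the summarize/report names are invented for illustration): the core logic is an ordinary pure function, and the `IO` type marks exactly where the boundary is.

```haskell
-- Pure core: the same input always gives the same output.
summarize :: [Int] -> String
summarize xs = "count=" ++ show (length xs) ++ " total=" ++ show (sum xs)

-- I/O stays at the boundary; the IO type makes the effect explicit.
report :: IO ()
report = do
  contents <- readFile "numbers.txt"  -- hypothetical input file
  putStrLn (summarize (map read (lines contents)))
```

The pure part can be tested and reused without touching the file system; only `report` needs the real world.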

Dylan16807
8 replies
2d9h

Your example respects the rule:

Every definition of purity I can find that talks about objects/references says that if you pass in the same object/reference with different contents then that's not pure.

Your version differs from mine on that aspect. It passes two unrelated objects.

Starting from the question that gave birth to this whole thread: "What's the benefit of learning a PURE functional programming language..."

I interpret saying a language is "purely functional" as being more about whether you're allowed to write anything that isn't functional. I can talk about BASIC being a "purely iterative" language or about "pure assembly" programs, without any implication of chunks of code being pure.

elbear
6 replies
1d13h

I gave it some more thought.

I now believe that learning a language like Haskell (or Elm or PureScript) forces you to see your program as pipes that you fuse together.

It's not just functions. Haskell has only expressions and declarations. That means, for example, that you are forced to provide an `else` when you use `if`. The idea is that you have to keep the data flowing. If a function doesn't provide a meaningful value (so it returns nil or None), you have to handle that explicitly.
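A small sketch of that idea (function names are my own): `if` is an expression, so both branches must produce a value, and a missing result is an explicit `Maybe` that every caller must handle.

```haskell
-- An empty list has no head, so the absence is part of the type.
safeHead :: [a] -> Maybe a
safeHead xs = if null xs then Nothing else Just (head xs)

-- Callers cannot ignore the Nothing case; the compiler checks it.
describe :: Maybe Int -> String
describe m = case m of
  Nothing -> "empty"
  Just n  -> "first element: " ++ show n

main :: IO ()
main = putStrLn (describe (safeHead [1, 2, 3 :: Int]))
```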

And, btw

Your version differs from mine on that aspect. It passes two unrelated objects.

Those two objects are not unrelated. They have the exact same structure (an attribute named "x"). So they could be considered two values of the same type.

Dylan16807
5 replies
1d13h

I mean that the identity is unrelated. Yes, you can say they're the same type. But I'm actually passing the same object in. If f evaluated lazily, it could return 2 from both calls. Something like:

  define f(o): return o.x
  let a = {x=1}
  n = f(a) // n is not evaluated yet
  a.x = 2
  m = f(a)
  return n + m // returns 4

elbear
4 replies
1d12h

Ok, you're probably proving the point that purity also requires immutability. I'm not sure, as I haven't considered all the implications of Haskell's design.

My two rules about inputs and outputs are more like heuristics. They can improve code organisation and probably also decrease the likelihood of some errors, but they don't guarantee correctness, as you're pointing out. They're shortcuts, so they're not perfect.

Edit: If I remember right, it's laziness that requires immutability. I think I read something about this in the Haskell subreddit as an explanation for Haskell's design.

Dylan16807
3 replies
1d2h

Even without laziness, you can get similar problems if f creates a closure or returns something that includes the parameter object.

elbear
0 replies
2h54m

Doesn't that example also show a kind of laziness?

I say this because the second solution to that question offers the solution of using `i` as a default argument when defining the function. That forces its evaluation and fixes the problem.

elbear
0 replies
2h48m

Ok, the Wikipedia definition of pure function is more strict than what I was saying, and I think it covers the issues you mentioned:

https://en.wikipedia.org/wiki/Pure_function

simiones
0 replies
2d5h

Purely functional language is pretty universally taken to mean that the language enforces function purity for all functions [perhaps with some minor escape hatches like Haskell's unsafePerformIO].

mrkeen
3 replies
2d8h

Is it wrong for me to expect that most programmers are already familiar with functions that only use their inputs

They'll experience no friction when using Haskell then. Haskell only refuses to compile when you declare "Oh yeah I know functions from other languages this is easy" but then do some mutation in your implementation.

Dylan16807
2 replies
2d7h

They'll experience no friction when using Haskell then.

The question was what benefit you'd get from learning a functional language, though. Existing knowledge making it easier to switch to a functional language is the inverse of that.

And there's no assumption they'll actually be making things in Haskell, so easy switching isn't by itself a benefit.

mrkeen
1 replies
2d4h

Yeah I can't really follow these threads.

I saw:

What's the benefit of learning a PURE functional programming language, opposed to just using a language which has adapted the best bits and pieces from the functional programming paradigm?

I also saw:

  Though you can break purity with just inputs:

  define f(o): return o.x
  let a = {x=1}
  f(a)
  a.x = 2
  f(a)
I don't know if that's the tail-end of a reductio ad absurdum which is trying to demonstrate the opposite of what it stated. Either way, to be clear, the above would be rejected by Haskell (if declared as a pure function.)

I guess if you learn a functional language "which has adapted the best bits and pieces from the functional programming paradigm" then you might think that the above is broken purity, but if you learn a "PURE functional programming language" then you wouldn't.

Dylan16807
0 replies
1d22h

The topic is what you would learn from a pure functional language.

A) You can learn and enforce full purity in other languages. B) You could also learn and adapt just the idea of clean inputs and outputs to those other languages.

Both of those are valid answers! It's very hard to be completely pure if you're not currently using Haskell.

The way they worded things, I wasn't sure which one they meant. They were describing option B, but I didn't know if that was on purpose or not.

So I responded talking about both. Complete purity and just the idea of clean inputs and outputs.

That code snippet is not some kind of absurd argument or strawman, it's there to demonstrate how the description they gave was not a description of purity. It's not aimed at the original question.

keybored
10 replies
2d7h

Given that you want to write code that sees "real world" use, and is used to handle data and events from the real world.

Real world? As opposed to what?

Is there any benefit to answering such polemical questions as if they are not rhetorical?

lukan
9 replies
2d7h

As opposed to the abstract academic world?

The only time I had contact with Haskell was in university, and I did not find it appealing back then, nor now, nor have I ever seen a program that I use written in it.

So learning a bit of pure Haskell might have been beneficial for me to become a better programmer, but I still fail to see it as being more than that: an academic language, useful for didactic purposes, less so for actually shipping software.

padthai
8 replies
2d5h

nor have I ever seen a program that I use, written in it

The only mass market Haskell software that I know of is Pandoc. Others like Shellcheck and Postgrest are popular in their niche.

I am not sure that Haskell is faring worse than other programming languages at its level of popularity, like Julia, Clojure or Erlang.

lukan
7 replies
2d4h

Pandoc seems useful, but maybe "mass market" is a bit of an overstatement?

And since many programmers like myself had to learn Haskell, I think Haskell should have had a head start and be in a better position, if it were so useful for "real world" use cases.

But please don't take this as an attack on Haskell. I have nothing against the language or its users, and I did not suffer because of it in university; I am just curious about the appeal. Because I love clean solutions, but I also want to ship things. So part of me wonders if I am missing out, but so far I see not much convincing data. (But I am also mainly interested in high performance and real time graphics, and Haskell is really not the best here.)

padthai
4 replies
2d3h

I am not a user of the language (although I learned it like you). I just came to chime in that (a) there is at least one very popular software written in Haskell and (b) Haskell seems to ship a good amount of software for its popularity.

Haskell never got the “killer framework” like Rails or Spark that would have allowed it to become more mainstream, even though it was taught in universities all over the world.

lemonwaterlime
2 replies
1d17h

Haskell has yesod, which is Haskell’s Rails. It’s a batteries included web app scaffold. You still need to understand monads, though. But any Haskell shop with web apps is using that.

There’s also scotty and servant for web server stuff.

There’s Esqueleto and Persistent for doing postgreSQL database queries.

And so on.

padthai
0 replies
1d7h

I took a look at Yesod and it looks more like Haskell’s Sinatra, and it came 6 years later than Rails, in 2010. By 2010 a simple web framework is table stakes, no huge differentiator.

lukan
0 replies
1d10h

Yesod seems interesting indeed.

Even though they are biased:

"From a purely technical point of view, Haskell seems to be the perfect web development tool."

But I skimmed the tutorials and can say I am really not surprised it did not take off.

The perfect web development tool is simple, in my opinion. Yesod isn't.

lukan
0 replies
2d2h

"Haskell never got the “killer framework” like Rails or Spark that allowed to become more mainstream"

But why is that the case?

Thinking about writing a "killer framework" with Haskell gives me a headache. Doing UI in Haskell? Event loop? Callbacks? Is that even possible without awkward workarounds?

lukan
0 replies
2d3h

I don't think markdown conversion is a mass market application, but maybe personally I will indeed use it soon, so that would be something I guess ..

Skinney
6 replies
2d11h

What's the benefit of learning a PURE functional programming language

1. It makes it easy to learn how to structure a program in a pure way, which is hard to do in languages that offer you an easy way out.

2. Since "everything" is pure, writing tests is easier.

3. You know for certain that if you discard the result of a function call, all the side-effects that it would normally trigger would be stopped as well.

4. A program where all side-effects are guaranteed to be pushed to the boundaries, is a program that's easy to reason about.

a language which has adapted the best bits and pieces [...]

Languages that have adapted the best bits and pieces of X, Y, and Z tend to be worse than a language designed specifically for X, Y, and Z.

For instance, Java supports functional programming but functional programming languages are much better at it because they were designed for that specific paradigm. In the same vein, sure you can write pure programs in F#, but not as easily as in Haskell that was designed for doing just that.

and is used to handle data and events from the real world

Pure code really only means (in practice) that side-effects are controlled, which is generally very helpful. It forces you to structure programs in a way which makes it easy to pinpoint where data is coming in, and where data is going out. It also makes for easier testing.

Being able to know, definitively, the answer to "will calling foo perform a network request" without having to read the source for foo is quite nice, especially when dealing with third-party code.

All this said, I probably wouldn't begin with Haskell. A language like Elm is much better suited to learning to write pure programs.
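Point 3 above is worth seeing concretely. A small sketch (using an `IORef` as a stand-in for any side effect): in Haskell an IO action is a first-class value, and merely naming it does not execute it, so discarding the value discards every effect it would have had.

```haskell
import Data.IORef (modifyIORef, newIORef, readIORef)

main :: IO ()
main = do
  ref <- newIORef (0 :: Int)
  -- An effect is just a value here; `let` describes it, never runs it.
  let _discarded = modifyIORef ref (+ 1)
  count <- readIORef ref
  -- count is still 0: discarding the action discarded its effect.
  putStrLn (if count == 0 then "no side effect" else "effect ran!")
```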

neonsunset
4 replies
2d5h

The problem with Haskell is that it's slow and memory-heavy (and OCaml is the same, but worse). F# and Scala (and Clojure?) are pretty much the only reasonably usable FP languages.

crabbone
3 replies
2d1h

Where are you getting your info from?

Typical OCaml programs, when compared to similar C++ would be slower but use less memory.

F# and Scala are both OCaml in disguise. I don't know what you mean by "reasonable"... but, if the idea is "easy to reason about", then these two don't particularly stand out much.

Languages that are easy to reason about would be generally in the category where you need to do fewer translations before you get to the way the program is executed (i.e. bytecode adds an extra step, thus making a language harder to reason about). Also, languages with fewer primitives are easier to reason about, because the program text becomes more predictable.

In general, "functional" languages are harder to reason about when compared to imperative, because computers inherently don't work in the way the programs are modeled in "functional" languages, so there will be some necessary translation layer that transforms an FP program into a real computer program. There are people who believe that FP programs are easier to reason about due to the lack of side effects. In my experience, the lack of side effects doesn't come close to compensating the advantages of being able to map the program to what computer actually does.

All kinds of behind-the-scenes mechanisms in the language, s.a. garbage collector, make the reasoning harder too, in a sense. We pretend that GC makes reasoning easier by making a mental shortcut: we pretend that it doesn't matter when memory is freed. But, if you really want a full picture, GC adds a whole new layer of complexity when it comes to understanding a program.

Yet another aspect of reasoning is the ability of reasoner to act on their reasoning. I.e. the reasoning might be imperfect, but still allow to act (which is kind of the human condition, the way we are prepared to deal with the world). So, often, while imperative programs cannot be formally easily reasoned about, it's easy to informally reason about them to be efficient enough to act on that reasoning. "Functional" programs are usually the reverse: they are easier to reason about formally, but they are very unnatural to the way humans reason about everyday stuff, so, acting on them is harder for humans.

"Functional" languages tend to be more in the bytecode + GC + multiple translations camp. And, if forced to choose with these constrains, I'd say Erlang would be the easiest and the best designed language of all the "popular" ones. SML would be my pick if you need to get into the world of Haskell, but deciphering Haskell syntax brings you to the boil.

neonsunset
1 replies
2d1h

Heh, no.

You are suggesting to replace FP languages with powerful type systems that perform marginally slower than C# and Java (and can access their ecosystems) with a language that is dynamically typed and performs, in most situations, marginally slower than PHP and marginally faster than Ruby.

crabbone
0 replies
1d4h

Every language is both statically and dynamically typed. But the more correct way of saying this is "dynamically or statically checked". Types don't appear or disappear when a program runs. The difference is in what can be known about types and at what stage.

What programmers actually care about is this:

How can we check more and sooner in a way that requires less mental energy on the side of the programmer to write?

In other words, we have three variables we want to optimize for: how much is checked, how much is checked before execution, how much effort does it take to write the check. When people argue for "statically or dynamically typed languages", they generally don't understand what they argue for (or against), as they don't have this kind of mental model in mind (they just learned the terms w/o clear understanding of what they mean).

And so do you.

So, I don't really know what do you mean when you say "dynamically typed". Which language is that? Are you talking about Erlang? SML? What aspect of the language are you trying to describe?

NB. I don't think either C# or Java have good type systems. My particular problem with these is subtyping, which is also a problem in OCaml and derivatives s.a. Scala or F#. It's not a solution anyone wanted, it's a kludge that was added into these systems to deal with classes and objects. So, if we are going after good type systems... well, it wouldn't be in any language with objects, that's for sure.

NB2. Unix Shell has a great type system. Everything is a string. It's a pleasure to work with, and you don't even need a type checker! For its domain, it seems like a perfect compromise between the three optimization objectives.

Skinney
0 replies
1d12h

Languages that are easy to reason about would be generally in the category where you need to do fewer translations before you get to the way the program is executed

This is a very interesting definition of "easy to reason about".

To me, "easy to reason about" means that it's easy for me to figure out what the intent of the code is, and how likely it is that the code does what it was intended to do.

How it translates to the machine is irrelevant.

Now, if you work in an environment where getting the most out of the machine is crucial, then I understand. In my domain, though, dealing with things like allocating and freeing memory makes it harder to see what the code is supposed to do. As a human, I don't think about which memory to store where and when that memory should be forgotten, I just act on memories.

Functional languages, then, tend to be high level enough not to expose you to the workings of the machine, which lets me focus on what I actually want to do.

freedomben
0 replies
2d1h

Agree with all the reasons, but number 1 is really the most important:

1. It makes it easy to learn how to structure a program in a pure way, which is hard to do in languages that offer you an easy way out.

When there's an escape hatch, you will reach for it at some point. It helps with getting things done, but you never end up really confronting the hard things when you have that, and the hard things are an important part of the learning/benefit.

kreyenborgi
3 replies
2d11h

Don't think of it as being all pure code, think of it as tracking in the type system which parts of your code may launch the missiles and which parts can't. Given the following program,

    main :: IO ()
    main = do
      coordinates <- getCoords
      let trajectory = calcTrajectory coordinates
      launch trajectory
    
    getCoords :: IO Coordinates
    getCoords = -- TODO
    
    launch :: Trajectory -> IO ()
    launch = -- TODO
    
    calcTrajectory :: Coordinates -> Trajectory
    calcTrajectory = -- TODO
    
I can look at the types and be reasonably certain that calcTrajectory does no reads/writes to disk or the network or anything of that sort (the part after the last arrow isn't `IO something`), the only side effect is perhaps to heat up the CPU a bit.

This also nudges you in the direction of a Functional Core, Imperative Shell architecture https://www.destroyallsoftware.com/screencasts/catalog/funct...

foobazgt
1 replies
2d11h

FYI, I think you meant functional core, imperative shell.

kreyenborgi
0 replies
2d10h

haha yes, thanks!

Barrin92
0 replies
2d11h

as tracking in the type system which parts of your code may launch the missiles

given that Haskell is lazy by default there's a million ways to shoot yourself in the foot through memory leaks and performance issues (which is not unlike the problems the IO type attempts to make explicit in that domain), so I never really understand this kind of thing. Purity doesn't say much about safety or semantics of your code. By that logic you might as well introduce a recursion type and now you're tagging everything that is recursive because you can easily kill your program with an unexpected input in a recursive function. To me this is just semantics you have to think through anyway, putting this into the type system just ends up creating more convoluted programs.

mrkeen
2 replies
2d12h

This is all myth. People don't write Haskell, because they read why other non-Haskellers also don't write Haskell, based on what other non-Haskellers wrote.

a language which has adapted the best bits and pieces from the functional programming paradigm?

Why write in a statically-typed language when dynamically-typed languages have adapted the best bits and pieces from statically-typed languages?

cosmic_quanta
1 replies
2d2h

Why write in a statically-typed language when dynamically-typed languages have adapted the best bits and pieces from statically-typed languages?

Unfortunately, dynamically-typed languages haven't adapted the best bit from statically-typed languages: that all types are enforced at compile-time!

mrkeen
0 replies
2d1h

Yep, that's the parallel I was going for.

Functional languages give you the same output for the same input, and almost-functional languages ... probably give you the same output for the same input?

agentultra
2 replies
2d

There's a video game on steam you can buy with real dollars built in Haskell.

I work full-time writing Haskell. Fintech stuff. No shiny research going on here.

I've written some libraries and programs on my stream in Haskell. One is a client library for Postgres' streaming logical replication protocol. I've written a couple of games. Working on learning how to do wave function collapse.

Believe it or not, functional programmers -- even ones writing Haskell -- often think about and deliver software for "real world," use.

tasuki
1 replies
1d22h

There's a video game on steam you can buy with real dollars built in Haskell.

Link? Story?

zogrodea
0 replies
2d11h

I can't speak for others, but I never really understood the benefits of functional programming when my language pretty much allowed unbounded mutation anywhere. I would say there's a chance for impure languages to impede you in learning what functional programming is about (or at least my experience with F# and OCaml did not really help as much as it otherwise could have I think).

Your mileage might vary, but I've heard advice from others to learn Haskell and "go off the deep-end" because of people citing similar reasons.

js8
0 replies
1d23h

In my experience, I only really learned how to write small functions after Haskell. The discipline it forces on you is a good training.

cosmic_quanta
0 replies
2d2h

Pure functional programming doesn't preclude side-effects like IO; it makes side-effects explicit rather than implicit!

At my previous job, we used pure functional programming to ensure that custom programs only had access to certain side-effects (most importantly, not IO). This meant that it was trivial to run these custom programs in multiple environments, including various testing environments, and production.
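A hypothetical sketch of that setup (`MonadLog`, `TestM`, and every name below are my invention, not the code from that job): a type class limits which effects a program can request, and each environment supplies its own interpreter for the same program.

```haskell
-- Programs written against this restricted interface can log,
-- but cannot reach arbitrary IO.
class Monad m => MonadLog m where
  logMsg :: String -> m ()

program :: MonadLog m => m ()
program = logMsg "starting" >> logMsg "done"

-- A pure interpreter for tests: effects become a list of messages.
newtype TestM a = TestM { runTestM :: ([String], a) }

instance Functor TestM where
  fmap f (TestM (w, a)) = TestM (w, f a)

instance Applicative TestM where
  pure a = TestM ([], a)
  TestM (w1, f) <*> TestM (w2, a) = TestM (w1 ++ w2, f a)

instance Monad TestM where
  TestM (w1, a) >>= k = let TestM (w2, b) = k a in TestM (w1 ++ w2, b)

instance MonadLog TestM where
  logMsg s = TestM ([s], ())

-- In production the same `program` runs against real IO instead.
instance MonadLog IO where
  logMsg = putStrLn

main :: IO ()
main = print (fst (runTestM program))
```

The test interpreter lets you assert on the exact sequence of effects without performing any of them.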

agumonkey
0 replies
2d8h

In a way, one benefit is the whole ecosystem / culture / idioms built on top. Haskellers went further in that direction than most languages (except maybe scalaz and some hardcore typescript devs).

__s
0 replies
2d3h

Haskell interfaces with the real world. ST allows for mutability in pure context

https://github.com/serprex/Fractaler/blob/master/Fractaler.h... fractal renderer I wrote in highschool, has mouse controls for zoom / selecting a variety of fractals

https://github.com/serprex/bfhs/blob/master/bf.hs brainfuck interpreter which mostly executes in pure context, returning stdout with whether program is done or should be reinvoked with character input. Brainfuck tape implemented as zipper

your program may exist in the real world, but most of it doesn't care about much of the real world

leononame
13 replies
2d12h

Great read! Can anyone here recommend a good resource for learning Haskell that's in the style of "Text-Mode Games as First Haskell Projects"? Haskell has been on my radar since forever, and I've got some FP concepts internalized by making a side project in F#, but I have no idea what a monad really is, and a fun project to code along with might be perfect.

brabel
4 replies
2d12h

I've been learning Unison [1] and I highly recommend it. It's a Haskell-like language, but with some really interesting ideas about how code should be managed and distributed. They also use algebraic effects (represented as "abilities" in Unison) instead of Monads, which gives some interesting advantages [2].

[1] https://www.unison-lang.org/

[2] https://www.unison-lang.org/docs/fundamentals/abilities/for-...

tasuki
3 replies
2d6h

Have you written anything in Unison yet? To me Unison feels extremely ahead of its time. They clearly thought things through and aren't afraid to challenge the status quo. Maybe a bit too much ahead of its time even...

I fear `ucm` a little. You mean I can't version my things with git? How do I... ehh, do anything? And how is the deployment story if one chooses not to use the Unison cloud?

carbonatom
1 replies
2d4h

Can you or someone else familiar with Unison tell us what specific things about Unison feel ahead of its time? I don't know Unison so these things will be a good motivation for me to learn Unison.

cflewis
0 replies
2d

Unison tries to swallow the whole elephant all at once, which is probably what the author is getting at.

* `ucm` is like a coding assistant that sits with you the whole time. You don't grep through code to find snippets or anything, you use `ucm`. It does a whole lot more, but that's just the trivial example.

* You need `ucm` because Unison stores code as a syntax tree, not text. This is awesome because versioning/dependency conflicts/rename issues just go away. This is not awesome because nothing else knows how to understand this: other source control systems will just not work.

* It's really trying to drag functional coding into this decade and what we use code for in production. It's not trying to be Haskell which is a great language, but doesn't (to me) feel like it was designed to do something like a simple web app.

* It is supposed to do distributed cloud computing without modification (this smells like where the VC money came from), but again, you have to use their platform because other clouds don't understand Unison.

The list goes on.

Each individual piece of Unison I think is really great. I love the `ucm` model of having an assistant sit next to you the whole time. What I don't love is that there is just _so much_ learning placed on the developer. To understand Unison, you need to understand a lot of what they're doing, a lot of what they are doing is novel, and so you have to eat the whole elephant that they are. I don't know if there was a path where they could have eaten the elephant one bite at a time, but it really makes the onboarding onerous.

brabel
0 replies
8h12m

To be honest, all I've done was try the weekly challenges that the Unison team publishes on Twitter :D

I just like the very light syntax that it got from Haskell, but without the need for Monads and the "ugly" monad operators you need to juggle, replaced by abilities which IMHO are much more elegant.

They are focusing on writing cloud services and apparently their offering is already pretty good, so if I ever need a cloud service I am tempted to use Unison (for anything not very critical, at least until I gain confidence in it): https://www.unison.cloud/our-approach/

And I love ucm and think it may be very close to what the future will look like (which is why I do agree it's ahead of our time). It's already possible to work with Unison but still have proper code review: https://www.unison-lang.org/docs/tooling/project-workflows/#...

Here's an example change: https://share.unison-lang.org/@unison/base/contributions/42/...

But currently, I believe you do need to pull the branch locally and then load diffs with ucm into your favourite diff tool to properly see what's been changed (which is not ideal, and I believe they are working on making this easier, we'll see).

tombert
1 replies
2d6h

Haven’t you heard? Monads are burritos!

In all seriousness, it's not "text mode", but one of the things that really showed me how cool Haskell and a pure model could be was Netwire and Functional Reactive Programming. It allowed me to design graphical applications the way I always wanted to, instead of how they're typically structured in imperative languages. There are lots of tutorials out there for making little games with it.

tombert
0 replies
1d20h

Ugh, autocorrect and it's too late to edit.

I was suggesting the Netwire library, an FRP library for Haskell.

trealira
0 replies
2d2h

"Programming in Haskell" by Graham Hutton has a few small text mode games in the second half of the book: Nim, Hangman, the game of life, and Tic-Tac-Toe; and it walks you through the minimax algorithm. The author also implements a solution to the countdown problem, which is hard to explain, so as an example, you're given a sequence of numbers [1, 3, 7, 10, 25, 50] and the target 765; a correct solution is (1+50)*(25-10).

The first half of the book is more geared towards a newbie to functional programming in general.

lordwarnut
0 replies
2d12h

Kind of ironically I've enjoyed the 'Write Yourself a Scheme in 48 Hours'[1] which goes over how to write your own Scheme in Haskell. It introduces some of the more interesting monads although I'm not sure how idiomatic it is.

[1] https://en.wikibooks.org/wiki/Write_Yourself_a_Scheme_in_48_...

kreyenborgi
0 replies
2d12h

https://learn-haskell.blog/

In this book, we will implement a simple static blog generator in Haskell, converting documents written in our own custom markup language to HTML.

We will:

    Implement a tiny HTML printer library
    Define and parse our own custom markup language
    Read files and glue things together
    Add command line arguments parsing
    Write tests and documentation
> In each chapter of the book, we will focus on a particular task we wish to achieve, and throughout the chapter, learn just enough Haskell to complete the task.

fire_lake
0 replies
2d11h

Write a few computation expression builders in F# and monads will quickly make sense.

usgroup
8 replies
2d5h

In my opinion, if you are after the mystical experience of understanding functional programming, you're better off learning Prolog. I think it has more to offer in terms of insight, because wrapping your head around the language only takes a couple days, but wrapping your head around its consequences is a gift which keeps on giving for quite some time.

Immutable functional programming is basically what 80% of your Prolog code will look like. The benefit is that you'll be able to understand how everything works from end-to-end.

Valodim
7 replies
2d4h

Prolog is a logic programming language though, I wouldn't expect it to have a lot of overlap with functional programming?

cess11
4 replies
2d2h

An execution pipeline in a functional language is just half a relation in Prolog. Prolog allows you to also run it 'backwards', you can provide the output and have it figure out what the inputs would need to be.

usgroup
3 replies
2d1h

Run it backwards, so they say, but not really. Most things you'll write can't run backwards. You have to write them in a special way for that to be possible, and even then all it means is that you can do a depth-first search to find the value.

There are some useful extensions like clpfd and ASP, but really, if what you're doing is solving a constraint programming problem, you're much better off with OR-tools or MiniZinc.

Prolog is beautiful. I have practically no use for it. I've struggled to find something I can do better with Prolog than other tools, but I just love it aesthetically and that's enough for me sometimes.

cess11
2 replies
1d12h

I don't know, most things I've written in Prolog 'runs backwards'. Doesn't seem special to me, things like cut and whatnot that might interfere do.

I kind of feel that clpfd, clpz and so on are just libraries, in what way do you consider them extensions to the language?

For me it's a neat way to model problems, and when I have I've commonly learned something new about the problem domain. Performance might not be great, interoperability and FFI might not be great, but as a tool for thought I really think it is.

https://www.metalevel.at/prolog

usgroup
1 replies
1d12h

In SWI, for example, clpfd and friends rely on attributed variables and hook predicates to interject their own logic into the backtracking process. SWI provides those things so that it can be extended.

Non trivial Prolog programs which only use pure predicates and do not use cut are sparse. Yet those are also the only circumstances in which things like clpfd will work without special consideration.

cess11
0 replies
1d2h

Besides pengines and web development I have very little experience with SWI specifically.

And, well, yeah, it's common (e.g. look for cuts in https://github.com/mthom/scryer-prolog/blob/master/src/lib/c... ), but I don't think I've ever written a cut and I use Prolog rather effectively as a tool for problem solving. It's an interesting and quite powerful way to model and examine problems even if I don't produce programs with 'imperative' interfaces. Some scripting tasks are also quite easy in Prolog, e.g. certain log parsing, stuff like that.

usgroup
0 replies
2d4h

Prolog variables are immutable by default. Data structures are the same as their immutable functional programming counterparts (no arrays; tree-based everything). Recursion is the only way to loop. Map, filter, fold(l/r), reduce, accumulate, etc., are staple predicates.

As I said, 80% of your code, or more, will look just like a functional program.
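
For example, a fold looks almost exactly like its functional counterpart (a sketch, assuming SWI-Prolog, whose foldl/4 and library(yall) lambdas are used here):

```prolog
% Sum a list with foldl/4 and a yall lambda, no mutation anywhere --
% the accumulator is threaded through as a pair of logic variables.
sum_list_fp(Xs, Sum) :-
    foldl([X, Acc0, Acc]>>(Acc is Acc0 + X), Xs, 0, Sum).

?- sum_list_fp([1, 2, 3], S).
%  S = 6.
```

Swap the goal passed to foldl/4 and you have map-reduce-style pipelines, just as in Haskell or OCaml.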

marcosdumay
0 replies
2d2h

It's also biased toward informal definitions, mixing side effects with logic, and encapsulating complex behavior together. It's far more biased toward being a scripting language, while Haskell has all of those biases pointed at being an application language.

So I'd say that both languages lead people into very different programming styles.

lonk
2 replies
2d

Just "15 years" or Nothing

js8
1 replies
1d23h

Are you saying they Maybe learned Haskell?

hun3
0 replies
19h5m

Either they have learned and Left the noob status, or they never really got it Right.

drwu
2 replies
2d7h

When Haskell was a hot topic around two decades ago, ML was also quite often discussed. Today, ML almost always means machine learning.

firesteelrain
0 replies
2d6h

That's what I remember from my Computer Science classes in 2001-2002. Standard ML was hard to learn back then for a newbie, especially a Computer Science newbie.

dboreham
0 replies
2d4h

ML exists today as OCaml and F#

stoorafa
1 replies
2d13h

Had a lot of fun reading this. I’d love to see some of the author’s code to get a sense of what the journey produced, if that’s even possible.

tomcam
0 replies
2d4h

Sure, lord it over the rest of us peons. We can’t all be overachievers, you know.

revskill
0 replies
2d1h

Yes, it's better to spend 15 years to learn Haskell than keep creating messy imperative programs without knowing how to improve.

LittleOtter
0 replies
1d16h

"What fascinated me about Haskell when I was still a teenager? Who knows. I had been coding with increasing enthusiasm since I was 10 or 11 but I was no wunderkind. I certainly hadn’t attained anything like the skill or, more importantly, taste I had after just a few years in the working world. What I like about it today is that it’s quite natural to program in Haskell by building a declarative model of your domain data, writing pure functions over that data, and interacting with the real world at the program’s boundaries. That’s my favorite way to work, Haskell or not."

I adore these sentences. :) When I read it, it feels like I met myself.