
Why Haskell?

cpa
96 replies
1d9h

Haskell has had a profound impact on the way I think about programming and how I architect my code and build services. The stateless nature of Haskell is something that many rediscover at different points in their careers. E.g. in webdev, it's mostly about offloading state to the database and treating the application as "dumb nodes." That's what most K8s deployments do.

The type system in Haskell, particularly union types, is incredibly powerful, easy to understand for the most part (you don't need to understand monads that deeply to use them), and highly useful.
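
For anyone who hasn't seen them, here's a minimal sketch of what I mean (a made-up Payment example):

    -- A value of type Payment is exactly one of these alternatives.
    data Payment
      = Cash Double
      | Card String Double  -- card number, amount
      | Invoice Int         -- invoice id

    -- Pattern matches must cover every alternative; with -Wall the
    -- compiler warns when a case is missing.
    describe :: Payment -> String
    describe (Cash amount)   = "cash: " ++ show amount
    describe (Card _ amount) = "card: " ++ show amount
    describe (Invoice n)     = "invoice #" ++ show n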

And I've had a lot of fun micro-optimizing Haskell code for Project Euler problems when I was studying.

Give it a try. Especially if you don't know what to expect, I can guarantee that you'll be surprised!

Granted, the tooling is sh*t.

adastra22
37 replies
1d9h

Haskell also changed the way I think about programming. But I wonder if it would have as much of an impact on someone coming from a language like Rust, or even modern C++, which have adopted many of Haskell's features?

mmoll
17 replies
1d9h

True. I often think of Rust as a best-of compilation of Haskell and C++ (although I read somewhere that OCaml had a greater influence on it, but I don’t know that language well enough)

In real life, I find that Haskell suffers from trying too hard to use the most general concept that's applicable (no pun intended). Haskell programs happily use "Either Err Val" and "Left x" where other languages would use the more expressive but less general "Result Err Val" and "Error x". Also, I don't want to mentally parse nested liftM2s or learn the 5th effect system ;-)
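
To make that concrete, a quick made-up sketch of the two styles (readMaybe is from Text.Read):

    import Text.Read (readMaybe)

    -- The maximally general type: Left/Right say nothing about intent.
    parseAge :: String -> Either String Int
    parseAge s = case readMaybe s of
      Just n  -> Right n
      Nothing -> Left ("not a number: " ++ s)

    -- A domain-specific equivalent is one declaration away and reads
    -- better at call sites, but it forfeits all the library support
    -- that Either already has.
    data ParseResult = ParseError String | Parsed Int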

hollerith
16 replies
1d6h

> I read somewhere that OCaml had a greater influence on it

Whoever wrote that is wrong.

randomdata
6 replies
1d6h

If we could wave a magic wand and remove Haskell's influence on Rust, Rust would still exist in some kind of partial form. If we waved the same wand and removed OCaml's influence, Rust would no longer exist at all.

You are the one who is wrong, I'm afraid.

lkitching
5 replies
1d5h

Which OCaml features exist in Rust but not Haskell? The trait system looks very similar to Haskell typeclasses, but I'm not aware of any novel OCaml influence on the language.

randomdata
4 replies
1d5h

> Which OCaml features exist in Rust but not Haskell?

Rust's most important feature! The bootstrapped implementation.

lkitching
3 replies
1d4h

I'm not convinced the implementation language of the compiler counts as a feature of the Rust language. If the argument is that Rust wouldn't have been invented without the original author wanting a 'systems OCaml' then fine. But it's possible Rust would still look similar to how it does now in a counterfactual world where the original inspiration was Haskell rather than OCaml, but removing the Haskell influence from Rust as it is now would result in something quite different.

randomdata
2 replies
1d4h

Rust isn't just a language, though.

Additionally, unlike some languages that are formally specified before turning to implementation, Rust has subscribed to design-by-implementation. The implementation is the language.

lkitching
1 replies
1d3h

That just means the semantics of the language are defined by whatever the default implementation does. It's a big stretch to conclude that Rust 'was' OCaml in some sense when the compiler was written in it. Especially now that the Rust compiler is written in Rust itself.

randomdata
0 replies
1d3h

You're overthinking again. Read what is said, not what you want it to say in some fairytale land.

TwentyPosts
4 replies
1d5h

The original Rust compiler was written in OCaml. That's not evidence it "had an influence", but it's highly striking considering how many other languages Graydon could've used.

hollerith
3 replies
1d4h

Yes: if a person knows nothing else about Rust and the languages that might have influenced it, then the fact that the original Rust compiler was written in OCaml should make that person conclude tentatively that OCaml was the language that influenced the design of Rust the most.

I'm not one to hold that one shouldn't form tentative conclusions until one "has all the facts". Also, I'm not one to hold that readers should trust the opinion of an internet comment writer they know nothing about. I could write a long explanation to support my opinion, but I'm probably not going to.

dimitrios1
2 replies
1d4h

It's like trying to say Elixir wasn't influenced the most by Erlang

hollerith
1 replies
1d3h

Was any Elixir interpreter or compiler ever written in Erlang?

If not, what is the relevance of your comment?

hollerith
0 replies
18h14m

Haskell has algebraic data types, pattern matching and type inference, too, and has had them since Haskell first appeared in 1990.

Although SML is older (1983), OCaml is younger than Haskell.

setopt
15 replies
1d8h

I think it does, actually. Python also has many of Haskell's features (list comprehensions, map/filter/reduce, itertools, functools, etc.). But I only started reaching for those features after learning about them in Haskell.

In Python, it's very easy to just write out a for-loop to do these things, and you don't necessarily go looking for alternative ways unless you already know the functional equivalents. But in Haskell you're forced to do things this way, since there is no for-loop available. After learning that way of thinking, though, the result is more compact code with arguably less risk of bugs.

z500
14 replies
1d6h

If anything, Python encourages you to use loops because the backwards arrangement of the arguments to map and filter makes it painful to chain them.

sgarland
12 replies
1d5h

    map(function, iterable)
That seems very logical to me, but then, I’m not a functional programmer, I just like map. It’s elegant, compact, and isn’t hard to understand. Not that list comps are hard to understand either, but they can sometimes get overly verbose.

filter has also lost ground in favor of list comps, partially because Guido hates FP [0]. Probably due to that, there has been a lot of effort towards optimizing list comps over the years, and they're now generally faster than filter (or sometimes map).

[0]: https://www.artima.com/weblogs/viewpost.jsp?thread=98196

BeetleB
11 replies
1d4h

Yes, but how do you chain them?

    map(func4, map(func3, map(func2, map(func1, iter))))
vs

    iter.map(f1).map(f2).map(f3).map(f4)
I made up the syntax for the last one, but most functional languages have a nice syntax for it. Here's F#:

    iter |> f1 |> f2 |> f3 |> f4
Or plain shell:

    command | f1 | f2 | f3 | f4

TylerE
10 replies
1d2h

You don't.

Use generator syntax, which is really the more Pythonic way to do it.

    >>> iter = [1,2,3,4]
    >>> f1 = lambda x: x*2
    >>> f2 = lambda x: x+4
    >>> f3 = lambda x: x*1.25
    >>> [f3(f2(f1(x))) for x in iter]
    [7.5, 10.0, 12.5, 15.0]

BeetleB
6 replies
1d2h

First off, writing f3(f2(f1(x))) is painful - keeping track of parentheses. If you want to insert a function in the middle of the chain you have some bookkeeping to do.

Second, that's all good and well if all you want to do is map. But what if you need combinations of map and filter as well? You're suddenly dealing with nested comprehensions, which few people like.

In F#, it'll still be:

    iter |> f1 |> f2 |> f3 |> f4
Here's an example from real code I wrote:

    graph
    |> Map.filter isSubSetFunc
    |> Map.filter doesNotContainFunc
    |> Map.values
    |> Set.ofSeq
This would not be fun to write with list comprehensions, but you could manage (only two list comprehensions). Now here's other code:

    graph
    |> removeTerminalExerciseNodes
    |> Map.filter isEmpty
    |> Map.keys
    |> Seq.map LookUpFunc
    |> Seq.map RemoveTrivialNodes
    |> Seq.sortBy GetLength
    |> Seq.rev
    |> Seq.toList
BTW, some of the named functions above are defined with their own chain of maps and filters.

Izkata
5 replies
1d1h

An alternative for python is to flip what you're iterating over at the outermost level. It's certainly not as clean as F# but neither is it as bad as the original example if there's a lot of functions:

  iter = [1,2,3,4]
  f1 = lambda x: x*2
  f2 = lambda x: x+4
  f3 = lambda x: x*1.25
  
  result = iter
  for f in [f1, f2, f3]:
    result = [f(v) for v in result]
Then the list comprehension can be moved up to mimic more closely what you're doing with F#, allowing for operations other than "map":

   result = iter
   for f in [
     lambda a: [f1(v) for v in a],
     lambda a: [f2(v) for v in a],
     lambda a: [f3(v) for v in a],
   ]:
     result = f(result)
And a step further if you don't like the "result" reassignment:

  from functools import reduce
  result = reduce(lambda step, f: f(step), [
    lambda a: [f1(v) for v in a],
    lambda a: [f2(v) for v in a],
    lambda a: [f3(v) for v in a],
  ], iter)

BeetleB
4 replies
1d

Fair, but how would it look if you had some filters and reduces thrown in the middle?

In my F# file of 300 lines[1], I do this chaining over 20 times in various functions. Would you really want to write the Python code your way every time, or wouldn't you prefer a simpler syntax? People generally don't do it your way often because it carries a higher mental burden than the simple syntax in F# and other languages does. I don't do it 20 times because of an obsession, but because it's natural.

[1] Line count is seriously inflated due to my habit of chaining across multiple lines as in my example above.

kingdomcome50
2 replies
14h10m

I think we can just let this rest. These kinds of operations are not as ergonomic in python. That's pretty clear. No example provided is even remotely close to the simplicity of the F# example. Acquiesce.

Izkata
1 replies
6h27m

You do realize this was in my original comment, right?

> It's certainly not as clean as F# but neither is it as bad as the original example if there's a lot of functions

z500
0 replies
6h3m

The fact is the language just works against you in this area if you have to jump through hoops to approximate a feature other languages just have. And I don't even mean extra syntax like F#'s pipe operators (although I do love them). Just swapping the arguments so you could chain the calls would look a lot better, if a little LISPy. It really is that bad.

itishappy
0 replies
23h40m

Generators require a __next__ method, a yield statement, or a generator expression. What you've got is lambdas and a list comprehension. Rewriting using generators would look something like:

    items = [1,2,3,4]
    gen1 = (x*2 for x in items)
    gen2 = (x+4 for x in gen1)
    gen3 = (x*1.25 for x in gen2)
    result = list(gen3)
It's nicer in a way, certainly closer to the pipe syntax the commenter you're replying to is looking for, but kind of janky to have to name all the intermediate steps.

cma
0 replies
14h28m

> f3(f2(f1

This is still backwards?

Izkata
0 replies
1d1h

You're not using generator syntax anywhere in that example.

itishappy
0 replies
1d5h

Don't Haskell and Python use the same argument order?

    filter(lambda x: x<5, map(lambda x: 2*x, [1,2,3,4,5]))

    filter (<5) . map (*2) $ [1,2,3,4,5]
(Technically the Python version should be cast to a list to have identical behavior.)

Same with comprehensions (although nesting comprehensions will always get weird).

    [x for x in [2*x for x in [1,2,3,4,5]] if x<5]

    [x | x <- [2*x | x <- [1,2,3,4,5]], x<5]

odyssey7
1 replies
1d6h

Check out Swift, too!

adastra22
0 replies
1d4h

Why would I use Swift when more cross-platform solutions exist?

TylerE
0 replies
1d2h

Heck, even coming from Python (2) it felt very underwhelming and hugely oversold. (Edit: To be fair, I'd done a bit of OCaml years earlier, so algebraic data types weren't some huge revelation.)

Laziness is mostly an anti-pattern.

moomin
15 replies
1d9h

Sum types are finally coming to C#. That’ll make it the first “Mainstream” language to adopt them. Will it be as solid and simple as Haskell’s implementation? Of course not. Will having a backing ecosystem make up for that deficiency? Yes.

pid-1
5 replies
1d6h

Python has sum types

optional_int: int | None = None

TwentyPosts
2 replies
1d5h

This is semantically not the same as a sum type (as understood in the sense of Rust, which is afaik the academically accepted way)!

Python's `A | B` is a union operation, but in Rust a sum type is always a disjoint union. In Python, if `A = B = None`, then `A | B` has one possible instance.

In Rust, this sum type has two possible instances. This might not sound like a big deal, but the semantics are quite different.
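
A small Haskell illustration of the difference, using the unit type () because it has exactly one value:

    -- A sum type keeps the Left/Right tag even when both sides are the
    -- same type, so Either () () has two distinct values:
    bothValues :: [Either () ()]
    bothValues = [Left (), Right ()]

    -- A Python-style union of () with itself would collapse to plain (),
    -- which has only one value. The tag itself carries information.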

pid-1
1 replies
21h12m

Sorry, I could not grok the difference, even after reading a few Rust examples.

def foo(x: int | None = None): ...

... just means the variable's default value is None in a function definition. But it could be either in an actual function call.

SoothingSorbet
0 replies
17h55m

There's no difference there because the types are already disjoint.

Say you wanted to define some function taking `YYMMDD | MMDDYY`. If both YYMMDD and MMDDYY are just aliases to `str`, then you gain no information, you cannot discriminate on which one it is, since the union `str | str` just reduces to `str`.

Sum types are disjoint unions; you can't just say `str | str`: the terms are wrapped in unique nominal data constructors, like:

enum Date { MMDDYY(String), YYMMDD(String) }

Then when accepting a `Date` you can discriminate which format it's in. You could do the same in Python by defining two unique types and using `MMDDYY | YYMMDD`.

nicoburns
1 replies
1d6h

Every dynamically typed language effectively has one big Sum type that holds all of the other types. IMO this is one reason why dynamic languages have been so popular (because Sum types are incredibly useful, and mainstream statically typed languages have historically had very poor support for them).
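
To make the observation concrete, a rough Haskell sketch of the single big sum type a dynamic language implicitly wraps around every value:

    -- Roughly what "any value" looks like in a dynamic language:
    data Value
      = VNumber Double
      | VString String
      | VBool   Bool
      | VList   [Value]
      | VNil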

tome
0 replies
10h58m

I like this observation! It explains a lot.

SkiFire13
5 replies
1d8h

What counts as mainstream for you?

Java has recently added sealed classes/interfaces which offer the same features as sum types, and I would argue that Java is definitely mainstream.

Kotlin has a similar feature. It might be used less than Java, but it's the default language for Android.

Swift has `enum` for sum types and is the default language for iOS and MacOS.

Likewise for Rust, which is gaining traction recently.

TypeScript also has union/sum types and is gaining a lot of traction.

thfuran
2 replies
1d3h

Sealed classes still won't let you have e.g. String|Integer, though I'll grant you that java is certainly mainstream.

kagakuninja
0 replies
1d1h

Scala 3 has had union types for 4 years now. Scala can be used to do Haskell style pure FP, but with much better tooling. And it has the power of the JVM, you can fall back to Java libraries if you want.

SkiFire13
0 replies
22h40m

You don't really need `String|Integer`; for most use cases an isomorphic type that you can exhaustively pattern match on is more than enough, and sealed classes (along with the support in `switch` expressions) do exactly that.

zozbot234
1 replies
1d6h

For that matter, Pascal has had variant records (i.e. sum types) since the 1970s.

iso8859-1
0 replies
1d6h

Did it have an ergonomic way to exhaustively match on all the variants? Since the 70s?

How does the ABI work? If a library adds a new constructor, but I am still linking against the old version, I imagine that it could be reading the wrong fields, since the constructor it's reading is now at a different index?

n_plus_1_acc
1 replies
1d8h

Rust is mainstream, just not used in enterprise applications.

gamegoblin
0 replies
17h16m

AWS uses Rust extensively

pjmlp
0 replies
1d6h

Not really, other mainstream languages got there first.

setopt
13 replies
1d8h

> Haskell has had a profound impact on the way I think about programming and how I architect my code and build services.

> And I've had a lot of fun micro-optimizing Haskell code for Project Euler problems when I was studying.

Sounds a lot like my experience. I never really used Haskell for "real work", where I need support for high-performance numerical calculations, which is simply better in other languages (Python, Julia, C/C++, Fortran).

But learning functional programming through Haskell – mostly by following the "Learn you a Haskell" book and then spending time working through Project Euler exercises using it – had a quite formative effect on how I write code.

I even ended up baking some functional programming concepts into my Fortran code later. For instance, I implemented the ability to "map" functions on my data structures, and made heavy use of "pure functions" which are supported by the modern Fortran standard (the compiler then checks for side effects).

It's however hard to go all the way on functional programming in HPC contexts, although I wish there were better libraries available to enable this.

nextos
10 replies
1d6h

> But learning functional programming through Haskell [...] had a quite formative effect on how I write code.

I think it is a shame Haskell has gained a reputation of being hard, because it can be an enriching learning experience. Lots of its complexity is accidental, and comes from the myriad of language extensions that have been created for research purposes.

There was an initiative to define a simpler subset of the language, which IMHO would have been great, but it didn't take off: https://www.simplehaskell.org. Ultimately, one can stick to Haskell 98 or Haskell 2010 plus some newer cherry-picked extensions.
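
Concretely, that choice can be pinned per file; a minimal sketch (LambdaCase stands in for whatever extensions you cherry-pick):

    {-# LANGUAGE Haskell2010 #-}  -- stick to the standard language...
    {-# LANGUAGE LambdaCase #-}   -- ...plus individually chosen extensions
    module Example where

    classify :: Int -> String
    classify = \case
      0 -> "zero"
      _ -> "something else"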

bbkane
8 replies
1d5h

I think Elm is a fantastic "simplified Haskell" with pretty good beginner-friendly guides. It's unfortunate that Elm is mostly tied to the frontend and has been effectively abandoned for the last couple of years.

Interestingly, Elm has inspired a host of "successors", including Gleam + Lustre, which look really great (I haven't had a chance to really try them yet).

nextos
4 replies
1d5h

What about Roc or Koka? Or simply moving to OCaml? It's looking pretty great after v5, with multicore and effect handlers.

giraffe_lady
3 replies
1d4h

OCaml is great but the type system is actually quite different from Haskell's once you get into it. It also has many "escape hatches" out of the functional pathway. Even if you approach it with a learner's discipline you'll run into them even in the standard lib.

With Haskell you can look to the ecosystem to see how to accomplish specific things with a pure functional approach. When you look at OCaml projects in that way you often find people choosing not to.

mattpallissard
2 replies
1d4h

Oh but the OCaml module system is the bees knees.

giraffe_lady
1 replies
1d4h

Yeah, I didn't mean any of this as a negative lol. I haven't touched Haskell since I learned OCaml. I still think Haskell has the edge as an educational language for functional programming and type systems though, which is kind of what we're talking about, but not entirely.

mattpallissard
0 replies
1d3h

No worries, I was just adding my two cents.

earth_walker
1 replies
1d3h

Elm's strengths are its constraints, which allow for simple, readable code that's easy to test and reason about - partly because libraries are also guaranteed to work within those constraints.

I've tried and failed several times to write Haskell in an Elm style, even though the syntax is so similar. It's probably me (it's definitely me!), but I've found that as soon as you depend on a library or two outside of prelude their complexities bleed into your project and eventually force you into peppering that readable, simple code with lifts, lenses, transformations and hidden magic.

Not to mention the error messages and compile times make developing in Haskell a chore in comparison.

p.s. Elm has not been abandoned, it's very active and getting better every day. You just can't measure by updates to the (stable, but with a few old bugs) core. For a small, unpopular language there is so much work going into high quality libraries and development tools. Check out https://elmcraft.org/lore/elm-core-development for a discussion.

Elm is so nice to work in. Great error messages, near-instant compile times, and a great ecosystem of static analysis, scaffolding, scripting, and hot-reloading tools make the live development cycle super nice - it actually feels like what the lispers always promised would happen if we embraced repl-driven development.

bbkane
0 replies
1d3h

Thanks for the Elmcraft FAQ link. It's a great succinct explanation from the Elm leadership perspective (though tellingly not from the Elm leadership).

I feel like I understand that perspective, but I also don't think I'm wrong in claiming Elm has been effectively abandoned in a world where an FAQ like that needs to be written.

I'm not going to try to convince you though, enjoy Elm!!

britzkopf
0 replies
1d2h

I've often wondered if its reputation for being hard is accurate. Not necessarily because of syntax etc., but because, if you don't already have a grounding in programming/engineering/comp sci, it can be difficult to fit the insights Haskell provides into any meaningful framework. That was my experience anyway; I came to it too early and didn't understand the significance.

gtirloni
0 replies
1d6h

Sounds a lot like the C++ experience.

In my time learning Haskell a decade ago, it was rare to find some code that wasn't using an experimental extension.

l5870uoo9y
0 replies
1d5h

Pure functions are a crazy useful abstraction. Complex business logic? Extract it into a type-safe pure function. Still too "unsafe"? Testing pure functions is fast and simple. Unclear what a complex function does? Extract it into meaningful pure functions.

ayakang31415
0 replies
17h35m

Haskell sounds like a good language to hone your programming skills. What kind of projects is Haskell suited for to get started (besides Euler project)? I use Python primarily for scientific research (mostly numerical computation).

jillesvangurp
10 replies
1d7h

I think the tooling being not ideal is a reflection of how mature/serious the community is about non-academic usage. Haskell has been around for ages but it never really escaped its academic nature. I actually studied in Utrecht in the nineties, where there was a lot of activity around this topic at the time. Erik Meijer, who later created F# at MS, was a teacher there, and there was a lot of activity around doing stuff with Gofer, a Haskell predecessor, which I learned and used at the time. All our compiler courses were basically fiddling with compiler generator frameworks that came straight out of the graduate program. Awesome research group at the time.

My take on this is that this was all nice and interesting, but a lot of this stuff was a bit academic. F# is probably the closest the community got to having mature tooling and a developer ecosystem.

I don't use Haskell myself and have no strong opinions on the topic. But usually a good community response to challenges like this is somebody stepping up and doing something about it. That starts with caring enough. If nobody cares, nothing happens.

Smalltalk kicked off a small tool revolution in the nineties with its refactoring browser. Smalltalk was famous for having its own IDE. That was no accident. Alan Kay, who was at Xerox PARC, famously said that the best way to predict the future was to invent it. And of course he was (and is) very active in the Smalltalk community and its early development. Smalltalk was a language community that was from day one focused on having great tools. Lots of good stuff came out of that community at IBM (Visual Age, Eclipse) and later Jetbrains and other IDE makers.

Rust is a good recent example of a community that's very passionate about having good tools as well. Down to the firmware and operating system and everything up. In terms of IDE support they could do better perhaps. But I think there are ongoing efforts on making the compiler more suitable for IDE features (which overlap with compiler features). And of course Cargo has a good reputation. That's a community that cares.

I use Kotlin myself. Made by Jetbrains and heavily used in their IDEs and toolchains. It shows. This is a language made by tool specialists. Which is why I love it. Not necessarily for functional purists, even though some Scala users have reluctantly switched to it. And the rest are flirting with things like Haskell and Elixir.

pyrale
3 replies
1d6h

> I think the tooling being not ideal is a reflection of how mature/serious the community is about non-academic usage.

I'd say it's more of a reflection of how having a very big company funding the language is making a difference.

People like to link Haskell's situation to its academic origins, but in reality, most of the issues with the ecosystem are related to acute underfunding compared to mainstream languages.

jillesvangurp
2 replies
1d5h

One doesn't happen without the other. Haskell is hugely influential with its ideas and impact. But commercially it never really took off. Stuff like that needs to come from within the community; it's never going to come from the outside.

pyrale
1 replies
1d3h

> Stuff like that needs to come from within the community

Either the community is large enough for it, or it comes from the sponsoring company.

Few languages start off by being in the first situation. The first example that comes to my mind (Python), well... Tooling was a long and painful road. And if the language hadn't been used/backed by many prominent companies, I don't see how man-hours would have flowed into tooling.

jillesvangurp
0 replies
13h31m

Python is a good example. Guido van Rossum was an academic when he built Python. And then later he got employed to work on Python because indeed a lot of people found his work useful. By the time that happened, Python was already quite widely used, though.

Also, timing-wise it's a good example, because Python emerged in the early nineties, around the same time the Haskell community started forming. Haskell had a few years' head start, actually.

The difference was that Python became popular quite early in things like Linux distributions, and even though Haskell was available and similarly easy to install in those, it never really caught on. Sponsored development usually happens as a result of people finding uses for a language, not before.

odyssey7
3 replies
1d6h

“Greece, Rome’s captive, took Rome captive.”

The languages of engineering-aligned communities may appear to have won the race, though they have been adopting significant ideas from Haskell and related languages in their victories.

mrkeen
2 replies
1d5h

Something went wrong in the adoption process.

Haskell's biggest benefit is functions, not methods. To define a function, you need to stop directly mutating, and instead rely on maps, folds, filters, etc. The bargain was: you give up your familiar and beloved for-loops, and in return you get software that will yield the same output given the same input.

So what happened with the adoption? The Java people willingly gave up the for-loops in favour of Streams/maps/filters. But they didn't take up the reward of software that yields the same output given the same input.

What's something else in the top-5 killer Haskell features? No nulls. The value proposition is: if you have a value, then you have a value, no wondering about whether it's "uninitialised". The penalty you pay for this is more verbosity when representing missing values (i.e. Maybe).

Again, the penalty (verbose missing values, i.e. Optional<>) was adopted, and the reward (confidently-present values) was not.
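
A small sketch of both halves of the bargain, with made-up names:

    -- Same input, same output: no mutation, no loop counters.
    doubledEvens :: [Int] -> [Int]
    doubledEvens = map (* 2) . filter even

    -- No nulls: absence is explicit in the type...
    firstEven :: [Int] -> Maybe Int
    firstEven xs = case filter even xs of
      (x:_) -> Just x
      []    -> Nothing

    -- ...which is what buys the reward: a plain Int argument is
    -- guaranteed to actually be there.
    half :: Int -> Int
    half n = n `div` 2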

the_af
0 replies
22h26m

> Again, the penalty (verbose missing values, i.e. Optional<>) was adopted, and the reward (confidently-present values) was not.

Ah, the joys of having a Scala `Option` type and still having to consider the cases of Some[thing], Nothing and... null!

Yes, well-written Scala code knows not to use null with reckless abandon, but since when do your coworkers coming from Java know to show restraint?

odyssey7
0 replies
1d

The type system is a big part and elements of that have shown up elsewhere. I’m with you on the belief that we should have better adoption for immutability, pure functions, and equational reasoning.

JavaScript promises can work analogously to the Maybe monad if you want them to.

Swift’s optionals are essentially the same thing as the Maybe monad.

Nelkins
1 replies
1d2h

Pretty sure F# was created by Don Syme, not Erik Meijer.

jillesvangurp
0 replies
13h25m

You are right, my mistake. They both worked for Microsoft, though, and Erik Meijer did work on things like LINQ, which was an important part of the F# ecosystem. Also, his work seems to have inspired Don Syme.

BoiledCabbage
5 replies
1d2h

> Give it a try. Especially if you don't know what to expect, I can guarantee that you'll be surprised!

And I will as strongly as possible emphasize the opposite: you should not.

If you are already experienced in functional programming, as well as in statically typed functional programming or something lovely in the ML family of languages, then and only then does Haskell make sense to learn.

If you are looking to learn about either FP in general or statically typed FP, Haskell is about the single worst language anyone can start with. More people have been discouraged from using FP because they started with Haskell than is probably appreciated. The effort-to-insight ratio for Haskell is incredibly high.

You can learn the majority of the concepts faster in another language with likely 1/10th the effort. For general FP learn Clojure, Racket, or another Scheme. For statically typed FP learn F# or Scala or OCaml or even Elm.

In fact, if you really want to learn Haskell, it is faster to learn Elm and then Haskell than it is to just learn Haskell. The amount of weeds you have to navigate through to get to the concepts in Haskell is so high that you can first learn the concepts and approach in a tiny language like Elm, and it will more than save the time it would take to understand those approaches by trying to learn Haskell. It seems unbelievable, but I found it to be very true: you can learn two languages faster than just one because of how muddy Haskell is.

Now, that said, FP is valuable and in my opinion a cleaner design, which is why in general our industry keeps shifting that way. Monoids, Functors, and Applicatives are nice design patterns. Pushing side effects to the edge of your code (which is enforced by types) is a great practice. Monads are way overhyped; thinking in types is way undervalued. But you can get all of these concepts without learning Haskell.

So that's the end of my rant, as I've grown tired of watching people dismiss FP because they confuse the great concepts of FP with the horrible warts that come with Haskell.

Haskell is a great language, and I'm glad I learned it (and am in no way an expert at it), but it is the single worst language for an introduction to FP concepts. If you're already deep in FP it's an awesome addition to your toolbox of concepts, and for that specific purpose I highly recommend it.

And finally, LYAH is a terrible resource.

dietlbomb
2 replies
21h5m

Is it worth learning JavaScript before learning Elm?

oneepic
0 replies
3h4m

Elm has stopped being updated. I naturally assumed it was quietly abandoned. https://elm-lang.org/news

BoiledCabbage
0 replies
15h49m

I'm no front-end expert, but I have a working knowledge of HTML & JS. I feel like it would still be OK without any JS background, but I could be wrong on that.

That said, the language is small enough that you can go through the tutorial in a weekend. You'll know pretty quickly whether you feel like you're picking it up or it feels too foreign.

My gut feel is that general programming experience is enough, but don't hold me to that one.

the_af
1 replies
22h29m

> And finally, LYAH is a terrible resource.

Could you elaborate? I know LYAH doesn't teach enough to write real programs, and does not introduce necessary concepts such as monad transformers, but why is it so terrible as an introduction to Haskell and FP? (In my mind, incomplete/flawed != terrible... Terrible means "avoid at all costs").

As for your overall point, I remember articles posted here on HN about someone teaching Haskell to children (no prior exposure to any other prog lang) with great success.

everial
0 replies
19h35m

(not gp, and from memory awhile ago so please forgive lack of exact quotes & page numbers)

Bunch of places where the tone masked or downplayed real issues in ways that made other text suspect. As a concrete example, `head [] -> Exception` with something like "of course it errors if you take the first part of something that's not there" and `take 1 [] -> []` with "obviously taking one thing from an empty list gets you an empty list" -- uh, no. Maybe it's a historical wart, maybe there are good technical reasons, but different behavior in these cases is definitely not obvious!

gtf21
3 replies
1d9h

> Granted, the tooling is sh*t.

I hear this a lot, but am curious about two things: (a) which bit(s) of the toolchain are you thinking about specifically -- I know HLS can be quite janky, but I haven't really been blocked by any tooling problems myself; (b) have you done much Haskell in production recently -- i.e. is this scar tissue from some time ago, or have you tried the toolchain recently and still found it to be lacking?

n_plus_1_acc
2 replies
1d8h

Every time I use cabal and/or stack, it gives me a wall of errors and I just reinstall everything all the time.

tome
0 replies
1d3h

If you share a transcript from a cabal session I'll look into this for you.

simonmic
0 replies
1d2h

And if you share stack transcripts I’ll look into those for you.

I’ve experienced this too, the tools can certainly be improved, but also a little more understanding of what they do and how to interpret their error messages could help you (I am guessing).

0x3444ac53
3 replies
1d2h

Would you mind explaining what you mean by stateless?

jgwil2
2 replies
1d

Haskell functions are pure, like mathematical functions: the same input to a function produces the same output every time, regardless of the state of the application. That means the function cannot read or write any data that is not passed directly to it as an argument. So the program is "stateless" in that the behavior does not depend on anything other than its inputs.

This is valuable because you as the developer have a lot less stuff to think about when you're trying to reason about your program's behavior.
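
For illustration, a made-up example: any "state" a pure function needs has to arrive as an argument, so every dependency is visible in the type signature.

    -- Pure: the result is fully determined by the inputs.
    applyDiscount :: Double -> Double -> Double
    applyDiscount rate price = price * (1 - rate)

    -- A lookup table must be passed in rather than read from some
    -- global; lookup is from the Prelude.
    priceFor :: [(String, Double)] -> String -> Maybe Double
    priceFor table name = lookup name table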

mrkeen
1 replies
9h14m

>> The stateless nature of Haskell is something that many rediscover at different points in their careers. Eg in webdev, it's mostly about offloading state to the database

> Would you mind explaining what you mean by stateless?

> the same input to a function produces the same output every time

It's good that the question about 'stateless' was raised, because these are two different things. Working with pure functions does indeed have the above benefits, but a dumb web node deferring its behaviour to a stateful database is not stateless in that sense, and so does not have those benefits.

jgwil2
0 replies
3m

[delayed]

cies
2 replies
1d6h

> Haskell has had a profound impact on the way I think about programming and how I architect my code and build services.

Exactly the same for me.

> Granted, the tooling is sh*t.

Stack and Stackage (a package manager and library distribution system in Haskell-land) are the best I've found in any language.

Other than that I also found some tools to be lacking.

dario_od
1 replies
1d4h

What makes you say that stack is the best you've found in any language? I use it daily, and in my experience I'd put it just a bit above PHP's Composer.

cies
0 replies
3h3m

In other package managers there is no guarantee that the libs work together. Stackage tests the whole ecosystem: you use Stackage X.Y (not libA X.Y + libB X.Y, etc).

In all other ecosystems there are some libs that do not work together well, and usually you find out the hard way.
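
In practice that just means pinning one snapshot in stack.yaml; a minimal sketch (the LTS number is only an example):

    # stack.yaml
    resolver: lts-22.7   # one Stackage snapshot = one mutually-tested set
    packages:
    - .
    # extra-deps is only needed for packages outside the snapshot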

louthy
78 replies
1d10h

I love Haskell the language, but Haskell the ecosystem still has a way to go:

* The compiler is slower than most mainstream language compilers

* Its ability to effectively report errors is poorer

* It tends to have 'first error, breaks rest of compile' problems

* I don't mind the more verbose 'stack trace' of errors, but I know juniors/noobs can find that quite overwhelming.

* The tooling, although significantly better than it was, is still poor compared to some other functional languages, and really poor compared to mainstream languages like C#

* This ^ significantly steepens the learning curve for juniors and those new to Haskell and generally gets in the way for those more experienced.

* The library ecosystem for key capabilities in 'enterprise dev' is poor. Many unmaintained, substandard, or incomplete implementations. Often trying their best to be academically interesting, but not necessarily usable.

The library ecosystem is probably the biggest issue. Because it's not something you can easily overcome without a lot of effort.

I used to be very bullish on Haskell and brought it into my company for a greenfield project. The company had already been using pure-FP techniques (functors, monads, etc.), so it wasn't a stretch. We ran a weekly book club studying Haskell to help out the juniors and newbies. So, we really gave it its best chance.

After a year of running a team with it, I came to the conclusions above. Everything was much slower -- I kept telling myself that the code would be less brittle, so slower was OK -- but in reality it sapped momentum from the team.

I think Haskell's biggest contribution to the wider community is its ideas, which have influenced many other languages. I'm not sure it will ever have its moment in the sun unfortunately.

mhitza
21 replies
1d9h

> * It tends to have 'first error, breaks rest of compile' problems

`-fdefer-type-errors` will report those errors as warnings and fail at runtime, which is good when writing/refactoring code. Even better, the Haskell LSP does this out of the box.
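
A minimal sketch of the effect (GHC's exact wording will differ):

    {-# OPTIONS_GHC -fdefer-type-errors #-}
    module Main where

    -- Reported as a *warning* at compile time instead of an error:
    oops :: Int
    oops = "this is not an Int"

    -- Runs fine as long as `oops` is never forced; evaluating it
    -- would throw the deferred type error at runtime.
    main :: IO ()
    main = putStrLn "still compiles and runs"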

> * The tooling, although significantly better than it was, is still poor compared to some other functional languages, and really poor compared to mainstream languages like C#

Which other functional programming languages do you think have better tooling? Experimenting lately with OCaml, it feels like Haskell's tooling is more mature, though OCaml's LSP starts up faster, almost instantly.

louthy
15 replies
1d9h

> Which other functional programming languages do you think have better tooling?

F#, Scala

> OCaml's LSP starts up faster

It was two years ago that I used Haskell last and the LSP was often crashing. But in general there were always lots of 'niggles' with all parts of the tool-chain that just killed developer flow.

As I state in a sibling comment, the tooling is on the right trajectory, it just isn't there yet. So, this isn't the main reason to not do Haskell.

rtpg
9 replies
1d8h

Has Scala gotten better? Because I remember it being quite painful in the past (though probably mostly due to language issues more than anything).

bad_user
4 replies
1d5h

The IntelliJ IDEA plugin for Scala is built by Jetbrains, so it has official support. It has its quirks, but so does the Kotlin plugin.

Sbt is better than Gradle IMO, as it has a saner mental model, although for apps you can use Gradle or Maven. Sbt has some awesome plugins that can help in bigger teams, such as Scalafmt (automatic formatting), Scalafix (automatic refactoring), Wartremover, and others. Scalafmt specifically is best in class. With Sbt you can also specify versioning schemes for your dependencies, and so you can make the build fail on problematic dependency evictions.

Scala CLI is also best in class, making it comfortable to use Scala for scripting – it replaced Python and Ruby for me: https://scala-cli.virtuslab.org/

Note that Java and Kotlin have Jbang, but Scala CLI is significantly better, also functioning as a REPL. Worth mentioning that other JVM languages hardly have a usable REPL, if at all.

The Scala compiler can be slow, but that's when you use libraries doing a lot of compile-time derivation or other uses of macros. You get the same effect in similar languages (with the exception of OCaml). OTOH the Scala compiler can do incremental compilation, and alongside Sbt's support for multiple sub-projects and continuous testing, it works fairly well.

Scala also has a really good LSP implementation, Metals, built in cooperation with the compiler team, so you get good support in VS Code or Vim. To get a sense of where this matters, consider that Scala 3.5 introduces "best effort compilation": https://github.com/scala/scala3/pull/17582

I also like Kotlin and sadly, it's missing a good LSP implementation, and I don't think Jetbrains is interested in developing it.

Also you get all the tooling that's JVM specific, including all the profilers and debuggers. With GraalVM's native image, for example, Scala fares better than Java actually, because Scala code relies less on runtime reflection.

I'd also mention Scala Native or ScalaJS which are nice to have. Wasm support is provided via LLVM, but there's also initial support for Wasm GC.

So to answer your question, yes, Scala has really good tooling compared to other languages, although there's room for improvement. And if you're comparing it to any other language that people use for FP, then Scala definitely has better tooling.

dionian
0 replies
1d

SBT has a learning curve but it also has a nice ecosystem; for example, sbt-native-packager is better than its competitors in Maven or Gradle.

bad_user
0 replies
14h18m

All build tools are terrible, and among the available build tools, Sbt is OK.

Let me give you an example … in Gradle, the order in which you specify plugins matters, due to the side effects. Say, if you specify something complex, like Kotlin's multiplatform plugin, in the wrong order with something else, it can break your build definition. I bumped into this right out of the gate, with my first Kotlin project.

In Sbt this used to matter as well, but because Sbt has this design of having the build definition as an immutable data structure that's fairly declarative, people worked on solving the problem (via auto-loading), and since then, I've never bumped again into ordering issues.

There are other examples as well, such as consistency. In Sbt there's only one way to specify common settings, and the keys used are consistent. Specifying Java's targeted version, for example, uses the same key, regardless if the project is a plain JVM one, or a multiplatform one.

Sharing settings and code across subprojects is another area where Gradle is a clusterfuck, whereas in Sbt it's pretty straightforward.

Don't get me wrong, Gradle doesn't bother me, and it has some niceties too. Other ecosystems would be lucky to have something like Gradle. But I find it curious to see so many people criticizing it when almost everything else is pretty terrible, with few exceptions.

---

Note that Li Haoyi has great taste, and Mill is looking good, actually. But he also likes reinventing the wheel, and the problem with that for build tools is that standardization has value.

Standardization has so much value for me that I would have liked for Scala to use Gradle as the standard build tool, and for Scala folks to work with Gradle's authors to introduce Scala as an alternative scripting language for it, despite me liking Gradle a lot less. Because it would've made switching and cross-language JVM development easier.

rbonvall
0 replies
18h49m

Sbt is too complex and powerful for its own good. I had a love-hate relationship with it, and now I try to avoid it if I can.

I like scala-cli a lot. It's very promising, but I think it's too new to be proclaimed best-in-class yet. Time will tell, and I'm rooting for it.

weebull
1 replies
1d7h

Scala has made some horrible language compromises in order to live on the JVM, in my opinion.

bad_user
0 replies
1d5h

I'd argue that Scala's "compromises" in general make it a better language than many others, independent of the JVM.

But we can talk specifics if you want. Name some compromises.

draven
1 replies
1d7h

Well, there's Intellij IDEA with the scala plugin, and it's pretty good. I regularly debug my code in the IDE with conditional breakpoints, etc.

SBT still makes me want to throw the laptop through the window.

pxc
0 replies
1d6h

In the pre-LSP era, I worked as a novice Scala developer, and I did most of my Scala work in Emacs with ENSIME. It was pretty good. I imagine the language server is pretty usable by now.

mhitza
3 replies
1d8h

Coincidentally, I started using the Haskell LSP around two years ago, and crashing is not one of the issues I've had with it.

Since you mention F# and C# in your previous comment, are you on the Windows platform? Maybe our experiences differ because of platform as well. Using GHCup to keep compatible versions of GHC, Cabal, and the LSP in sync probably contributed a lot to the consistent feel of the tooling.

I use the Haskell LSP for its autocompletion, reporting compile errors in my editor, and highlighting of types under cursor. There are still shortcomings with it that are annoyances:

* When I open up vim, it takes a good 5-10 seconds (if not a bit more) until the LSP is finally running.

* When a new dependency is added to the cabal file, the LSP needs to be restarted (usually I quit vim and reopen the project).

* Still no support for goto definition for external libraries. The workaround I have to use in this case is to `cabal get dependency-version` in a gitignored directory and use hasktags to keep a tags file to jump to those definitions and read the source code/comments.

The latter two have open GitHub issues, so at least I know they will get solved at some point.

runevault
0 replies
1d

> Since you mention F# and C# in your previous comment, are you on the Windows platform?

Since dotnet core (now dotnet 5+), the Microsoft version of dotnet has not been tied to Windows, outside a few exceptions like old Windows UI libraries (WPF/WinForms) and stuff like WCF once they revived it.

louthy
0 replies
1d8h

> Since you mention F# and C# in your previous comment, are you on the Windows platform?

Linux Mint.

devmunchies
0 replies
23h47m

I’ve been using f# in production for 4+ years and haven’t used windows in like 15 years.

Speaking of LSP, the lsp standard is developed by Microsoft so naturally any dotnet language will have good lsp support.

nh2
0 replies
1d8h

The Haskell LSP crashes less often now than 2 years ago. It isn't perfect yet, but pretty usable for us.

mattpallissard
1 replies
1d3h

> Experimenting lately with OCaml, it feels like Haskell's tooling is more mature.

I feel like OCaml has been on a fast upward trajectory the past couple of years, both in terms of language features and tooling. I expect the developer experience to surpass Haskell's if it hasn't already.

I really like Merlin/ocaml-lsp. Sure, it doesn't have every LSP feature like a tool with a lot of eyes on it, such as clangd, but it handles nearly everything.

And yeah, dune is a little odd, but I haven't had any issues with it in a while. I even have some curve-ball projects that involve a fair amount of C/FFI work.

My only complaint with opam is how switches feel a little clunky. But still, I'll take it over pip, npm, all day.

I've been using OCaml for years now and compared to most other languages, the experience has been pretty pleasant.

mhitza
0 replies
1d1h

My little experiments with OCaml have been pleasant thus far (in terms of language ergonomics), but on the tooling side Haskell (or rather I should say GHC) is pretty sweet.

For what I've had to do thus far, at one point I needed to step-debug through my code. In GHC land I reload my project in the interpreter (GHCi or cabal repl), set a breakpoint on a function name, and step through the execution. With OCaml I have to go through the separate bytecode compiler to build with debug symbols, and then I can navigate through the program's execution. The nice thing is that I can easily go back in the execution flow ("time-travel debugging"), but it's less ergonomic. I'm also less experienced with this flow, so don't consider my issues authoritative.
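
For reference, that GHCi flow looks roughly like this (function and file names are made up):

    ghci> :load MyModule.hs
    ghci> :break myFunction
    Breakpoint 0 activated at MyModule.hs:12:1-34
    ghci> main
    Stopped in Main.myFunction, MyModule.hs:12:1-34
    ghci> :step
    ghci> :continue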

I don't have that much experience with dune (aside from setting up a project and running dune build), but one thing that confused me at first, is that the libraries I have to add to the configuration do not necessarily match the Opam package names.

The LSP is fast, as mentioned before, and it supports goto definition, but once I jump to a definition in one of my dependencies I get a bunch of squiggly lines in those files (it probably can't see transitive dependency symbols, if I were to guess). I can navigate dependencies one level deeper than I can with the Haskell language server, though.

I actually want to better understand how to build my projects without Dune, and probably will attempt to do so in the future. The same way I know how to manage a Haskell project without Cabal. Feels like it gives me a better understanding of the ecosystem.

innocentoldguy
1 replies
23h16m

Elixir’s tooling is awesome, in my opinion.

travisgriggs
0 replies
16h4m

I’m curious what you think is awesome about its tooling? For me, mix is capable enough, but I consider the IDE story to be pretty lacking actually.

lkmain
0 replies
1d8h

I haven't yet felt the need for third party tooling in OCaml. OCaml has real abstractions, easily readable modules and one can keep the whole language in one's head.

Usually people do not use objects, and if they do, they don't create a tightly coupled object mess that can only be unraveled by an IDE.

catgary
12 replies
1d8h

I kind of agree that Haskell missed its window, and a big part of the problem is the academic-heavy ecosystem (everyone is doing great work, but there is a difference between academic and industrial code).

I’m personally quite interested in the Koka language. It has some novel ideas (functional-but-in-place algorithms, effect-handler-aware compilation, it uses reference counting rather than garbage collection) and is a Microsoft Research project. It’s starting to look more and more like an actual production-ready language. I can daydream about Microsoft throwing support behind it, along with some tooling to add some sort of Koka-Rust interoperability.

the_duke
11 replies
1d8h

Koka is indeed incredibly cool, but:

It sees sporadic bursts of activity, probably when an intern is working on it, and otherwise remains mostly dormant. There is no package manager that could facilitate ecosystem growth. There is no effort to market and popularize it.

I believe it is fated to remain a research language indefinitely.

catgary
10 replies
1d8h

You’re probably right. I just think it’s the only real candidate for a functional language that could enter the zeitgeist like Rust or Swift did, it’s a research language that has been percolating at Microsoft for some time. A new language requires a major company’s support, and they should build an industry-grade ecosystem for at least one problem domain.

psd1
4 replies
1d6h

F# exists

cies
2 replies
1d6h

Yups. And that's about it for F#. One can await the announcement that MSFT stops maintaining it.

devmunchies
0 replies
23h56m

A lot of major C# features were first implemented in F#. I think of it as a place for Microsoft engineers/researchers to be more experimental with novel features that still need to target the CLR (the dotnet VM). Sometimes even requiring changes to the CLR itself. In that lens, it has had a very large indirect financial impact on the dotnet ecosystem.

RandomThoughts3
0 replies
1d

People have been predicting the imminent demise of F# since its first version 20 years ago.

runevault
0 replies
1d1h

I keep (mistakenly, it seems) expecting them to push F# more along with their dotnet ML tooling since, while it is strictly typed, F# lets you mostly avoid making your types explicit, so exploration of ideas in code is closer to Python than it is to C#, while giving you the benefits of a type system and lots of functional goodies.

mgdev
3 replies
1d6h

I'm just now discovering Koka. I'm kinda blown away.

I'm also a little sad at this defeatist attitude. What you said might be true, but those things are solvable problems. Just requires a coordinated force of will from a few dedicated individuals.

bbkane
1 replies
1d5h

Be the change you want to see?

mgdev
0 replies
1d4h

Hear, hear!

catgary
0 replies
1d

There is a team of dedicated people working on Koka. They say the language isn’t production ready, and they don’t seem to be rushing. But I don’t think they’d bother with VSCode/IDE support if they didn’t feel like they were getting close.

alpinisme
0 replies
17h35m

A big if, granted, but if Roc delivers on its promises it could also be a pretty compelling language — maybe a bit too niche for super enterprisey stuff but it could definitely have a zeitgeisty moment.

gtf21
8 replies
1d9h

> The library ecosystem is probably the biggest issue.

I'd love to know which things specifically you're thinking about. For what we've been building, the "integration" libraries for postgres, AWS, etc. have been fine for us, likewise HTTP libraries (e.g. Servant) have been great.

I haven't _yet_ encountered a library problem, so am just very curious.

crote
3 replies
1d8h

A few years ago I tried to use Servant to make a CAS[0] implementation for an academic project.

One issue I ran into was that Servant didn't have a proper way of overriding content negotiation: the CAS protocol specified a "?format=json" / "?format=xml" parameter, but Servant's automatic content negotiation - which is baked deeply into its type system - offered no proper way to override it. I believe at the time I came across an ancient bug report which concluded that this was an "open research question" which would require "probably a complete rework".

Another issue was that Servant doesn't have proper integrated error handling. The library is designed around returning a 200 response, and provides a lot of tooling to make that easy and safe. However, I noticed that at the time its design essentially completely ignored failures! Your best option was basically a `Maybe SomeResponseType` which in the `Nothing` case gave a 200 response with a "{'status': 'error'}" content. There was a similar years-old bug report for this issue, which is quite worrying considering it's not exactly rocket science, and pretty much every single web developer is going to run into it.

All of this gave a feeling of a very rough and unfinished library, whose author was more concerned about writing a research paper than actually making useful software. Luckily those issues had no real-world implication for me, as I was only a student losing a few days on some minor project. But if I were to come across this during professional software development I'd be seriously pissed, and probably write off the entire ecosystem: if this is what I can expect from "great" libraries, what does the average ones look like - am I going to have to write every single trivial thing from scratch?

I really love the core language of Haskell, but after running into issues like these a few dozen times I unfortunately have trouble justifying using it to myself. Maybe Haskell will be great five or ten years from now, but in its current state I fear it is probably best to use something else.

[0]: https://en.wikipedia.org/wiki/Central_Authentication_Service

dwattttt
1 replies
1d5h

> Your best option was basically a `Maybe SomeResponseType` which in the `Nothing` case gave a 200 response with a "{'status': 'error'}" content.

This seems to be an area where my tastes diverge from the mainstream, but I'm not a fan of folding errors together. I'd rather an HTTP status code only correspond to the actual HTTP transport part, and if an API hosted there has an error to tell me, that should be layered on top.

troupo
0 replies
22h59m

Well, that's why errors have categories:

HTTP status ranges in a nutshell:

1xx: hold on

2xx: here you go

3xx: go away

4xx: you fucked up

5xx: I fucked up

(https://x.com/stevelosh/status/372740571749572610)

ParetoOptimal
0 replies
6h22m

At work we had to do both of these things, and it is possible, if I'm remembering correctly.

chii
1 replies
1d8h

Probably referring to something like Spring (for Java), which is a one-stop shop for everything, including things like integration with monitoring/analytics, rate-limiting, etc.

okkdev
0 replies
1d8h

Spring is probably the worst framework ever created, so I wouldn't list it as an example :/

louthy
0 replies
1d8h

The project was a cloud-agnostic platform-as-a-service for building healthcare applications. It needed graph DBs, Postgres, all clouds, localisation, event streams, UIs, etc. I won't list where the problems were, because I don't think it's helpful -- each project has its own needs, and you may well be lucky where we were not. Certainly the project wasn't a standard enterprise app; it was much more complex, so we had some foundational needs that perhaps your average dev doesn't. However, other ecosystems would usually have decent off-the-shelf versions, because they're more mature/evolved.

You have to realise that none of the problems were insurmountable; I had a talented team who could overcome any of the issues. It just became like walking through treacle trying to get moving.

And yes, Servant was great, we used that also. Although we didn't get super far in testing its range.

imoverclocked
0 replies
1d8h

I tried building a couple small projects to get familiar with the language.

One project did a bunch of calculation based on geolocation and geometry. I needed to output graphs and after looking around, reached for gnuplot. Turns out, it’s a wrapper around a system call to launch gnuplot in a child process. There is no handle returned so you can never know when the plot is done. If you exit as soon as the call returns, you get to race gnuplot to the temp file that gets automatically cleaned up by your process. The only way to eliminate the race is by sleeping… so if you add more plots, make sure you increase your sleep time too. :-/
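For what it's worth, the race goes away if you spawn gnuplot yourself and block on the process handle. A minimal sketch using the process package (renderPlot and the exact gnuplot invocation are my own invention here, not what the wrapper library does):

    import System.Process (proc, waitForProcess, withCreateProcess)

    -- Run a gnuplot script and block until the child exits,
    -- so the output file is fully written before we read it.
    renderPlot :: FilePath -> IO ()
    renderPlot script =
      withCreateProcess (proc "gnuplot" [script]) $ \_ _ _ ph -> do
        _ <- waitForProcess ph
        pure ()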

Another utility was a network oriented daemon. I needed to capture packets and then run commands based on them… so I reached for pcap. It uses old bindings (which is fine) and doesn’t expose the socket or any way to set options for the socket. Long story short, it never worked. I looked at the various other interfaces around pcap but there was always a significant deficiency of some kind for my use case.

Now, I’m not a seasoned Haskell programmer by any means and it’s possible I am just missing out on something fundamental. However, it really looks to me like someone did a quick hack that worked for a very specific use-case for both of these libraries.

The language is cool but I’ve definitely struggled with libraries.

ants_everywhere
7 replies
1d5h

I completely agree. I'm interested in making the Haskell tooling system better. I would welcome anyone with Haskell experience to let me know what you think would be the highest priority items here.

I'm also curious about the slowness of compilation and whether that's intrinsic to the design of GHC.

tome
1 replies
1d2h

I would welcome anyone with Haskell experience to let me know what you think would be the highest priority items here.

Simplifying cabal, probably, though that's a system-level problem, not just a cabal codebase problem.

ants_everywhere
0 replies
20h53m

thanks!

lemonwaterlime
1 replies
9h26m

The Brittany code formatter needs a maintainer. The previous one had to step away. It has a unique approach to code formatting that the ormolu/fourmolu formatters don't. There's lots on the philosophy and such in the docs.

I like it better than the ormolu family because it respects your placement of comments and just formats the code itself. But it hasn't been maintained for a few years now.

https://hackage.haskell.org/package/brittany

ants_everywhere
0 replies
2h18m

thanks!

cptwunderlich
1 replies
7h4m

The Haskell Language Server (LSP) always needs help: https://github.com/haskell/haskell-language-server/issues?q=...

As for GHC compile times... hard to say. The compiler does do a lot of things: type checking and inference for a complex type system, lots of optimizations, etc. I don't think it's just some bug or inefficient implementation, because resources have been poured into optimizations and still are. But there are certainly ways to improve speed. For single issues, check the bug tracker: https://gitlab.haskell.org/ghc/ghc/-/issues/?label_name%5B%5...

For the big picture, maybe ask in the discourse[1] or on the mailing list. If you want to contribute to the compiler, I can recommend asking for a gitlab account via the mailing list and introducing yourself and your interests. Start by picking easy tickets - GHC is a huge codebase, and it takes a while to get familiar with it.

Other than that, I'd say some of the tooling could use some IDE integration (e.g., VS Code plugins).

[1]: https://discourse.haskell.org/

ants_everywhere
0 replies
2h18m

thanks!

drblue
0 replies
1h22m

The highest priority is probably real debugging tools. Right now, the only decent one is ghc-debug, which connects to a live process, and doing anything over that connection is slow, slow, slow. ghc-debug was the only thing that was able to resolve a long-standing thunk leak in one of my systems, and I know an unexplained thunk leak caused a previous startup I was at to throw away their Haskell code and rewrite it in Rust. In my case, it found the single place where I had written `Just $` instead of `Just $!`, which I had missed the three times I had inspected the ~15k line program. ghc-debug still feels like a pre-alpha though; go compare it to VisualVM to see what other languages have.
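For anyone who hasn't hit this: here's a tiny sketch of the failure mode (a toy example of mine, not the actual system). With `$` each write stores an unevaluated thunk chained onto the previous one; `$!` forces the addition before storing it:

    import Data.IORef (newIORef, readIORef, writeIORef)

    main :: IO ()
    main = do
      ref <- newIORef (Just 0 :: Maybe Int)
      let step = do
            Just n <- readIORef ref
            writeIORef ref (Just $ n + 1)     -- leaks: stores the thunk (...((0 + 1) + 1)...)
            -- writeIORef ref (Just $! n + 1) -- fix: evaluates n + 1 before storing
      sequence_ (replicate 1000000 step)
      print =<< readIORef ref                 -- the whole chain is only forced here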

Also, I have found very little use for the various runtime flags like `+RTS -p`. These flags aren't serious debugging tools; I couldn't find any way to even trigger them internally at runtime around a small section, which becomes a problem when it takes 10 minutes to load data from disk when the profiler is on.

The debugging situation with Haskell is really, really bad and it's enough that I try to steer people away from starting new projects with the language.

robocat
6 replies
1d2h

It is a shame that the article almost completely ignores the issue of the tooling. I find the attitude in the following paragraph particularly offensive: it is academically true, but unhelpfully so:

  All mainstream, general purpose programming languages are (basically) Turing-complete, and therefore any programme you can write in one you can, in fact, write in another. There is a computational equivalence between them. The main differences are instead in the expressiveness of the languages, the guardrails they give you, and their performance characteristics (although this is possibly more of a runtime/compiler implementation question).
I decided to have a go at learning the basics of Haskell, and the first error I got immediately fazed me because it reminded me of the unhelpful compilers of the 80s. I have bashed my head against different languages and poor tooling enough times to know I can learn, but I've also done it enough times that I am unwilling to masochistically force myself through that gauntlet unless I have a very good reason to do so. The "joy" of learning is absent with unfriendly tools.

The syntax summary in the article is really good. Short and clear.

samatman
4 replies
21h21m

All mainstream, general purpose programming languages are (basically) Turing-complete, and therefore any programme you can write in one you can, in fact, write in another.

That stuck out to me as well; I said out loud "that is a very Haskell thing to say". It would be more accurate to say that Turing completeness means that any programme you write in one language may be run in another language, by writing an emulator for the first programme's runtime and executing the first programme in the second.

Because it is not "in fact" the case that a given developer can write a programme in language B just because that developer can write the programme in language A. It isn't even "in principle" the case; computability and programming just aren't that closely related. It's like saying anything you can do with a chainsaw you can do with a pocketknife because they're both Sharp Complete.

I shook it off and enjoyed the rest of the article, though. Haskell will never be my jam but I like reading people sing the virtues of what they love.

gtf21
3 replies
20h50m

I think this is being taken as me saying “therefore you can write any programme in Haskell” which, while true, was not the point I was trying to make. Instead I was trying to head off the possible interpretation that I was suggesting Haskell can write more programmes than other languages, which I don’t think is true.

computability and programming just aren’t that related

I … don’t think I understand

samatman
2 replies
20h2m

> computability and programming just aren’t that related

I … don’t think I understand

That's such a Haskell thing to say!

Ok, I'm teasing a bit now. But there's a kernel of truth to it: a good model of the FP school which forked off Lisp into ML, Miranda, and Haskell is as an exploration of the question "what if programming was more like computability theory?", pursued fairly successfully, by its own "avoid success at all costs" criteria.

Computability: https://en.wikipedia.org/wiki/Computability_theory

Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees.

Programming: https://en.wikipedia.org/wiki/Computer_programming

Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks.

Related, yes, of course, much as physics and engineering are related. But engineering has many constraints which are not found in the physics of the domain, and many engineering decisions are not grounded in physics as a discipline.

So it is with computability and programming.

“therefore you can write any programme in Haskell” which, while true

It is not. That's my point. One can write an emulator for any programme in Haskell, in principle, but that's not at all the same thing as saying you can write any programme in fact.

For instance, you cannot write this in Haskell:

http://krue.net/avrforth/

You could write something in Haskell in which you could write this, but those are different complexity classes, different programs, and very, very different practices. They aren't the same, they don't reduce to each other. You can write an AVR emulator and run avrforth in it. But that's not going to get the blinkenlichten to flippen floppen on the dev board.

Haskell, in fact, goes to great lengths to restrict the possible programs one can write! That's one of the fundamental premises of the language, because (the hope is that) most of those programs are wrong. Roughly the first half of your post is about things like accidental null dereferencing, which Haskell won't let you do.

In programming, the tools one chooses, one's abilities with those tools, and the nature of the problem domain all intersect to, in fact, restrict and shape the nature, quality, completeness, and even getting-startedness, of the program. Turing completeness doesn't change that, and even has limited bearing on it.

rramadass
0 replies
13h58m

Nicely said, this in particular:

In programming, the tools one chooses, one's abilities with those tools, and the nature of the problem domain all intersect to, in fact, restrict and shape the nature, quality, completeness, and even getting-startedness, of the program.

Language shapes thought, and hence once the simpler imperative programming models (procedural, OOP) are learnt, it becomes quite hard for a programmer to switch mental models to FP. The FP community has really not done a good job of educating such programmers, who are the mainstay of the industry.

gtf21
0 replies
10h36m

Oh ok I get what you mean now, I thought you were being a bit more obtuse than that.

So my original intent with that paragraph was very different, but you're right that I was not very precise with some of those statements.

Thanks for taking the time to explain, you've definitely helped expand the way I've thought about this.

gtf21
0 replies
1d

It is a shame that the article almost completely ignores the issue of the tooling.

Mostly because, while I found the tooling occasionally difficult, I didn’t find Haskell particularly bad compared to other language ecosystems I’ve played with, with the exception of Rust, whose compiler errors are really good.

The syntax summary in the article is really good

Thanks, I wasn’t so sure how to balance that bit.

Vosporos
5 replies
1d9h

If you are willing / able to report these pain points in detail to the Haskell Foundation, this is going to be valuable feedback that will help orient the investments in tooling in the near future.

adastra22
3 replies
1d9h

All bug reports are good. But is this not obvious? Do the Haskell developers not use other language ecosystems? This goes beyond “this edge case is difficult” and into “the whole tooling stack is infamously hard to work with.” I just assumed Haskell, like Emacs, attracted a certain kind of developer that embraced the warts.

mrkeen
1 replies
1d9h

No, we use plenty of other stuff.

My $DAYJOB language:

* Can't build a binary

* Uses an inexplicable amount of memory.

* Has an IDE which constantly puts itself into a bad state. E.g. it highlights and underlines code with red even when I know it's a pristine copy that passes its tests. I periodically have to close the project, navigate to it in the terminal, run 'git status --ignored' and delete all that crap and re-open the project.

* Is slow to start up.

* Has a build system with no obvious way to use a 'master list' of version numbers. In our microservice/microrepo system, it is a PITA to try to track down and remove a vulnerable dependency.

* Has been receiving loads of praise over the last 18 months for starting to include stuff that Haskell has included for ages. How's the latest "we're solving the null problem" going?

What the GHC compiler does for me is just so much better at producing working software than $DAYJOB language + professional $DAYJOB IDE, that I don't think about the tooling.

If you want to put yourself in my shoes: imagine you're getting shit done with TypeScript every day, and some C programmers come along and complain that it's missing the bare minimum of tools: strace, valgrind and gdb. How do you even reply to that?

actualwitch
0 replies
1d8h

If you want to put yourself in my shoes: imagine you're getting shit done with TypeScript every day, and some C programmers come along and complain that it's missing the bare minimum of tools: strace, valgrind and gdb. How do you even reply to that?

You tell them to strace/valgrind `node whatever.js`, and instead of gdb to use the built-in V8 debugger via `node inspect whatever.js`.

gtf21
0 replies
1d8h

We do use other ecosystems, yes. I haven't really found the tooling for Haskell to be particularly obstructive compared to other languages. I've run into plenty of mysteries in the typescript, python, ObjC/Swift, etc. ecosystems that have been just as irritating (sometimes much more irritating), and generally find that while HLS can be a bit janky, GHC is very good and I spend less time scratching my head looking at a piece of code that should work but does something wild than in other languages.

louthy
0 replies
1d9h

I think tooling is something that is clearly on a good trajectory. When I consider what the Haskell tooling was like when I first started using it, well, it was non-existent! (and Cabal didn't even understand what dependencies were, haha!)

So, it's much, much better than it was. It's still not comparable to mainstream languages, but it's going the right way. So, I wouldn't necessarily take that as the killer.

The biggest issue was the library ecosystem. We spent a not-small amount of time evaluating libraries, realising they were not up to scratch, trying to build our own, or interacting with the authors to understand their plans. When you're trying to get moving at the start of a project, this can be quite painful. It takes longer to get to an MVP. That's tough when there are eyes on its success or not.

Even though I'd been using Haskell for at least a decade before we embarked upon that path, I hadn't really ever built anything substantial. The greenfield project was a complex beast on a number of levels (which was one of the reasons I felt Haskell would excel: it would force us to be more robust with our architecture). But we just couldn't find libraries that were good enough.

My sense was that there are a lot of academics writing libraries. I'm not implying that academics write poor code; just that their motivations aren't always aligned with what an industry dev might want, usually around simplicity and ease of use. And, because quite a lot of libraries were either poorly documented or their intent was impenetrable, it would take longer to evaluate them.

I think if the Haskell Foundation are going to do anything, then they should probably write down the top 50 packages needed in industry, and then put some funding/effort towards helping the authors of existing libraries to bring them up to scratch (or potentially, developing their own), perhaps even creating a 'mainstream adoption style guide' that standardises the library surfaces -- there's far too much variability. It needs a keen eye on what your average industry dev needs though.

I realise there are plenty of companies using Haskell successfully, so this should only be one data point. But, it is a data point of someone who is a massive Haskell (language) fan.

Haskell has had a massive influence on me and how I write code. It's directly influenced a major open-source project I have developed [1]. But, unfortunately, I don't think I'll use it again for a pro project.

[1] https://github.com/louthy/language-ext

pyrale
2 replies
1d7h

Most of your comments boil down to two items:

- The Haskell ecosystem doesn't have the budget of languages like Java or C# to build its tooling.

- The Haskell ecosystem was innovative 20 years ago, but some newer languages like Rust or Elm have much better ergonomics because they learnt from their forebears.

Yes, it's true. And it's true for almost any smaller language out there.

troupo
0 replies
1d7h

Counterpoint: Elixir. While it sits on top of industrial-grade Erlang VM, the language itself produced a huge ecosystem of pragmatic and useful tools and libraries.

louthy
0 replies
1d6h

If you boil my comments down, sure, you could say that. But that's why I didn't boil them down and used more words: because ultimately, they don't say that.

The thread is "Why Haskell?", I'm offering a counterpoint based on experience. YMMV and that's fine.

devjab
2 replies
1d5h

compared to mainstream languages like C#

Out of curiosity does this also hold true for F#?

louthy
1 replies
1d1h

F#’s tooling is worse than C#’s for sure, but it’s a big step up from Haskell and has access to the .NET framework.

I listed C# because that’s the mainstream language I know the best, and arguably has best-in-class tooling.

Of course you have to be prepared to lose some of the creature comforts when using a more left-field language. But, you still need to be productive. The whole ecosystem has to be a net gain in productivity, or stability, or security, or maintainability — pick your poison depending on what matters to your situation.

I had hoped Haskell would pay dividends due to its purity, expressive type system, battle-testedness, etc. I expected us to be slower, just not as slow as it turned out.

Ultimately the trade off didn’t work for us.

devjab
0 replies
14h52m

Thank you for the answer. It’s exactly because of C#’s excellent tooling that I was wondering if they had done the same for F#.

The whole ecosystem has to be a net gain in productivity, or stability, or security, or maintainability — pick your poison depending on what matters to your situation.

I very much agree with you on this. I’ve worked in places where we used Typescript on the back-end because it was easier for a small team to work together (and go on vacations) while working in the same language even though there was a trade off performance wise. Ultimately I think it’s always about finding the best way to be productive.

RandomThoughts3
2 replies
1d

The Haskell community is also very opinionated when it comes to style, and some of the choices are not to everyone’s taste. I’m mostly thinking of point-free being seen as an ideal, and the liberal usage of operators.

ParetoOptimal
1 replies
6h25m

Point-free and liberal use of operators are and have long been minority viewpoints in Haskell.

I say this as someone who prefers symbols, point free, and highly appreciates "notation as a tool of thought".

RandomThoughts3
0 replies
2h29m

For a minority viewpoint, it seems fairly pervasive in the library ecosystem, at least to me. Then again, I hate most usage of point-free style and I think the correct number of custom operators is exactly zero, so I might be particularly sensitive to it.

nh2
1 replies
4h12m

The compiler is slower than most mainstream language compilers

Depends on which mainstream languages one compares with; there's always C++.

My project here has 50k lines of Haskell, 10k of C++, and 50k lines of TypeScript (code only, not comments). Counting compile throughput in user CPU time (1 core, Xeon E5-1650 v3 3.50GHz):

    TypeScript 123 lines/s
    Haskell     33 lines/s
    C++          7 lines/s

Liquid_Fire
0 replies
2h48m

Can you clarify what "7 lines/s" means? Surely you are not saying that your 10k lines of C++ take more than 23 minutes to compile on a single core? Is it 10k lines of template spaghetti?

For comparison, I just compiled a 25k line .cpp (probably upwards of 100k once you add all the headers it includes) from a large project, in 15s. Admittedly, on a slightly faster processor - let's call it 30s.

moomin
0 replies
1d9h

No lies detected. I love Haskell, but productivity is a function of the entire ecosystem, and it’s just not there compared to most mainstream languages.

cubefox
69 replies
1d8h

Using "Maybe" as a positive example of what Haskell can do isn't right. Say you have a function with input of type A and output of type B, written (A -> B). The problem with Maybe ("option" types) then is that if you have a function, in production use, of type (X -> Maybe Y), you can't "weaken your assumptions for the input and strengthen your promises for the output" (which would be an improvement) and rewrite it to the type (Maybe X -> Y). Because then you would have to modify all the code which already uses the function. Since "A" and "Maybe A" are incompatible types. Which is illogical.

Several other null-safe languages solve this correctly by allowing disjunctions of types (often called unions, though countless other type related things are also called unions). They have a type operator "|" (or) and the function type (X -> Y|Null) can be improved by rewriting the function to (X|Null -> Y). Code outside the function doesn't have to be changed: Accepting X or Null implies accepting X, and returning Y implies returning Y or Null.

pyrale
19 replies
1d7h

if you have a function, in production use, of type (X -> Maybe Y), you can't "weaken your assumptions for the input and strengthen your promises for the output" (which would be an improvement) and rewrite it to the type (Maybe X -> Y).

If your value of Y is predicated on receiving an X, I have trouble seeing how you would write such a function. If you have a default value, then Haskell would solve it just like any language with optionals:

    y :: Int -> String

    fromMaybe "defaultValue" (fmap y (Just 3))
> Several other null-safe languages [...] returning Y implies returning Y or Null.

I have trouble seeing how the language is null-safe in that situation.

cubefox
18 replies
1d6h

If your value of Y is predicated on receiving an X

We didn't assume it is. Say you have a function of type (String -> String|Null). Further assume that you realize you don't necessarily need a String as input, and that you in fact are able to always output a string, no matter what. This means you can rewrite (improve!) the function such that it now has the type (String|Null -> String). Relaxing the type requirements for your inputs, or strengthening the guarantees for the type of your output, or both, is always an improvement. And there is no logical reason why you would need to change any external code for that. But many type systems are not able to automatically recognize and take advantage of this logical fact.

> Several other null-safe languages [...] returning Y implies returning Y or Null.

I have trouble seeing how the language is null-safe in that situation.

If you always assign a value of type Y to a variable of type Y|Null, the compiler will enforce a check for Null if you access the value of the variable, which is unnecessary (as the type of the variable could be changed to Y), but it can't result in a null pointer exception.

itishappy
6 replies
1d3h

Several other null-safe languages [...] returning Y implies returning Y or Null.

If `Y` is implicitly `Y|Null`, then it is no longer possible to declare "this function does not return null" in the type system. Now understanding what a function can return requires checking the code or comments. This is the opposite of null safe.

cubefox
5 replies
1d3h

If `Y` is implicitly `Y|Null`

It isn't. It's just that if you say "this function returns Y or null", and it returns Y, your statement was true. If you give me a hammer, this implies you gave me a hammer or a wrench.

itishappy
4 replies
1d2h

It must. If it is possible to rewrite `X -> Y|Null` as `X|Null -> Y` without changes to external code, then the `X` type needs to accept `X|Null` and the `Y` type needs to accept `Y|Null`, therefore any `T` must implicitly be `T|Null` and the language is not null safe. Result types are what you get when you require explicit conversions.

I may still be thinking about this incorrectly. Do you have an language in mind that contradicts this?

cubefox
3 replies
23h32m

You seem to think `X -> Y|Null` and `X|Null -> Y` have to be equivalent, but that's not the case. The second function type has a more general input type and a more restricted return type. And a function which can accept X as an input can be replaced with a function that can accept X or Null (or X or anything else) as input type. And a function which can return type Y or Null (or Y or anything else) can be replaced with a function that can return type Y. Old call site code will still work. Of course this replacement only makes sense if it is possible to improve the function in this way from a perspective of business logic.

I may still be thinking about this incorrectly. Do you have an language in mind that contradicts this?

Any language which supports "union types" of this sort, e.g. Ceylon or, nowadays, Typescript.

itishappy
2 replies
21h59m

I get it! (Thanks, playing around with actual code helped a ton.) For example, in Typescript you're saying you can add a default value simply:

    // old
    function f(x: number): number {
        return 2 * x;
    }

    // new
    function f(x: number|null): number {
        x = x ?? 3;
        return 2 * x;
    }

    // usage
    // old
    f(2)
    // new
    f(2) // still works!
But in Haskell this requires changing the call sites:

    -- old
    f :: Int -> Int
    f = (*2)

    -- new
    f :: Maybe Int -> Int
    f = maybe 3 (*2)

    -- usage
    -- old
    f 2
    -- new
    f (Just 2) -- different!
But I actually feel this is an antipattern in Haskell (and maybe TypeScript too), and a separate wrapper function avoids refactoring while making things even more user friendly.

    -- old
    f :: Int -> Int
    f = (*2)

    -- new
    fMaybe :: Maybe Int -> Int
    fMaybe = maybe 3 f

    -- usage
    -- old
    f 2
    -- new
    f 2 -- still works!
    fMaybe Nothing -- works too!
Here are some wrappers for general functions (not that they're needed, they're essentially just raw Prelude functions):

    maybeIn :: b -> (a -> b) -> (Maybe a -> b)
    maybeIn = maybe

    maybeOut :: (a -> b) -> (a -> Maybe b)
    maybeOut = fmap Just

    maybeBoth :: b -> (a -> b) -> (Maybe a -> Maybe b)
    maybeBoth d = maybeOut . maybeIn d
Added bonus, this approach avoids slowing down existing code with the null checks we just added.

itishappy
1 replies
19h41m

Got `maybeOut` wrong, can't edit, but it should be:

   maybeOut :: (a -> b) -> (a -> Maybe b)
   maybeOut = (.) Just
Also the parentheses around the last two output types are added for emphasis, but can be safely removed.

itishappy
0 replies
18h42m

Last reply. Probably. Here's `maybe` in TypeScript:

    const maybe = <T,>(f: (_: T) => T, d: T) => (x: T | null) => f(x ?? d);

    console.log(maybe((x) => 2 * x, 3)(null)); // prints: 6

pyrale
3 replies
1d6h

The mainstream is languages that will happily accept null as anything, and crash at runtime. Sure, union types are cool, but they aren't expressible in most languages, while the optional construct is.

Haskell's type system definitely is a positive example of what can be done to avoid the null problem completely. Is it the utmost that can be done? No. But it's been a working, proven solution for 20 years, while proper typecheckers for union types are a recent thing.

cubefox
2 replies
1d6h

Yeah but there are arguably different standards for Haskell. Haskell's advanced type system is one of its main selling points, so it doesn't make sense to explain the benefits of Haskell with a case (Maybe) where its type system falls short (no "or" in types).

pyrale
1 replies
1d6h

Haskell's advanced type system is one of its main selling points, so it doesn't make sense to explain the benefits of Haskell with a case where its type system falls short.

Falls short compared to what? Arguably, if you're talking to someone using Java or Python, Maybe is plenty enough; and getting started on type families is certainly not going to work well.

cubefox
0 replies
1d5h

These languages don't have null safety. Haskell does have null safety, but at the cost of the additional complexity that comes with Maybe wrapping. So it's not as unambiguously an improvement as union typing is (which adds less complexity but still grants null safety).

akritid
3 replies
1d6h

This came to mind while considering your interesting point: After such a change, wouldn’t you feel the urge to inspect all users of the stricter return type and remove unnecessary handling of a potential null return?

cubefox
2 replies
1d4h

I don't know about such urges. But sometimes there is no possibility to inspect all user code, e.g. when you are providing a library or API function.

akritid
1 replies
19h10m

Good point. In such a case I would probably consider leaving the signature as-is, even after tightening, and possibly offer a function with a stricter signature for new code to use, while deprecating the older variant. This would inform users without rug-pulling.

cubefox
0 replies
10h16m

But that's not necessary in a language with union types of this sort. No rugs being pulled.

kreetx
2 replies
1d6h

I don't mean to appear to be talking down to you, but the "relaxation" or "strengthening" you talk about corresponds exactly to either (1) changing the function that you use at the call site, or (2) changing the "external code" function. The thing you call an "improvement" sounds like a plain type error to me.

gtf21
0 replies
1d5h

Yeah I also really don't understand the point that's being made here: it looks like a great way to introduce more errors.

cubefox
0 replies
1d5h

Then you are misunderstanding something...

magicalhippo
14 replies
1d7h

I don't use Haskell, so this is a dumb question. Why can't X be implicitly cast to Maybe X?

tobz619
8 replies
1d7h

It can, using pure or return. If we're working with Maybe specifically, Maybe is defined like so:

    data Maybe a = Just a | Nothing

So to make an X a Maybe X, you'd put a Just before a value of type X.

For example:

    one :: Int
    one = 1

    mOne :: Maybe Int
    mOne = Just one -- alternatively, pure one, since our type signature tells us what pure should resolve to

The reason we can do this is that Maybe is also an Applicative and a Monad, and so implements pure and return, which take an ordinary value and wrap it up into an instance of what we want.

magicalhippo
5 replies
1d5h

mOne = Just one

I'd call that explicit casting. Implicit casting would be

    mOne = one
The compiler already knows what "one" is, so it could insert the "Just" itself, no? Possibly via an operator defined on Maybe that does this transformation?

That is, are there some technical reasons it doesn't?

Or is it just (no pun intended) a language choice?

yakshaving_jgt
4 replies
12h35m

Why would this be useful? Why do you want the types to change underneath you?

cubefox
3 replies
7h44m

Better question: Why would you want your call site code to break when your type signature gets changed in a way that doesn't necessitate breaking anything?

yakshaving_jgt
2 replies
7h34m

Because what you're asking for precludes the concept of mathematical guarantees. I'm not taking your question at face value, because you could be asking why call site code should break when the type signature generalises (which is a useful thing), but that's not what you're asking.

It seems you're asking for code to be both null safe and not null safe simultaneously.

Having a language just decide that it would like to change the types of the values flowing through a system is wild. It's one of the reasons that JavaScript is a trash fire.

cubefox
1 replies
3h34m

You are misunderstanding things.

yakshaving_jgt
0 replies
3h13m

I’m certainly misunderstanding why so many people in this thread insist on speaking authoritatively on a topic they clearly know very little about.

ninkendo
0 replies
1d6h

Sounds similar to how you need to do Some(x) when passing x to something expecting an Option in Rust.

Swift interestingly doesn’t require this, but only because Optionals are granted lots of extra syntax sugar in the language. It’s really wrapping it in .some(x) for you behind the scenes, but the compiler can figure this out on its own.

This means that in swift, changing a function from f(T) to f(T?) (ie. f(Optional<T>)) is a source-compatible change, albeit not an ABI-compatible one.

cubefox
0 replies
1d7h

Isn't that explicit casting? Implicit casting would be automatically performed by the compiler without the need to (re)write any explicit code.

mrkeen
4 replies
1d7h

Haskell/GHC tells you what the types are. Proper, global, most-general type inference. Not that local inference crap [1] that the sour-grapes types will say is better.

You lose this ability if you start letting the compiler decide that `Int` is as good as `Maybe Int`. Or if an `Async (Async String)` may as well be an `Async String`.

That's not to say it's not easy to transform (just a few keystrokes), but explicit beats implicit when it comes to casting (in any language).

[1] Does this work?

  var x = 1 == 2 ? Optional.of(2L) : Optional.empty().map(y -> y + y);
  // Operator '+' cannot be applied to 'java.lang.Object', 'java.lang.Object'
How about

  Optional<Long> x = 1 == 2 ? Optional.of(2L) : Optional.empty().map(y -> y + y);
  // Operator '+' cannot be applied to 'java. lang. Object', 'java. lang. Object'
or even

  Optional<Long> x = 1 == 2 ? Optional.of(2L) : Optional.empty().map((Long y) -> y + y);
  // Cannot infer functional interface type
No, we needed Optional.<Long>empty() instead of Optional.empty() to make it work

magicalhippo
3 replies
1d6h

letting the compiler decide that `Int` is as good as `Maybe Int`

I was thinking more like explicitly telling the compiler that an implicit cast is OK, in other languages done by implementing the implicit cast operator for example.

edit: but if I understood you correctly, Haskell just doesn't support any implicit casting?

mrkeen
2 replies
1d6h

It will do some wrangling of literals for you, as long as it can unambiguously decide on an exact type during type-checking.

If no other info is given, it will treat `3 + 3` as Integer + Integer (and emit a compiler warning because it guessed the type).

With `(3 :: Int64) + 3`, the right 3 will resolve to Int64. Same if you swap their positions.

`(3 :: Int64) + (3 :: Int32)` is a compile error.

"Text literals" can become a String, a Text, or a ByteString if you're not explicit about it.

implicit cast operator

Wouldn't that make it explicit?

magicalhippo
0 replies
1d5h

Wouldn't that make it explicit?

No, the casting is still done implicitly. That is, I can make the following compile fine in Delphi (using inline variable declarations) if I add an implicit cast operator to either Foo or Bar:

    var x: Foo := Foo.Create;
    var y: Bar := x;
If neither of them have a suitable implicit cast operator defined, it will of course fail to compile.

Just an example, nothing unique about Delphi. You can see an example of the operator definition here[1].

[1]: https://docwiki.embarcadero.com/RADStudio/Alexandria/en/Oper...

kyjasdfwus
14 replies
1d7h

Scala seems to be the only language that recognizes the merit of having both unions and ADTs. It even has GADTs!

cubefox
10 replies
1d6h

Yet ironically Scala is not null safe I believe.

cubefox
7 replies
1d4h

Opt-in is better than nothing, but in practice I assume this sees little use because it breaks compatibility with old code. Null safety (and type safety in general) has to be present in a programming language from the start; it can't realistically be added as a feature later.

vips7L
6 replies
1d3h

I don't think I agree. C# added it and it's gone well, and Java is adding null safety without breaking backwards compatibility at all.

cubefox
4 replies
1d3h

Types in old code may be nullable or not; the compiler doesn't know, so the only way it can ensure null safety when using old code is by forcing you to do null checks everywhere. That's not very practical. Moreover, the old code itself may still produce null pointer exceptions.

vips7L
2 replies
1d2h

But that doesn't break compatibility like you claimed, and it also doesn't support your conclusion that it will likely not be used.

The compiler doesn't know, so the only way it can ensure null safety when using old code is by forcing you to do null checks everywhere

This isn't necessarily true. Java's approach is to have 3 kinds of types: nullable, non-null, and platform (just like Kotlin). Platform types are types without nullness markers and don't require strict null checks, to avoid breaking backwards compatibility. Yes, old code may still produce null pointers, but we don't need 100% correctness right away. At some point platform types will be 1% of code in the wild rather than 100%.

cubefox
1 replies
23h10m

At some point platform types will be 1% of code in the wild rather than 100%.

In the case of Java this could take decades. Or people simply continue to write platform types because they are lazy (the compiler doesn't force them). Then platform types will never decrease substantially.

vips7L
0 replies
22h8m

I don't think that's true and I don't think there is any data to back that up. We've already seen in the C# community rapid adoption of nullness markers. This whole goal post moving and the idea that if we can't have 100% we shouldn't do it at all is a bit exhausting so I think I'm done here. Cheers man.

valenterry
0 replies
1d

Well yeah, there is no way around this on the JVM. That's one of the JVM's problems/drawbacks. Everything can be null and everything can throw exceptions.

But in practice, as long as you use only Scala libraries and are careful when using Java libraries it's not really an issue. (speaking as a professional Scala developer for many many years)

neonsunset
0 replies
1d2h

If anything, it's the default to have them on in any reasonably recent project - you get that with all templates, default projects, etc. Actively maintained libraries are always expected to come with NRT support. If that's not the case, it's usually a sign the library is maintained by developers who ignore the conventions/guidelines and actively went out of their way to disable them, which is usually a strong signal to either file an issue or avoid such a library entirely.

Similar logic applies to code that has migrated over to .NET 6/8, or any newly written code past ~.NET 5.

dionian
0 replies
13h41m

I have been doing Scala professionally for a decade, and I have probably had 1 NPE per year on average, max. And pretty much never from an established Scala lib. By convention you don't use null - heavy use of Option, etc.

kyjasdfwus
0 replies
1d7h

Yep, Scala is influenced by Haskell.

iso8859-1
0 replies
1d6h

You're misinterpreting 'GHC2024'. It's just a language edition, a shorthand for enabling a bunch of extensions. You have been able to enable GADTs for many years now, with just a single pragma. It has been built into GHC for all these years.

tobz619
11 replies
1d7h

If you have a value "aValue :: a" and a monadic function "mFunc :: (a -> Maybe b)", that's essentially just asking you to use `>>= :: Maybe a -> (a -> Maybe b) -> Maybe b` as well as `pure :: Applicative f => a -> f a`, which will lift our regular `aValue` to a `Maybe a` in this instance.

Then to get the result "b" you can use the `maybe :: b -> (a -> b) -> Maybe a -> b` function to get your "b" back and do the weakening as you desire.

`Maybe` assumes a computation can fail, and the `maybe` function forces you to give a default value in the case that your computation fails (aka returns Nothing) or a transformation of the result that's of the same resultant type.

Overall, you'd end up with a function call that looks like:

    foo :: b
    foo = maybe someDefaultValueOnFailure someFuncOnResult (pure aValue >>= mFunc)

Or, if you don't want to change the result, you can use `fromMaybe :: a -> Maybe a -> a`:

    bar :: b
    bar = fromMaybe someOtherDefaultValueOnFailure (pure aValue >>= mFunc) -- if the last computation succeeds, return its value at the resultant type of your computation
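To make that concrete, here's a toy instantiation (safeHead and the default are invented for illustration):

    import Data.Maybe (fromMaybe)

    safeHead :: [a] -> Maybe a
    safeHead (x:_) = Just x
    safeHead _     = Nothing

    headOrZero :: [Int] -> Int
    headOrZero xs = fromMaybe 0 (pure xs >>= safeHead)

    -- headOrZero [5, 6] == 5
    -- headOrZero []     == 0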

HelloNurse
9 replies
1d7h

This is fine and understandable in theory, but a usability disaster in practice.

If function f returns b or nothing/error, and is then improved to return b always, client code that calls f should not require changes or become invalid, except perhaps for a dead code warning on the parts that deal with the now impossible case of a missing result from f.

You are suggesting not only a pile of monad-related ugly complications to deal with the mismatch between b and Maybe b, which are probably the best Haskell can do, but also introducing default values that can only have the practical effect of poisoning error handling.

the_af
8 replies
1d6h

If function f returns b or nothing/error, and is then improved to return b always, client code that calls f should not require changes or become invalid

Why do you need to change the type signature at all? You "improved" [1] a function to make it impossible for the error case to occur, but it's used everywhere and the calling code must handle the error case (I mean, that's the point of static typing of this sort). So there you have it: the client code is not rendered invalid, it just has dead code for handling a case that will never happen (or more usually, this just bubbles up to the error handler, not even requiring dead code at every call site).
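A toy sketch of what I mean (the names are invented):

    lookupUser :: Int -> Maybe String       -- signature kept as-is
    lookupUser _ = Just "always succeeds"   -- "improved": Nothing can no longer occur

    caller :: Int -> String
    caller n = case lookupUser n of
      Just name -> name
      Nothing   -> "unreachable now, but harmless dead code"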

As an aside, I don't see the problem with the "pile of monads" and it doesn't seem very complicated.

----

[1] which I assume means "I know I'll be calling this with values that make it impossible for the error to occur". If you are actually changing the code, well, it goes without saying that if the assumptions you made when choosing the type changed when re-writing the function, well... the calling sites breaking everywhere is a strength of static typing.

HelloNurse
7 replies
1d5h

Changing the type signature (which, by the way, could at least in easy cases be deduced implicitly by the compiler rather than edited by hand) allows new client code to assume that the result is present.

cubefox
3 replies
1d5h

(or absent in the case of input parameters)

the_af
2 replies
1d5h

A function in which the input is needed for the computation is very different from one where it's not needed. I would expect the type signature to reflect this; why would you want it otherwise?

cubefox
1 replies
1d4h

Say you have a function which expects objects of type Foo as an input and which returns objects of type Baz. One day, the function is improved by also accepting the type Bar, i.e. Foo|Bar. So Foo isn't needed for the computation, because Bar is also accepted.

Or you have a function which expects objects of type String as an input. But then you realize that in your case, null values can be handled just like empty strings. So the input type can be relaxed to String|Null.

tobz619
0 replies
10h23m

There's a difference between empty strings and Null values imo.

Just "" != Nothing

If you want to handle empty strings as an input in Haskell, then you have a function of type `f :: String -> b` and you pattern match on your input:

  f "" = someResult
  f ...
Nothing assumes a proper null, in that there is genuinely nothing to work with. Still, you can write a function to handle it, or use `maybe`.

the_af
2 replies
1d5h

Changing the type signature to relax/strengthen pre or post conditions is a fundamental change though. I would expect it to break call sites. That's a feature!

HelloNurse
1 replies
1d2h

Strengthening postconditions and relaxing preconditions is harmless in theory, so it should be harmless in practice.

Haskell gets in the way by facilitating clashes of incompatible types: there are reasons to make breaking changes to type signatures that in more deliberately designed languages might remain unaltered or compatible, without breaking call sites.

the_af
0 replies
1h32m

If function f returns b or nothing/error, and is then improved to return b always, client code that calls f should not require changes or become invalid, except perhaps for a dead code warning on the parts that deal with the now impossible case of a missing result from f.

You can achieve this by not changing the type and keeping the result as Maybe b. Dead code to handle `Nothing`, no harm done.

However, you clarify you don't want this because:

Changing the type signature (which, by the way, could be at least in easy cases implicitly deduced by the compiler rather than edited by hand) allows new client code to assume that the result is present.

But this cuts both ways. If the old call site can assume there may be errors (even though the new type "b" doesn't specify them) then the new call site cannot assume there are no errors (what works for old must work for new).

I must say I see no real objection to the proposal at https://news.ycombinator.com/item?id=41519649 besides "I don't like it", which is not very compelling to me.

cubefox
0 replies
1d7h

Perhaps that theoretically solves the problem, but it sounds awfully complicated in practice.

solomonb
4 replies
1d2h

So in your hypothetical language with union types `X | Null -> Y` is a function that can actually return `Y | Null`? Why would you want to allow that as an implicit behavior? This would make for surprising and unclear error handling requirements.

One of the main points of encoding error information in the type system is that it forces you to account for it when you modify your code.

By "weakening" your assumptions on your function to allow it to produce Null values, you have introduced a new requirement at all your call sites. Everywhere that you call this function now needs to handle the Null value. It's a GOOD thing that Haskell forces you to handle this via the type system.

cubefox
3 replies
23h20m

So in your hypothetical language with union types `X | Null -> Y` is a function that can actually return `Y | Null`?

No, but a function which returns Y | Null can be replaced with a function which returns Y without changing code on the call site.

Imagine I always used to give you a nail (Y) or nothing (Null), and you can handle both receiving a nail and receiving nothing. Then I can, at any time, change my behavior to giving you nails only. Because you can already handle nails, and the fact that I now never give you nothing doesn't bother you. You just perform a (now useless) check of whether you have received a nail or nothing.

solomonb
2 replies
21h32m

a function which returns Y | Null can be replaced with a function which returns Y without changing code on the call site.

Yes this falls out of injectivity.

the function type (X -> Y|Null) can be improved by rewriting the function to (X|Null -> Y)

I agree that any value received by the former function (`X`) can be received by the latter function (`X|Null`). However you cannot rewrite the former to have the signature of the latter.

You would need to write:

prf : (X -> Y|Null) -> X|Null -> Y

You would have to be able to convert a `Null` value into a `Y`.

You could definitely use `X|Null -> Y` to implement `X -> Y|Null` but that is not what you are claiming.

cubefox
1 replies
10h43m

However you cannot rewrite the former to have the signature of the latter.

Of course I can. I can always change the code to anything I like. That's what "rewriting" is. The question is only whether the business logic still makes sense, and whether the old call site code still works. Just look at the example I gave above.

You could definitely use `X|Null -> Y` to implement `X -> Y|Null` but that is not what you are claiming.

"Implementing" is a special case of rewriting, so how can you say you can implement something but not rewrite it?

solomonb
0 replies
2h39m

Of course I can. I can always change the code to anything I like. That's what "rewriting" is. The question is only whether the business logic still makes sense, and whether the old call site code still works. Just look at the example I gave above.

You have a function that can return a Null response and you are claiming you can rewrite it to be one that does not return a Null.

This means that in the cases where your function previously produced a `Null`, you have to produce a `Y`. You claimed you can do this if you write the function to receive `X|Null`. In other words you are claiming you can write `(X -> Y|Null) -> X|Null -> Y`. I challenge you to write this function.

"Implementing" is a special case of rewriting, so how can you say you can implement something but not rewrite it?

I didn't say that. You claimed you can write `(X -> Y|Null) -> X|Null -> Y`. I am saying that is impossible but you could write `(X|Null -> Y) -> X -> Y|Null`. Do you see the difference?

dsign
1 replies
1d7h

data Maybe a = Nothing | Just a deriving (Eq, Ord)

> Because then you would have to modify all the code which already uses the function, as "A" and "Maybe A" are incompatible types. Which is illogical.

I'm not so sure about your statement. If the type of the function changes, revising its usage at every use point is good for your sanity. I would go further and say that sometimes Maybe X is too weak, because it doesn't contain precise semantics for its Nothing and Just x alternatives. For example, sometimes you want a `Nothing` for a value that hasn't yet been found, but could potentially be filled up the evaluation chain, e.g. `NothingYet`, and a different Nothing for a value that is conclusively not there, e.g. `TerrifyingVoid`. If you fork your `Nothing` value into these two variants after you discover the need for it, you will have to revise each call anyway, and check what's the proper course of action. And this is a feature I wish I could use from Python.
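Concretely, that fork might look like this (just a sketch, using the constructor names above):

    data Lookup a
      = NothingYet      -- not found yet, but could be filled up the evaluation chain
      | TerrifyingVoid  -- conclusively not there
      | Found a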

More generally, in large Haskell code bases, having the type-checker help you track, at compile time, code that breaks far away from where you made your changes, is an incredible time-saver.

cubefox
0 replies
1d7h

Yes, there are edge cases where you would like to have multiple different "kinds of null", but those use cases seem so uncommon in practice that they are mostly irrelevant.

iso8859-1
65 replies
1d6h

We can generalise this idea of being forced to handle the failure cases by saying that Haskell makes us write total functions rather than partial functions.

Haskell doesn't prevent endless recursion. (try e.g. `main = main`)

As the typed FP ecosystem is moving towards dependent typing (Agda, Idris, Lean), this becomes an issue, because you don't want the type checker to run indefinitely.

The many ad-hoc extensions to Haskell (TypeFamilies, DataKinds) are tying it down. Even the foundations might be a bit too ad-hoc: I've seen the type class resolution algorithm compared to a bad implementation of Prolog.

That's why, if you like the Haskell philosophy, why would you restrict yourself to Haskell? It's not bleeding edge any more.

Haskell had the possibility of being a standardized language, but look at how few packages MicroHS compiles (Lennart admitted to this at ICFP '24[0]). So the standardization has failed. The ecosystem is built upon C. The Wasm backend can't use the Wasm GC because of how idiosyncratic GHC's RTS is.[1]

So what unique value proposition does GHC have left? Possibly the GHC runtime system, but that's not as sexy to pitch in a blog post like this.

[0]: Lennart Augustsson, MicroHS: https://www.youtube.com/watch?v=uMurx1a6Zck&t=36m

[1]: Cheng Shao, the Wasm backend for GHC: https://www.youtube.com/watch?v=uMurx1a6Zck&t=13290s

samvher
27 replies
1d5h

For a long time already I've wanted to make the leap towards learning dependently typed programming, but I was never sure which language to invest in - they all seemed either very focused on just proofs (Coq, Lean) or just relatively far from Haskell in terms of maturity (Agda, Idris).

I went through Software Foundations [0] (Coq) which was fun and interesting but I can't say I ever really applied what I used there in software (I did get more comfortable with induction proofs).

You're mentioning Lean with Agda and Idris - is Lean usable as a general purpose language? I've been curious about Lean but I got the impression it sort of steps away from Haskell's legacy in terms of syntax and the like (unlike Agda and Idris) so was concerned it would be a large investment and wouldn't add much to what I've learned from Coq.

I'd love any insights on what's a useful way to learn more in the area of dependent types for a working engineer today.

[0] https://softwarefoundations.cis.upenn.edu/

pmarreck
7 replies
1d1h

One reason I took interest in Idris (and lately Roc, although it's even less mature) is the promise of a functional but usable to solve problems today language with all the latest thinking on writing good code baked-in already, compiling to a single binary (something I always envied about Go, although unfortunately it's Go). There simply isn't a lot there yet in the space of "pure functional language with only immutable values and compile time type checking that builds a single fast binary (and has some neat developer-friendly features/ideas such as dependent types, Roc's "tags" or pattern-matching with destructuring)" (this rules out OCaml, for example, despite it being mature). You get a lot of that, but not all of it, with other options (OCaml, Elixir/Erlang, Haskell... but those 3 offer a far larger library of ready-to-import software at this point). Haskell did manage to teach everyone who cares about these things that managing side-effects and keeping track of "purity" is important.

But it's frankly still early-days and we're still far from nirvana; Rust is starting to show some warts (despite still being a massive improvement over C from a safety perspective), and people are looking around for what's next.

One barely-touched thing is that there are compiler optimizations made possible by pure functional/pure immutable languages (such as caching a guaranteed result of an operation where those guarantees simply can't be given elsewhere) that have simply been impossible until now. (Roc is trying to go there, from what I can tell, and I'm here for it! Presumably, Rust has already, as long as you stick with its functional constructs, which I hear is hard sometimes)

staunton
6 replies
22h9m

Rust is starting to show some warts (despite still being a massive improvement over C from a safety perspective)

The way it seems to me is that actually Rust aims to be an improvement over C++ rather than C. (And Zig aims to be an improvement over C rather than C++.)

The major downsides of both will be the same as their reference point: Rust will eventually be too complicated for anyone to understand while still not being really safe (and the complexity then comes back to bite you one more time). Zig will be easy to understand and use but too unsafe to use for important applications (at least once people start really caring about software in important applications).

Both of these will be fairly niche, because compiling to a single binary just isn't that important, as elegant as it might be.

samatman
5 replies
19h34m

Zig will be easy to understand and use but too unsafe to use for important applications

This is an outside perspective on Zig, and I have to say, not an informed one.

If you'd like to understand Zig (and what I mean) better, this video is a good start: https://www.youtube.com/watch?v=w3WYdYyjek4

Zig is, right now, being used for high-assurance systems where correctness is a terminal value, and it provides many tools and affordances to assist in doing so. It isn't a good choice for projects which give lip-service to correctness, but for ones which actually mean it, and are willing and able to put in the effort and expense to achieve it, it's an excellent choice. I'm willing to gloss that domain as "important applications", but perhaps you meant something different by that term.

staunton
4 replies
11h54m

Zig is, right now, being used for high-assurance systems

I'm not convinced this is telling us very much. I was talking about software that, e.g., causes deaths when it fails. But regardless of what level of assurance one looks at, most "high assurance" systems continue to be built using C. The very few that use Zig surely chose it primarily due to compatibility with C, with the hope that it's safer than C (a low bar). Maybe in some cases a wish to play around with new toys also played a role in the decision, though in "important applications" I'd like to hope that's rare.

In the end, we'd have to look at harm done due to bugs. For C, I'd say the record is abysmal. For Zig, it's way too early to tell.

My judgement above is mostly based on Zig not making it at all hard to violate memory safety and the analogy with C. Needless to say, Zig is better than C in this respect and that's a good thing. If your argument is something like "memory safety doesn't really matter, even for critical applications", we'll just not agree on this.

ArtixFox
2 replies
8h14m

I think you are wrong in a funny way. Memory safety and memory leaks and stuff don't always matter.

This sparked an interesting memory for me. I was once working with a customer who was producing on-board software for a missile. In my analysis of the code, I pointed out that they had a number of problems with storage leaks. Imagine my surprise when the customer's chief software engineer said "Of course it leaks". He went on to point out that they had calculated the amount of memory the application would leak in the total possible flight time for the missile and then doubled that number. They added this much additional memory to the hardware to "support" the leaks. Since the missile will explode when it hits its target or at the end of its flight, the ultimate in garbage collection is performed without programmer intervention.

https://groups.google.com/g/comp.lang.ada/c/E9bNCvDQ12k/m/1t...

While a funny example, you are very wrong. High-assurance systems have a specification and a bazillion tests, because memory safety is an insanely tiny problem in the sea of problems they face. [That is why Frama-C and friends are preferred over Rust, Ada, ATS, and whatever else exists.] Correctness of the spec and perfectly following the spec is far more important. Zig makes correct code feel easier to write when compared to C. That's why it was chosen in TigerBeetle, and that's why, if the community wants, it will have an insanely bright future in high-assurance systems.

KsassPeuk
1 replies
7h22m

As a Frama-C developer (more precisely, a developer of the deductive verification tool), I'd say that formal proof of programs (especially proof that the program conforms to its specification) would be significantly easier in Rust. The main reason is the handling of memory aliases, which is a nightmare in C and generates formulas that kill SMT solvers. The consequence is that ongoing development tends to target something that has a lot in common with Rust: we try to assume memory separation most of the time and check that it holds on function calls, but this is harder to do than with a type system.

samatman
0 replies
2h30m

the handling of memory aliases which is a nightmare in C

Zig has some aliasing issues which are to-date unsolved. The core and community are keenly aware of them, and they'll be solved or greatly ameliorated before a 1.0 release. It's why TigerBeetle has a coding standard which requires all container types to be passed by constant pointer, not reference.

It isn't ideal, but I think it's reasonable to withhold judgement on a pre-1.0 language for some sorts of known issues which are as yet unaddressed. It's worth taking the time to find an excellent solution to the problem, rather than brute-force it or regress one of the several features which combine into the danger zone here.

If you're curious or interested in the details, look up "Attack of the Killer Features" on YouTube. Given your background, I'm sure the community would value your feedback on the various issues tracking this.

samatman
0 replies
2h44m

If your argument is something like "memory safety doesn't really matter, even for critical applications"

Not at all, not even close. What I will say is "memory safety can be achieved by policy as well as by construction, and indeed, can only be achieved by construction through policy".

Let's break that down. Rust and Go are two examples of languages generally referred to as memory-safe. Rust achieves this by construction, through the borrow checker; Go achieves it by construction, through the garbage collector.

If there are bugs in the borrow checker, or the garbage collector, then the result is no longer memory-safe. That assurance can only be achieved by policy: the garbage collector and the borrow checker must be correct.

TigerBeetle, the program I linked to, achieves memory safety with a different policy. All allocation is performed once, at startup. After this, the replica's memory use is static. This, if correct, is memory-safe.

Zig makes this practical, because all allocation in the standard library is performed using the allocation interface: any function which allocates receives an Allocator as an argument. Libraries can violate that policy, but they generally don't, and TigerBeetle is a zero-dependency program, so that's not a concern for them. Other languages where it maybe isn't immediately obvious if something goes on a heap? Not so easy to achieve a memory policy like that one.

So this:

Zig not making it at all hard to violate memory safety

Is irrelevant. What's needed in high-assurance systems is the practical ability to create memory safety by policy, and Zig provides a difference-in-class in this matter compared to C.

most "high assurance" systems continue to be built using C

Yes, other than your scare quotes, this is correct. Programs like SQLite are memory safe by construction and exhaustive testing, you're welcome to try and land a CVE if you disagree. Every few years someone gets a little one, maybe you'll be the lucky next player. Or you could try your luck at QEMU.

My premise is that Zig makes it massively easier to achieve this, through numerous choices which add up: the allocator interface, built-in idiomatic leak detection, a null-safe type system, slices and arrays carrying bounds, and much else. It has `std.testing.checkAllAllocationFailures`, which can be used to verify that a function doesn't leak or otherwise misuse memory even if any one allocation anywhere downstack of a function call fails. You might want to compare this with what happens if a Rust function fails to allocate.

Basically you're taking the lessons of C, with all of its history and warts, and trying to apply them, without due consideration, to Zig. That's a mistake.

bmitc
7 replies
22h45m

When I last looked into Lean, I was highly unimpressed, even for doing math proofs. There's no way I'd invest in it as a general-purpose language.

Idris at least does state that they want people building real programs with it and don't want it to just be a research language.

For dependent types, I myself am skeptical about languages trying to continuously push more and more stuff into types. I am not certain that such efforts are a net positive on writing good software. By their very definition, the more typed a language gets, the fewer programs it can represent. That obviously reduces buggy programs, but it also reduces non-buggy programs that you can implement. Highly typed languages force more and more effort into pre-compile time, and you will often find yourself trying to fit a problem into the chains of the type system.

Rather, I think reasonably multi-paradigm languages like F# are the sweet spot. Just enough strict typing and functional core to get you going for most of your program, but then it allows classes and imperative programming when those paradigms are appropriate.

I think the way to go to write better software is better tooling and ergonomics. I don't think type systems are going to magically save us.

tsimionescu
3 replies
21h56m

By their very definition, the more typed a language gets, the fewer programs it can represent. That obviously reduces buggy programs, but it also reduces non-buggy programs that you can implement.

While I generally share your skepticism, I think this is quite wrong. A good part of the point of advanced type systems is to make more complex problems possible while still being well typed. For example, in C, if you want a function whose return type is tied to an input argument's type, you either use void* and casts (no type safety), or you don't write that function. In languages with even slightly more advanced type systems, you can write that function and still get full type safety.

Even more advanced type systems achieve the same things: you can take programs that can only be written in a simpler type system and make them safe. In standard Haskell, for example, you can't write a Monad and actually have the compiler check that it respects the Monad laws - the implementation of Monad functions just assumes that any type that implements the right shape of functions will work as a monad. With dependent types, you can actually enforce that functions designed to work with monads only apply to types that actually respect the monad laws.

The trade-off with very complex type systems is different, in my opinion: after some point, you start duplicating your program's logic, once in the implementation code, but again in the type signatures. For example, if you want to specify that a sort function actually sorts the input list, you might find that the type specification ends up not much shorter than the actual code of the function. And apart from raw effort, this means that your type specifications start being large enough that they have their own bugs.
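
A minimal Haskell sketch of the first point, not taken from the comment above (the `headOr`/`Elem`/`firstOr` names are invented for illustration): ordinary parametric polymorphism already ties the return type to an argument's type, and a type family lets the result type be computed from the input type, all checked at compile time.

    {-# LANGUAGE TypeFamilies #-}

    -- The return type is tied to the argument's element type;
    -- in C this needs void* and casts.
    headOr :: a -> [a] -> a
    headOr d []      = d
    headOr _ (x : _) = x

    -- Going further: the result type is *computed* from the input type.
    type family Elem c where
      Elem (Maybe a) = a
      Elem [a]       = a

    class FirstOr c where
      firstOr :: Elem c -> c -> Elem c

    instance FirstOr (Maybe a) where
      firstOr d Nothing  = d
      firstOr _ (Just x) = x

    instance FirstOr [a] where
      firstOr d []      = d
      firstOr _ (x : _) = x

For example, `firstOr 'z' "haskell"` is a `Char`, while `firstOr 0 (Just 5 :: Maybe Int)` is an `Int`, with no casts anywhere.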

User23
1 replies
16h53m

I think GP's point was that most[1] programs that can be represented will fail to please the programmer or his principals. The act of programming is navigating the state space of all possible programs and somehow finding one that has the desired properties and also doesn't otherwise suck. When viewed through that lens, a type system preventing most programs from being represented is a good thing, since odds are every single program it prevents is one that is unpleasant or otherwise sucks.

[1] of the countably infinite possible programs, virtually all

auggierose
0 replies
8h36m

That would make sense if writing a program were similar to randomly drawing a program from a pot of programs.

If instead I have a good idea what I want to write, the type system may either guide me towards the solution, or hinder me. It usually hinders me, I don't need a type system to guide me, but I like a type system that can check for trivial errors (oh, I meant to pass a list of numbers, not just a single number).

IshKebab
0 replies
3h42m

For example, if you want to specify that a sort function actually sorts the input list, you might find that the type specification ends up not much shorter than the actual code of the function. And apart from raw effort, this means that your type specifications start being large enough that they have their own bugs.

Not to mention the tools to debug complex type errors are generally much less mature than the tools to debug runtime errors.

But even so, I think we could still benefit from going a little further towards the "proof" end of the type system spectrum in most cases. I don't think anyone really wants to deal with Coq and similar, but having used a language with dependent types for integers and vector lengths it's really nice to be able to say stuff like "this integer is in the range [0, 8)" and then have it catch errors when you pass it to a function that expects [0, 3) or whatever.
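
You can get a rough feel for this in today's Haskell with type-level naturals. Here is a sketch (the `Fin`/`fin` names are invented, and unlike real dependent types this doesn't track ranges through arithmetic):

    {-# LANGUAGE DataKinds #-}
    {-# LANGUAGE KindSignatures #-}
    {-# LANGUAGE ScopedTypeVariables #-}

    import Data.Proxy (Proxy (..))
    import GHC.TypeNats (KnownNat, Nat, natVal)

    -- An integer statically tagged with an exclusive upper bound.
    newtype Fin (n :: Nat) = Fin Word
      deriving (Show)

    -- Smart constructor: the only way to build a Fin, so the bound holds.
    fin :: forall n. KnownNat n => Word -> Maybe (Fin n)
    fin w
      | fromIntegral w < natVal (Proxy :: Proxy n) = Just (Fin w)
      | otherwise                                  = Nothing

    -- A function that demands an argument in [0, 3); passing a
    -- Fin 8 here is a compile-time type error.
    smallOnly :: Fin 3 -> Word
    smallOnly (Fin w) = w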

a57721
1 replies
11h56m

When I last looked into Lean, I was highly unimpressed, even for doing math proofs.

I remember exploring different proof assistants for the first time in the 2000s. Back then, only people with a background in logic were involved, and most of the proofs that were formalized as showcases were of textbook results from the 19th century at most, or some combinatorial stuff like the four-color theorem.

I believe Voevodsky was one of the first prominent non-logicians to become interested in proof assistants, using Coq around 2010. Nowadays, several mathematicians coming from algebraic geometry, number theory, etc. are promoting formal proofs, and it seems like most of them have chosen Lean. I don't know whether this is because Lean is somehow better suited for working mathematicians, or if it was simply a random personal preference among people who got enthusiastic about this stuff and started advertising it to their colleagues?

I am not familiar with every proof assistant out there, but many of them are a very hard sell for mathematicians and lack a comprehensive math library. Lean seems to be one of the few exceptions.

nextos
0 replies
3h43m

Isabelle also has a fairly large set of mathematical proofs and supporting libraries, see https://www.isa-afp.org.

People routinely publish new proofs there; it is actually a regular refereed journal.

ykonstant
0 replies
5h47m

When I last looked into Lean, I was highly unimpressed, even for doing math proofs. There's no way I'd invest in it as a general-purpose language.

Can you elaborate? I am using Lean as a general-purpose language writing simple little programs, so I have not encountered the deeper parts of the runtime etc. I'd like to see some criticism/different perspectives from more experienced people.

iso8859-1
6 replies
1d5h

Lean aims to be a general purpose language, but I haven't seen people actually write HTTP servers in it. If Leo de Moura really wanted it to be general purpose, what does the concurrent runtime look like then? To my knowledge, there isn't one?

That's why I've been writing an HTTP server in Idris2 instead. Here's a todo list demo app[1] and a hello world demo[2]. The advantage of Idris is that it compiles to e.g. Racket, a high level language with a concurrent runtime you can bind to from Idris.

It's also interesting how languages don't need their own hosting (e.g. Hackage) any more. Idris packages are just listed in a TOML file[3] (like Stackage) but still hosted on GitHub. No need for versions, just use git commit hashes. It's all experimental anyway.

[1]: https://janus.srht.site/docs/todolist.html [2]: https://git.sr.ht/~janus/web-server-racket-hello-world/tree/... [3]: https://github.com/stefan-hoeck/idris2-pack-db/blob/main/STA...

ants_everywhere
4 replies
1d5h

(like Stackage) but still hosted on GitHub

I don't have much experience with Haskell, but one of the worst experiences has been Stack's compile time dependency on GitHub. GitHub rate limits you and builds take forever.

tome
3 replies
1d3h

That's interesting. Could you say more? This is something that we (speaking as part of the Haskell community) should fix. As far as I know Stack/Stackage should pick up packages from Hackage. What does it use GitHub for?

ants_everywhere
2 replies
20h44m

I'm not entirely sure where it uses GitHub and where Hackage, but there are a few GitHub issues on the Stack repo about it:

- Binary upgrade of Stack fails due to GitHub API request limit #4979 (https://github.com/commercialhaskell/stack/issues/4979)

- GitHub rate limiting can affect Stack CI #6034 (https://github.com/commercialhaskell/stack/issues/6034)

And a few more. The "fix" is having Stack impersonate the user (https://github.com/commercialhaskell/stack/pull/6036) and authenticate to the API. This unblocks progress, but this is really a design bug and not something I think people should emulate.

Every other language I've used allows you to build code without authenticating to a remote service.

tome
1 replies
11h30m

Thanks! So it seems to be not packages, but templates, and this comment suggests it wasn't GitHub doing the rate limiting after all: https://github.com/commercialhaskell/stack/issues/4979#issue...

Every other language I've used allows you to build code without authenticating to a remote service.

Sure, the problem here wasn't "building". It was downloading a package template (which one doesn't tend to do 60 times per hour). I agree packages shouldn't be fetched from GitHub.

ants_everywhere
0 replies
2h7m

and this comment suggests it wasn't GitHub doing the rate limiting after all

That comment is from someone other than the ticket filer who was seeing another issue even after sending the GitHub token. It was this second issue that wasn't caused by GitHub rate limiting -- the original one was.

It was downloading a package template (which one doesn't tend to do 60 times per hour).

I've personally had Stack-related GitHub API rate limiting delay builds by at least an hour due to extreme slowness. So whatever the rate limits are, Stack occasionally hits them.

orbifold
0 replies
1d3h

There are tasks, which are implemented as part of the runtime, and they appear to plan to integrate libuv in the future. Some of the runtime seems fairly easy to hack on, and there are somewhat nice ways of interoperating with C, C++, and Rust.

agentultra
1 replies
1d4h

Lean can be used to write software [0]. I dare say that it may even be the intended use for Lean 4. Work on porting mathlib to Lean 4 is far along and the mathematicians using it will certainly continue to do so. However there is more space for software written in Lean 4 as well.

However...

it's nowhere near ready for production use. They don't care about maintaining backwards compatibility. They are more focused on getting the language itself right than they are about helping people build and maintain software written in it. At least for the foreseeable future. If you do build things in it you're working on shifting ground.

But it has a lot of potential. The C code generated by Lean 4 is good. Although, that's another trade-off: compiling to C is another source of "quirks."

[0] https://agentultra.github.io/lean-4-hackers/

staunton
0 replies
22h15m

Work on porting mathlib to Lean 4 is far along

As far as I understand, that work is in fact done.

saghm
0 replies
16h0m

I took that course as well, and for me, the big takeaway wasn't that I specifically want to use Coq for anything practical, but the idea that you can actually do quite a lot with a non-Turing complete language. Realizing that constraints in a language can be an asset rather than a limitation is something that I think isn't as widely understood as it should be.

mebassett
0 replies
1d2h

Lean definitely intends to be usable as a general purpose language someday. But I think the bulk of the people involved are more focused on automated theorem proving. The Lean FRO [0] has funds to guide development of the language and they are planning to carve out a niche for stuff that requires formal verification. I'd say in terms of general purpose programming it fits into the category of being "relatively far from Haskell in terms of maturity".

[0] https://lean-fro.org/about/roadmap-y2/

lemonwaterlime
11 replies
1d5h

why would you restrict yourself to Haskell? It's not bleeding edge any more.

I'm not using Haskell because it's bleeding edge.

I use it because it is advanced enough and practical enough. It's at a good balanced spot now to do practical things while tapping into some of the advances in programming language theory.

The compiler and the build system have gotten a lot more stable over the past several years. The libraries for most production-type activities have gotten a lot more mature.

And I get all of the above plus strong type safety and composability, which helps me maintain applications in a way that I find satisfactory. For someone who aims to be pragmatic with a hint of scholarliness, Haskell is great.

HelloNurse
7 replies
7h25m

The libraries for most production-type activities have gotten a lot more mature.

Can you provide an example of a "mature" way to match a regular expression with Unicode character classes or download a file over HTTPS?

Tarean
6 replies
6h46m

For Regex I like lens-regex-pcre

    > import Control.Lens
    > import Control.Lens.Regex.Text
    > import qualified Data.Text as T
    > "Foo, bar" ^.. [regex|\p{L}+|] . match
    ["Foo", "bar"]
    > "Foo, bar" & [regex|\p{L}+|] . ix 1 . match %~ T.intersperse '-' . T.toUpper
    "Foo, B-A-R" 
For web requests, wreq has a nice interface. The openssl bindings come from a different library, so it does need an extra config line; the wreq docs have this example:

    import Control.Lens ((&), (.~))
    import Network.Wreq
    import OpenSSL.Session (context)
    import Network.HTTP.Client.OpenSSL

    let opts = defaults & manager .~ Left (opensslManagerSettings context)
    withOpenSSL $
      getWith opts "https://httpbin.org/get"
There are native Haskell tls implementations that you could plug into the manager config. But openssl is probably the more mature option.

HelloNurse
5 replies
5h55m

You are matching ASCII letters? Cute. What about Unicode character classes like \p{Spacing_Combining_Mark} and non-BMP characters?

Can you translate the examples at https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... to Haskell? This Control.Lens.Regex.Text library doesn't seem to believe in documenting the supported syntax, options, etc.

tome
3 replies
5h21m

"Cute" comes across as very dismissive. I'm not sure if you intended that. lens-regex-pcre is just a wrapper around PCRE, so anything that works in PCRE will work, for example, from your Mozilla reference:

    ghci> "California rolls $6.99\nCrunchy rolls $8.49\nShrimp tempura $10.99" ^.. [regex|\p{Sc}\s*[\d.,]+|] . match
    ["$6.99","$8.49","$10.99"]
"Spacing combining mark" seems to be "Mc" so this works:

https://unicode.org/reports/tr18/#General_Category_Property

    ghci> "foo bar \x093b baz" ^.. [regex|\p{Mc}|] . match
["\2363"]

(U+093b is a spacing combining mark, according to https://graphemica.com/categories/spacing-combining-mark)

I think in general that Haskellers would probably move to parser combinators in preference to regex when things get this complicated. I mean, who wants to read "\p{Sc}\s*[\d.,]+" in any case?
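
For the curious, here's a tiny sketch of that parser-combinator alternative for the price example, using the megaparsec library (my own illustration, not from the comment above):

    import Data.Void (Void)
    import Text.Megaparsec
    import Text.Megaparsec.Char

    type Parser = Parsec Void String

    -- A currency sign, optional whitespace, then digits/dots/commas:
    -- roughly \p{Sc}\s*[\d.,]+, but each step is named and composable.
    price :: Parser String
    price = do
      _ <- satisfy (`elem` ("$€£¥" :: String))
      space
      takeWhile1P (Just "digit, dot or comma") (`elem` ("0123456789.," :: String))

With that, `parseMaybe price "$6.99"` yields `Just "6.99"`.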

HelloNurse
2 replies
3h58m

U+093b is still in the BMP. By the way, what text encodings for source files are supported by GHC? Escaping everything isn't fun.

And I am not sold on lens-regex-pcre documentation; "anything that works in PCRE will work" comes across as very dismissive. What string-like types are supported? What version of PCRE or PCRE2 does it use?

tome
0 replies
1h58m

U+093b is still in the BMP

I'm sorry, I don't know what that means. If you have a specific character you'd like me to try then please tell me what it is. My Unicode expertise is quite limited.

I am not sold on lens-regex-pcre documentation

Nor me. It seems to leave a lot to be desired. In fact, I don't see the point of this lens approach to regex.

"anything that works in PCRE will work" comes across as very dismissive

Noted, thanks, and apologies. That was not my intention. I was trying to make a statement of fact in response to your question.

By the way, what text encodings for source files are supported by GHC?

UTF-8 I think. For example, pasting that character into GHC yields:

    ghci> mapM_ T.putStr ("foo bar ः baz" ^.. [regex|\p{Mc}|] . match)
    ः
What string-like types are supported?

ByteString (raw byte arrays) and Text (Unicode, internal representation UTF-8), as you can see from:

https://hackage.haskell.org/package/lens-regex-pcre

What version of PCRE or PCRE2 does it use?

Whatever your system version is. For me on Debian it's:

    Package: libpcre3-dev
    Source: pcre3
    Version: 2:8.39-15

iso8859-1
0 replies
2h57m

version of PCRE

It uses https://hackage.haskell.org/package/pcre-light , which seems to link with the system version. So it depends on what you install. With Nix, it will be part of your system expression, of course.

Tarean
0 replies
4h25m

Either hackernews or autocorrect ate the p; it was supposed to be \p{L}, which is a Unicode character class.

As the other comment mentioned, PCRE-compatible regexes are a standard, though the PCRE spec isn't super readable. Some projects have more readable docs, like MariaDB and PHP, but it doesn't really make sense to repeat the spec in library docs: https://www.php.net/manual/en/regexp.reference.unicode.php

There are libraries for PCRE2 or GNU regex syntax with the same API if you prefer those.

iso8859-1
2 replies
1d3h

The compiler and the build system have gotten a lot more stable over the past several years.

GHC2021 promises backwards compatibility, but it includes ill-specified extensions like ScopedTypeVariables. TypeAbstractions were just added, and they do the same thing, but differently.[0] It hasn't even been decided yet which extensions are stable[1], yet GHC2021 still promises compatibility in future compiler versions. So either, you'll have GHC retain inferior semantics because of backwards compatibility, or multiple ways of doing the same thing.

GHC2024 goes even further and includes extensions that are even more unstable, like DataKinds.

Another sign of instability is the fact that GHC 9.4 is still the recommended[2] release even though there are three newer 'stable' GHCs. I don't know of other languages where the recommendation is so far behind! GHC 9.4.1 is from Aug 2022.

It was the same situation with Cabal, it took forever to move beyond Cabal 3.6 because the subsequent releases had bugs.[3]

[0]: https://serokell.io/blog/ghc-dependent-types-in-haskell-3 [1]: https://github.com/ghc-proposals/ghc-proposals/pull/669 [2]: https://github.com/haskell/ghcup-metadata/issues/220 [3]: https://github.com/haskell/ghcup-metadata/issues/40

tome
1 replies
11h34m

GHC2024 goes even further and includes extensions that are even more unstable, like DataKinds.

But DataKinds is not stable. It's one of the most stable extensions possible! The link you provided even says it's stable:

https://github.com/telser/ghc-proposals/blob/initial-extensi...

It hasn't even been decided yet which extensions are stable

It's essentially known, but it's not formally agreed. The fact that this proposal exists is evidence of that!

GHC2021 still promises compatibility in future compiler versions. So either, you'll have GHC retain inferior semantics because of backwards compatibility, or multiple ways of doing the same thing.

GHC2021 will always provide ScopedTypeVariables. A future edition will probably provide TypeAbstractions instead. Being able to make progress to the default language like this is the point of having language editions!

tome
0 replies
6h51m

But DataKinds is not stable

I mean "not unstable"

tkz1312
8 replies
1d1h

So what unique value proposition does GHC have left? Possibly the GHC runtime system, but it's not as sexy to pitch in a blog post like this.

The point is that programming in a pure language with typed side effects and immutable data dramatically reduces the size of the state space that must be reasoned about. This makes programming significantly easier (especially over the long term).

Of the languages that support this programming style Haskell remains the one with the largest library ecosystem, most comprehensive documentation, and most optimised compiler. I love lean and use it professionally, but it is nowhere near the usability of Haskell when it comes to being a production ready general purpose language.
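
As a small illustration of what "typed side effects" means in practice (a sketch, not from the comment above):

    -- The type alone guarantees: no I/O, no mutation, no hidden state.
    total :: [Int] -> Int
    total = sum

    -- Any effect must appear in the type before it can happen,
    -- which is what shrinks the state space you have to reason about.
    totalAndLog :: [Int] -> IO Int
    totalAndLog xs = do
      putStrLn ("summing " ++ show (length xs) ++ " values")
      pure (sum xs)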

dataflow
5 replies
20h48m

dramatically reduces the size of the state space that must be reasoned about

True

This makes programming significantly easier (especially over the long term)

Not true. (As in, the implication is not true.)

There are many, many factors that affect ease of programming, and the structure of the state space is just one of them.

tkz1312
4 replies
15h54m

It’s true that things like docs and error messages are also important, but the fundamental task of understanding and reasoning about code is significantly easier if you restrict yourself to pure functions over immutable data.

dataflow
3 replies
15h39m

No, I didn't mean docs and error messages, I meant even more basic things. Like sheer code size, visual noise, and intuitiveness, to give a few examples. There's no free lunch, everything is a tradeoff. Just because you're constraining the program's state space that doesn't imply you're making the code more succinct or intuitive. You could easily be adding a ton of distracting noise or obscuring the core logic with all your awesome static typing.

tkz1312
2 replies
11h47m

I think the relative importance of syntax compared to actual semantics when it comes to ease of understanding is probably rather low.

Either way, Haskell is also probably the language that lets you produce the most succinct code of anything that could reasonably be used in production.

lemonwaterlime
1 replies
9h39m

Haskell is also probably the language that lets you produce the most succinct code

It is. Haskell has the low verbosity of scripting languages like Ruby and Python while letting you manage applications that would otherwise be written in high verbosity languages like C++, Java, and Rust.

ykonstant
0 replies
5h49m

Idris and Lean have a similar advantage; once you get past the general notation, the code syntax is very smooth and easy to follow.

User23
1 replies
16h40m

When you start mathematically characterizing state spaces it quickly becomes apparent that pure functional languages' advantage over imperative ones is more a matter of the poor design of popular imperative languages than an intrinsic difference.

That (in)famous goto paper isn't really about spaghetti code; it's about how on Earth you mathematically define the semantics of any statement in a language with unrestricted goto. If any continuation can follow literally anything then you're pretty much in no man's land. On the other hand, imperative code is easy and natural to reason about when it uses a small set of well defined primitives.

If that sounds surprising, consider how mathematical logic itself, especially obviously in the calculational proof style, is essentially a series of assignment statements.

tkz1312
0 replies
15h50m

I’m not quite sure what the point is here. I agree that well written imperative code can be easy to read, and that it’s often the natural style for many problems. I just think it’s always better to use that style in a system that makes the available context explicit and enforces a strict discipline via the type system (e.g. a State monad).

Regarding semantics, my experience is that defining formal semantics is much harder for a language with unrestricted mutation (or, even worse, aliased pointers into mutable state) than for one that avoids those features.
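
A minimal sketch of that State-monad discipline (using `Control.Monad.State` from the mtl package):

    import Control.Monad.State

    -- Imperative-looking code, but the type says exactly which context
    -- is available: a single mutable Int, and nothing else.
    tick :: State Int Int
    tick = do
      n <- get
      put (n + 1)
      pure n

    -- ghci> runState (tick >> tick >> tick) 0
    -- (2,3)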

js8
5 replies
1d3h

As the typed FP ecosystem is moving towards dependent typing (Agda, Idris, Lean), this becomes an issue, because you don't want the type checker to run indefinitely.

First of all, does ecosystem move to dependent types? I think the practical value of Hindley-Milner is exactly in the fact that there is a nice boundary between types and terms.

Second, why would type checking running indefinitely be a practical problem? If I can't prove a theorem, I can't use it. A program that doesn't typecheck in a practical amount of time is in practice identical to a non-type-checked program, i.e. no worse than the status quo.

DonaldPShimoda
4 replies
23h49m

No, the FP community at large is definitely not moving toward dependent types. However, much more of the FP research community is now focused on dependent types, but a good chunk of that research is concerned with questions like "How do we make X benefit of dependent types work in a more limited fashion for languages without a fully dependent type system?"

I think we'll continue to see lots of work in this direction and, subsequently, a lot of more mainstream FP languages will adopt features derived from dependent types research, but it's not like everybody's going to be writing Agda or Coq or Idris in 10 years instead of, like, OCaml and Haskell.

cubefox
3 replies
20h54m

I'm not even sure if any human is still writing code in 10 years.

ParetoOptimal
2 replies
6h34m

Based on what?

cubefox
1 replies
5h40m

Looking back at where we were 10 years ago in terms of AI: if we get a similar jump over the next 10 years, we have superintelligence.

NoGravitas
0 replies
3h9m

Honestly, it's more likely that we won't have the working infrastructure for running computer programs, making writing them fairly pointless.

aSanchezStern
3 replies
21h30m

Uhh, endless recursion doesn't cause your typechecker to run indefinitely; all recursion is sort of "endless" from a type perspective, since the recursion only hits a base case based on values. The problem with non-well-founded recursion like `main = main` is that it prevents you from soundly using types as propositions, since you can trivially inhabit any type.
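
In Haskell terms, the whole problem fits in two lines:

    -- Well-typed at *every* type, so read as a proposition it "proves"
    -- anything; evaluating it simply never terminates.
    bottom :: a
    bottom = bottom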

remexre
2 replies
17h13m

The infinite loop case is:

    loopType : Int -> Type
    loopType x = loopType x

    foo : List (loopType 3) -> Int
    foo _ = 42

    bar : List (loopType 4)
    bar = []

    baz : Int
    baz = foo bar
Determining if baz type-checks requires evaluating loopType 3 and loopType 4 to determine if they're equal.

HelloNurse
1 replies
7h15m

Given line "loopType : Int -> Type", how can line "loopType x = loopType x" mean anything useful? It should be rejected and ignored as a tautology, leaving loopType undefined or defined by default as a distinct unique value for each int.

remexre
0 replies
1h5m

What makes it ill-defined is that it computes infinitely -- that's why you need a totality checker (or a total language).

throwthrow5643
1 replies
10h27m

Haskell doesn't prevent endless recursion. (try e.g. `main = main`)

Do you mean to say Haskell hasn't solved the halting problem yet?

xigoi
0 replies
9h42m

There are languages that don’t permit non-terminating programs (at the cost of not being Turing-complete), such as Agda.

giraffe_lady
1 replies
1d4h

As the typed FP ecosystem is moving towards dependent typing (Agda, Idris, Lean)

I'm not really sure where the borders of "the typed FP language ecosystem" would be but feel pretty certain that such a thing would enclose also F#, Haskell, and OCaml. Any one of which has more users and more successful "public facing" projects than the languages you mentioned combined. This is not a dig on those languages, but they are niche languages even by the standards of the niche we're talking about.

You could argue that they point to the future but I don't seriously believe a trend among them represents a shift in the main stream of functional programming.

xupybd
0 replies
20h20m

This is the first time that I've seen F# contrasted as the more mainstream option and it warms my heart.

mightybyte
0 replies
1d4h

That's why, if you like the Haskell philosophy, why would you restrict yourself to Haskell? It's not bleeding edge any more.

Because it has a robust and mature ecosystem that is more viable for mainstream commercial use than any of the other "bleeding edge" languages.

gtf21
0 replies
1d5h

That's why, if you like the Haskell philosophy, why would you restrict yourself to Haskell?

In the essay, I didn't say "Haskell is the only thing you should use", what I said was:

Many languages have bits of these features, but only a few have all of them, and, of those languages (others include Idris, Agda, and Lean), Haskell is the most mature, and therefore has the largest ecosystem.

On this:

It's not bleeding edge any more.

"Bleeding edge" is certainly not something I've used as a benefit in this essay, so not really sure where this comes from (unless you're not actually responding to the linked essay itself, but rather to ... something else?).

agentultra
26 replies
1d5h

It's really good for boring, line of business software (BLOBS).

The vast majority of business logic can be modelled with a handful of simple types and pattern matching. Very few design patterns are needed. And if you keep to the simple parts you can even teach the syntax (just the types) to non-technical contributors in an afternoon. Then they can read it and help verify that you implemented the business process correctly.

It's also just nice for how my brain works. I like being able to substitute terms and get an equivalent program. Or that I can remember a handful of transformation rules that often get me from a first cut of a program to an efficient, fast one.

And it's just fun.
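
A sketch of what "a handful of simple types and pattern matching" can look like (the domain here is invented for illustration):

    import Data.Ratio ((%))

    data Customer = Retail | Wholesale | Staff

    -- The business rule reads as plain data plus cases; a non-technical
    -- reviewer can check each line against the spec.
    discount :: Customer -> Rational
    discount Retail    = 0
    discount Wholesale = 10 % 100
    discount Staff     = 20 % 100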

gavinray
9 replies
1d4h

Only on HN will you read someone unironically suggest writing LOB software in Haskell.

tome
4 replies
1d4h

Why not? Many of us do it every day.

gavinray
3 replies
1d4h

Let's suppose that you and I are non-technical founders of some medium-size software product.

If we were to rank the most important factors in choosing how to build our product, I think we may be able to agree that they're likely:

- The talent pool and availability of the language

- The ecosystem of libraries and ancillary tools like monitoring/debugging/observability

- The speed-of-development vs cost-of-maintenance tradeoff of the language

I will give Haskell that it can be rapidly written by those proficient, and that if it compiles, it tends to have fewer bugs than many languages.

But for "what language is easy to employ and has an expansive ecosystem + tooling", I feel like you have to hand it to Java, .NET, Python, TypeScript, Go, etc...

tome
0 replies
1d4h

That's shifting the goalposts somewhat! Can Haskell be used for LOB software? Yes! In fact it's the one I am most effective in for that purpose. If I was starting a startup, it would be in Haskell, no question. "Let's suppose that you and I are non-technical founders of some medium-size software product ..." Well, that's something else entirely.

rebeccaskinner
0 replies
1d3h

I think you're taking a particular view of things that can work, but it's not the only correct view.

The talent pool and availability of the language

There are certainly more Javascript or Python developers out there than Haskell developers, but I think it's wrong to imply that Haskell is a hard language to hire for. There are more people out there who want to work with Haskell than there are Haskell jobs, and picking Haskell can be a really great way to recruit high quality talent. It's also quite possible to train developers on Haskell. A lot of companies hire people who don't have experience with their particular language. The learning curve for Haskell may be a bit steeper, but it's certainly tractable if you are hiring people who are eager to learn.

The ecosystem of libraries and ancillary tools like monitoring/debugging/observability

Other languages have _more_ of these, but it's not like Haskell is missing basic ecosystem things. I actually find that Haskell is pretty nice with this stuff overall. It's not quite as automatic as what you might get with running something in the JVM, but it's not that big of a lift, and for a lot of teams the marginal extra effort here is more than worth it because of the other benefits you get from Haskell.

The speed-of-development vs cost-of-maintenance tradeoff of the language

Haskell is really excellent here in my experience. You can write unmaintainable code in any language, but Haskell gives you a lot of choice in how you build your application, and it makes refactoring a lot nicer than in any other language I've used. You don't get some of the nice IDE features to rename things or move code around automatically, but working in a large Haskell codebase you really do start to see ways that the language makes structural and architectural refactoring a lot easier.

But for "what language is easy to employ and has an expansive ecosystem + tooling", I feel like you have to hand it to Java, .NET, Python, TypeScript, Go, etc...

Those are all perfectly good choices. I think what people tend to overlook is that Haskell is _also_ a perfectly good choice. Everything has tradeoffs, but Haskell isn't some terrible esoteric choice that forces you to leave everything practical on the table. It really is useful day to day as a general purpose language.

mchaver
0 replies
1d3h

I am unironically being paid to do it!

My experience is that Haskell is one of those ecosystems with a greater talent pool than there are available positions. I feel like the cost of maintenance is pretty nice because you have fewer bugs. You may have to roll up your sleeves and get your hands dirty to update open source libraries or make stuff that is missing, but the code reliability seems to be worth it.

__MatrixMan__
1 replies
1d4h

Not just HN, Cardano's smart contract language, Plutus, is based on Haskell.

simonmic
0 replies
5h59m

Not just that; the whole Cardano blockchain is running on Haskell, with 100% uptime and high trust. It’s a hugely impressive system, well worth studying.

jose_zap
0 replies
1d4h

We do that at our company, it's been great

bunderbunder
0 replies
1d4h

I am not prepared to hunt down the citation, but several years back I stumbled across a paper that was trying to compare the effectiveness of various languages for grinding out "domain logic-y" code. Among the ones they evaluated, Haskell came out on top in terms of both time to get the work done and correctness of the implementation.

IIRC this was testing with students, which would be both a strength and a weakness of the experimental design.

mumblemumble
8 replies
1d4h

It is. But I think that, for that purpose, I like F# even better. Even beyond getting access to the .NET ecosystem, you also get some language design decisions that were specifically meant to make it easier to maintain large codebases that are shared among developers with varying skill levels.

Lack of typeclasses is a good example. Interface inheritance isn't my favorite, but after years working as the technical lead on a Scala project I've been forced to concede that haranguing people who just want to do their job and go home to their family about how to use them properly isn't a good use of anyone's time. Everyone comes out of school already knowing how to use interfaces and parametric polymorphism, and that is fine.

chefandy
4 replies
1d4h

Anecdotally, the handful of people I've known that worked in commercial Haskell shops, after the initial honeymoon period intensified by actually finding a paying Haskell dev job, wish they were using a more practical "happy medium" FP language. I don't know anyone that's used F# in production, but nobody I know that's worked in Elixir, Erlang, or Elm environments has expressed the same frustration.

tome
1 replies
1d4h

Interesting. I wonder where you met them. I've worked with tens of Haskell programmers in my career, most of whom were sad if they were required to stop working in Haskell. I've never met anyone who actively sought out a Haskell job and then subsequently wanted to stop working in Haskell.

chefandy
0 replies
20h51m

Your sample size sounds much bigger than mine.

IWeldMelons
0 replies
1d4h

Jane Street famously uses OCaml, which is, granted, not F# but close enough.

1-more
0 replies
1d3h

Many of my colleagues would describe themselves as taking pay cuts to write Haskell provisioned with Nix with type-safe interop with Ruby and our frontend. If you're into it, you're into it. And it has the effect of putting absolute mutants on your team.

vips7L
1 replies
1d4h

His book Domain Driven Design Made Functional is really good. It really opened my eyes on DDD.

tome
0 replies
1d3h

A book I find truly wonderful! If I was going to recommend one book about how to design software, it would be this one.

darby_nine
3 replies
1d4h

If that's what you're looking for, why not rip out most of the language? You'll end up with something that looks a lot like Elm: a purely deterministic program with no I/O (albeit with a kind of crappy debugging experience).

agentultra
2 replies
1d4h

Well because you need the rest of the language to make your program tell your system to do stuff.

Turns out `IO` is the most essential and useful bit of a Haskell program. That part can be left to the programmers. Haskell has a lot of facilities for making that nicer to work with as well.

I find that when I tell folks I work in Haskell full-time you can see their opinion of you change on their face. I'm not some kind of PhD genius programmer. I'm pretty middle-of-the-road to be honest.

It's just nice to have a language that makes the work of writing BLOBS straight-forward.

darby_nine
1 replies
1d4h

Well because you need the rest of the language to make your program tell your system to do stuff.

That's not necessary for business logic, though. This would presumably be embedded in infrastructure that handled i/o separately.

agentultra
0 replies
1d3h

I've heard of systems like Roc + Nea taking this to the extreme [0]. Totally a way to go.

Haskell, to some extent, can help you structure your program in this way where the business logic is just simple, plain, old functional code. You can write the data marshalling, logging, and networking layers separately. There are a few ways to tackle that in Haskell in varying levels of complexity as you would expect coming from other general-purpose programming languages.

[0] https://www.youtube.com/watch?v=zMRfCZo8eAc&t=952s

ninetyninenine
1 replies
1d4h

Haskell is just hard when you get to the advanced stuff. I mean, beyond monads there's the state monad, lenses, etc., and a lot of these are not trivial to wrap your brain around. For Java, I read Head First Design Patterns and I'm good. For monads it took me weeks to wrap my head around them, and I still don't understand every monad.

Yeah, I get that a bunch of basic apps can be modeled easily and you get unparalleled static safety, but programmers will spend an inordinate amount of time figuring out mind-bending algebraic patterns.

I think something like OCaml or F# is more down to earth.

agentultra
0 replies
1d3h

The advanced parts of most languages can get hairy. Don't mistake familiarity with complexity. Even Java has hard, dense code that is difficult to work with and learn.

I tend to stay away from the advanced parts of Haskell when writing BLOBS.

The advanced stuff is there when you need to write libraries that need generic code that works with types you haven't defined yourself. You learn it as you go when you need to.

But when I'm writing BLOBS I mostly stick to using libraries and that's pretty straight-forward.

bazoom42
0 replies
3h2m

Sounds about as plausible as “Non-programmers can draw the business logic as UML diagrams, then the code can be generated by automated tools”

axilmar
19 replies
1d8h

My question for Haskellers is how to do updates of values on a large scale, let's say in a simulation.

In imperative languages, the program will have a list of entities, and there will be an update() function for each entity that updates its state (position, etc) inline, i.e. new values are overwriten onto old values in memory, invoked at each simulation step.

In Haskell, how is that handled? do I have to recreate the list of entities with their changes at every simulation step? does Haskell have a special construct that allows for values to be overwritten, just like in imperative languages?

Please don't respond with 'use the IO monad' or 'better use another language because Haskell is not up for the task'. I want an actual answer. I've asked this question in the past in this and some other forums and never got a straight answer.

If you reply with 'use the IO monad' or something similar, can you please say if whatever you propose allows for in-place update of values? It's important to know, for performance reasons. I wouldn't want to start simulations in a language that requires me to reconstruct every object at every simulation step.

I am asking for this because the answer to 'why Haskell' has always been for me 'why not Haskell: because I write simulations and performance is of concern to me'.

whateveracct
2 replies
20h58m

You can use apecs, a pretty-fast Haskell ECS for those sorts of things.

IceDane
1 replies
7h44m

"Pretty fast".. relatively speaking, considering that it's in an immutable, garbage collected language. Still woefully slow compared to anything else out there(say, bevy? which incidentally works similarly to apecs) and mostly practically unusable if the goal is to actually create a real product.

Want to just have fun? Sure.

whateveracct
0 replies
2h22m

You can create a real product with apecs lol. It is not going to be what blocks an indie game written in Haskell, for instance. And you could totally use it to write simulations for stuff too.

Also from the apecs README:

Fast - Performance is competitive with Rust ECS libraries (see benchmark results below)

Sounds like that "woefully slow" judgment of yours wasn't based on any real experience, but rather just your opinion?

lieks
2 replies
1d7h

You... don't. You have to rely on compiler optimizations to get good performance.

Monads are more-or-less syntax sugar. They give you a structure that allows these optimizations more easily, and also make the code more readable sometimes.

But in your example, update returns a new copy of the state, and you map it over a list for each step. The compiler tries to optimize that into in-place mutation.

IMO, having to rely so much on optimization is one of the weak points of the language.

whateveracct
0 replies
2h18m

Fast immutable data structures don't rely on compiler optimizations. They just exist lol.

kreetx
0 replies
1d6h

You do, and you'll have to do destructive updates within either the ST or IO monad using their respective single-variable or array types. It looks roundabout, but it does do the thing you want, and it is fast.

ST and IO are "libraries" though, in the sense that they are not special parts of the language, but appear like any other types.

tikhonj
1 replies
1d7h

I mean, Haskell has mutable vectors[1]. You can mutate them in place either in the IO monad or in the ST monad. They fundamentally work the same way as mutable data structures in any other garbage collected language.

When I worked on a relatively simple simulation in Haskell, that's exactly what I did: the individual entities were immutable, but the state of the system was stored in a mutable vector and updated in place. The actual "loop" of the simulation was a stream[2] of events, which is what managed the actual IO effect.

My favorite aspect of designing the system in Haskell was that I could separate out the core logic of the simulation which could mutate the state on each event from observers which could only read the state on events. This separation between logic and pure metrics made the code much easier to maintain, especially since most of the business needs and complexity ended up being in the metrics rather than the core simulation dynamics. (Not to say that this would always be the case, that's just what happened for this specific supply chain domain.)

Looking back, if I were going to write a more complex performance-sensitive simulation, I'd probably end up with state stored in a bunch of different mutable arrays, which sounds a lot like an ECS. Doing that with base Haskell would be really awkward, but luckily Haskell is expressive enough that you can build a legitimately nice interface on top of the low-level mutable code. I haven't used it but I imagine that's exactly what apecs[3] does and that's where I'd start if I were writing a similar sort of simulation today, but, who knows, sometimes it's straight-up faster to write your own abstractions instead...

[1]: https://hackage.haskell.org/package/vector-0.13.1.0/docs/Dat...

[2]: https://hackage.haskell.org/package/streaming

[3]: https://hackage.haskell.org/package/apecs
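
A minimal sketch of that "mutable inside, pure outside" pattern with ST and mutable vectors (the `step` function is invented for illustration):

    import Control.Monad.ST (runST)
    import Data.Foldable (for_)
    import qualified Data.Vector.Unboxed as V
    import qualified Data.Vector.Unboxed.Mutable as MV

    -- Mutates a working copy in place, yet is a pure function to callers.
    step :: V.Vector Double -> V.Vector Double
    step xs = runST $ do
      mv <- V.thaw xs
      for_ [0 .. MV.length mv - 1] $ \i ->
        MV.modify mv (+ 1) i
      V.freeze mv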

whateveracct
0 replies
20h57m

apecs is really nice! it's not without its issues, but it really is a sweet library. and some of its issues are arguably just issues with ECS than apecs itself.

icrbow
1 replies
1d7h

does Haskell have a special construct that allows for values to be overwritten

Yes and no.

No, the language doesn't have a special construct. Yes, there are all kinds of mutable values for different usage patterns and restrictions.

Most likely you end up with mutable containers with some space reserved for entity state.

You can start with putting `IORef EntityState` as a field and let the `update` write there. Or multiple fields for state sub-parts that mutate at different rates. The next step is putting all entity state into big blobs of data and let entities keep an index to their stuff inside that big blob. If your entities are a mishmash of data, then there's `apecs`, ECS library that will do it in AoS way. It even can do concurrent updates in STM if you need that.

Going further, there's `massiv` library with integrated task supervisor and `repa`/`accelerate` that can produce even faster kernels. Finally, you can have your happy Haskell glue code and offload all the difficult work to GPU with `vulkan` compute.

icrbow
0 replies
1d2h

ECS library that will do it in AoS way

TLAs aren't my forte. It's SoA of course.

tome
0 replies
1d4h

I'm not sure why you say not to respond with 'use the IO monad' because that's exactly how you'd do it! As an example, here's some code that updates elements of a vector.

    import Data.Vector.Unboxed.Mutable
    
    import Data.Foldable (for_)
    import Prelude hiding (foldr, read, replicate)
    
    -- ghci> main
    -- [0,0,0,0,0,0,0,0,0,0]
    -- [0,5,10,15,20,25,30,35,40,45]
    main = do
      v <- replicate 10 0
    
      printVector v
    
      for_ [1 .. 5] $ \_ -> do
        for_ [0 .. 9] $ \i -> do
          v_i <- read v i
          write v i (v_i + i)
    
      printVector v
    
    printVector :: (Show a, Unbox a) => MVector RealWorld a -> IO ()
    printVector v = do
      list <- foldr (:) [] v
      print list
It does roughly the same as this Python:

    # python /tmp/test28.py
    # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    # [0, 5, 10, 15, 20, 25, 30, 35, 40, 45]
    def main():
        v = [0] * 10
    
        print(v)
    
        for _ in range(5):
            for i in range(10):
                v_i = v[i]
                v[i] = v_i + i
    
    
        print(v)
    
    if __name__ == '__main__': main()

throwaway81523
0 replies
1d5h

Well what kind of values and how many updates? You might have to call an external library to get decent performance, like you would use NumPy in Python. This might be of interest: https://www.acceleratehs.org/

rebeccaskinner
0 replies
1d3h

In imperative languages, the program will have a list of entities, and there will be an update() function for each entity that updates its state (position, etc) inline, i.e. new values are overwriten onto old values in memory, invoked at each simulation step.

In Haskell, how is that handled? do I have to recreate the list of entities with their changes at every simulation step? does Haskell have a special construct that allows for values to be overwritten, just like in imperative languages?

You don't _have to_ recreate the list each time, but that's probably where I'd suggest starting. GHC is optimized for these kinds of patterns, and in many cases it'll compile your code to something that does in-place updates for you, while letting you write pure functions that return a new list. Even when it can't, the runtime is designed for these kinds of small allocations and updates, and the performance is much better than what you'd get with that kind of code in another language.

If you decided that you really did need in-place updates, then there are a few options. Instead of storing a vector of values (if you are thinking about performance you probably want vectors instead of lists), you can store a vector of references that can be updated. IO is one way to do that (with IORefs) but you can also get "internal mutability" using STRefs. ST is great because it lets you write a function that uses mutable memory but still looks like a pure function to the callers because it guarantees that the impure stuff is only visible inside of the pure function. If you need concurrency, you might use STM and store them as MVars. Ultimately all of these options are different variations on "Store a list of pointers, rather than a list of values".

There are various other optimizations you could do too. For example, you can use unboxed mutable vectors to avoid having to do a bunch of pointer chasing. You can use GHC primitives to eke out even better performance. In the best case scenario I've seen programs like this written in Haskell be competitive with Java (after the warmup period), and you can keep the memory utilization pretty low. You probably won't get something that's competitive with C unless you are writing extremely optimized code, and at that point most of the time I'd suggest just writing the critical bits in C and using the FFI to link that into your program.

mrkeen
0 replies
1d5h

My question for Haskellers is how to do updates of values on a large scale, let's say in a simulation.

The same way games do it. The whole world, one frame at a time. If you are simulating objects affected by gravity, you do not recalculate the position of each item in-place before moving onto the next item. You figure out all the new accelerations, velocities and positions, and then apply them all.
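In code, that whole-frame update is just a pure function from the old frame to the new one. A tiny sketch (the Entity type here is invented for illustration):

    data Entity = Entity { pos :: Double, vel :: Double }

    -- Every entity's next state is computed against the *old* frame,
    -- then the new list replaces the old one wholesale.
    step :: Double -> [Entity] -> [Entity]
    step dt = map (\e -> e { pos = pos e + vel e * dt })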

louthy
0 replies
1d7h

In your imperative language, imagine this:

    World simulation(Stream<Event> events, World world) =>
       events.IsComplete
           ? world
           : simulation(applyEventToWorld(events.Head, world), events.Tail);

    World applyEventToWorld(Event event, World world) =>
       // .. create a new World using the immutable inputs
That takes the first event that arrives, transforms the World, then recursively calls itself with the remaining events and the transformed World. This is the purest way of doing what you ask: recursion is the best way to 'mutate' without using mutable structures.

However, there are real mutation constructs, like IORef [1]. It will do actual in-place mutation (with atomic variants available) if you really want in-place updates. It requires the IO monad.

[1] https://hackage.haskell.org/package/base-4.20.0.1/docs/Data-...

kccqzy
0 replies
1d2h

I don't understand why you hate the IO monad so much. I mean, I've seen very large codebases doing web apps where almost everything is inside the IO monad. It's not as "clean" and doesn't follow best practices, but it still gets the job done and is convenient. Having pervasive access to IO is just the norm in all other languages, so it's not even a drawback.

But let's put that aside. You can instead use the ST monad (not to be confused with the State monad) and get the same performance benefit of in-place update of values.

gspr
0 replies
1d8h

Use the ST monad? :)

contificate
0 replies
1d6h

I have a rather niche theory that many Hindley-Milner type inference tutorials written by Haskellers insist on teaching the error-prone, slow details of algorithm W because otherwise the authors would need to commit to a way of doing destructive unification (as implied by algorithm J) that doesn't attract pedantic criticism from other Haskellers.

For me, I stopped trying to learn Haskell because I couldn't quite make the jump from writing trivial (but neat) little self-contained programs to writing larger, more involved programs. You seem to need to buy into a contorted way of mentally modelling the problem domain that doesn't quite pay off in the ways advertised by Haskell's proponents (whose arguments against contrary approaches tend to be hyperbolic). I'm all for persistent data structures, avoiding global state, monadic style, etc., but I find that OCaml is a simpler, pragmatic vehicle for these ideas, without being forced to bend over backwards at every hurdle for limited benefit.

Coolbeanstoo
19 replies
1d8h

I would like to use haskell or another functional language professionally.

I try them out (OCaml, Haskell, Clojure, etc.) from time to time and think they're fairly interesting, but I struggle to figure out how to make bigger programs with them: I've never seen how you build up a code base with the tools they provide, never had someone to review the code I produce, and have never had any luck with the jobs I've applied to.

On the flip side, I never had too much trouble figuring out how to make things with Go, as it has so little going on and because it was the first language I worked with professionally for an extended period of time. I think that also leads me to trying to apply the same patterns because I know them, even if they don't really work in the world of functional languages.

Not sure what the point of this comment is, but I think I just want to experience the moment of mind-opening-ness that people talk about when it comes to working with these kinds of languages on a really good team.

cosmic_quanta
8 replies
1d7h

I have also initially struggled with structuring Haskell programs. Without knowing anything about what you want to do, here's my general approach:

1. Decide on an effect system

Remember, Haskell is pure, so any side-effect will be strictly explicit. What broad capabilities do you want? Usually, you need to access some program-level configuration (e.g. command-line options) and the ability to do IO (networking, reading/writing files, etc), so most people start with that.

https://tech.fpcomplete.com/blog/2017/06/readert-design-patt...

2. Encode your business logic in functions (purely if possible)

Your application does some processing of data. The details don't matter. Use pure functions as much as possible, and factor effectful computations (e.g. database accesses) out into their own functions.

3. Glue everything together in a monadic context

Once you have all your business logic, glue everything together in a context with your effect system (usually a monad stack using ReaderT). This is usually where concurrency comes in (e.g. launch 1 thread per request).
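For example, a minimal sketch of the ReaderT pattern from that article (the Env fields here are hypothetical):

    import Control.Monad.IO.Class (liftIO)
    import Control.Monad.Reader

    -- Hypothetical environment: configuration plus a logging capability.
    data Env = Env { port :: Int, logLine :: String -> IO () }

    type App a = ReaderT Env IO a

    serve :: App ()
    serve = do
      env <- ask
      liftIO (logLine env ("listening on port " ++ show (port env)))

    main :: IO ()
    main = runReaderT serve (Env 8080 putStrLn)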

---

Beyond this, your application design will depend on your use-case.

If you are interested, I strongly suggest to read 'Production Haskell' by Matt Parsons, which has many chapters on 'Haskell application structure'.

solomonb
6 replies
1d3h

1. Decide on an effect system

This shouldn't even be proposed as a question to someone new to Haskell. They should learn how monad transformers work and just use them. 90% of developers playing around with effect systems would be just fine with MTL or even just concrete transformers. All Haskell effect systems should be considered experimental at this point with unclear shelf lives.

Everything else you said I agree with as solid advice!

cosmic_quanta
4 replies
1d2h

Someone truly new to Haskell shouldn't use it professionally.

Once you've learned what is necessary to, say, modify already-existing applications, you should be familiar with monads and some basic monad transformers like ReaderT.

Once you're there, I don't think 'choosing an effect system' is a perilous question. The monad transformer library, mtl, is an effect system, the second simplest one after IO.

solomonb
3 replies
1d2h

The original poster said they want to use Haskell professionally but that they are struggling to understand how to structure programs.

Once you're there, I don't think 'choosing an effect system' is a perilous question. The monad transformer library, mtl, is an effect system, the second simplest one after IO.

I'm aware of that, generally when people say "choose effect system" they mean choose some algebraic effect system, all of which (in Haskell) have huge pitfalls. The default should be monad transformers unless you have some exceptional situation.

tome
0 replies
11h12m

generally when people say "choose effect system" they mean choose some algebraic effect system

This isn't really true. Bluefin and effectful are effect systems, but not algebraic effect systems.

cosmic_quanta
0 replies
1d1h

I realize I didn't mention monad transformers at all in my original post, I only linked to them!

I should have mentioned that, as you say, monad transformers should be the default effect system choice for 99% of people.

HelloNurse
0 replies
7h9m

On a software engineering level, choosing such unusually deep-reaching libraries unusually early in the development of a program is a major but uninformed commitment: a dangerous bet that more practical programming languages try to avoid imposing on the user.

tome
0 replies
11h13m

This shouldn't even be proposed as a question to someone new to Haskell. They should learn how monad transformers work and just use them. 90% of developers playing around with effect systems would be just fine with MTL or even just concrete transformers. All Haskell effect systems should be considered experimental at this point with unclear shelf lives.

This is highly debatable. I would say that the effect systems effectful and Bluefin are actually significantly simpler than MTL and transformers, particularly as soon as you need to do prompt resource cleanup.

Personally I'd say that newbies should start with naked IO and then switch to effectful or Bluefin once they've realised the downside of IO being available everywhere.

All Haskell effect systems should be considered experimental at this point with unclear shelf lives.

effectful and Bluefin are here to stay. I guarantee it. For non-IO-based effect systems (e.g. polysemy, freer-effects) I agree.

(Disclaimer: I'm the author of Bluefin)

jsbg
0 replies
1d2h

This is excellent advice that unfortunately seems to get lost in a lot of Haskell teachings. I learned Haskell in school but until I had to use it professionally I would have never been able to wrap my head around effect systems. I still think that part of Haskell is unfortunate as it can get in the way of getting things done if you're not an expert, but being able to separate pure functions from effectful ones is a massive advantage.

jerf
4 replies
1d4h

People love to talk about the upsides and the fun and what you can learn from Haskell.

I am one of these people.

People are much more reluctant to share what it is that led them to the conclusion that Haskell isn't something they want to use professionally, or something they can't use professionally. It's a combination of things, such as it just generally being less energizing to talk about that, and also some degree of frankly-justified fear of being harassed by people who will argue loudly and insultingly that you just Don't Get It.

I am not one of those people.

I will share the three main reasons I don't even consider it professionally.

First, Hacker News has a stronger-than-average egalitarian streak and really wants to believe that everybody in the world is already a developer with 15 years of experience and expert-level knowledge in all they survey from atop their accomplished throne, but that's not how the real world works. In the real world I work with coworkers who I have to train why in my Go code, a "DomainName" is a type instead of just a string. Then, just as the light bulb goes off, they move on from the project and I get the next junior dev who I have to explain it to. I'm hardly going to get to the point where I have a team of people who are Haskell experts when I'm explaining this basic thing over and over.

And, to be 100% clear, this is not their "fault", because being a junior programmer in 2024 is facing a mountain of issues I didn't face at their age: https://news.ycombinator.com/item?id=33911633 I wasn't expected to know about how to do source control or write everything to be rollback-able or interact with QA, or, well, see linked post for more examples. Haskell is another stack of requirements on top of a modern junior dev that is a hell of an ask. There better be some damn good reasons for me to add this to my minimum-viable developer for a project. I am not expressing contempt for the junior programmers here from atop my own lofty perch; I am encouraging people to have sympathy with them, especially if you also came up in the 90s when it was really relatively easy, and to make sure you don't spec out projects where you're basically pulling the ladder up after yourself. You need to have an onboarding plan, and "spend a whole bunch of time learning Haskell" is spending a lot of your onboarding plan currency.

Second, while a Haskell program that has the chef's kiss perfect architecture is a joy to work with, it is much more difficult to get there for a real project. When I was playing with Haskell it was a frequent occurrence to discover I'd architected something wrong, and to essentially need to rewrite the whole program, because there is no intermediate functioning program between where I was and where I needed to be. The strength of the type system is a great benefit, but it does not put up with your crap. But "your crap" includes things like being able to rearchitect a system in phases, or partially, and still have a functioning system, and some other things that are harder to characterize but you do a lot of without even realizing it.

I'd analogize it to a metalworker working with titanium. If you need it, you need it. If you can afford it, great. The end result is amazing. But it's a much harder metal to work with for the exact same reason it's amazing. The strength of the end part is directly reflected in the metal resisting you working with it.

I expect at a true expert level you can get over this, but then as per my first point, demanding that all my fellow developers become true experts in this obscure language is taking it up another level past just being able to work in it at all.

Finally, a lot of programming requirements have changed over the years. 10-15 years ago I could feasibly break my program into a "functional core" and an external IO system. This has become a great deal less true, because the baseline requirement for logging, metrics, and visibility have gone up a lot, and suddenly that "pure core" becomes a lot less appealing. Yes, of course, our pure functions could all return logs and metrics and whathaveyou, and sure, you can set up the syntax to the point that it's almost tolerable, but you're still going to face issues where basically everything is now in some sort of IO. If nothing else, those beautiful (Int -> Int -> Int) functions all become (Int -> Int -> LoggingMetrics Int) and now it isn't just that you "get" to use monadic interfaces but you're in the LoggingMetrics monad for everything and the advantages of Haskell, while they do not go away entirely, are somewhat mitigated, because it really wants purity. It puts me halfway to being in the "imperative monad" already, and makes the plan of just going ahead and being there and programming carefully a lot more appealing. Especially when you combine that with the junior devs being able to understand the resulting code.

In the end, while I still strongly recommend professional programmers spend some time in this world to glean some lessons from it that are much more challenging to learn anywhere else, it is better to take the lessons learned and learn how to apply them back into conventional languages than to try to insist on using the more pure functional languages in an engineering environment. This isn't even the complete list of issues, but they're sufficient to eliminate them from consideration for almost every engineering task. And in fact every case I have personally witnessed where someone pushed through anyhow and did it, it was ultimately a business failure.

ninetyninenine
2 replies
1d3h

I'd analogize it to a metalworker working with titanium. If you need it, you need it. If you can afford it, great. The end result is amazing. But it's a much harder metal to work with for the exact same reason it's amazing. The strength of the end part is directly reflected in the metal resisting you working with it.

I’d say you missed one of the main points of Haskell and functional programming in general.

The combinator is the most modular and fundamental computational primitive available in programming. When you make a functional program it should be constructed out of the composition of thousands of these primitives, with extremely strict separation from IO and multiple layers of abstraction. Each layer is simply functions composed from the layer below.

If you think of fp programming this way. It becomes the most modular most reconfigurable programming pattern in existence.

You have access to all layers of abstraction and within each layer are independent modules of composed combinators. Your titanium is given super powers where you can access the engine, the part, the molecule and the atom.

All the static safety and beauty Haskell provides is actually a side thing. What Haskell and functional programming in general provides is the most fundamental and foundational way to organize your program such that any architectural change only requires you replacing and changing the minimum amount of required modules. Literally the opposite of what you’re saying.

The key is to make your program just a bunch of combinators all the way down with an imperative io shell that is as thin as possible. This is nirvana of program organization and patterns.

jerf
1 replies
1d2h

I'm well aware of functional programming as focusing on composition.

One of the reasons you end up with "refactoring the entire program because of some change" is when you discover that your entire composition scheme you built your entire program around is wrong, e.g., "Gee, this effects library I built my entire code base around to date is really nifty but also I can't actually express my needs in it after all". In a conventional language, you just build in the exceptions, and maybe feel briefly sad, but it works. It can ruin a codebase if you let it, but it's at least an option. In Haskell, you have a much bigger problem.

Now filter that back through what I wrote. You want to explain to your junior developer who is still struggling with the concept of using things other than strings why we have to rewrite the entire code base to use raw IO instead of the effects system we were using because it turns out the compilation time went exponential and we can't fix it in any reasonable amount of effort? How happy are they going to be with you after you just spent a whole bunch of time explaining the way to work with the effects system? They're not going to come away with a good impression of either Haskell or you.

tome
0 replies
11h6m

"Gee, this effects library I built my entire code base around to date is really nifty but also I can't actually express my needs in it after all"

This is why I recommend IO-based effect systems like Bluefin and effectful. If you find that you get stuck you always have the escape hatch of just doing whatever you want in IO. Maybe feel briefly sad, but it works.

(I'm the author of Bluefin)

cosmic_quanta
0 replies
1d4h

I'd analogize it to a metalworker working with titanium. If you need it, you need it. If you can afford it, great. The end result is amazing. But it's a much harder metal to work with for the exact same reason it's amazing.

What a beautiful, succinct analogy. I'm stealing this.

bedman12345
2 replies
1d7h

I’ve been working with pure functional languages and custom lisp dialects professionally my whole tenure. You get a whole bag of problems for a very subjective upside. Teams fragment into those that know how to work with these fringe tools and those who don’t. The projects using them that I worked on all had trouble with getting/retaining people. They also all had performance issues and had bugs like all other software. You’re not missing out on anything.

zelphirkalt
0 replies
10h25m

Many problems stem from people not being willing to learn another paradigm of computer programming. Of course teams will split if some people are not willing to learn, because then some will be unable to work on certain things, while others will be able to do so.

You mention performance. However, if we look at how many Python shops there are, this can hardly be a problem. I imagine ecosystems to be a much bigger issue than performance. Many implementations of functional languages have better performance than Python anyway.

There are many reasons why a company can have issues retaining people: a shitty uninteresting product, bad management, low wages, bad culture... Let's eliminate those and see whether they still have issues retaining devs. I suspect that an interesting tech stack could make people stay, because it is not so easy to find a job with such a tech stack.

However, many companies want easily replaceable cogs, which FP programmers are definitely not these days. So they would rather hire a low-skill, easily replaceable workforce than a highly skilled but more expensive one. They know they will not be able to retain the highly skilled, because they know their other factors are not in line for that.

MetaWhirledPeas
0 replies
5h45m

Teams fragment into those that know how to work with these fringe tools and those who don’t.

So the teams self-select to let you work with the people you want to work with? Tell me more!

rebeccaskinner
0 replies
17h20m

I’ve been using Haskell professionally off and on, along with other languages, since 2008. Professional experience certainly will help you learn some patterns, but honestly my best advice for structuring programs is to not think too hard about it.

Use modules as your basic unit of abstraction. Don’t go out of your way to make their organization over-engineered, but each module should basically do one thing, and should define everything it needs to do that thing (types, classes, functions).

Use parametric polymorphism as much as you can, without making the code too hard to read. Prefer functions and records over type classes as much as possible. Type classes that only ever have a single instance, don’t have laws, or type classes defined for unit data types are major code smells.

Don’t worry about avoiding IO, but as much as you can try to keep IO code separate from pure code. For example, if you need to read a value from the user, do some calculations, then print a message, it’s far better to factor the “do some calculations” part out into a pure function that takes the things you read in as arguments and returns a value to print. It’s really tempting to interleave logic with IO but you’ll save so much time, energy, and pain if you avoid this.

Essentially, keep things as simple as you can without getting belligerent about it. The type system will help you a lot with refactoring.

Start at the beginning. Write functions. When you see some piece of functionality that you need, use `undefined` to make a placeholder function. Then, go to your placeholder and start implementing it. Use undefined to fill in bits that you need, and so on.
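Concretely, that workflow looks something like this (the types and names are made up):

    data Order   = Order { items :: [Double] }
    data Receipt = Receipt { total :: Double } deriving Show

    -- Already implemented.
    computeTotal :: Order -> Double
    computeTotal = sum . items

    -- Not written yet: the placeholder lets everything else compile.
    sendReceipt :: Receipt -> IO ()
    sendReceipt = undefined

    processOrder :: Order -> IO ()
    processOrder order = do
      let receipt = Receipt (computeTotal order)
      sendReceipt receipt   -- type-checks now; implement it next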

Fancy types are neat but it’s easy to end up with a solution in search of a problem. Avoid them until you really have a concrete problem that they solve- then embrace them for that problem (and only that problem).

You’ll refactor a lot, and learn to have a better gut feeling for how to structure things, but that’s just the process of gaining experience. Leaning into the basics of FP (pure functions, composed together) will be the path of least resistance as you are getting there.

mattgreenrocks
0 replies
1d5h

I've used Haskell professionally for two years. It is the right pick for the project I'm working on (static analysis). I'm less sold on the overall Haskell ecosystem, tooling, and the overall Haskell culture.

There are still plenty of ways to do things wrong. Strong types don't prevent that. Laziness is a double-edged sword and can be difficult to reason about.

maleldil
12 replies
1d7h

I feel like part of the problem is Haskell's extremism towards purity and immutability. I find some code easier to express with procedural/mutable loops than recursion, and I believe the vast majority of programmers do too. I think that one thing that makes Rust so successful is its capable type system and use of many functional idioms, while still letting you use loops and mutability when it's more comfortable. And of course, the borrow checker ensures that such mutability is sound.

pyrale
3 replies
1d7h

That's a problem no haskell user has, honestly. Your issue seems to be about getting your feet wet. Could you imagine people saying the issue with Java is its extremism towards objects and method calls?

Sure, a determined Fortran programmer can write Fortran in any language, but if they have trouble doing so, maybe the issue isn't the language.

bedman12345
0 replies
1d6h

Could you imagine people saying the issue with Java is its extremism towards objects and method calls?

I think exactly that all the time. It’s ridiculous.

That's a problem no haskell user has, honestly.

I had this problem all the time when trying to write games in Haskell. Not every subject matter decomposes into semirings. Just like not everything decomposes nicely into objects. People tried to fix this with FRP or lenses. Both are worse than imperative programming for games imo.

Joker_vD
0 replies
1d6h

That's a problem no haskell user has, honestly.

In a sense, that's true: people who do have this trouble constantly (e.g. me) very quickly cease being Haskell users. But that's hardly an argument for TFA's claim that "Haskell is probably the best choice for most programmers, especially if one cares about being able to productively write robust software, and even more so if one wants to have fun while doing it"; if anything, it's a counter-argument.

wavemode
1 replies
1d4h

I feel like part of the problem is Haskell's extremism towards purity and immutability

Eh. Elm has achieved quite a bit of success just by having good tooling and a good ecosystem. Says something about people's willingness to learn pure functional programming, if the situation is right.

I find some code easier to express with procedural/mutable loops than recursion

This is usually a familiarity problem, IMO.

I often say: people think mastering Haskell is about mastering monads. But it's actually about mastering folds.
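For example, the accumulate-in-a-loop instinct maps directly onto a fold:

    import Data.List (foldl')

    -- total = 0; for x in xs: total += x
    total :: [Int] -> Int
    total = foldl' (+) 0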

1-more
0 replies
1d2h

The best intro to writing Haskell for me was writing Elm. The best intro to writing Elm was writing pointfree Codewars kata solutions in JS using

    with(require("ramda")) fn = pipe(…)
And yep, you end up with a lot of folds (well, reduces) where that ellipsis is. Or related functions that are really folds.

For those wondering `with` in JS is a bit like `extract` in PHP except it creates a new context right after itself rather than modify the context it finds itself in. It's super deprecated because it's inscrutable and silly except in this case and/or when you need to solve Codewars katas in a limited number of characters.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

https://www.php.net/manual/en/function.extract.php

EDIT: ramda is a nice utility library in JS that supports partial application of arguments in arbitrary order: https://ramdajs.com/docs/#subtract

tome
0 replies
10h59m

I find some code easier to express with procedural/mutable loops than recursion

Do you mean like this example of calculating the 5th triangular number with a procedural loop and mutable state? Haskell supports them just fine!

https://hackage.haskell.org/package/bluefin-0.0.7.0/docs/Blu...

the_af
0 replies
1d6h

I feel like part of the problem is Haskell's extremism towards purity and immutability

You missed "lazy evaluation by default" in that list ;) Those properties are kind of the definition of Haskell, so without all of them you'd have another language.

Like the sibling commenter mentions, this seems more of a "I'm unfamiliar with this" thing rather than a problem with Haskell...

gtf21
0 replies
1d5h

I find some code easier to express with procedural/mutable loops than recursion

This is what I was talking about in the section "Unlearning and relearning". While there are _some_ domains (like embedded systems) for which Haskell is a poor fit, a lot of the difficulties people have with it (and with FP in general) is that they have been heavily educated to think in a particular way about computation. That's an accident of history, rather than any fundamental issue with the programming paradigm.

enugu
0 replies
1d6h

Haskell doesn't stop you from mutation, it just makes you explicitly mark it, much like types of inputs/outputs are explicit in statically typed languages instead of being implicit, or borrowing is explicit in Rust.

Mutations become first-class values, like numbers or arrays, and hence can be primitives for more complex mutations, whose types can be inferred from the types of the primitive mutations.

This means that we have compile-time guarantees that a certain piece of code won't change anything in a certain part of the state: this function won't change anything in that part of the data.
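A small sketch of what that explicit marking looks like:

    import Data.IORef

    -- The IO in the type advertises that effects (here, mutation) may
    -- happen; a function without IO in its type can't do this at all.
    bump :: IORef Int -> IO Int
    bump ref = do
      modifyIORef' ref (+ 1)   -- the mutation itself is an explicit action
      readIORef ref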

It is no joke, though not strictly true, that Haskell has been called the world's best imperative language.

cosmic_quanta
0 replies
1d6h

I find some code easier to express with procedural/mutable loops than recursion, and I believe the vast majority of programmers

I think this comes from early and continuous exposure to some forms of programming, rather than being inherent to pure functional programming.

I personally find it much easier to use recursion and other functional techniques because they compose better. This probably comes from my exposure to Haskell much earlier than most.

cies
0 replies
1d6h

This has been researched. If you were educated with FP-langs first, you'd say it's harder "to express with procedural/mutable loops".

I believe the vast majority of programmers do too.

Yups. We get educated with imperative langs. The majority of us do.

watt
11 replies
1d5h

The article lost me at following sentence:

A double arrow => describes constraints on the type variables used, and always come first: add1 :: Num a => a -> a describes a function which takes any type a which satisfies Num a, and returns a value of the same type.

Here, I don't understand what `Num a` syntax means. It was not defined before. And, what does "satisfies" mean? It is also not defined before it is used. (It is also never mentioned again in the article.) It is maddening to read such sloppily constructed prose. Define your terms before you use them!

ethangk
3 replies
1d5h

It just means that ‘a’ must be a Number [0]. In this context, I believe satisfies means that it implements the things defined in the ‘minimum definition’ in the link below. If you’re familiar with Go, it’s similar to something implementing an interface.

[0] https://hackage.haskell.org/package/base-4.20.0.1/docs/GHC-N...

watt
2 replies
1d5h

Well, why does Num then come before a? If a :: Num would mean a is a value of type Num, why does this "satisfies" constraint not follow the pattern?

mrkeen
0 replies
1d4h

If a :: Num would mean a is a value of type Num

`a` is the type. Num is a `class`.

Here's an example. x is an Int32 and y is an Int64. If they had type Num, then this would be valid:

  add :: Num -> Num -> Num           -- Not valid Haskell
  add x y = x + y
However it's not valid, because you can't add an Int32 and an Int64:

  add :: Int32 -> Int64 -> ?     -- Doesn't compile
  add x y = x + y
But you can add Nums together, as long as they're the same type. You indicate they're the same type by using the same type variable 'a':

  add :: a -> a -> a      -- Doesn't compile
  add x y = x + y
But now the above complains because you used (+) which belongs to Num, so you have to declare that these `a`s can (+) because they're Nums.

  add :: Num a => a -> a -> a
  add x y = x + y
And it comes out shorter than your suggestion of putting the constraints afterward:

  add :: (a :: Num) -> (a :: Num) -> (a :: Num)       -- Not valid Haskell
  add x y = x + y

housecarpenter
0 replies
1d4h

Technically, `a :: Num` would be declaring, or defining that `a` is of type `Num`. After you see `a :: Num`, you can assume from then on as you're reading the program that `a` has type `Num`; if something is incompatible with that assumption it will result in a compiler error. This is different from `Num a`, which is making the assertion that `a` is of type `Num`, but that assertion may evaluate as true or false. It's similar to how assignment is different from equality, so that most programming languages with C-style syntax make a distinction between `=` and `==`.

There's also the fact that `Num` is technically not a type, but a type class, which is like a level above a type: values are organized into types, and types are organized into classes. Though this is more of a limitation of Haskell: conceptually, type classes are just the types of types, but in practice, the way they're implemented means they can't be treated in a uniform way with ordinary types.

So that's why there's a syntactic distinction between `Num a` and `a :: Num`. As for why `Num` comes before `a`, there's certainly a reasonable argument for making it come after, given that we'd read it in English as "a is a Num". I think the reason it comes before is that it's based on the usual function call syntax, which is `f x` in Haskell (similar to `f(x)` in C-style languages, but without requiring the parentheses). `Num` is kind of like a function you call on a type which returns a boolean.

desdenova
3 replies
1d5h

This terseness is what makes Haskell so hard to approach for beginners, unfortunately.

After you've gone through the effort of learning the syntax, it becomes very clear what it means, but I agree that dropping the punctuation between a bunch of names isn't the clearest communication.

It becomes even worse when you start using third party libraries that abuse Haskell's ability to define custom operators, so you get entire APIs defined as arcane sigil invocations instead of readable functions.

That's why I gave up the idea of using Haskell for actual programming, and just took the functional programming philosophy from it to other languages.

As for the meaning, in a more conventional (rust) syntax, it'd be similar to this:

    fn add1<A: Num>(a: A) -> A

ColonelPhantom
1 replies
23h31m

I disagree that the 'arcane sigil invocations' are necessarily a problem. Yes, they can be, but I also think they can definitely be preferable!

Naming everything as a function often leads to a problem of very deep visual nesting. For example, map-then-filter can be written as "filter p . map f" in Haskell, whereas in sigil-free languages you'd write a mess like (lambda (x) (filter p (map f x))) in Lisp or "lambda x: filter(p, map(f, x))" in Python.

Of course, function composition is a very simple case, but something like lenses are another less simple case where a library would be unusable without custom operators.

kazinator
0 replies
21h43m

Right off the bat, the problem is that filter p . map f looks like it wants to be filter, then map. Nearly all modern languages that have pipelining, whereby nested function calls are extraposed into a linear form, go left to right.

In the Lisp or Python, it is crystal clear that the entire map expression is a constituent of the filter expression.

kazinator
0 replies
21h23m

The problem with A -> B -> C is that it could be (A -> B) -> C or A -> (B -> C).

The -> operator in C is obvious. Though it does have left to right associativity, it's largely moot because only A would be a pointer-valued expression. The B and C would have to be member names:

  A->left->value.
Even if the associativity went right to left, the semantic interpretation would have to be that the left member of A is selected, and then the value member of that.

When X -> Y means "mapping from X to Y", the associativity could be anything and would make sense either way. Moreover, (X -> Y) -> Z is different from X -> (Y -> Z). One is a function which maps an X-to-Y function to Z, whereas the other is a function which maps X to a Y-to-Z function.
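Concretely, in Haskell:

    -- -> associates to the right, so these two types are identical:
    add :: Int -> Int -> Int
    add x y = x + y

    add' :: Int -> (Int -> Int)
    add' x = \y -> x + y

    -- Parenthesised the other way, it's a different thing entirely:
    -- a function that takes an Int-to-Int function.
    apply10 :: (Int -> Int) -> Int
    apply10 k = k 10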

kazinator
0 replies
21h32m

"Satisfies" is a common math term. For instance given

  x + 3 = 4
the value of x which satisfies the equation is 1. To satisfy is to be a value which makes true some truth valued formula (such as an invocation of a predicate like blue(x) or equation like the above, or inequality or such).

Satisfiability comes up in logic; a "SAT" problem is, in a nutshell, the problem of finding the combination of true/false values of a bunch of Boolean variables that makes some formula true.

When the Haskeller says that "Num a" is something that is satisfied by a, it means that it's a kind of predicate which says a is a Num. That predicate is true of an a which is a Num.

gtf21
0 replies
1d5h

If I may, as the author of "such sloppily constructed prose" (which I think might be a little unfair as a summary of all 6.5k words):

In this syntax note, I was not trying to teach someone to write Haskell programmes, but rather to give them just enough to understand the examples in the essay. I did test it on a couple of friends to see if it gave them enough to read the examples with, but was trying to balance the aim with not making this section a complete explainer (which would have been too long).

Perhaps I got the balance wrong, which is fair enough, but I don't think it's required to define _every single_ term upfront. It's also not crucial to the rest of the essay, so "The article lost me at following sentence" feels a bit churlish.

AzzieElbab
0 replies
1d5h

Yes, this bad mathematician's lingo is really unnecessary. It means someone else wrote an implementation of an interface called Num for your `a` type. Well, it is not really an interface; the correct term is type class, but that is a detail.

robertlagrant
11 replies
1d7h

I, like probably many people, like the idea of Haskell, but don't need a bottom-up language tutorial. Instead, I need:

- how easy is it to make a web application with a hello world endpoint?

- How easy is it to auth a JWT?

- Is there a good ORM that supports migrations?

- Do I have to remodel half my type system because a product owner told me about this weird business logic edge case we have to deal with?

- How do I do logging?

Etc.

gtf21
5 replies
1d7h

- how easy is it to make a web application with a hello world endpoint?

If that's all you want it to do, it's very easy with Wai/Warp.
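A rough sketch of a hello-world endpoint with Wai/Warp (port and headers chosen arbitrarily):

    {-# LANGUAGE OverloadedStrings #-}

    import Network.HTTP.Types (status200)
    import Network.Wai (responseLBS)
    import Network.Wai.Handler.Warp (run)

    main :: IO ()
    main = run 8080 $ \_request respond ->
      respond (responseLBS status200 [("Content-Type", "text/plain")] "hello world")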

- How easy is it to auth a JWT?

We don't use JWTs, but we did look at them, and Servant (which is a library for building HTTP APIs) has built-in functionality for them.

- Is there a good ORM that supports migrations?

There are several with quite interesting properties. Some (like persistent) do automatic migrations based on your schema definitions. For others, you have to write the migrations in SQL or another DSL.

- Do I have to remodel half my type system because a product owner told me about this weird business logic edge case we have to deal with?

I think that's going to really depend on how you have structured your domain model; it's not a language question so much as a design question.

- How do I do logging?

We use a library called Katip for logging, but there are others which are simpler. You can also just print to stdout if you want to.

robertlagrant
4 replies
1d7h

Thank you! What I was really saying was that an article like this would do better showing some simple, practical examples that let people do things, rather than bemoaning how Haskell is viewed in 2024.

gtf21
2 replies
1d4h

For reference (in case it's helpful), my website (where this essay is hosted) is written in Haskell and is basically a fairly simple webserver.

For the "hello world" webserver, this might be a bit instructive: https://github.com/gfarrell/gtf.io/blob/main/src/GTF/Router....

changexd
1 replies
10h5m

Thank you for the repo. I've been wanting to learn Haskell but didn't really know what I could build with it. I might as well build something similar to this, since I've been trying to make my own blog server; now I get a chance to learn Haskell and finally get up and build this.

cptwunderlich
0 replies
6h55m

There is also [Learn Haskell by building a blog generator](https://learn-haskell.blog/) - that might be interesting to you.

gtf21
0 replies
1d5h

Oh! I hope I wasn't bemoaning too much -- that was the lead-in, but it's mostly about what I really like about the language (and had some examples but I also didn't want to write a tutorial).

valenterry
1 replies
1d

This doesn't work.

Imagine you talk to someone who has done assembly his whole life and now wants to write something in, let's say, Java.

What would you think if he asked the question the way you did?

Sometimes, when you learn a language that is so different, you really, really should NOT assume that your current knowledge just translates.

robertlagrant
0 replies
7h29m

I'm not advocating for removing the existing articles that introduce people to Haskell.

kccqzy
0 replies
1d2h

You can't do any of that without having first understood a bottom-up introduction. There are so many web frameworks, from Yesod to Scotty to Servant (these are just the ones I've used personally), but you can't use any of them without at least an understanding of the language.

justinhj
0 replies
1d

That sounds valuable too, but maybe it comes after the basic concepts, or you may find people immediately dismiss it. There are all kinds of extra syntax and baggage that may seem pointless at first.

mg
11 replies
1d9h

TL/DR: Haskell makes you add more meta information to the code, so that compilers can reason about it.

One of the examples in the article:

Python:

    def do_something():
        result = get_result()
        if result != "a result":
            raise InvalidResultError(result)
        return 42
Haskell:

    doSomething :: Either InvalidResultError Int
    doSomething = 
        let result = getResult
            in if result /= "a result"
                then Left (InvalidResultError result)
                else Right 42
Personally, I prefer the Python version. In my experience, the benefits of adding meta information like types and possible return values are very small, because the vast majority of time spent fixing problems in software development goes to systematic bugs and issues, not to dealing with type errors.

bmacho
4 replies
1d8h

TL/DR: Haskell makes you add more meta information to the code, so that compilers can reason about it.

Actually, Haskell lets you add more meta information to the code, similar to modern Python or TypeScript. Type information is optional. But you might want to add it; it is helpful most of the time.

In the example, doSomething implicitly depends on getResult, which doesn't show up in the type information, so the type information only tells you how you can use doSomething. To know what doSomething is, you actually have to read the code :\

gtf21
3 replies
1d7h

I'm not sure that's entirely true (I wrote the examples): the point I'm trying to make is that you can precisely describe what `doSomething` consumes and produces (because it's pure) and you don't have to worry about what some nested function might throw or some side-effect it might perform.

bmacho
2 replies
1d5h

I'm not sure that's entirely true (I wrote the examples)

Which part?

the point I'm trying to make is that you can precisely describe what `doSomething` consumes and produces (because it's pure)

I think you failed to demonstrate it, and more or less demonstrated the opposite of it: the type signature of doSomething does not show its implicit dependence on getResult.

In Haskell you can do

  foo :: Int
  foo = 5

  bar :: Int
  bar = foo + 1
(run it: https://play.haskell.org/saved/hpo3Yaef)

which is what your example does. In this example bar's type signature doesn't tell you anything about what bar 'consumes': it doesn't tell you that bar depends on foo, or on foo's type. You also have to read the body of bar, and it is bad for code reuse.

gtf21
0 replies
1d5h

Which part?

This part: "the type information only tells you how you can use doSomething. To know what doSomething is, you actually have to read the code :\" I think we're disagreeing on something quite fundamental here, based on "it doesn't tell you that bar depends on foo, or on foo's type. You also have to read the body of bar, and it is bad for code reuse."

(Although I am certainly open to the idea that "[I] failed to demonstrate it".)

A few things come up here:

1. Firstly, this whole example was to show that in languages which rely on this goto paradigm of error handling (like raising exceptions in python) it's impossible to know what result you will get from an expression. The Haskell example is supposed to demonstrate (and I think it _does_ demonstrate it) that with the right types, you can precisely and totally capture the result of an expression of computation.

2. I don't think it's true to say that (if I've understood you correctly) having functions call each other is bad for code re-use. At some point you're always going to call something else, and I don't think it makes sense to totally capture this in the type signature. I just don't see how this could work in any reasonable sense without making every single function call have its own effect type, which you would list at the top level of any computation.

3. In Haskell, functions are pure, so actually you do know exactly what doSomething consumes, and it doesn't matter what getResult consumes or doesn't because that is totally circumscribed by the result type of doSomething. This might be a problem in impure languages, but I do not think it is a problem in Haskell.

gtf21
0 replies
1d5h

In this example bar's type signature doesn't tell you anything about what bar 'consumes'

Yes, it does: `bar` in your example is an `Int`; it has no arguments. That is captured precisely in the type signature, so I'm not sure what you're trying to say.

thesz
0 replies
1d9h

the vast majority of time fixing problems in software development is spent on systematic bugs and issues, not on dealing with type errors.

You can turn "systematic bugs and issues" into type errors, then deal with them as type errors.

If you can confuse meters and seconds expressed as Double and erroneously add them, wrap them into types Meters and Seconds, autogenerate Num and other classes' implementations, and voila: you cannot add Meters and Seconds anymore, you get a type error.
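In full, that trick is just a couple of newtypes (a minimal sketch):

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}

    newtype Meters  = Meters  Double deriving (Show, Num)
    newtype Seconds = Seconds Double deriving (Show, Num)

    trip :: Meters
    trip = Meters 3 + Meters 4        -- fine: both sides are Meters

    -- broken = Meters 3 + Seconds 4  -- rejected at compile time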

I do that even in C++, if I use it for my personal projects. ;)

But where Haskell really shines is in effect control. You cannot open a file and write into it during the execution of a transaction between threads. If you parse text, you can also restrict certain actions. There are many cases where you need control over effects, and Haskell gladly helps there.

mmaniac
0 replies
1d9h

Your criticism seems more general than just Python and Haskell. It's really about dynamic and static typing. That's a legitimate debate, but as far as static typing goes, Haskell has one of the best static type systems around - and because of that, idiomatic Haskell is unlikely to look like the example you posted.

lkmain
0 replies
1d8h

I prefer the Haskell version. One can read it as a full sentence instead of being interrupted by the boring imperative Python version that breaks the train of thought on every line.

gtf21
0 replies
1d9h

so that compilers can reason about it

Actually this is the wrong takeaway, I think it's so that programmers can reason about it.

This isn't about type errors, it's about precisely describing a particular computational expression. In the python example, it's very unclear what `do_something` actually _does_.

catgary
0 replies
1d9h

Ok, your Haskell example is basically drawing your opponent as the wojak in the meme and declaring yourself the winner.

black_knight
0 replies
1d9h

The point is that the Haskell type system is an expressive way of solving systematic bugs. You can express both the data itself and the valid ways of handling it using types, which gives you a space to design yourself out of systemic problems. And when something goes awry, the strict typing and control over side effects mean that you can refactor fearlessly!

Re. the example, the compiler can infer types, and you can write almost the exact same code in Haskell as in Python:

    import Control.Monad (when)
    import Control.Monad.Except (throwError)

    doSomething = do
       let result = getResult
       when (result /= "a result")
            (throwError $ InvalidResultError result)
       return 42
But as has been noted, you are unlikely to find code like this in Haskell. An element of an Either type at the top level, without arguments, is either always going to be Left or always going to be Right, so it is a bit pointless. Since this example is so abstract it is hard to see what the idiomatic Haskell would be, because it would depend on the particulars.

Mikhail_K
10 replies
1d8h

Haskell programs are hard to read and hard to reason about. That is the reason the language is not very practical.

The promise that "if a Haskell program compiles, it is probably correct" has not materialized. The most popular Haskell project, Pandoc, has 1000 open issues.

grumpyprole
5 replies
1d8h

This is a poor and lazy criticism of Haskell. It might be hard to reason about the memory usage and other operational behaviours of a Haskell program, but the ability to reason about semantics and correctness is far ahead of the mainstream. It actually supports equational reasoning. It has statically checked effect tracking, checked encapsulation of mutable state, and much more. There is no all-powerful, pervasive "ambient monad" that lets code do absolutely anything.

Mikhail_K
4 replies
1d7h

It might be hard to reason about the memory usage and other operational behaviours of a Haskell program, but the ability to reason about semantics and correctness is far ahead of the mainstream.

For any practical program, memory usage and number of operations are part of the engineering specification and no one will deem correct a program that exceeds those specifications. So you just confirmed “impractical”, “academic” and “niche” charges.

It actually supports equational reasoning.

TLDR: to understand what a Haskell 5-liner does, you sometimes have to read a paper. Are you actually disputing the "impractical" and "academic" labels, or saying that those are _good_ things?

agentultra
2 replies
1d1h

For any practical program, memory usage and number of operations are part of the engineering specification and no one will deem correct a program that exceeds those specifications. So you just confirmed “impractical”, “academic” and “niche” charges.

I've encountered few C programmers who can predict what instructions will be emitted by their compiler.

Update: You might be surprised, in the presence of optimizations, how similar the code emitted by gcc and GHC can be for similar programs.

Fewer still are those who can specify their pre- and post-conditions and loop invariants in predicate calculus in order to prove their implementation is correct.

Most people wing it and rely on past experience or the wisdom of the crowd: what I like to call programming by folklore. Useful for a lot of tasks, I use it all the time, but it's not the only way.

The nice thing about Haskell here is that, while there is a lot you cannot prove (termination, etc... please verification friends, understand I'm generalizing here), you can write a sufficient amount of your specification and reason about the correctness of the implementation in the same language.

This has a nice effect: you can write the specification of your algorithm in Haskell. It won't be efficient enough for use at first. However you can usually apply some basic algebra to transform the program you know is correct into one that is performant without changing the meaning of the program.
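A classic instance of that spec-then-transform workflow:

    -- Specification: obviously correct, but O(n^2).
    reverseSpec :: [a] -> [a]
    reverseSpec []       = []
    reverseSpec (x : xs) = reverseSpec xs ++ [x]

    -- Derived via the standard accumulator transformation: same meaning
    -- (provable by induction), but O(n).
    reverseFast :: [a] -> [a]
    reverseFast = go []
      where
        go acc []       = acc
        go acc (x : xs) = go (x : acc) xs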

Mikhail_K
1 replies
10h25m

I've encountered few C programmers who can predict what instructions will be emitted by their compiler.

That's an irrelevancy. Being unable to predict the specific instructions does not preclude one from making reasonably correct judgements about a program's performance.

It is a fact that reasoning about performance of Haskell program is virtually impossible, unless you're an active ghc developer, and that's why the language remains unused for practical problems. Apart from buggy pandoc and few blockchain scams, that is.

agentultra
0 replies
6h4m

That's simply not true. You can use the same tools to reason about time performance as we do for nearly every program. Predicting memory performance is harder, due to optimizations and to how untrained Haskell developers have a hard time spotting where their code is leaving unevaluated thunks on the heap. However, the memory profiling tools are there and are great at catching them, so in practice, as in C++ and many other languages, it's a pain but not a huge deal.

As for practical problems, I dunno. I work in Haskell full-time at a company that isn't doing blockchains. And I stream myself working in Haskell once a week on pretty practical things. I've made a couple of small games and a PostgreSQL logical replication client, and I have been learning different algorithms. All pretty practical to me.

grumpyprole
0 replies
1d7h

For any practical program, memory usage and number of operations are part of the engineering specification

And yet if I read a C++ program, I have no idea from just a local inspection where, if anywhere, the allocations are happening. Reasoning about operational behaviour is not exactly a solved problem in other languages either.

TLDR: to understand what a Haskell 5-liner does, you sometimes have to read a paper.

You have to understand the syntax and the semantics and genuinely know what you are doing. This is no different from any other programming language. It would take a whole paper to explain JavaScript's equality operator! However, Haskell does have one distinct advantage: the abstractions often come from maths and are very widely applicable. These abstractions will still be around in 10 years' time.

kreyenborgi
3 replies
1d8h

Haha, those issues are things like "LaTeX to HTML conversion: \\label and \\ref work for figures but not equations", which, I mean, how on earth is that related to what language pandoc was created in... I'm guessing the number of issues is down to pandoc's popularity and insane scope.

Mikhail_K
2 replies
1d7h

So you're saying that pandoc is correct in academic, but not practical sense?

kreyenborgi
1 replies
1d3h

No.

Mikhail_K
0 replies
10h31m

And yet, you do. Not in those exact words, but that's what your argument amounts to.

fbn79
9 replies
1d9h

We need a language with Haskell syntax, Rust memory management and Typescript toolchain/ecosystem :))

swiftcoder
3 replies
1d9h

I feel like you've specifically picked the worst part of each language here.

Haskell's syntax, like many FP syntaxes, is inscrutable on first acquaintance. There's a reason hybrid languages like Elixir thrive...

Rust's memory management is a great boon for a close-to-the-metal language, but if all your types are immutable you don't actually need/want to deal with the borrow checker.

Typescripts toolchain and ecosystem are... ok, at best? I'd give a solid pitch for the Rust ecosystem having reproduced the best parts thereof (and there is still room for improvement even so)

fbn79
2 replies
1d7h

Haskell has a garbage collector (and it has a lot of work to do). So even in the context of immutability, if you don't want a GC you still need to take care of memory yourself, using RAII like Rust or some other low-level technique. As for syntax: for me, Haskell is beautiful. But maybe it's just my love of having type annotations separate from the function declaration rather than mixed in.

swiftcoder
1 replies
22h41m

Garbage collectors can really be very efficient in languages that are both strongly-typed and truly immutable.

The type system means you don't have to worry about folks hiding pointers in arbitrary pointer-sized integers, so you know all the roots ahead of time.

Immutability means you can only ever create references in one direction (i.e from new objects to old objects), and you can't ever create cycles.

This lets you do fun shit like a mark&sweep garbage collector in a single pass (rather than the usual two) - and if you have process isolation guarantees (a la Erlang), you don't necessarily have to suspend execution while it runs. Or maybe a generational collector where the generations are entirely implicit.

fbn79
0 replies
6h13m

The GC could be very smart and optimized, but in the end... "Any sufficiently complicated program in a garbage-collected language contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of malloc and free". https://news.ycombinator.com/item?id=22802451

joelthelion
2 replies
1d9h

I'd argue the syntax is the worst part of haskell. In particular, the lack of object notation for accessing fields (e.g. car.doors) is particularly frustrating.

I still love the language, BTW.

rebeccaskinner
0 replies
16h52m

There is an extension that lets you do this now (OverloadedRecordDot), but truthfully I think it's a really bad idea. The (.) operator already has a very concrete meaning in Haskell, and record dot notation means that you suddenly need to care about the specific details of how values are calculated. Field accessor functions are much better imo, even if they seem a little odd.

gtf21
0 replies
1d9h

You can have that syntax if you want it via `OverloadedRecordDot`.

I actually really like the syntax as it makes it easy to write DSLs which are actually just Haskell functions.
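
For the curious, a minimal sketch of what the extension buys you (the `Car` type here is made up for illustration; needs GHC 9.2+):

    {-# LANGUAGE OverloadedRecordDot #-}

    data Car = Car { make :: String, doors :: Int }

    -- With the extension enabled, fields can be read with dot syntax
    -- instead of the generated accessor functions (make car, doors car).
    describe :: Car -> String
    describe car = car.make ++ " has " ++ show car.doors ++ " doors"

    main :: IO ()
    main = putStrLn (describe Car { make = "Trabant", doors = 2 })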

rebeccaskinner
0 replies
16h55m

Haskell has linear types now, which can give you something similar to Rust's affine types. The library ecosystem for it isn't very mature yet, though.
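
A tiny sketch of the flavour (my own example; requires the LinearTypes extension, GHC 9.0+):

    {-# LANGUAGE LinearTypes #-}

    -- The linear arrow (%1 ->) promises each component of the pair is
    -- consumed exactly once. A version that dropped or duplicated a
    -- component would be rejected by the type checker.
    swap :: (a, b) %1 -> (b, a)
    swap (a, b) = (b, a)

    main :: IO ()
    main = print (swap ('x', True))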

empath75
0 replies
1d5h

I think you just want Rust.

You can write extremely haskell-like code with Rust.

Here's the first example in rust:

    fn safe_head<T>(list: &[T]) -> Option<&T> {
        match list {
            [first, ..] => Some(first),  
            [] => None,                  
        }
    }

    fn print_the_first_thing(my_list: &[String]) {
        match safe_head(my_list) {
            Some(something) => println!("{}", something),
            None => println!("You don't have any favourite things? How sad."),
        }
    }

    fn main() {
        let my_favourite_things = vec!["raindrops on roses".to_string(), "whiskers on kittens".to_string()];
        let empty_list: Vec<String> = vec![];

        print_the_first_thing(&my_favourite_things);
    
        print_the_first_thing(&empty_list);
    }
(of course Rust has a lot of helper functions that avoid all that verbosity -- you can do the whole thing in one expression, if you want)

    println!(
        "{}",
        my_favourite_things.get(0).map_or(
            "You don't have any favourite things? How sad.".to_string(),
            |something| something.to_string()
        )
    );

sesm
6 replies
1d3h

Haskell is an experiment in having laziness at the language level. This experiment clearly shows that laziness at the language level is a bad idea. You can get all the benefits of laziness at the standard library level, as illustrated by Clojure, and by Twitter's Storm using it in production.

All the other FP stuff (union types, etc) existed before Haskell in non-lazy FP languages.

kccqzy
2 replies
1d3h

Laziness is but one mistake in Haskell. It should not prevent you from using other parts of the language that are wonderful. There's a reason Mu exists, which is to take Haskell and make it strict by default: there are plenty of good things about Haskell even if you consider laziness to be a mistake.

(Of course a small minority of people don't consider laziness as a mistake as it enables equational reasoning; let's not go there.)

tome
1 replies
11h5m

Having used Mu I concluded that Haskell got function laziness correct. (Data type laziness is a different issue, but that can be solved by `StrictData`).
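
For the unfamiliar, a minimal sketch of what the extension changes (my own example):

    {-# LANGUAGE StrictData #-}

    -- With StrictData, every field declared in this module is strict by
    -- default, as if written with bangs: data Point = Point !Double !Double
    data Point = Point Double Double
      deriving Show

    main :: IO ()
    main = print (Point 1 2)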

kccqzy
0 replies
5h13m

The problem with StrictData is that you need to convince every library in your dependency graph to switch to it, or provide strict versions of the data structures. Common container types like Map and Set do this. Your typical library implementing a domain-specific functionality does not.

whateveracct
0 replies
20h59m

This experiment clearly shows that laziness at the language level is a bad idea.

This is quite the claim. I know plenty of experienced and productive Haskellers who disagree with this (myself included)

iainmerrick
0 replies
1d1h

Right, I was surprised I had to scroll down so far to see the first mention of laziness; it's the core feature of Haskell (copied from Miranda so that researchers had a non-proprietary language to build their work on).

From everything I've read about people's experiences using Haskell for large projects, it sounds like lazy evaluation unfortunately adds more problems than it removes.

agentultra
0 replies
1d3h

There’s a strong case that laziness should be the default: https://m.youtube.com/watch?v=fSqE-HSh_NU

I’m not sure I’m experienced enough in PLT to have a strong opinion myself.

However, from experience, laziness has a lot of advantages from both a program-construction and a performance perspective.
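
One small illustration of the construction benefit (my own example, not from the talk): producers and consumers compose without either side materialising the whole structure.

    -- 'take' demands only five elements, so mapping over the infinite
    -- list [1 ..] terminates: the consumer drives the producer.
    firstFiveSquares :: [Integer]
    firstFiveSquares = take 5 (map (^ 2) [1 ..])

    main :: IO ()
    main = print firstFiveSquares  -- [1,4,9,16,25]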

me_vinayakakv
6 replies
1d1h

I was looking at the pattern matching example in the article with the `Either` type. If we need to unwrap and check all the cases one by one, would it become a callback hell?

I was going through a Scala codebase at work that uses `Future`s and `map`ing and `flatMap`ing them. Sometimes the callbacks went 5-6 levels deep. Is there a way to "linearize" such code?

I come from a JS/TS background and don't have much experience with pure functional languages. But I love how TS handles discriminated unions - if we handle a branch and `return` early, that branch is removed from the union for the subsequent scope - and I was wondering if something of that sort can be achieved in Haskell/Scala.

tel
2 replies
1d1h

In Haskell, that's usually done using `do` syntax.

    do
      a <- somePartialResult
      b <- partialFunction1 a
      c <- partialFunction2 a b
      return c
where we assume signatures like

    somePartialResult :: Either Error A
    partialFunction1 :: A -> Either Error B
    partialFunction2 :: A -> B -> Either Error C

this overall computation has the signature Either Error C. (In Haskell the error type comes first, because the Either monad acts on the second type parameter.) The way it works is that the first failing computation (the first Either that's actually a Left) will short-circuit and become the final result value. Only if all of the partial computations succeed (are Rights) will the final result be Right c.

In Haskell we don't have an early-return syntax tied to function scope. Instead, we construct something equivalent using `do` syntax. This can be a little weightier than `return`, but the upside is that you can construct other variants of things like early returns that can be more flexible.

me_vinayakakv
1 replies
16h31m

Nice! Would it be possible to transform an error to something else using this syntax?

Or, should we resort to a method of `Either` that transforms its `Left` in that case?

tel
0 replies
15h59m

Unfortunately, no. Or, rather, I'm sure there's a way to make it happen, although it's not typical practice. Typically you'd resort to mapping the left sides of your Eithers so that the error types match.
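
For instance, a minimal sketch using `first` from Data.Bifunctor (the `AppError` type and both helper functions are made up for illustration):

    import Data.Bifunctor (first)
    import Text.Read (readMaybe)

    data AppError = BadNumber String | DivideByZero
      deriving Show

    parseNumber :: String -> Either String Double
    parseNumber s = maybe (Left s) Right (readMaybe s)

    divide :: Double -> Double -> Either () Double
    divide _ 0 = Left ()
    divide a b = Right (a / b)

    -- 'first' maps over the Left side, lifting each step's error into
    -- AppError so all three steps can share one do block.
    program :: String -> String -> Either AppError Double
    program x y = do
      a <- first BadNumber (parseNumber x)
      b <- first BadNumber (parseNumber y)
      first (const DivideByZero) (divide a b)

    main :: IO ()
    main = print (program "4.2" "0")  -- Left DivideByZero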

Rust offers a similar facility (though specialized to just handle a couple kinds of error handling) using its `?` syntax. This works essentially identically to the do syntax above, but also includes a call to transform whatever error type is provided into the error type of the function return.

Note that in Rust (a) this technique only, today, works at function boundaries and (b) will always be explicitly annotated since all functions require an explicit type. This helps a bit over Haskell's more general approach as it provides some additional data to help type inference along.

That said, if you were interested, it's likely possible to emulate something very similar to Rust's technique in Haskell, too.

But I don't think I've ever seen that. It just doesn't feel as stylish in Haskell. The From/Into traits define a behavior that's much more pervasive than most type classes in Haskell. It works well for Rust, but is I think less compelling to the Haskell community.

TJSomething
1 replies
1d

I've been out of the Scala game for a few years, but I would use a for comprehension with the cats EitherT monad transformer.

https://typelevel.org/cats/datatypes/eithert.html

    def divisionProgramAsync(inputA: String, inputB: String): EitherT[Future, String, Double] =
      for {
        a <- EitherT(parseDoubleAsync(inputA))
        b <- EitherT(parseDoubleAsync(inputB))
        result <- EitherT(divideAsync(a, b))
      } yield result

valenterry
0 replies
1d

Yeah, but usually we just use something like ZIO nowadays. So the code becomes:

    def divisionProgramAsync(inputA: String, inputB: String): IO[String, Double] =
      for {
        a      <- parseDoubleAsync(inputA)
        b      <- parseDoubleAsync(inputB)
        result <- divideAsync(a, b)
      } yield result
(the annoying wrapping/unwrapping isn't necessary with ZIO here)

You can also write this shorter if you want:

    def divisionProgramAsync(inputA: String, inputB: String): IO[String, Double] =
      for {
        (a, b) <- parseDoubleAsync(inputA) <*> parseDoubleAsync(inputB)
        result <- divideAsync(a, b)
      } yield result

jose_zap
0 replies
1d1h

Yes, it is possible to linearize it. You can, for example use do notation:

    result <- do
        a <- someEitherValue
        b <- anotherEitherValue
        return (doStuff a b)
In the above example the do notation will unwrap the values as a and b, but if one of the results is a Left, the computation is aborted, returning the Left value.

This is just one of the many techniques available to make error checking linear.

imoverclocked
6 replies
1d7h

My biggest gripe with Haskell, especially when dealing with lower level code, is that there is no implicit enforcement of dealing with error states. I like golang far more in this regard. All of the “if error” guards may be ugly but they sure impose a culture of dealing with problems that will arise.

I’ve come across plenty of Haskell code that just expects a happy path all of the time and can’t deal with any other situation. That’s great for POC work but horrible in production.

gtf21
4 replies
1d7h

I write about this at some length in the essay, perhaps you can help me by telling me why the section on "Make fewer mistakes" _doesn't_ satisfy?

YuukiRey
2 replies
1d2h

You wrote

Haskell solves the problem of the representation of computations which may fail very differently: explicitly through the type system.

In my experience this is very hit and miss. Some libraries use exceptions for lots of error states that in Go would be a returned error value. I'm therefore left to decipher the docs (which are often incomplete) to understand which exceptions to expect, and why and when.

Last library I remember is https://hackage.haskell.org/package/modern-uri

From their docs: > If the argument of mkURI is not a valid URI, then an exception will be thrown. The exception will contain full context and the actual parse error.

The pit of success would be for every function that can fail for some reasonable cause (such as a URI parser for user-supplied input) to make it a compile-time message (warning, error, whatever you prefer) if I fail to consider the error case. But there's nothing that warns me if I fail to catch an exception, so in the end, in spite of all of Haskell's fancy type machinery, in this case I'm worse off than in Golang.
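
(To make the shape of the API I'd want concrete, here's a toy sketch -- not modern-uri's actual interface:)

    newtype Uri = Uri String deriving Show
    newtype UriParseError = UriParseError String deriving Show

    -- Failure is a value, not an exception: the caller must pattern
    -- match, and -Wincomplete-patterns flags a forgotten Left case.
    mkUri :: String -> Either UriParseError Uri
    mkUri s
      | take 7 s == "http://" || take 8 s == "https://" = Right (Uri s)
      | otherwise = Left (UriParseError ("unrecognised scheme in " ++ s))

    main :: IO ()
    main = case mkUri "ftp://example.com" of
      Right uri -> print uri
      Left err  -> print err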

tome
0 replies
10h53m

Some libraries use exceptions for lots of error states that in Go would be a returned error value

Yes. This is a very very bad aspect of the design of many Haskell libraries. They just throw away the whole point of doing Haskell.

gtf21
0 replies
10h30m

Some libraries use exceptions for lots of error states that in Go would be a returned error value.

This just seems like bad libraries, I'd agree that this is bad and sort of defeats the point. I haven't actually encountered this with any libraries I've used, and we tend to avoid MonadThrow / Catch except in particular circumstances.

in this case, I'm worse off than in Golang.

Having (unfortunately) had to write some Golang, I don't think this is true -- I've encountered plenty of Golang code in which it seems idiomatic to return things like empty strings and empty objects instead of error values, and those are, I think, even easier to mishandle.

Perhaps this can be summarised as: you can still write bad Haskell, but I don't think it's particularly idiomatic looking at the libraries I've spent most of my time using, and the machinery you are provided allows you to do much, much better.

weebull
0 replies
1d6h

I think one of the big takeaways from Haskell for me was that errors don't always need to be explicitly handled. Sometimes returning a safe sentinel value is enough.

For example, if the function call returns some data collection, returning an empty collection can be a safe way to allow the program to continue in the case of something unexpected. I don't need to ABORT. I can let the program unwind naturally as all the code that would work on that collection would realise there's nothing to do.

Debugging that can be a pain, but traces and logging tend to fix that.

HelloNurse
0 replies
1d7h

And also horrible for the typical functional programmer that likes clever "solutions" but hates the "boring" parts of actual good software.

skybrian
5 replies
22h39m

As far as I've heard, Haskell's type system doesn't normally prove functions to be total; they can diverge. This is fine, though, because for ordinary programming, a proof of totality isn't a useful guarantee. You care how long programs actually take to run, not whether they would theoretically finish eventually.

It's only when proving theorems that a mathematical proof of totality matters, and there are specialized languages for that.

For most purposes, we test in order to make a scientific claim: that we tried running it, it worked for the inputs we tried, and it completed in a reasonable amount of time.

This is true of property testing and even model-checking; in simple cases, sometimes an exhaustive test can be done, but they don't actually prove statements outside the bounds used when testing.

staunton
2 replies
22h1m

It's perfectly feasible to have proofs about programs together with a guaranteed upper bound on (something like) the "number of processor instructions it will take" (given known finite bounds for all inputs).

Of course, just like any other system that allows correctness proofs, it wouldn't be nearly useful enough to justify the effort for all but a negligible number of applications. That's at least until the levels of effort required are significantly reduced.

skybrian
1 replies
21h35m

Yes, a theoretical calculation like that would be useful as an estimate. But theoretical performance on ideal machines is only loosely related to performance on real machines under real conditions. That's true of testing, too. Benchmark performance varies even between runs.

So there's still going to be a theoretical math versus science and engineering divide.

Another perspective is that we have a useful division of concerns. Static checking is useful to find some kinds of errors. APIs help to ensure that certain things don't change in a new version, so that performance improvements are less likely to break callers.

Depending on the domain, leaving some things like performance and size limits deliberately unspecified in APIs seems like more of a feature than a bug? Stricter interfaces aren't always an improvement.

staunton
0 replies
11h22m

leaving some things like performance and size limits deliberately unspecified in API's seems like more of a feature than a bug

In rare cases there might be exceptions. Hard real time applications and constant time cryptography come to mind.

Regardless, I didn't mean for such proofs to be part of an API or any kind of interface. It's just a guarantee you would get about your program. E.g., "it never times out", or "the worst case data throughput is X" (in whatever hardware model the proof assumes, which in theory could be made very close to the actual hardware).

mrkeen
1 replies
12h3m

As far as I've heard, Haskell's type system doesn't normally prove functions to be total; they can diverge.

This is true. You can write a value which is the infinite list of natural numbers, and I wouldn't want the type system to preclude it.

    nats :: [Integer]
    nats = [1..]
However, I would love to be able to opt in to declaring functions total on a case-by-case basis, like Idris can.

This is fine, though, because for ordinary programming, a proof of totality isn't a useful guarantee. You care how long programs actually take to run, not whether they would theoretically finish eventually.

This wouldn't be the point of the guarantee for me. Haskell functions already give you assurances that you've not made any mistakes:

* You haven't accidentally introduced null, or missed any null checks.

* You haven't written code which will do a different thing tomorrow than it did today.

* Your function won't race, nor will it cause other functions to race.

If you have a function Foo -> Bar, and you give it a Foo, you've all but proved that you'll get a valid Bar back. How can you screw that up? By diverging. I'm not trying to put a bound on a long-running function; I just want the type system to make sure the function isn't sucking on its own tailpipe.
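
(A tiny sketch of that loophole, with hypothetical Foo and Bar types:)

    data Foo = Foo
    data Bar = Bar

    -- Typechecks as Foo -> Bar, yet no Bar ever comes back: divergence
    -- is the one escape hatch in the "give it a Foo, get a Bar" reading.
    bad :: Foo -> Bar
    bad x = bad x

    main :: IO ()
    main = putStrLn "bad is never called, so this terminates"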

skybrian
0 replies
2h23m

Yes, it would detect a certain class of mistakes where you put an infinite loop or infinite recursion in your code. But I find these are pretty rare, and there would still be other mistakes where on some inputs, it would take a year to run.

psychoslave
5 replies
9h53m

programming is not maths, and that anything that smells of maths should be excised

Math is not academic gibberish, though. Often the hard part of making a great conceptual thing work in the wild is giving it a more approachable form than the ridiculously arcane soup of Greek letters and made-up symbols, sprinkled with a generous quantity of opaque acronyms and words shrunk into trigrams.

Not to say the CS/IT industry always shines at making any of this better; it can actually be even more "enthusiastic" about acronym jargon, in my experience.

No one communicates perfectly, starting with me. :D

xigoi
3 replies
9h49m

Do you think mathematicians deliberately use Greek letters and “made up symbols” (whatever that even means) to make their work less accessible? Or could there possibly be a different reason?

psychoslave
1 replies
8h55m

No, at least I don't think this is generally a voluntary, conscious move by individuals. Back in university, I remember how often my teachers could not tell me why this or that symbol was used; they were just reproducing what they had been introduced to, without questioning its history or the relevance of keeping it. Now, to be fair, it seems most people never care about that kind of "detail" and are happy to just apply the formula as expected by whoever will assess their performance, get the degree, and move forward in their career.

Of course there are a few people out there who do deliberately select an obscure symbol just to put an esoteric vibe on the topic. But I believe that's definitely not the norm. Lambda as selected by Church is almost there, if I'm to believe https://math.stackexchange.com/a/2095748/85628 And in any case I by far prefer "anonymous function" as a term, and prefer languages that don't use `lambda` as the keyword to implement them (looking at you, Python!). Of course, just one symbol is not a big deal, but when every academic out there feels they need to introduce their own special symbols to feel as special as the ones they admire, we end up with an irrelevantly large mess of symbols that don't really add significant expressiveness but do make things harder to grasp.

https://en.wikipedia.org/wiki/Mathematical_operators_and_sym... is what I have in mind for "made up symbols".

kstrauser
0 replies
4h13m

I'm starting this comment with the letter I. I don't know who decided that we needed a symbol for that particular combination of sounds, or why they decided it would sometimes look like a vertical line with optional top and bottom bars and other times look like a shorter vertical line with a dot over it. That all seems incredibly arbitrary. In fact, every symbol I'm using to express this idea is incredibly arbitrary. Ever really look at a comma? A question mark is a curly, funny thing.

And yet, we still came to a consensus that these symbols have these meanings, completely arbitrary as they may be. I don't think + or ∪ or ≥ or λ is any more obscure than I or , or ?. I would much rather write "A⇒B" than "A implies B" 100 times on a page. Besides being shorter, using a symbol reinforces that we're referring to a strict mathematical interpretation and not the vagaries of ambiguous English.

TL;DR if you don't hassle your English teacher on why we use ? to ask a question, and it doesn't stop you from learning English, don't pester your math teacher to explain the etymology of λ. The history of those languages is a different subject from the practice of them.

stoperaticless
0 replies
9h23m

To seem cool and get laid.

djtango
0 replies
9h39m

For me there is an ebb and flow between maths (theory) and practice. Humans have long been able to achieve things with a loose and intuitive understanding of what they're doing - practice usually outpaces theory for a long time while theory struggles to catch up, but once theory does catch up it tends to unlock a lot of new and interesting things.

You can choose examples like "sun rises in the east" or "things fall down to the ground" or even things like fire to see how we were long able to do useful things before having a deep fundamental understanding of what they were. And while trying to understand them, we had a whole bunch of wrong ideas along the way all while the practitioners got on with their lives.

We also see this in physiological spaces like sports and health. Just the other day I watched a video about how Chopin had some deep intuitive understandings about the anatomy of the hand that was contrarian to his peers but proved to be far ahead of his time.

With programming we have a lot of different theory to dip into: information theory, computer science, systems theory, category theory, etc. Some of these ideas have been slowly chugging along in academia, but the early days of computing were built with dirty for loops, goto, and hand-crafted assembly or even less. We still managed to put people on the moon with that, automate a bunch of commerce, and put stuff on the internet - long before most devs had heard of an ML type system.

But the ideas from Scheme gave us Ruby, Python and JS, which unlocked a whole new wave of programming and arguably made it accessible to more people, increasing humanity's "productivity" in some sense. I'm sure people who believe in static typing would disagree, but YouTube, Instagram and GitHub were built in these dynamic languages...

scotty79
4 replies
1d9h

Unlearning and relearning

Interesting how, when encountering Rust, I didn't have to unlearn reference based programming, just learn value based programming. Somehow pure functional language advocates insist you are doing things wrong and need to unlearn it first. Kind of reminds me of a sect that promises you great things if only you work hard on leaving all your prior life behind.

gtf21
3 replies
1d9h

I think there are more fundamental differences between functional and imperative programming paradigms (or, perhaps, declarative and imperative programming styles?) than between passing by reference and passing by value (after all, variables are just references, filesystems have links, it just doesn't seem that unfamiliar).

I have definitely seen people struggle to wrap their head around declaring expressions representing what they want to compute when they are very used to imperative control flow like mutating some state while iterating through a loop.

Kind of reminds me of a sect that promises you great things if only you work hard on leaving all your prior life behind.

I think this is sort of saying "hey this one thing looks like this other thing I don't like, therefore it must carry all the same problems". Perhaps we can call it "the duck type fallacy", but I don't think it's true to say that "anything which tries to change paradigm" is equivalent to cults.

chii
1 replies
1d8h

definitely seen people struggle to wrap their head around...

I think it's a top-down vs bottom-up approach to solving a problem.

Most people actually think top-down. They break a problem into smaller sub-problems, which they then either break down further or, once small enough, solve directly.

I think the functional style of thinking makes you take a bottom-up approach to problem solving. You create a very small solution (function) to a very trivial version/type of the problem, then repeat for each small thing. Once you have a sufficient number of basic functions written, you can assemble them to solve a bigger problem.

nineteen999
0 replies
17h2m

You can take either approach in nearly any other programming language. I disagree that functional programming has any particular advantage here.

scotty79
0 replies
1d3h

after all, variables are just references

That's the whole point of Rust, that they are not that. They are named, sized memory slots that values can be moved into or moved out of.

I have definitely seen people struggle to wrap their head around ...

The struggle comes from being forced to use recursion where it doesn't make things easier to express but harder. Then they remember it all compiles down to machine code that just uses iteration fueled by the bare-metal equivalent of gotos, and the struggle feels pointless.

Imagine somebody took away your stack so when you want to do recursion, you'd be forced to roll your own stack every time. You'd struggle too.

jes5199
4 replies
1d1h

in general, I am in favor of language features that make it easy to prove that common errors will not happen. Conversely, I am against language features that make it easy to make new classes of errors that are hard to reason about.

Haskell manages to do a lot of both. The kinds of problems I ran into in my Haskell era were much, much weirder than the problems I run into in other environments - things that, when I explain them to other programmers, they often don't even believe me.

On balance, for me, the new problems were worse than the old problems, but your mileage may vary.

alxmng
3 replies
19h13m

This is my experience as well. Referential transparency and immutability have many advantages, with few disadvantages (if any). Type checking is great as a way to enforce constraints. However, nominal types create unnecessary incompatibility and endless type shuffling every time you want to make even simple changes. I maintain a web app written in Haskell and there are 3 or 4 different types for URLs in the codebase, even though there's no real difference between them. Nominal typing is terrible for code reuse via third-party modules. So many hours wasted wrapping types or shuffling between them.

A functional language with a simple set of structural types would be the sweet spot for me. Clojure is probably the closest to this.

joshlemer
1 replies
13h56m

What kinds of things led to multiple types of URLs in the same codebase?

alxmng
0 replies
55m

When I first started the project, URLs needed certain constraints enforced in my business logic. So I thought "Great, let me create a type for URLs". This is the type that parses URLs from user input and gets marshaled/unmarshalled to the DB.

Then I needed to ingest RSS feeds. So I found a library that handled that for me. Except that library uses another type for URLs. Uh oh. What should I do? I could change my URL type to be a wrapper type around both types, or write code to convert between the types. I chose to convert. Now I'm writing code to shuffle between the types where this RSS parsing module is used.

Then I needed to make HTTP requests. So I pulled in a library to handle HTTP requests. Of course, that library uses another type for URLs (from another library it depends on). Great. Now I have 3 types for a URL.

Then I needed to parse XML... and you know where this story is going.

So now my codebase has many different URL types.

The type-a-holics will say: "This is actually good! Each implementation of the URL type might have slightly different constraints, and the type system makes this all explicit. You should be grateful you spend half of your development time fiddling with types. The fact that `unpack . decodeUtf8` is littered around your codebase isn't code smell, it's the splendor of a type system that's saving you from yourself. You should learn to love the fact that you have to deal with String, Text, and ByteString and 4 URL types to fetch and parse an RSS feed. Otherwise your software would be full of bugs! Silly developer."

One day I finally woke up from this type nonsense. There's integers, rationals, strings, lists, and maps. The end.

birdfood
0 replies
14h47m

I think I'm at the same conclusion. I basically want ocaml but with structural / compile time duck typing of all types (I know about objects but they don't seem widely used). And some sort of reflection mechanism to cover 80% of the cases where you'd reach for ppx / macros (i.e. database interface code gen of types).

0xTJ
3 replies
1d7h

While the article is interesting, I find the layout of this website infuriating. A narrow strip of text, each line holding ~a dozen words makes it so much longer vertically than it needs to be, on desktop. I ended up using Inspect Element to change the width from 600px to 1200px, to fill up the most comfortable reading area on-screen.

gtf21
2 replies
1d7h

Sorry to hear that. I built it that way because I prefer reading narrower columns of text (maybe because I read a lot of magazines and newspapers, who knows).

0xTJ
1 replies
1d5h

Fair enough, if that's what you prefer then you might as well; it's more of a personal preference. My main monitor is an ultra-wide, so it ends up using less than 20% of the total width. Though I can see it being tough to have a good solution that works everywhere.

YuukiRey
0 replies
1d

The general recommendation is to keep the measure of the page fairly narrow, since studies show that very wide text is harder to read than a narrower column. So, looking at the layout from a best-practices point of view, the author made the right call.

sevensor
2 replies
1d7h

Not a word about laziness? This is at this point the most interesting thing about Haskell. As many others have pointed out, the type system has hugely influenced other languages, but laziness by default, for everything, seems like an equally big deal, and equally hard to get one’s head around.

whateveracct
0 replies
20h56m

You don't even notice the laziness most of the time. Even if you are benefiting from it.

gtf21
0 replies
1d5h

Nope: I think the laziness aspect is very interesting, but it's not something that makes Haskell (for me) a great programming language. Or, at least, it's not in my list of the top reasons (it is in there somewhere).

nazka
2 replies
1d

Is it still true that writing optimized Haskell is extremely hard? Since Haskell is garbage-collected, can I write code as fast as, say, Java or Go?

rebeccaskinner
0 replies
16h57m

I don’t think optimization in Haskell is much different from any other language. In most cases naive Haskell is pretty fast and memory efficient, but there are some patterns that will make it more or less so. There are a handful of common patterns you can learn that are idiomatic and will generally result in faster or more memory efficient code, and some common libraries that you can use that are more efficient.

There are also some common patterns for less idiomatic but more performant code that you can use as a first pass when you need to optimize things further. Usually that's sufficient, but when you need to go even further with optimization then it can get hard - as it can in every language. In Haskell, that usually means starting by dumping core and seeing what the compiler is doing, and getting familiar with the different passes the compiler makes. As a last resort you can also just write C and use the FFI.

kevindamm
0 replies
1d

This varies by use case and how much / which extensions you're including in your Haskell source (and, to a lesser extent, which libraries being included in the equivalent Java or Go source).

I haven't taken a lot of measurements and my production code is biased to Go, C++, Python and Java, with most of my Haskell experience being side projects and toys for learning from, and writing a type-unifier for a production project. I can summarize what I learned but I would be interested in seeing better measurements.

First, though, let's be more specific about what you mean by "writing code as fast," which I think should be refined to "time spent writing code" + "time compiling code" + "time spent in language runtime" + "time spent debugging / reading code". Depending on your project, and who if anyone is collaborating, each of these might be more or less important. Sometimes runtime speeds dwarf the needs of development or even debugging time. Sometimes compilation speeds afford the quick feedback loop that contributes to flow in development time.

Within that framing, Haskell can excel at development time with small teams and limited scopes. It affords writing a domain-specific language within the code, including very custom operators, and this carries the risk of overburdening with complexity, and strongly proportional to the number of people on the team. Things can get complex fast and it can contribute some to compilation time if there is a lot of recursive complexity to the type system. But if the source is organized well and doesn't need to be very dynamic, this may not be much of a concern. As an underlying engine for very dynamic data inputs and sufficiently complex numerical analysis or IO management as its primary purpose, it would probably do well.

The packaging system (Hackage) is pretty good, and that benefits the development time considerably. Adding some modules may impede compilation times, for much of the same reason as above, the type inference can become expensive. And undisciplined source management can lead to a lot of type implementations that are near copies of each other. Obviously this also ties into the reading/debugging time, too.

For runtime, though, yes Haskell can be competitive with bare-metal C implementations. I think there have been a few papers written about that going back a decade or so.

jesse__
2 replies
1d2h

The thing that always amuses me when I read articles like this is that the things they point out as differentiating the language are usually, at best, small time-savers.

1. the lack of nullable types

I very rarely write these bugs, and when I do I can typically fix them in 5 minutes. This is because I typically do (2) in my projects, which does largely eliminate this error.

2. representations of “failable” computations

Basically any modern language can do this. It might not be 100% as ergonomic as it is in Haskell, but it also isn't a large source of bugs in my experience.

3. pattern matching and completeness checks

Okay, these are nice and ergonomic in Haskell. Other languages get pretty close. Again, not a source of time-consuming bugs.

4. the avoidance of “primitive obsession”

The example he gave for this is inanely contrived, and the bug would likely take a small amount of time to fix, even for a junior. Admittedly, this is a nice convenience feature and I would love having types for different color spaces, or radians vs degrees, but at the end of the day I spend basically 0% of my time on bugs like this, so it's barely worth mentioning.

You know what I'd like a language to help me with? Keeping track of inter-data dependencies so I don't have to litter my code with a million assertions to make sure the sub-type of the sum type I'm working with is correct. Or giving me a way to express structural dependencies between pieces of code when writing multithreaded programs. Like saying "hey, this render pass has to happen after 'DoEntitySimulation' has completed" .. or whatever. I'm not aware of any language that even tries to do that, although I think Bungie wrote something like this in their engine for Destiny2.

And metaprogramming. I ended up writing my own metaprogramming language because none of the ones I tried could even do the basics of what I wanted in an ergonomic way, and be runtime-performant.

For reference, my language of choice is typically C++99. Maybe I'm not the intended target audience.

valcron1000
1 replies
23h55m

This is actually a very good comment. Haskell has had these features since the early 2000s, and they were a major competitive advantage in the language space, but today I would argue that if you're using a modern language they don't stand out as much. Nevertheless, not all popular languages provide all the above-mentioned features, and when they do implement them it's usually in a compromised fashion. For example, nullable types are still an open issue in Java, while C# and TypeScript provide easy ways to circumvent them, often by accident (the main issue is that they're mostly annotations, not runtime checks).

On the other hand, you mention several features which Haskell provides, usually through libraries that can only be implemented due to the features provided by the base language:

- Refinement types [1] allow you to add runtime invariants to existing types in an ergonomic fashion, or you can go even further and use something like LiquidHaskell[2] to enforce properties at compile time.

- For multithreaded programs, the existence of STM[3] lets you write transactional mutable variables which are safe to use across threads; very few languages offer something like this (a sketch follows after the references).

- For structural dependencies, you can apply the techniques of "ghost of departed proofs"[4]. Personally I don't like to go that route, since programming becomes an act of "proving" rather than "doing", but I appreciate the fact that you can encode it if you want/need to.

- Metaprogramming in Haskell is not as ergonomic as in a LISP, yet you have full access to the language through TemplateHaskell[5]. A more constrained form is available as QuasiQuotations, which allow you, for example, to write a regex[6] string and have it compiled alongside the rest of the code.

There are other features that I personally think are still far ahead of the competition, like lenses[7] for traversing nested data, the async[8] package for ergonomic concurrent programs, effect systems[9] for more granular dependency injection, immutability by default to avoid corrupting state, full type inference, and top-level functions and values (I can't believe the number of times I find myself missing them in OOP languages like Java and C#), among others.

---

[1] https://hackage.haskell.org/package/refined

[2] https://ucsd-progsys.github.io/liquidhaskell/

[3] https://hackage.haskell.org/package/stm

[4] https://hackage.haskell.org/package/gdp

[5] https://serokell.io/blog/introduction-to-template-haskell

[6] https://hackage.haskell.org/package/regexqq

[7] https://hackage.haskell.org/package/lens

[8] https://hackage.haskell.org/package/async

[9] https://hackage.haskell.org/package/effectful
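
To make the STM point above concrete, a minimal sketch of an atomic transfer (the `transfer` function and the account `TVar`s are made up for illustration):

    import Control.Concurrent.STM

    -- Either both balance updates happen or neither does, no matter how
    -- many threads run transfers concurrently; 'check' retries the whole
    -- transaction until the source balance is sufficient.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      check (balance >= amount)
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 40)
      mapM_ (\v -> readTVarIO v >>= print) [a, b]  -- 60, then 40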

jesse__
0 replies
19h42m

This is actually a very good comment.

Thank you, I appreciate that. And that you took the time to write such a complete response.

At the end of the day, Haskell is just way too slow (at runtime) for me to consider using, so debating whether these features satisfy my requirements is purely an academic exercise. One which I don't have much interest in doing.

I am glad to hear that you seem to enjoy using Haskell, and I hope it continues to bring you joy :)

grumpyprole
0 replies
1d8h

And he got it completely wrong, as more and more Haskell features end up in industrial programming languages. Oracle's Java architect even publicly stated he was influenced by Haskell.

TacticalCoder
0 replies
1d8h

It's a classic, and Steve Yegge wasn't nice to Haskell in that one. As a Java programmer I used to love an even older blog making fun of Java and its ecosystem called, IIRC, "The Bile Blog". It was trashy, mean, used lots of swear words, and it was really good.

ackfoobar
2 replies
11h10m

I believe Haskell is worth learning.

But I don't want to spend any more time near purely functional programmers in my professional life, so I will spend some time to unfairly nitpick the article.

a lot of the new features ... are either inspired by, or at least more robustly implemented in, Haskell.

This is like some programmers, who don't know much about the history of programming languages, saying Java is stealing features from Kotlin (a language I enjoy). In both cases the inspiration usually comes from ML (a language family which includes OCaml - a language I deeply admire).

Type classes (first implemented in Haskell) are neat though. I heard that Java is considering adopting them.

Let’s imagine we represented these as plain old strings ... `Theatre String String -- venue name and event name respectively` `checkForSeats :: String -> String -> IO [Seat]`

If your language has named arguments, and sum type cases are records where you have to name your fields, using plain strings is just fine, and probably more ergonomic than wrapping the strings in another type.

`Either AddressParseError ValidAddress`

Real-world procedures usually involve many steps which can fail in many ways. The HM type system does not have subtyping. To stuff all possible failures into the `Left` case of `Either` you have to wrap all possible failure types into a sum type, possibly with multiple layers of nesting (a huge PITA). A sketch of the wrapping ceremony follows below.
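
(A minimal sketch of that wrapping ceremony, with made-up error types:)

    data DbError = DbError String deriving Show
    data HttpError = HttpError Int deriving Show

    -- One umbrella sum type per pipeline, wrapping every failure it
    -- can meet; deeper pipelines mean deeper nesting.
    data AppError = AppDb DbError | AppHttp HttpError deriving Show

    fetch :: Either HttpError String
    fetch = Right "payload"

    persist :: String -> Either DbError ()
    persist _ = Right ()

    -- Each step's error has to be lifted into the umbrella by hand.
    pipeline :: Either AppError ()
    pipeline = do
      body <- either (Left . AppHttp) Right fetch
      either (Left . AppDb) Right (persist body)

    main :: IO ()
    main = print pipeline  -- Right ()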

concurrent Haskell

For all the hassle with the IO monad, you cannot offload your understanding of the memory model to the compiler when you use an `IORef`.

Maybe most of your problems are embarrassingly parallel and STM is fast enough for the rest. Maybe.

exactly encode the effects we want a function to be permitted to perform

https://degoes.net/articles/no-effect-tracking "Effect Tracking Is Commercially Worthless"

See "Tagless-Final Effect-Tracked Java™" for a chuckle.

small sample program

The `Money` type is monomorphic, and `Functor`s are higher-kinded. You get the error message `Expected kind ‘* -> *’, but ‘Money’ has kind ‘*’`
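
(A minimal reproduction, assuming `Money` is a plain wrapper type:)

    newtype Money = Money Int

    -- Uncommenting the instance below reproduces the kind error:
    --   Expected kind '* -> *', but 'Money' has kind '*'
    -- Functor wants a type constructor still awaiting an argument
    -- (like Maybe or []), not a fully applied type like Money.
    -- instance Functor Money

    main :: IO ()
    main = pure ()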

ParetoOptimal
1 replies
6h2m

To stuff all possible failures into the `Left` case of `Either` you have to wrap all possible failure types into a sum type, possibly with multiple layers of nesting (huge PITA).

Is this not inherent complexity? Is there some language or approach you believe makes this easier? Does it provide the same or similar correctness?

ackfoobar
0 replies
1h49m

Untagged union types.

1oooqooq
2 replies
1d1h

What's one big web facing application which handles modern authentication protocols?

whateveracct
1 replies
20h56m

mercury.com ?

yakshaving_jgt
0 replies
12h49m

For a bit more context, iirc mercury is at ~2m lines of Haskell over ~10k modules.

reidrac
1 replies
1d4h

I like Haskell. Actually, let me rephrase that: I like GHC2021.

And I have found that's one of the tricky bits with Haskell: together with the language being in very active development, it makes upgrading your compiler a thing.

tome
0 replies
11h38m

Actually, changes to the compiler hardly ever break anything. I recently upgraded four compiler versions in a row and apart from a bug and some warnings the compiler didn't force any changes at all. It's primarily changes to libraries that introduce churn.

https://h2.jaguarpaw.co.uk/posts/ghc-8.10-9.6-experience-rep...

(DeepSubsumption wouldn't have been a problem if we'd specified Haskell2010, as we should have.)

kreyenborgi
1 replies
1d8h

I've used Haskell for a decade or so, and tooling has improved immensely, with ghcup and cabal sandboxing and HLS now being quite stable. Maybe I've been lucky, but I haven't found much missing in the library ecosystem - or maybe I just have a lower threshold for using other languages when I see something is easier to do with a library from Python or whatever (typically nlp stuff).

The one thing I still find annoying about Haskell is compile times. For the project itself, one can do fast compiles during development, but say you want to experiment with different GHC versions and base libraries, then you have to wait forever for your whole set of dependencies to compile (or buy some harddrives to store /nix on if you go that route). And installing some random Haskell program from source also becomes a drag due to dependency compile times (I'm always happy when I see a short dependency tree). Still, when deps are all compiled, it really is a fun language to program in.

transpute
0 replies
22h46m

> ghcup and cabal sandboxing

Would you recommend using cabal or stack to package Haskell components in a Yocto layer, for sandboxed, reproducible, cross-compiled builds that are independent of host toolchains?

drdrey
1 replies
1d2h

The Python example feels like a strawman. You can write Python in a way that gets you most of the benefits of the Haskell version if you wanted to:

    from dataclasses import dataclass

    @dataclass
    class InvalidResultError:
        result: str

    # get_result() is assumed to be defined elsewhere
    def do_something() -> int | InvalidResultError:
        result = get_result()
        return 42 if result == "a result" else InvalidResultError(result)
Using a dataclass like this seems a little overkill, but it does get you something close to `Either`.

I also don't understand the argument against nullable types: you could return `int | None` here, which would be the same as treating it as a `Maybe` (you have to explicitly consider the case where there is no value).

rbonvall
0 replies
17h26m

The most important benefit is not that you CAN unwrap a value, but rather that you CANNOT NOT do it.

devit
1 replies
18h23m

I think Haskell is fundamentally a bad design, because there is no reason not to have dependent types and totality checking in such a language, and laziness is bad as it makes memory usage unpredictable and potentially asymptotically broken.

Basically, Rust is much better at producing efficient code with zero abstraction cost (while still doing a decent job of controlling mutation and having an expressive non-dependent type system) and has a large package ecosystem, while Lean, Agda and Idris are much better at being theoretically perfect languages at the cost of code efficiency, so why use Haskell?

mrkeen
0 replies
12h2m

I think (s/Haskell/Rust) is fundamentally a bad design, because there is no reason to not have dependent types and totality checking in such a language

adamddev1
1 replies
22h34m

I want Haskell with the tooling, DX, and packages of TypeScript.

yakshaving_jgt
0 replies
12h50m

I don’t. I don’t think gradual typing is enough to cover for the unprincipled mess that I’ve observed in the JS world over the past 15 years.

wredue
0 replies
1d5h

Good question

Runtime immutability is brain dead

Haskell doesn’t “solve maintainability” even remotely. Most people leaving Haskell say maintainability is worse

Haskell doesn’t solve parallelism as Haskell devs claim

Haskell is not beautiful

And that’s literally every selling point. Why choose it?

ur-whale
0 replies
2h14m

Since the syntax is quite distant from C-like syntax

THIS - and none of the other things cited in the article - is the main reason why Haskell is getting no traction in the real world.

The syntax is an abomination, and worse, IMO pretty much unnecessary. The ideas of the language are interesting, and could be expressed in a much more familiar syntax which would be a huge win for the language (although the convoluted hoops you have to jump through when you truly and actually need to mutate data would still be a sheer nightmare).

Haskell's syntax is exactly what keeps it niche and academic, except for the happy few who have managed to twist their brain into being able to parse that grotesque syntax.

semiinfinitely
0 replies
1d2h

I love Haskell but I hate using it.

semiinfinitely
0 replies
1d2h

Haskell had a large impact on the design of JAX which is probably the future of ML development frameworks.

revskill
0 replies
12h59m

It is all about monad.

rednafi
0 replies
55m

Still “impractical, academic, and niche”

komali2
0 replies
18h23m

All mainstream, general purpose programming languages are (basically) Turing-complete, and therefore any programme you can write in one you can, in fact, write in another. There is a computational equivalence between them. The main differences are instead in the expressiveness of the languages, the guardrails they give you, and their performance characteristics (although this is possibly more of a runtime/compiler implementation question).

Yes, as I like to tell project managers, everything can be implemented.

... But the Google drive sdk is available in python, java, and node.

Tooling, deployment, library ecosystem, and talent pool are just as important when choosing a language or framework as performance, syntax, or guardrails. Unless you have unlimited budget!

igouy
0 replies
1d3h

Awaiting part 2 — Why not to use Haskell?

hugodan
0 replies
21h49m

I'll tell you why: it is simply the best refactoring experience out there. This.

fsckboy
0 replies
2h6m

tangential haskell question (I don't know haskell at all)

does haskell have any facility for inspecting the "lazy evaluation queue" in a human-readable way? I'm doing some binary-level math-as-logic calculations and I'd like to look at the patterns that emerge deep down - what cancels out and can be optimized away. Think of it this way: I want to multiply AxB, where in bits that's (a0 a1 a2...) x (b0 b1 b2...), which turns into a bunch of a0 OR b0, a0 XOR b0 (don't forget the carries :), etc., which after a few operations in an expression would be really hairy looking but completely straightforward.

so, I can write a little program to do that in any language and print out what happens (lisp/scheme would be a good one). If I wrote a little program like that in Haskell, would it afford me any extra "free" options for inspecting what's going on? Where if a0 and b3 were unknown they'd stay variables, but if I had actual values for them they could be evaluated away.

maybe i should ask this as its own thread...

dbacar
0 replies
1d6h

Why not?

Vosporos
0 replies
1d7h

At work we use Haskell, and have been for around 10 years. It's a delight for iteration and refactoring, as the solid foundations it brings relieve you of spending your time writing tests checking for rogue nils or undefineds.

IceDane
0 replies
7h35m

I started writing Haskell in something like.. 2008 or so. Before that, I was an avid low-level programmer that sneered at high level languages that held your hand.

I quickly realized the folly of my ways, however, and became what I would call a Haskell zealot. It was my favorite and most-used programming language, and I even wrote it professionally very early in my career.

As my career progressed and I had to use other languages, Haskell quickly lost its shine. It's just not a very practical language, no matter what they like to claim from their academic ivory tower. I wasn't some sort of novice. I could speak fluent lens and was well-versed in arcane type theoretic concepts, wrote entire data pipelines using recursion-schemes and what have you.

The problem with Haskell is that it's just way too far out on the spectrum of purity, I think. Everything is way too cumbersome because the language is so rigid. Building real software in Haskell, while trying to make use of its advanced features, is sort of like wanting to do basic arithmetic for your budget but starting out by proving that mathematics actually makes sense. Naturally, you don't have to build your software like that, and it seems like a lot of the actually practical Haskell apps out there aren't written like this.

Nevertheless, Haskell fundamentally changed the way I view programming and I learned many incredibly useful things, which I have taken with me. Try to keep your functions pure and small. Use static types, and lean on the type system to help you prove that you have covered all cases. Stuff like that.