Wow, this is everything I want from a new Go!
Having worked on multiple very large Go codebases with many engineers, the lack of actual enums and of a built-in optional type instead of nil drives me crazy.
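For anyone unfamiliar with the complaint: Go's closest approximation of an enum is a named type plus `iota` constants, which neither restricts the value set nor gives exhaustiveness checking. A minimal sketch of the gap:

```go
package main

import "fmt"

// Go's nearest thing to an enum: a named integer type plus iota constants.
type Color int

const (
	Red Color = iota
	Green
	Blue
)

func name(c Color) string {
	switch c {
	case Red:
		return "red"
	case Green:
		return "green"
	case Blue:
		return "blue"
	}
	// Nothing stops a caller from constructing Color(42), and the
	// compiler does not check the switch for exhaustiveness.
	return "invalid"
}

func main() {
	fmt.Println(name(Green))     // green
	fmt.Println(name(Color(42))) // invalid
}
```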
I think I'm in love.
Edit: Looks like last commit was 7 months ago. Was this abandoned, or considered feature complete? I hope it's not abandoned!
An Enum type has to be on the core Go team's radar by now. It's got to be tied with a try/catch block in terms of requested features at this point (now that we have generics).
No one wants try/catch/exception in Go.
Comments like this are what drive me away from Go; comments that enforce a particular belief about how or what features you should or should not use/introduce in your PL. Talking in absolutes is so far removed from logical argument and from good engineering. I would appreciate it if anyone could recommend a language like Go (statically and strongly typed, not ancient, good tooling) with a friendly community that won't ostracize the non-believers. Zig?
Go has been a very opinionated language from its inception. We could probably argue for all eternity about code formatting, for instance. But Go went and set it in stone. Maybe it's part of good engineering to keep things simple and not allow hundreds of ways to do something. Maybe the people who use Go are the ones who just want to write and read simple and maintainable code and don't want it to be cluttered with whatever is currently the fashion.
You could look at Lisp. It's kind of the opposite of Go in this regard. You can use whatever paradigm you like, generate new code on the fly, use types or leave them. It even allows you to easily extend the language to your taste, all the way down to how code is read.
But Lisp might violate your set of absolutes.
It baffles me that so many developers are unable to use pre-commit hooks for their code-formatting tools, which have existed since the 1990s, to the point that gofmt became a revelation.
That's hardly the point. The point is that there is a single format for the language itself and you don't have to argue about spaces vs tabs vs when to line break, whether you want trailing commas and where to put your braces.
You can format on save or in a pre commit hook. But that the language has a single canonical format makes it kind of new.
Yes, because there is no one in the room able to configure the formatting tool for the whole SCM.
A simple settings file set in stone by the CTO, such a hard task to do.
The fact that this is even seen as a novelty only confirms the target group for the language.
And then you have 100 companies with 100 CTOs, resulting in 100 different styles.
With Go there is only one style everywhere.
Most people only care about the code of their employer.
Many shops have to write and submit patches to upstream projects. Some shops have to maintain their own "living fork" version of an upstream project.
Yeah, and apparently use Notepad, since they are unable to have a configuration file for formatting.
Very few employers do 100% of the code in-house, everyone uses libraries and code from the internet.
Which will have a different style you need to contend with.
But with Go every single sane piece of code you find will be formatted with gofmt and will look mostly the same.
It does seem a hard thing to do. Working across dozens of enterprise shops in the last 15 years, I have not seen such a setting done or dictated at all. So whole codebases used to be a mishmash of personal styles.
A clear management failure then.
Any CTO who is aware of the impact that having an incoherent programming style can have on employee productivity, is likely going to arrive at the conclusion that the most efficient way to set such policy is to "outsource" it to the programming language, by requiring projects to use an opinionated language.
Then again, any such CTO is likely also going to be someone who tends to think about things like "the ability to hire developers already familiar with the language to reduce ramp-up time" — and will thus end up picking a common and low-expressivity opinionated language. Which usually just means Java. (Although Golang is gaining more popularity here as well, as more coding schools train people in it.)
It is going to be a very clueless CTO, if they aren't aware of tooling that is even older than themselves.
IMHO it's not about the standards in your company, it's more about being able to parse any random library on GitHub etc with your eyeballs.
I use compilers and IDEs for that.
True.
This is part of the story that Rob Pike uses to justify how opinionated Go is, but it's a bit stupid given that most languages do fine, and I've never seen any debates about code formatting after the very beginning of a project (where it's always settled quickly in the few cases where it happens in the first place).
The real reason why Go is opinionated is much more mundane: Rob is an old man who thinks he has seen it all and that the younger folks are children, and as a result he is very opinionated. (Remember his argument against syntax coloring because "it's for babies" or something.)
It's not bad to be opinionated when designing a language; it gives some kind of coherence to it (looking at you, Java and C++), but it can also get in the way of users sometimes. Fortunately Go isn't just Rob anymore and isn't impervious to change, and there are finally generics and a package manager in the language!
Seriously, if you feel patronised by how someone designs a programming language, it might be best to move on. It's obviously not for you. Especially when you feel compelled to bad faith assumptions and ageism over it.
For those who want to feel the wind of coding freedom blow through their hair, I can recommend spending some time learning Lisp. It offers the most freedom you can possibly have in a programming language. It might enlighten you in many other ways. It won't be the last language you learn, but it might be the most fruitful experience.
Most people who tend to brag about Lisp's (Common Lisp's) superiority never actually used it. It is not as impressive as many legends claim.
Can you name a language that provides more freedoms? I used Lisp as an example for that side of the spectrum because I'm familiar with it, having used it for many years in the past. But maybe there are better examples.
What kind of "freedom", precisely, are you talking about? Freedom to write purely functional programs? Well, then you need Haskell or Clojure at least. Freedom to write small, self-sufficient binaries? Well, you need C or C++ then. CL is a regular multiparadigm language with a rich macro system and relatively good performance, but nonexistent dependency management, too-unorthodox OOP, no obvious benefits compared to more modern counterparts, and a single usable implementation (SBCL). If I want an s-expression-based language I can always choose Scheme or Clojure; if I need a modern, flexible multiparadigm language, I'd use Scala.
All of them. You can do imperative, functional, and OOP programming in Lisp. As for small libraries, it's because cruft is an actual hindrance in Lisp. It's like Unix tools: you can do a lot of stuff with them, but a more integrated tool that does one thing better will fare worse in others. A big library brings a rigid way of thinking to Lisp's flexible model. Dependency management? Think of it like the browser runtime, where you can bring the inspector up and do stuff to the pages. It's a different development model where you patch a live system. And with the smaller dependency model, you may as well vendor the libraries if you want reproducibility. Unorthodox OOP? CLOS is the best OOP model out there.
The thing is that Common Lisp has most of what current programming languages are trying to implement. But it does require learning proper programming and being a good engineer.
Rob Pike... and Ken Thompson, and Robert Griesemer.
Firstly, Ken Thompson is a master at filtering out unnecessary complexities and I highly rate his opinion of the important and unimportant things.
Secondly, the Go team were never against generics, the three early designers agreed the language needed generics but they couldn't figure out a way to add it orthogonally.
Go has gone on to be very successful in cloud and networked applications (which it was designed to cater for), which lends credit to the practicalities of what the designers thought as important, HN sentiments notwithstanding.
This is a PR statement that was introduced only after Go generics landed; for years generics were dubbed "unnecessary complexity" in user code (Go had generics from the beginning, but only for internal use in the standard library).
Well, given that the k8s team inside Google developed their own Go dialect with pre-processing to get access to generics, it seems that its limitations proved harmful enough.
The main reason why Go has been successful in back-end code is the same as the reason why any given language thrive in certain environments: it's a social phenomenon. Node.js has been even more successful despite JavaScript being a far from perfect language (especially in the ES 5 era where Node was born), which shows that you cannot credit success to particular qualities of the language.
I have nothing against Go, it's a tool that does its job fairly well and has very interesting qualities (fast compile time, self-contained binaries, decent performance out of the box), but the religious worship of “simplicity” is really annoying. Especially so when it comes in a discussion about error handling, where Go is by far the language which makes it the most painful because it lacks the syntactic sugar that would make the error as return value bearable (in fact the Go team was in favor of adding it roughly at the same time as generics, but the “simplicity at all cost” religion they had fostered among their users turned back against them and they had to cancel it…).
70% of cloud tools on the CNCF are built with Go; Kubernetes is just one of many. Also, since Kubernetes originally started as a Java project, you should consider whether the team was trying to code more with Java idioms than with Go ones.
Nodejs has been more successful than Go in cloud?
Typical gatekeeping from the gatekeepers of simplicity, and I'm pretty sure you code 23.5 hours a day in Haskell.
I have never done any real programming in Java itself, but the parts of Java world that I learned while writing some Clojure circa 2015 felt pretty coherent. Now I'm curious what I missed.
No, they don't. Most languages turn dealing with code formatting into an externality foisted upon either:
• the release managers (who have to set up automation to enforce a house style — but first have to resolve interminable arguments about what the given project's house style should be, which creates a disincentive to doing this automation); or
• the people reviewing code in the language.
In most languages, in a repo without formatting automation, reviewers are often passed these stupid messy PRs that intermingle actual semantic changes, with (often tons of) random formatting touch-ups — usually on just the files touched by the submitter's IDE.
There's a constant chant by code reviewers to "submit formatting fixes as their own PR if there are any needed" — but nobody ever does it.
Golang 1. fixes in place a single "house style", removing the speedbump in the way of automating formatting; and 2. pushes the costs of formatting back to the developer, by making most major formatting problems (e.g. unneeded imports) into compile-time errors — and also, building a formatter into the toolchain where it can be relied upon to be present and so used in pre-commit hooks, guaranteeing that the code in the repo never gets out of sync with proper formatting.
"Getting in the way of the users" is the right thing to do, when "the users" (developers) fan in 1000:1 with code reviewers and release managers, who have to handle any sloppiness they produce.
(Coincidentally, this is analogous to other Google thinking about scaling compute tasks. Paraphrasing: "don't push CPU-bottlenecked workloads from O(N) mostly-idle clients, out to O(log N) servers. Especially if the clients are just going to sit there waiting for the servers' response. Get as much of the compute done on the clients as possible; there's not only far more capacity there, but blocking on the client blocks only the person doing the heavy task, rather than creating a QoS problem." Also known as: "the best 'build server' is your own workstation. We gave you a machine with an i9 in it for a reason!")
Thanks for your response AnonymousPlanet. I agree there is value in the pursuit of a minimal set of features in a PL which brings many benefits. And of course the opposite - an overly feature packed and/or extensible PL as a core feature has tradeoffs. Over this range of possibilities my preference probably falls somewhere in the middle.
I see an effect where the languages whose primary goal is a particular set of language design choices (such as strict memory safety over all else) grow a cult following that enforces said design choices. Maybe in the pursuit of an opinionated language, even if the designers are reasonable at the language's inception, the community throws out logic and "opinionated" becomes an in-group/out-group tribal caveman situation.
I think you've got this backward. It's not that the particular choices are important. It's a thing happening on a higher meta level than that.
Some programming languages are, by design intent, "living" languages, evolving over time, with features coming and going as the audience for the language changes.
"Living" languages are like cities: the population changes over time; and with that change, their needs can shift; and they expect the language to respond with shifts of its own. (For example: modern COBOL is an object-oriented language. It shifted to meet shifting needs of a new generation of programmers!)
If you were able to plot the different releases of a living language in an N-dimensional "language-design configuration space", these releases would appear to arbitrarily jump around the space.
Other languages, though, are, by their design intent, "crystallized" languages — with each component or feature of the language seeing ever-finer refinement into a state of (what the language maintainers consider) perfection; where any component that has been "perfected" sees no further work done, save for bugfixes. For such languages, creating a language in this way was always the designers' and maintainers' goal — even before they necessarily knew what they were creating.
"Crystallized" languages are like paintings: there was an initial top-down rough vision for the language (a sketch); and at some early point, most parts of the design were open for some degree of change, when paint was first being applied to canvas. But as the details were filled in in particular areas, those areas became set, even varnished over.
If you plot the successive releases of a crystallized language in design configuration space, the releases would appear to converge upon a specific point in the space.
The goal with a crystallized language is to explore the design space to discover some interesting point, and then to tune into that interesting point exactly — to "perfect" the language as an expression of that interesting concept. Every version of the language since the first one has been an attempt to find, and then hit, that point in design space. And once that point in design space is reached, the language is "done": the maintainers can go home, their jobs complete. Once a painting says what it "should" say, you can stop painting it!
If a crystallized language is small, it can achieve this state of being entirely "finished", its spec set in stone, no further "core maintainer" work needed. Some small crystallized languages are even finished the moment they start. Lua is a good example. (As are most esolangs — although they're kind of a separate thing, being created purely for the sake of "art", rather than sitting at the intersection of "work of art" and "useful tool" as crystallized languages do.)
But large crystallized languages do exist. They seek the same fate as small crystallized languages — to be "finished" and set in stone, the maintainers' job complete. They just rarely get there, because of how large a project it is to refine a language design.
You might intuitively associate a "living" language with democratic control, and a "crystallized" language with a Benevolent Dictator For Life (BDFL) figure with an artistic vision for the language. But this is not necessarily true. Python was a "living" language even when it had a BDFL. And Golang is a "crystallized" language despite its post-1.0 evolution being (essentially) directed by committee.
---
The friction you're describing, comes from developers who are used to living languages, trying to apply the same thinking about "a language changing to serve the needs of its users" to a crystallized language.
Crystallized languages do not exist primarily to serve their users. They exist to be expressions of specific points in design space, quintessences of specific concepts. Those points in design space likely are useful (esolangs excluded); but the expectation is that people who don't find that point in design space useful, should choose a different language (i.e. a different point in design space) that's more suited to their needs, rather than attempting to shift the language's position in design space.
Adding a bridge is a sensible proposal for a city. You can get entire streets of buildings torn down to make way for the bridge, if the need is there. But adding a bridge is not a sensible proposal for a (mostly-finished) painting. If you want "this painting but with a bridge in it", that's a different painting, and you should seek out that painting. Or paint it yourself. Or paint the bridge on some kind of transparent overlay layer, and hang that overlay in front of the painting.
Conveniently for this discussion, Borgo here is exactly an example of a language that's "someone else's painting, but now with the bridge I wanted painted on an overlay in front of it." :)
Best comment in the thread. I think defining certain languages as "crystallised", rather than "set", explains well the underlying structure has taken a specific shape based on specific tenets. Well said.
We have completely lost the plot by assuming that just because there are disagreements on some things, any choice is equally as good as any other. Go is opinionated, and its opinion is wrong.
Not having exceptions (but then having them anyway through panic, but whatever) is a choice - but the other reasonable alternative is the Maybe monad. What Go did is not reasonable. I might be okay if they had been working on getting monads in, but they haven't.
I have a specific hatred for Go because it seems perfectly suited to make me hate programming: it has some really good ideas like fast compile time speeds, being able to cross-compile on any platform and being a systems language without headers.
But then I try to write code in it and so. much. boilerplate.
You're reading way too much into what the parent poster said. He just correctly stated the overall sentiment of the community.
That said, suggesting adding exceptions to Go is about as reasonable as adding a GC to Zig. How much effort would you spend arguing against someone bringing that up as a serious proposal?
> That said, suggesting adding exceptions to Go is about as reasonable as adding a GC to Zig.
Suggesting the addition of exceptions to Go is as reasonable as suggesting the addition of loops to Rust. Which is to say that it already has exceptions, and always has. Much of the language's design is heavily dependent on the presence of exceptions.
Idioms dictate that you probably shouldn't use exceptions for errors (nor should you in any language, to be fair), but even that's not strictly adhered to. encoding/json in the standard library famously propagates errors using the exception handlers.
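The pattern referred to, sketched from memory rather than copied from the standard library: a package panics with a private wrapper type deep inside a call tree, then converts the panic back into an ordinary error at its public boundary.

```go
package main

import (
	"errors"
	"fmt"
)

// decodeError is a hypothetical wrapper type (not from the standard
// library) used to tell our own deliberate panics apart from real bugs.
type decodeError struct{ err error }

func decode(input string) (err error) {
	defer func() {
		if r := recover(); r != nil {
			if de, ok := r.(decodeError); ok {
				err = de.err // turn the panic back into an error
				return
			}
			panic(r) // unrelated panic: re-raise it
		}
	}()
	parseValue(input)
	return nil
}

func parseValue(input string) {
	if input == "" {
		// Deep in the call tree, bail out with a panic instead of
		// threading an error return through every stack frame.
		panic(decodeError{errors.New("unexpected end of input")})
	}
}

func main() {
	fmt.Println(decode(""))   // unexpected end of input
	fmt.Println(decode("42")) // <nil>
}
```

Callers only ever see a plain `error`; the exception-style control flow stays internal to the package.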
Go doesn't use exceptions as a primary way of handling errors, which is what we're talking about here. Pedantry is not welcome.
It doesn't not use exception handlers as a primary way of handling errors either, though. Go doesn't specify any error handling mechanism. Using exception handlers is just as valid as any other, and even the standard library does it, as noted earlier.
The only error-related concept the Go language has is the error type, but it comes with no handling semantics. Which stands to reason as there is nothing special about errors. Originally, Go didn't even have an error type, but it was added as a workaround to deal with the language not supporting cyclic imports.
Your pedantry is hilarious and contradictory.
You're absolutely technically correct, in the "spherical cow in a vacuum" sense. In reality though, essentially all Go code out there handles errors through the pattern of checking if the error in a `(value, error)` tuple returned from a function is `nil` or not. That is what the discussion here is about - the way errors are handled in a language in practice, not in theory. Therefore, pedantry.
Basically, discussions have context and I have no intention of prepending 10 disclaimers to every statement I make to preemptively guard against people interpreting my comments as absolutes in a vacuum.
That's a lot of pedantry you've got there for someone who claims it is not welcome. Rules for thee, not for me?
But, if you'd kindly come back to the topic at hand:
> That is what the discussion here is about - the way errors are handled in a language in practice, not in theory.
While I'm not entirely convinced that is accurate, I will accept it. Now, how does:
- "That said, suggesting adding exceptions to Go is about as reasonable as adding a GC to Zig."
Relate to that assertion? What does "suggesting adding exceptions to Go" have to do with a common pattern that has emerged?
It’s not simply a common pattern. It is a way of doing things in the community. The stdlib uses it, the libraries use it, and if you do not use it, people will not use your software.
Okay, but how does that relate to what was said?
You’re just complaining because the compiler isn’t complaining
I would be happy if they added something to the compiler that allowed ignoring error return values; in that case the compiler would just throw an exception^Wpanic from that point. I think it even makes sense for Go purists: you need to handle errors or get panicked. And I'd just mostly ignore errors and have my sweet exceptions.
> I think it even makes sense for go purists, like you need to handle errors or get panicked.
You think wrong. Go preaches that zero values should be useful, which means that in the common (T, error) scenario, T should always be useful even if there is also an error state. Worst case, you will get the zero value in return, which is still useful. This means that the caller should not need to think about the error unless it is somehow significant to the specific problem they face. They are not dependent variables.
I understand where you are coming from as in other languages it would be catastrophic to ignore errors, but that's not how Go is designed. You cannot think of it like you do other languages, for better or worse.
You can get pretty close with generics:
(https://go.dev/play/p/NnrZ30TflDI)
The problem with Go exceptions (panics) is that they are second-class citizens, which leads to people neglecting the "defer" statement when it is needed, opening up the risk of leaving mutexes and other critical handles open.
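To illustrate the risk being described: if a critical section does not release its mutex via defer, a panic that is recovered further up the stack leaves the mutex locked forever. A minimal sketch:

```go
package main

import (
	"fmt"
	"sync"
)

var mu sync.Mutex

// withLockUnsafe is panic-unsafe: if work() panics, Unlock never runs
// and every later caller deadlocks on the mutex.
func withLockUnsafe(work func()) {
	mu.Lock()
	work()
	mu.Unlock()
}

// withLock releases the mutex even when work() panics, because the
// deferred call runs during stack unwinding.
func withLock(work func()) {
	mu.Lock()
	defer mu.Unlock()
	work()
}

func main() {
	func() {
		defer func() { recover() }() // swallow the panic for the demo
		withLock(func() { panic("boom") })
	}()
	// The deferred Unlock released the mutex during the panic, so this
	// call succeeds instead of deadlocking.
	withLock(func() { fmt.Println("still works") })
}
```

Swapping `withLock` for `withLockUnsafe` in the first call would make the second call block forever.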
Then it shouldn't have added exceptions in the first place. But exceptions in Go exist, and so does exception (un)safety, and denial only leads to buggy code. I cannot count how many times I've seen exception-unsafe code in Go, exactly because everyone keeps ignoring them.
What would you use when you actually have an exception, then? Exception and exception handlers are a useful construct, but, like everything else, become problematic when used outside of their intended purpose.
Just because exception and error both start with the letter "e" does not mean they are in any way related. They have very different meanings and purposes.
I'd say Scala.
It has its flaws, but the latest version (Scala 3) is really really good. The community is open to different styles of programming - from using it as a "better Java" to "pure functional programming like in Haskell".
Scala is the perfect example of why you want to limit expressivity. It seems so cool and awesome at first, but then you have to support a code base with other engineers, and you quickly come to the view that Go's limited expressivity is a blessing.
Hilariously, I was using a gen AI (Phind) and asked it to generate some Scala code, and it no joke suggested the code in both idiomatic Scala and in Java style, and all you had to do was look at it to see that the Java style was 1000x easier to read/maintain.
Well, flexibility has its price. And yeah, if you need to work in a team that uses a very different style, then you won't like it.
On the other hand, if you carefully select your team or work alone, then this is not a problem at all.
Btw, there isn't really "one" idiomatic scala style - therefore I tend to believe that you are not familiar with the language and the community.
That is their point. There's too many styles.
What "point" is that, though? That's like saying "there are too many programming languages".
Too many styles "in Scala," specifically. The point is that some people (not me) prefer to use something restrictive because it keeps everyone on the team from getting too clever with code and making it unreadable. The little bit of extra typing is worth the easier time reading because that's most of what you'll be doing as a coder.
As opposed to an expressive language with powerful macros. Another person could hop on the team and write something that only makes sense to them, and now you have to understand their half-baked DSL.
The burden is on you, in an expressive language, to have a style guide for your team or enforce a style through code reviews. Whereas Golang just has that built in and obviously is more than capable for writing production software.
This is a criticism often levied against Scala because you can do pretty much any paradigm in it, and there's lots of disagreements over when to do what paradigm.
Fair enough.
Aside from "not ancient" Java has everything you want! I'd consider the best tooling (Intellij), static, strongly typed, has enums now (sealed interfaces), composeable error handling, null safety with new module flags, etc. Not sure about the community, but the maintainers I've worked with seemed nice enough. I imagine the community has a lot less ego than rust/go due to the general perception of the language.
What does Java offer that dotnet does not?
libraries, and I thank God I don't have to work on Windows and thus don't know how smart Visual Studio Enterprise is, but IJ is world class in the number of bugs it'll catch
I would argue that .NET is better than Java, unless Java has gotten something like LINQ since I last used it (which is entirely possible).
For those who do not know: .NET is cross platform, MS has official documentation on how to deploy it in Docker, and it is MIT licensed.
And if you want to deploy a backend for your webapp, the terseness can now rival Flask; plus the compiler can cross-compile it to any supported platform, even in a form that works without .NET installed.
And of course it has Jetbrains support through Rider.
C# is much better at doing systems-programming adjacent tasks than either Java or Go, as it actually exposes all the necessary features for this: C structs, actual, proper generics, fast FFI, etc.
You have the same PL preferences as me. I haven't tried Rust yet, but Kotlin, modern C#, and F# all fit your requirements. Kotlin is closest because it uses the enormous Java ecosystem.
I haven't had time to really try to write anything in it, but https://gleam.run/ looks really good too. Like Elm for backend + frontend!
"+ frontend" only if you squint really hard, I think
Suggesting try/catch indicates that you have virtually no experience using Go. You're standing on the sidelines yelling stupid/nonsensical feature requests and getting upset when you're not taken seriously.
Adding exceptions to Golang doesn't make any sense for a very simple reason: they're already there. The fact that they're called something different doesn't change anything; panics walk like exceptions, swim like exceptions, and quack like exceptions.
I read somewhere something to the effect of this: Some languages solve deficiencies in the language by adding more features to the language. This approach is so common, it could be considered the norm. These are languages like C++, Swift, Rust, Java, C#, Objective C, etc. But two mainstream languages take different approach: C and Go strongly prefer to solve deficiencies in the language simply by adding more C and Go code. One of the effects of this preference is that old (or even ancient in the case of C) codebases tend to look not that different from new codebases, which as one might imagine can be quite beneficial in certain cases. There is a reasonable argument to be made that at least some of the enduring success of C has to do with this approach.
Why would you not enforce a particular belief about how a language should be designed? There are languages designed around being able to do anything you want at any time regardless of if it makes sense, and then you end up with everyone using their own fractured subset of language features that don't even work well together. Not every language needs to be the same feature slop that supports everything poorly and nothing well.
Because `if err != nil { return err }` repeated all over the place is the epitome of productivity!

I would have to go through my comment history for exact numbers. In analyzing a real, production service written in Go, with multiple dozens of contributors over hundreds of thousands of lines over several years, "naked" if-err-return-err made up less than 5% of error handling cases and less than 1% of total lines. Nearly every error handling case either wrapped specific context, emitted a log, and/or emitted a metric specific to that error handling situation.
If you do naked if-err-return-err, you are likely doing error handling wrong.
Also known as The Apple Answer.
Plenty of The Go Way arguments apply to software we were writing from the dawn of computing until the 1990s, and there are plenty of reasons why, with the exception of Go (pun intended), the industry has moved beyond that.
I nearly wrote "you are holding it wrong" to nod to that quote. But it is really true - most errors in long running services are individual and most applications I've worked in ignore this when (ab)using exceptions.
In our Go codebases, the error reporting and subsequent debugging and bug fixing is night and day from our Python, Perl, Ruby, PHP, Javascript, and Elixir experiences.
The one glaring case where this is untrue is in our usage of Helm which, having been written in Go, I would expect to have better error handling. Instead we get, "you have an error on line 1; good luck" - and, inspired, I looked at their code just now. It is littered with empty if-err-return-err blocks - tossing out all that beautiful context, much like an exception would, but worse.
https://github.com/search?q=repo%3Ahelm%2Fhelm%20%20if%20err...
The same applies to literally any language if you care about error handling, except it's way more ergonomic to do. Why do Go users try to pass off the lack of language features as if that's the reason why they care about writing quality code?
Doing the same as Go in Python is way more verbose and less ergonomic, because you would wrap each line in a try/catch.
I can't speak for all Go users, but what I have seen is that the feature set in Go lends itself to code that handles errors, and exceptions simply don't -- I can say this because I've worked in a dozen different production systems for each of perl, python, js, elixir, and php -- I'm left believing that those languages _encourage_ bad error handling. Elixir is way cool with pattern matching and I still find myself wishing for more Go-like behavior (largely due to the lacking type system, which I hear they are working to improve).
I've not used Rust which apparently is the golden standard in the HN sphere
Wrapping a line in a try catch is equivalent to the go error check routine. Should be roughly the same amount of lines if you care about that sort of thing.
There's bad programmers everywhere. Writing if err != nil { return err } is the same as not handling exceptions (they just bubble up).
Maybe you think this because go shoves the exceptions front and center in your face and forces you to deal with them. I suppose it can be a helpful crutch for beginners but it just winds up being annoying imo.
It actually is when debugging, because it makes control flow explicit.
In JS, for example, people don't even know which functions could throw exceptions, and just ignore them, most of the time. Fast to write and looks nice, but is horrible quality and a nightmare to debug.
Ever heard of these funny tools called debuggers and exception breakpoints?
sane error handling in go is more productive any day.
Calling if boilerplate sane is an oxymoron.
It's a tradeoff. Alternatives like exception handling make control flow less obvious and shorthands like the ? operator lock you into a specific return type (Result<T,E> in the case of Borgo or Rust).
Let’s hope Go never gets try/catch exceptions
the day the go codebase throws random panics is the day I quit the company.
So you quit the day encoding/json was written?
You are probably thinking about (proto)reflect.
No, I am thinking of encoding/json. It uses Go's exception handlers to pass errors around, much like the code above.
There is evil in this world and then there's ... this :D
ESBuild, one of my favourite Go projects, uses panics to handle try/catch exceptions.
The syscall/js package [0] throws panics if something goes wrong, rather than returning errors.
Go already has try/catch exceptions. We just don't use them most of the time because they're a really bad way of handling errors.
[0] https://pkg.go.dev/syscall/js
It's 2024. We need an effective means for error propagation and the battle-tested solutions are try/catch exceptions or Optional types. Go's error handling would have been great in the 1970's, it's not so great now some 50 years later.
Sooo what’s the new 2025 way?
Objectively wrong.
They didn't mean literally 0 people.
Having exception support does not mean the code will be scattered with try/catches - it is not used for code flow, but for ensuring no error slips silently. And when an exception is thrown, the stack trace is captured so you can get to the code and debug.
panic/recover/error?
I am so tired of reading Java/C++/Python code that just slaps try/catch around several lines. To some it might seem annoying to actually think about errors and error handling line by line, but for whoever tries to debug or refactor it's a godsend. Where I work, try/catch for more than one call that can throw an exception or including arbitrary lines that don't throw the caught exception, is a code smell.
So when I looked at Go for the first time, the error handling was one of the many positive features.
Is there any good reason for wanting try/catch other than being lazy?
sounds good on paper, but seeing "if err != nil" repeated a million times in golang codebases does not create a positive impression at all
Yes but the impression is largely superficial. The error handling gets the job done well enough, if crudely.
The ability to quickly parse, understand and reason about code is not superficial, it is essential to the job. And that is essentially what those verbose blocks of text get in the way of.
As an experienced Go dev, this is literally not a problem.
Golang code has a rhythm: you do the thing, you check the error, you do the thing, you check the error. After a while it becomes automatic and easy to read, like any other syntax/formatting. You notice if the error isn't checked.
Yes, at first it's jarring. But to be honest, the jarring thing is because Go code checks the error every time it does something, not because of the actual "if err != nil" syntax.
Just because you can adapt to verbosity does not make it a good idea.
I've gotten used to Java's getter/setter spam; does that make it a good idea?
Moreover, don't you think that something like Rust's `?` operator would be a perfect solution for handling the MOST common type of error handling, aka not handling it, just returning it up the stack?
I personally have mixed feelings about this. I think a shortcut would be nice, but I also think that having a shortcut nudges people towards using short-circuit error handling logic simply because it is quicker to write, rather than really thinking case-by-case about what should happen when an error is returned. In production code it’s often more appropriate to log and then continue, or accumulate a list of errors, or… Go doesn’t syntactically privilege any of these error handling strategies, which I think is a good thing.
This. Golang's error handling forces you to think about what to do if there's an error Every Single Time. Sometimes `return err` is the right thing to do; but the fact that "return err" is just as "cluttered" as doing something else means there's no real reason to favor `return err` instead of something slightly more useful (such as wrapping the err; e.g., `return fmt.Errorf("Attempting to fob trondle %v: %w", trondle.id, err)`).
I'd be very surprised if, in Rust codebases, there's not an implicit bias against wrapping and towards using `?`, just to help keep things "clean"; which has implications not only for debugging, but also for situations where doing something more is required for correctness.
Well we are in a discussion thread about a language that does just that :)
I see two issues with the `?` operator:
1. Most Go code doesn't actually do `return err`, but rather `return fmt.Errorf("some context: %w", err)`; that is, the error gets annotated with useful context. What takes less effort to type, `?` or the annotated line above? This could probably be solved by enforcing that a `?` be followed by an annotation, but I'm not sure we're gaining much at that point.
2. A question mark is a single character and therefore can be easy to miss, whereas a three line if statement can't.
Moreover, because in practice Go code has enforced formatting, you can reliably find every return path from a function by visually scanning the beginning of each line for the return statement. A `?` may very well be hiding 70 columns to the right.
For the first point, there are two common patterns in rust:
1. Most often found in library code, the error types have the metadata embedded in them so they can nicely be bubbled up the stack. That's where you'll find `do_a_thing().map_err(|e| Error::FileOpenError { file, user, e })?`, or perhaps a whole `match` block.
2. In application code, where matching the actual error is not paramount, but getting good messages to an user is; solutions like anyhow are widely used, and allow to trivially add context to a result: `do_a_thing().context("opening file")?`. Or for formatted contexts (sadly too verbose for my taste): `do_a_thing().with_context(|| format!("opening file {file} as user {user}"))?`. This will automatically carry the whole context stack and print it when the error is stringified.
Overall, what I like about this approach is the common case is terse and short and does not hinder readability, and easily gives the option for more details.
As for the second point, what I like about _not_ easily seeing all return paths (which are a /\? away in vim anyways), is that special handling stands out way more when reading the file. When all of the sudden you have a match block on a result, you know it's important.
Actually this is precisely same cadence as in good old C. As someone who writes lots of low-level code, I find Go's cadence very familiar and better than try-catch.
The idea that error handling is "not part of the code" is silly though. My impression of people that hate Go's explicit error handling is that they don't want to deal with errors properly at all. "Just catch exceptions in main and print a stack trace, it's fine."
Rust's error handling is clearly better than Go's, but Go's is better than exceptions and the complaints about verbosity are largely complaints about having to actually consider errors.
I'm honestly asking as someone neutral in this, what is the difference? What is the difference between building out a stack trace yourself by handling errors manually, and just using exceptions?
I have not seen anyone provide a practical reason that you get any more information from Golang's error handling than you do from an exception. It seems like exceptions provide the best of both worlds, where you can be as specific or as general as you want, whereas Golang forces you to be specific every time.
I don't see the point of being forced to deal with an "invalid sql" error. I want the route to error out in that case because it shouldn't even make it to prod. Then I fix the SQL and will never have that error in that route again.
The biggest difference is that you can see where errors can happen and are forced to consider them. For example imagine you are writing a GUI app with an integer input field.
With exception style code the overwhelming temptation will be to call `string_to_int()` and forget that it might throw an exception.
Cut to your app crashing when someone types an invalid number.
Now, you can handle errors like this properly with exceptions, and checked exceptions are used sometimes. But generally it's extremely tedious and verbose (even more than in Go!) and people don't bother.
There's also the fact that stack traces are not proper error messages. Ordinary users don't understand them. I don't want to have to debug your code when something goes wrong. People generally disabled them entirely on web services (Go's main target) due to security fears.
Is it? In my experience it's very short, especially considering you can catch multiple errors. Do my users really need a different error message for "invalid sql" vs "sql connection timeout?" They don't need to know any of that.
I would say there's not a proper error message to derive from explicitly handling sql errors. Certainly not a different message per error. I would rather capture all of it and say something like "Something went wrong while accessing the database. Contact an admin." Then log the stack trace for devs
Sure, I just don't think it's that significant. Humans don't read/parse code character-by character, we do it by recognizing visual patterns. Blocks of `if err != nil { }` are easy to skip over when reading if needed.
I agree, though I was really surprised to learn this when reading Go code. Much easier to skip over than I was expecting it to be
I find that knowing where my errors may come from and that they are handled is essential to my job and missing all that info because it is potentially in a different file altogether gets in the way
Okay, but other than exceptions, what's the alternative?
The `?` operator in Rust?
Good point.
I only briefly tried Rust and was turned off by the poor ergonomics; I don't think (i.e. open to correction) that the Rust way (using '?') is a 1:1 replacement for the use-cases covered by Go error management or exceptions.
Sometimes (like in the code I wrote about 60m ago), you want both the result as well as the error, like "Here's the list of files you recursively searched for, plus the last error that occurred". Depending on the error, the caller may decide to use the returned value (or not).
Other times you want an easy way to ignore the error, because a nil result gets checked anyway two lines down. Even when an error occurs, I don't necessarily want to stop or return immediately. It's annoying to the user to have 30 errors in their input, and only find out about #2 after #1 is fixed, and #3 after #2 is fixed ... and #30 after #29 is fixed.
Go allows these two very useful use-cases for errors. I agree it's not perfect, but with code-folding on by default, I literally don't even see the `if err != nil` blocks.
Somewhat related: In my current toy language[1], I'm playing around with the idea of "NULL-safety" meaning "Results in a runtime-warning and a no-op", not "Results in a panic" and not "cannot be represented at all in a program"[2].
This lets a function record multiple errors at runtime before returning a stack of errors, rather than stack-tracing, segfaulting or returning on the first error.
[1] Everyone is designing their own best language, right? :-) I've been at this now since 2016 for my current toy language.
[2] I consider this to be pointless: every type needs to indicate lack of a value, because in the real world, the lack of a value is a common, regular and expected occurrence[3]. Using an empty value to indicate the lack of a value is almost certainly going to result in an error down the line.
[3] Which is where there are so many common ways of handling lack of a value: For PODs, it's quite popular to pick a sentinel value, such as `(size_t)-1`, to indicate this. For composite objects, a common practice is for the programmer to check one or two fields within the object to determine if it is a valid object or not. For references NULL/null/nil/etc is used. I don't like any of those options.
It is a 1:1 replacement.
I think you're thinking of the case when you have many results, and you want to deal with that array of results in various ways.
This is one such way, but there are others - https://doc.rust-lang.org/rust-by-example/error/iter_result....
This doesn't handle every case out there, but it does handle the majority of them. If you'd like to do something more bespoke, that's an option as well.
More than just that, Result in general also prevents from accessing the value when there is an error and accessing an error when there is a value.
This may be a crazy/dumb take, but would it be so wrong to allow code outside the function to take the wheel and do a return? Then you could define common return scenarios and make succinct calls to them. Use `returnif(err)` for the most typical, boilerplate replacement, or more elaborate handlers as needed.
golang keyboard ftw https://pbs.twimg.com/media/DCIF7-2W0AEAv9c.jpg
Yes, it's the ability to unwind the stack to an exception handler without having to propagate errors manually. Go programs end up doing the exact same thing as "try/catch around multiple lines" with functions that can return an error from any point, and every caller blindly propagating the error up the stack. The practice is so common that it's like semicolons in Java or C, it just becomes noise that you gloss over.
The difference is that all code paths are explicitly spelled out and crucially that the programmer had to consider each path at the time of writing the code. The resulting code is much more reliable than what you end up with exceptions.
Do you really do that in practice, or do you just blindly go 'if err != nil return nil, err'?
Because fundamentally the function you called can return different errors at any point so if you just propagate the error the code paths are in fact not spelled out at all because the function one above in the hierarchy has to deal with all the possible errors two calls down which are not transparent at all.
In Go, no one really blindly returns nil, err. People very clearly think about errors - if an error may need to be actioned on up the stack, people will either create a named error value (e.g., `ErrInvalidInput = errors.New("invalid input")`) or a named error type that downstream users can check against. Moreover, even when propagating errors many programmers will attach error context: `return nil, fmt.Errorf("searching for the flux capacitor %s: %w", fluxCap, err)`. I think there's room for improvement, but Go error handling (and Rust error handling for that matter) seem to be eminently thoughtful.
Coming from dotnet, I rather like the Go pattern as you've described it. I would normally catch and error and then write out a custom message with relevant information, anyway, and I hate the ergonomics of the try{}catch(Exception ex){} syntax. And yes, it is tempting to let the try block encompass more code than it really should.
I don't see how it's possible to do it blindly unless the code gets autogenerated. If you're typing the `if err != nil` then you've clearly understood that an error path is there.
There's no requirement for the calling function to handle each possible type of error of the callee. It can, as long as the callee properly wrapped the error, but it's relatively rare for that to be required. Usually the exact error is not important, just that there was one, so it gets handled generically.
Their point was that writing `if err != nil return nil, err` is the same thing that stack traces from exceptions do, but with even less information. And if that's most of a Golang codebase's error handling, it's not a compelling argument against exceptions.
Go programs generally do not “blindly propagate the error up the stack”. I’ve been writing Go since 2011 and Python since 2008, and for the last ~decade I’ve been doing DevOps/SRE for a couple of places that were both Go and Python shops. Go programs are almost universally more diligent about error handling than Python programs. That doesn’t mean Go programs are free from bugs, but there are far, far fewer of them in the error path compared to Python programs.
This matches my experience _hard_; there is simply no comparison in practice. Go does it better nearly every time
The huge volume of boilerplate makes the code harder to read, and annoying to write. I like go, and I don’t want exceptions per se, but I would love something that cuts out all the repetitive noise.
The example in the article is a good one. Result and Optional as first class sum types
That just changes the boilerplate from if's to match's.
See the example with the `?` operator: https://github.com/borgo-lang/borgo?tab=readme-ov-file#error...
The main benefits of a Result type are brevity and the inability to accidentally not handle an error.
Yes, but that isn't necessarily a feature of option types. Is it the case that similar sugar for the tiresome Go pattern couldn't achieve similar benefits?
Perhaps, but there have been several proposals along those lines and nobody seems capable of figuring out a sensible implementation.
A funny drawback of the current Go design that a Result type would solve is the need to return zero values of all the declared function return types along with the error: https://github.com/golang/go/issues/21182.
exactly.. yes, I understand why ? is neat from a type POV since you specifically have to unwrap an optional type whereas in Go you can ignore a returned error (although linters catch that) - so at the end of the day it's just the same boilerplate, one with ? the other with err != nil
This has not been my experience. It doesn’t make the code harder to read, but it forces you to think about all the code paths—if you only care about one code path, the error paths may feel like “noise”, but that’s Go guiding you toward better engineering practices. It’s the same way JavaScript developers felt when TypeScript came along and made it painful to write buggy code—the tools guide you toward better practices.
That may be superficially true, but don’t forget our brain is structured to optimize away repetitive work and boilerplate. We are so used to things like `strcpy` and `string_copy` that even if they are repeated a billion times they can be processed fast.
I agree, I don't really understand everyone's issue with err != nil.. it's explicit, and linters catch uncaught errors. Yes the ? operator in Rust is neat, but you end up with a similar issue of just matching errors throughout your code-base instead of doing err != nil..
The problem is that you're forced to have four possible states
1. err != nil, nondefault return value
2. err != nil, default return value
3. err == nil, nondefault return value
4. err == nil, default return value
when often what you want to express only has two: either you return an error and there's no meaningful output, or there's output and no error. A type system with tuples but no sum types can only express "and", not "or".
I mean, I wish Go had sum types, but this really isn’t a problem in practice. Every Go programmer understands from day 0 that you don’t touch the value unless the error is nil or the documentation states otherwise. Sum types would be nice for other things though, and if it gets them eventually it would feel a little silly to continue using product types for error handling (but also it would be silly to have a mix of both :/).
this is true, but not a problem. Go's pattern of checking the error on every return means that if an error is returned, that is the return. Allowing routines to return a result as well as an error is occasionally useful.
Yeah, also you almost always need to annotate errors anyway (e.g., `anyhow!`), so the ? operator doesn’t really seem to be buying you much and it might even tempt people away from attaching the error context.
In a hot path it’s often beneficial to not have lots of branches for error handling. Exceptions make it cheap on success (yeah, no branches!) and pretty expensive on failure (stack unwinding). It is context specific but I think that can be seen as a good reason to have try catch.
Now of course in practice people throw exceptions all the time. But in a tight, well controlled environment I can see them as being useful.
This is true but the branch isn't taken unless there's an error in Go.
Given that the Go compiler emits the equivalent of `if (__unlikely(err != nil)) {...}` and that any modern CPUs are decently good at branch prediction (especially in a hot path that repeats), I find it hard to believe that the cost would be greater than exceptions.
You can print or log the stack trace of the exception in python.
You generally need to skip all lines that the exception invalidates. That's why it's a block or conditional.
Try-blocks with ~one line are best practice on codebases I have worked with. The upside is that you can bubble errors up to the place where you handle them, AND get stack traces for free. As a huge fan of Result<T, E>, I have to admit that that's a possible advantage. But maybe that fits your definition of lazy :).
It's the best strategy for short running programs, or scripts if you will. You just write code without thinking about error handling at all. If anything goes wrong at runtime, the program aborts with a stacktrace, which is exactly what you want, and you get it for free.
For long-running programs you want reliability, which implies the need to think about and explicitly handle each possible error condition, making exceptions a subpar choice.
The issue is that it's more or less impossible to graft onto the language now. You could add enums, but the main reason why people want them is to fix the error handling. You can't do this without fracturing the ecosystem.
Why do you think so? Maybe I'm an odd case, but my main use case for enums is for APIs and database designs, where I want to lock down some field to a set of acceptable values and make sure anything else is a mistake. Or for state machines. Error handling is manageable without enums (but I love Option/Result types more than Go's error approach, especially with the ? operator).
> my main use case for enums is for APIs and database designs, where I want to lock down some field to a set of acceptable values and make sure anything else is a mistake
Then what you are really looking for is sum types (what Rust calls enums, but unusually so), not enums. Go does not have sum types, but you can use interfaces to achieve a rough facsimile, and most certainly to satisfy your specific expectation.
Enums and sum types seem to be related. In the code you wrote, you could alternatively express the Hot and Cold types as enum values. I would say that enums are a subset of sum types but I don't know if that's quite right. I guess maybe if you view each enum value as having its own distinct type (maybe a subtype of the enum type), then you could say the enum is the sum type of the enum value types?
> Enums and sum types seem to be related.
They can certainly help solve some of the same problems. Does that make them related? I don't know.
By definition, an enumeration is something that counts one-by-one. In other words, as is used in programming languages, a construct that numbers a set of named constants. Indeed you can solve the problem using that.
But, while a handy convenience (especially if the set is large!), you don't even need enums. You can number the constants by hand to the exact same effect. I'm not sure that exhibits any sum type properties. I guess you could see the value as being a tag, but there is no union.
Unfortunately, this isn't really a good workaround when lacking an enumeration type. The compiler can't complain when you use a value that isn't in the list of enumerations. The compiler can't warn you when your switch statement doesn't handle one of the cases. Refactoring is harder: when you add a new value to the enum, you can't easily find all those places that may require logic changes to handle the new value.
Enums are a big thing I miss when writing Go, compared to when writing C.
> Isn't really a good workaround when lacking an enumeration type.
Enumeration isn't a type, it's a numbering construct. Literally, by dictionary definition. Granted, if you use the Rust definition of enum then it is a type, but that's because it refers to what we in this thread call sum types. Rust doesn't support "true" enums at all.
> The compiler can't complain when you use a value that isn't in the list of enumerations.
Well, of course. But that's not a property of enums. That's a property of value constraints. If Go supported value constraints, then the compiler would complain. Go lacks this in general. You also cannot define, say, an Email type whose values are constrained to well-formed addresses. Which, indeed, is a nice feature in other languages, but outside of what enums are for. These are separate concepts, even if they can be utilized together.
> Enums are a big thing I miss when writing Go, compared to when writing C.
Go has enums. They are demonstrated in the earlier comment. The compiler doesn't attempt to perform any static analysis on the use of the enumerated values because, due to not having value constraints, "improper" use is not a fatal state[1] and Go doesn't subscribe to warnings, but all the information you need to perform such analysis is there. You are probably already using other static analysis tools to assist your development. Go has a great set of tools in that space. Why not add an enum checker to your toolbox?
[1] Just like it isn't in C. You will notice the equivalent C code compiles just fine.
No, it isn't, unlike C, in which it is. The C compiler can actually differentiate between an enum with one name and an enum with a different name.
There's no real reason the compiler vendor can't add in warnings when you pass in `myenum_one_t` instead of `myenum_two_t`. They may not be detecting it now, but it's possible to do so because nothing in the C standard says that any enum must be swappable for a different enum.
IOW, the compiler can distinguish between `myenum_one_t` and `myenum_two_t` because there is a type name for those.
Go is different: an integer is an integer, no matter what symbol it is assigned to. The compiler, now and in the future, can not distinguish between the value `10` and `MyConstValue`.
Actually, it doesn't compile "just fine". It warns you: https://www.godbolt.org/z/bn5ffbWKs
That's about as far as you can get from "compiling just fine" without getting to "doesn't compile at all".
And the reason it is able to warn you is because the compiler can detect that you're mixing one `0` value with a different `0` value. And it can detect that, while both are `0`, they're not what the programmer intended, because an enum in C carries with it type information. It's not simply an integer.
It warns you when you pass incorrect enums, even if the two enums you are mixing have identical values. See https://www.godbolt.org/z/eT861ThhE ?
> No, it isn't, unlike C, in which it is.
Go on. Given equivalent definitions in Go and C, what is missing in the Go case that wouldn't allow you to perform such static analysis? It has a keyword to identify initialization of an enumerated set (iota), it has an associated type to identify what the enum values are applied to, and it has rules for defining the remaining items in the enumerated set (each subsequent constant inherits the next enum element).
That's all C gives you. It provides nothing more. They are exactly the same (syntax aside).
> It warns you
Warnings are not fatal. It compiles just fine. The Go compiler doesn't give warnings of any sort, so naturally it won't do such analysis. But, again, you can use static analysis tools to the same effect. You are probably already using other static analysis tools as there are many other things that are even more useful to be warned about, so why not here as well?
> enum in C carries with it type information.
Just as they do in Go. That's not a property of enums in and of themselves, but there is, indeed, an associated type in both cases. Of course there is. There has to be.
Type information. The only type info the compiler has is "integer".
That's not a type.
It still only has the one piece of type information, namely "integer".
That's not type information
No. C enums have additional information, namely, which other integers that type is compatible with. The compiler can tell the difference between `enum daysOfWeek` and `enum monthsOfYear`.
Go doesn't store this difference - `Monday` is no different in type than `January`.
Maybe, but the warning tells you that the types are not compatible. The fact that the compiler tells you that the types are not compatible means that the compiler knows that the types are not compatible, which means that the compiler regards each of those types as separate types.
Of course you can redirect the warning to /dev/null with a flag, but that doesn't make the fact that the compiler considers them to be different types go away.
Whether you like it or not, C compilers can tell the difference between `Monday` and `January` enums. Go can't tell the difference between `Monday` and `January` constants. How can it?
Nobody said it was. Reaching already? As before, enums are not a type, they are a numbering mechanism. Literally. There is an associated type in which to hold the numbers, but that's not the enum itself. This is true in both C and Go, along with every other language with enums under the sun.
Sure, just as in Go.
> Go doesn't store this difference - `Monday` is no different in type than `January`.
Are you, perhaps, mixing up Go with Javascript?
> How can it?
By, uh, using its type system...? A novel concept, I know.
Enums are exactly sums of unit types (types with only one value).
Traditionally, enums have been a single number type with many values (initialized in a counted one-by-one fashion).
Rust enums are as you describe, as they accidentally renamed what was historically known as sum types to enums. To be fair, Swift did the same thing, but later acknowledged the mistake. The Rust community doubled down on the mistake for some reason, now gaslighting anyone who tries to use enum in the traditional sense.
At the end of the day it is all just 1s and 0s. If you squint hard enough, all programming features end up looking the same. They are similar in that respect, but that's about where the similarities end.
annoyingly go can't have proper sum types, as the requirement for a default value for everything doesn't make any sense for sum types
Couldn't the zero value be nil? I get that some types like int are not nil-able, but the language allows you to assign both nil and int to a value of type any (interface{}), so I wonder why it couldn't work the same for sum types. i.e. they would be a subset of the `any` type.
Said "requirement" is only a human construct. The computer doesn't care.
If the humans choose to make an exception for that, it can be done. Granted, the planning that has taken place thus far has rejected such an exception, but there is nothing about Go that fundamentally prevents carving out an exception.
You can just default to the first variant, no?
The thing is, these don't add much on their own. You'd have to bring in pattern matching and/or a bunch of other things* that would significantly complicate the language.
For example, with what's currently in the language, you could definitely have an option type. You'd just be limited to roughly an API that's `func (o Option[T]) IsEmpty() bool` and `func (o Option[T]) Get() T`. And these would just check if the underlying pointer is nil and dereference it. You can already do that with pointers. Errors/Result are similar.
A `try` keyword that expands `x := try thingThatProducesErr()` to:
Might be more useful in Go (you could have a similar one for pointers).

* At the very least generic methods for flat-map shenanigans.
Using an Option instead of a pointer buys you the inability to forget to check for nil.
Just need to make sure the Option exposes the internal value only through:
Accessing the value is then forced to look like this:

Thus, there's no possibility of an accidental nil pointer dereference, which I think is a big win.

A Result type would bring a similar benefit of fixing the few edge cases where an error may accidentally not be handled. Although I don't think it'd be worth the cost of switching over.
How is that better than
?

Of course, this is often left out, but you can just as easily do:
So this is just not true:

It's better because you do not need to remember to check for nil; the compiler will remind you every time by erroring out until you handle the second return value of `option.Get()`.
Unfortunately it gets brought up pretty much every time in these discussions.
Deliberate attempts to circumvent safety are not part of the threat model. The goal is prevention of accidental mistakes. Nothing can ultimately stop you from disabling all safeties, pointing the shotgun at your foot and pulling the trigger.
When enums make it from the language to the DB, things become brittle: it only takes one intern sorting the enums alphabetically to destroy the lookup relations. An enum lookup table helps, but then they are not enums in the language.
https://www.postgresql.org/docs/current/datatype-enum.html
Then wrap appropriately. Something like sqlc will actually generate everything you need.
I would like to have proper stack traces. With that the error handling in go would be fixed.
You can emit a stack trace anytime you like
Depends what you mean by 'enums' exactly, but now that generics has been added, a small change would be to allow interfaces defined via type disjunction to be used as concrete types:
That doesn't solve all the use cases for enums / sum types, but it would be useful.

I just want regular enums, that would solve the problems that result from using the current status quo.
How the fuck do you release a language without enums?
Can someone help me understand why enums are needed? They only seem like sugar for reducing a few lines while writing. What cannot be achieved without them or what is really a pain point they solve? Maybe it is hard to have a constant with type information?
The original enums are just enumerated integer constants.
What people want is "the ability to express enums with an associated value"; I think we should invent a new term for that.
We did: Sum types.
The term you're looking for is ADT - Algebraic Data Types
https://en.wikipedia.org/wiki/Algebraic_data_type
You'll have to ask the Rust community. Rust lacks enums. Go, however, most definitely has enums (and exceptions too!).
It's funny how people who have clearly never even looked at Go continually think they are experts in it. What causes this strange phenomenon?

Go has a way to implement enums - I'll give you that. Rust does have enums though: https://doc.rust-lang.org/book/ch06-01-defining-an-enum.html
They can have values like sum types, or not.
> Rust does have enums though
It does not. You'll notice that if you read through your link. A tag-only union is not the same thing, even if you can find some passing similarities.
If you mean it has enums like the Democratic People's Republic of Korea has a democracy, then sure, it does have enums in that sense. I'm not sure that gives us anything meaningful, though.
If we're being honest, sum types are the better solution. Enums are hack for type systems that are too limited for anything else. They are not a feature you would add if you already have a sufficient type system. It's not clear why Rust even wants to associate itself with the hack that are enums, but your link shows that its author has a fundamental misunderstanding of what enums even are, so it may be down to that.
To be fair, Swift made the same mistake, but later acknowledged it. It is interesting that Rust has chosen to double down instead.
Why not? Seems to function the same way.
To be honest I haven’t seen a single programming language that has decent enums.
With that I mean fundamental and foolproof functions for to/from a string, to/from an int, exhaustive switch cases, pattern matching, enumerating.
Seems like something that wouldn’t be too hard but everybody always fails on something.
> I haven’t seen a single programming language that has decent enums.
There's not much you can do with an enumeration. It's just something that counts one-by-one.
A useful tool when you have a large set of constants that you want to number, without having to manually sit there 0, 1, 2, 3... But that's the extent of what it can offer.
> With that I mean fundamental and fool proof functions for to/from a string, to/from an int, exhaustive switch cases, pattern matching, enumerating.
While a programming language may expose this kind of functionality, none of these are properties of enums. They are completely separate features. Which you recognize, given that you are able to refer to them by name. Calling these enums is like calling if statements functions because if statements may be layered with functions.
You can enumerate constants. There is syntax to implicitly assign the integer values, just use iota as a value.
because you can just declare a custom type and have constants that use the custom type.
I've never needed either.
Try/catch is super confusing because the catch is often far away from the try. And in Python I just put try/catch around big chunks of code just in case for production.
I think Go is more stable and readable because they force you not to use the lazy unreadable way of error handling.
Enums I honestly never used in Go, not even the non-type-safe ones.
But I'm also someone who has used interfaces in Go maybe 4 times in years and years of development.
I just never really need all those fancy things.
I think what this comment is missing is any sort of analysis of how your experience maps to the general go user, and an opinion on while you've never needed either whether you think it could have provided any benefit when used appropriately.
For example, an option type combined with enums can ensure return values are checked, by providing a compile-time error if a case is missing (as expressed in the first few examples of the readme).
I know it can, the compiler can do one more "automatic" unit test based on the type checking system.
But they decided not to add enums because it conflicted and overlapped too much with interfaces.
I just want to add "my" experience that personally, yes maybe you can argue enums are nice, but I never missed them in Go.
I personally agree with the Go team on how they argue and for me it would be a step back if they listened to the herd that does not take all sides of the story into consideration but just keeps pushing enums.
Try/catch is just a really bad thing; all the "hacky solution" alarm bells go off for me if you want to change error handling to giant try/catch blocks.
I'm very curious now about how it might conflict and/or overlap with interfaces.
To reach the goal of an enumeration type (and all the strong type-checking that that brings with it), enums could look as simple as:
And I don't see how that conflicts or overlaps with interfaces.

I think something like when a variable type in an enum was an interface it would destroy the galaxy or something, not 100% sure, would have to look it up... 1 sec.
Here you Go: https://go.dev/doc/faq#variant_types
Hah :-)
Not quite the same: Variants are a constrained list of types. Enums are a constrained list of values.
Let's assume that I agree with the reasoning for not having a constrained list of types.
It still doesn't tell me why we can't have a constrained list of values.
I largely agree with your sentiment. Go’s simplicity is what makes it such a useful tool for me. It’s worth protecting, and that means setting a very high bar for proposals that add new things to the language.
However, 2 things I would be enthusiastic about if they got included in the language:

- Having '?' as syntactic sugar for 'if err != nil { … }'. Would make code more easily readable, and I think that is a benefit for programmers trying to keep things simple.

- Sum types. I've had a few cases where these would've been very useful. I consider the 'var state customtype = iota' pattern a bit too easy to make mistakes with (e.g. exhaustive checking of options).
Like generics, I hope that when that happens, they take a very deliberate approach on doing it.
Your comment could have been a nice opinion that proves to a drive-by reader that needs can differ drastically between programmers.
But you ruined it with "fancy things" which shows offhand disregard and disrespect.
A question like "what do you need these features for?" would have been a better contribution to the forum.
I actually do have real disrespect for them. I'm in a constant fight against developers who want to translate code into almost the same code, but "only using language features from the Advanced book".
I also wanted to add that I used inheritance only ONCE in all my years of writing Python; in all the other millions of lines of code, inheritance was not the best solution.
This is my daily struggle as a CTO. People using waaayy too many "fancy" features of languages making it totally unreadable and unmaintainable.
It's their ego they want to show off how many complex language features they know. And it's ruining my codebases.
It's one thing to want your devs to produce readable code -- as a former CTO I also spent significant effort in teaching people that -- but it's completely another to be a curmudgeon and directly disregard valuable programming tools like the sum types.
Not sure why you are conflating both. Also inheritance was known to be the wrong tool for the job at least 15 years ago, maybe even 20. Back then people wrote Java books that said "prefer composition over inheritance" so your analogy didn't really land.
Everyone who uses sum types in production code agrees they reduce bugs.
Maybe it's time for you to retire.
While I have no particular beef with Rust deciding to call its sum types "enum", to refer to this as the actual enum is a bit much.
Enumerated types are simply named integers in most languages, exactly the sort you get with const / iota in Go: https://en.wikipedia.org/wiki/Enumerated_type
Rather than the tagged union which the word represents in Rust, and only Rust. Java's enums are close, since they're classes and one can add arbitrary behaviors and extra data associated with the enum.
Haxe also has Enums which are Generalized Algebraic Data Types, and they are called "enums" there as well: https://haxe.org/manual/types-enum-using.html
Very well then: Rust is not the only one to call a variant type / tagged union an enum. It's a nice language feature to have, whatever they decide to call it.
It remains a strange choice to refer to this as the true enum, actual enum, real enum, as has started occurring since Rust became prominent. If that's a meaningful concept, it means a set of numeric values which may be referred to by name. This is the original definition and remains the most popular.
Rust is targeting both users who know the original definition as well as people who don’t. Differentiating between real enums and sum types means the language gets another keyword for a concept that overlaps.
From a PL theory perspective, enum denotes an enumerable set of values within a type. It just happens that sums slot in well enough with that.
But the instances of a sum type aren't enumerable unless all of its generic parameters are enumerable.
Checked the definition. An enum is defined as a set of named constants. I'd argue that a set by definition needs to be constrained. If it lacks the constraints/grouping, I'd argue it no longer is a set.
I didn't read GP as saying "Actual enums are what Rust has", I read it more as "Go doesn't have actual enums", where "enum" is a type that is constrained to a specified set of values, which is what all mainstream non-Rust languages with enums call "Enums".
I mean, even if Rust never existed, the assertion "Go doesn't have actual enums" is still true, no?
That's an interpretation I hadn't considered, mostly because Borgo has Rust-style tagged unions which it also calls enums. The statement wouldn't have caught my attention if I'd read it in that light, but while I'm here, I don't mind opining.
"Does Go have enumerated values" seems much like "does Lua have objects". Lua doesn't have `class` or anything like it, it has tables, metatables, and metamethods. But it makes it very easy to use those primitives to create a class and instance pattern, including single inheritance if desired, and it even offers special syntax for defining and calling methods with `:` and `self`. If I had to deliver a verdict, I would say that the special syntax pushes it over the line, and so yeah: with some caveats, Lua has objects.
Same basic thing with Go. One may define a custom integer type, and a set of consts using that type with `iota`, to get something which behaves like plain old small-integer enums. It's possible to undermine this, however, by defining more values of this type, which makes this pattern weaker than it could be, but in a way which is similar to the enums found in C.
Ultimately, Go provides iota, and making enums is the intended purpose of it. If you search for "enums in Go" you'll find many sources describing an identical pattern. So, like `self` and `:` in Lua, I'd say that `iota` in Go means that it has an enumerated type.
But if someone wanted to say "Go doesn't even have enums, you have to roll your own, other languages have an enum keyword", I have a different opinion than that first clause, but there's nothing factually wrong with any of it. I find this sort of "where's the threshold" question less interesting than most.
Swift enums support union types as well, and are also very useful.