
Go: What we got right, what we got wrong

w10-1
205 replies
20h5m

I really, really appreciate key people taking the time for retrospectives. It makes a huge difference to people now who want to make a real difference.

But I'm not sure Rob Pike states clearly enough what they got right (IMO): they managed the forces on the project as well as the language, by:

- Restricting the language to its target use: systems programming, not applications or data science or AI...

- Defining the language and its principles clearly. This avoids eons of waste in implementing ambiguity and designing at cross-purposes.

- Putting quality first: it's always cheaper for all concerned to fix problems before deploying, even if it's harder for the community or OS contributors or people waiting for new features.

- Sharing the community. They maintained strict control over the language and release and core messaging, but they also allowed others to lead in many downstream aspects.

Stated but under-appreciated is the degree to which Google itself didn't interfere. I suspect it's because Go actually served its objectives and is critical to Google. I wonder if that could be true today for a new project. It's interesting to compare Dart, which has zero uptake outside Flutter even though there is orders of magnitude more application code than systems code.

Go was probably the key technology that migrated server-side software off Java bloatware to native containers. It dominates back-end infrastructure and underlies most of the web application infrastructure of the last 10 years. The benefit to Google and the community from that alternative has been huge. Somehow amidst all that growth, the team remained small and kept all its key players.

Will that change?

the_duke
101 replies
18h31m

- Restricting the language to its target use: systems programming, not applications or data science or AI...

Go has a GC and a very heavy runtime with green threads, leading to cumbersome/slow C interop.

It certainly isn't viable as a systems programming language, and that's by design. It's an odd myth that has persisted ever since the language described itself as such in the beginning. They removed that wording years ago, I think.

It's primarily a competitor to Java et al, not to C or Rust, and you see that when looking at the domains it is primarily used in, although it tends to sit a bit lower on the stack due to the limited nature of the type system and the great support for concurrency.

sapiogram
44 replies
18h12m

I totally agree that Go is best suited outside of systems programming, but to me that always seemed like a complete accident - its creators explicitly said their goal was to replace C++. But somehow it completely failed to do so, while simultaneously finding enormous success as a statically typed (and an order of magnitude faster) alternative to Python.

unscaled
13 replies
12h40m

It's not "an accident" at all, and Go didn't "somehow" fail to replace C++ in its systems programming domain. The reason why Go failed to replace C and C++ is not a mystery to anyone: mandatory GC and a rather heavyweight runtime.

When the performance overhead of having a GC is less significant than the cognitive overhead of dealing with manual memory management (or the Rust borrow checker), Go was quite successful: Command line tools and network programs.

Around the time Go was released, it was certainly touted by its creators as a "systems programming language"[1] and a "replacement for C++"[2], but re-evaluating the Go team's claims, I think they didn't quite mean them in the way most of us interpreted them.

1. The Go Team members were using "systems programming language" in a very wide sense that includes everything that is not scripting or web. I hate this definition with a passion, since it relies on nothing but pure elitism ("systems languages are languages that REAL programmers use, unlike those 'scripting languages'"). Ironically, this usage seems to originate from John Ousterhout[3], who is himself famous for designing a scripting language (Tcl).

Ousterhout's definition of "system programming language" is: Designed to write applications from scratch (not just "glue code"), performant, strongly typed, designed for building data structures and algorithms from scratch, often provide higher-level facilities such as objects and threads.

Ousterhout's definition was outdated even back in 2009, when Go was released, let alone today. Some dynamic languages (such as Python with type hints or TypeScript) are more strongly typed than C or even Java (with its type erasure). Typing in those languages is optional, but so it effectively is in Java (Object) and C (void*, casting). When we talk about the archetypical "strongly typed" language today we would refer to Haskell or Scala rather than C. Scripting languages like Python and JavaScript were already commonly used "for writing applications from scratch" back in 2009, and far from being ill-adapted for writing data structures and algorithms from scratch, Python became the most common language that universities use for teaching data structures and algorithms! The most popular dynamic languages nowadays (Ruby, Python, JavaScript) all have objects, and 2 out of 3 (Python and Ruby) have threads (although the GIL makes using threads problematic in the mainstream runtimes). The only real differentiator that remains is raw performance.

The widely accepted definition of a "systems language" today is "a language that can be used to write systems software". Systems software is either an operating system or OS-adjacent software such as device drivers, debuggers, hypervisors or even complex beasts like a web browser. The closest software that Go can claim in this category is Docker, but Docker itself is just a complex wrapper around Linux kernel features such as namespaces and cgroups. The actual containerization is done by those features, which are implemented in C.

During the first years of Go, the Go language team was confronted on golang-nuts by people who wanted to use Go for writing systems software, and they usually evaded answering these questions directly. When pressed, they would admit that Go was not ready for writing OS kernels, at least not yet[4][5][6], but that GC could be disabled if you wanted to[7] (of course, there isn't any way to free memory then, so it's kinda moot). Eventually, the team came to the conclusion that disabling GC is not meant for production use[8][9], but that was not apparent in the early days.

Eventually the references for "systems language" disappeared from Go's official homepage and one team member (Andrew Gerrand) even admitted this branding was a mistake[10].

In hindsight, I think the main "systems programming task" that Rob Pike and other members at the Go team envisioned was the main task that Google needed: writing highly concurrent server code.

2. The Go Team members sometimes mentioned replacing C and C++, but only in the context of specific pain points that made "programming in the large" cumbersome with C++: build speed, dependency management and different programmers using different subsets. I couldn't find any claim that Go was meant as a general replacement for C and C++ anywhere from the Go Team, but the media and the wider programming community generally took Go as a replacement language for C and C++.

When you read between the lines, it becomes clear that the C++ replacement angle is more about Google than it is about Go. It seems that in 2009, Google was using C++ as the primary language for writing web servers. For the rest of the industry, Java was (and perhaps still is) the most popular language for this task, with some companies opting for dynamic languages like Python, PHP and Ruby where performance allowed.

Go was a great fit for high-concurrency servers, especially back in 2009. Dynamic languages were slower and lacked native support for concurrency (if you put aside Lua, which never got popular for server programming for other reasons). Some of these languages had threads, but these were unworkable due to the GIL. The closest thing was frameworks like Twisted, but they were fully asynchronous and quite hard to use.

Popular static languages like Java and C# were also inconvenient, but in a different way. Both of these languages were fully capable of writing high-performance servers, but they were not properly tuned for this use case by default. The common frameworks of the day (Spring, Java EE and ASP.net) introduced copious amounts of overhead, and the GC was optimized for high throughput, but it had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build and deployment were also an afterthought. Java had Maven and Ivy and .Net had NuGet (in 2010) and MSBuild, but these were quite cumbersome to use. Deployment was quite messy, with different packaging methods (multiple JAR files with classpath, WAR files, EAR files) and making sure the runtime on the server was compatible with your application. Most enthusiasts and many startups just gave up on Java entirely.

The mass migration of dynamic language programmers to Go was surprising for the Go team, but in hindsight it's pretty obvious. They were concerned about performance, but didn't feel like they had a choice: Java was just too complex and Enterprisey for them, and eking performance out of Java was not an easy task either. Go, on the other hand, had the simplest deployment model (a single binary), no need for fine tuning and it had a lot of built-in tooling from day one ("gofmt", "godoc", "gotest", cross compilation) and other important tools ("govet", "goprof" and "goinstall" which was later broken into "go get" and "go install") were added within one year of its initial release.

The Go team did expect server programs to be the main use for Go and this is what they were targeting at Google. They just missed that the bulk of new servers outside of Google were being written in dynamic languages or Java.

The other "surprising use" of Go was for writing command-line utilities. I'm not sure if the original Go team were thinking about that, but it is also quite obvious in hindsight. Go was just so much easier to distribute than any alternative available at the time. Scripting languages like Python, Ruby or Perl had great libraries for writing CLI programs, but distributing your program along with its dependencies and making sure the runtime and dependencies matched what you needed was practically impossible without essentially packaging your app for every single OS and distro out there, or relying on the user to install the correct version of Python or Ruby and then use gem or pip to install your package. Java and .NET had slow start times due to their VM, so they were horrible candidates even if you'd solved the dependency issues. So the best solution was usually C or C++ with either the "./configure && make install" pattern or making a static binary - both solutions were quite horrible. Go was a winner again: it produced fully static binaries by default and had easy-to-use cross compilation out of the box. Even creating a native package for Linux distros was a lot easier, since all you had to do was package a static binary.

[1]: https://opensource.googleblog.com/2009/11/hey-ho-lets-go.htm...

[2]: https://web.archive.org/web/20091114043422/http://www.golang...

[3]: https://users.ece.utexas.edu/~adnan/top/ousterhout-scripting...

[4]: https://groups.google.com/g/golang-nuts/c/6vvOzYyDkWQ/m/3T1D...

[5]: https://groups.google.com/g/golang-nuts/c/BO1vBge4L-o/m/lU1_...

[6]: https://groups.google.com/g/golang-nuts/c/UgbTmOXZ_yw/m/NH0j...

[7]: https://groups.google.com/g/golang-nuts/c/UgbTmOXZ_yw/m/M9r1...

[8]: https://groups.google.com/g/golang-nuts/c/qKB9h_pS1p8/m/1NlO...

[9]: https://github.com/golang/go/issues/13761#issuecomment-16772...

[10]: https://go.dev/talks/2011/Real_World_Go.pdf (Slide #25)

zogrodea
5 replies
11h7m

I'm impressed. That's the most thorough and well-researched comment I've seen on Hacker News, ever. Thank you for taking the time and effort to write it up.

neonsunset
2 replies
10h24m

It compares NuGet with Maven, calling the former cumbersome. It's a tell of gaps in the research, but also a showcase of the overarching problem where C# is held back by people bundling it together with Java and the issues of Java's ecosystem (because NuGet is excellent and on par with Cargo's crates).

unscaled
1 replies
6h47m

NuGet was only released in 2010, so I wasn't really referring to it. I was referring to Maven as a build system (not the Maven/Ivy dependency management part, which was quite a breeze) and to MSBuild. Both of them required wrangling with verbose XML and understanding a lot of syntax (or letting the IDE spew out everything for you and then getting totally lost when you need to fix something or go beyond what the IDE UI allows you to do). If anything, MSBuild was somewhat worse than Maven, since the documentation was quite bad, at least back then.

That being said, I'm not sure if you've used NuGet in its early days of existence, but I did, and it was not a fun experience. I remember that the NuGet project used to get corrupted quite often and I had to reinstall everything (and back then there was no lockfile, if my memory serves me right, so you'd be getting different versions).

In terms of performance, ASP.NET (not ASP.NET Core) was as bad as contemporary Java EE frameworks, if not worse. You could make a high performance web server by targeting OWIN directly (like you could target the Servlet API with Java), but that came later.

I think you are the one who is bundling things together here: you are confusing the current C#/.NET Core ecosystem with the way it was back in the .NET 4.0/Visual Studio 2008 era: Windows-centric, very hard to automate through a CLI, XML-obsessed and rather brittle tooling.

C# did have a lot of good points over Java back then (and certainly now): a less verbose language, better generics (no type erasure), lambda expressions, extension methods, LINQ etc. Visual Studio was also a better IDE than Eclipse. I personally chose C# over Java at the time (when I could target Windows), but I'm not trying to hide the limits it had back then.

neonsunset
0 replies
5h44m

Fair enough. You are right, and I apologize for the rather hasty comment. .NET in 2010 was a completely different beast and an unlikely choice in the context. It would be good for the industry if the perception of that past was not extrapolated onto the current state of affairs.

unscaled
0 replies
7h47m

Thank you! I really appreciate it, since it did take a while writing this ;)

lenova
0 replies
10h34m

I agree. As someone unfamiliar with Go's history, that was incredibly well written. It felt like I cathartically followed Go's entire journey.

mike_hearn
5 replies
8h19m

Unfortunately I have to quibble a bit, although bravo for such a high effort post.

> When you read through the lines, it becomes clear that the C++ replacement angle is more about Google than it is about Go. It seems that in 2009, Google was using C++ as the primary language for writing web servers

I worked at Google from 2006-2014 and I wouldn't agree with this characterisation, nor actually with many of the things Rob Pike says in his talk.

In 2009 most Google web servers (by unique codebase I mean, not replica count) were written in Java. A few of the oldest web servers were written in C++ like web search and Maps. C++ still dominated infrastructure servers like BigTable. However, most web frontends were written in Java, for example, the Gmail and Accounts frontends were written in Java but the spam filter was written in C++.

Rob's talk is frankly somewhat weird to read as a result. He claims to have been solving Big Problems that only Google had, but AFAIK nobody in Google's senior management asked him to do Go despite a heavy investment in infrastructure. Java and C++ were working fine at the time and issues like build times were essentially solved by Blaze (a.k.a. Bazel) combined with a truly huge build cluster. Blaze is a command line tool written in ... drumroll ... Java (with a bit of C++ iirc).

Rob also makes the very strange claim that Google wasn't using threads in its software stack, or that threads were outright banned. That doesn't match my memory at all. Google servers were all heavily multi-threaded and async at that time, and every server exposed a /threadz URL on its management port that would show you the stack traces of every thread (in both C++ and Java). I have clear memories of debugging race conditions in servers there, well before Go existed.

> The common frameworks of the day (Spring, Java EE and ASP.net) introduced copious amounts of overhead, and the GC was optimized for high throughput, but it had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build and deployment was also an afterthought.

Google didn't use any of those frameworks. It also didn't use regular Java build systems or dependency management.

At the time Go was developed Java had both the throughput-optimized parallel GC, and also the latency optimized CMS collector. Two years after Go was developed Java introduced the G1 GC which made the tradeoff more configurable.

I was on-call for Java servers at Google at various points. I don't remember GC being a major issue even back then (and nowadays modern Java GC is far better than Go's). It was sometimes a minor issue requiring tuning to get the best performance out of the hardware. I do remember JITC being a problem because some server codebases were so large that they warmed up too slowly, and this would cause request timeouts when hitting new servers that had just started up, so some products needed convoluted workarounds like pre-warming before answering healthy to the load balancer.

Overall, the story told by Rob Pike about Go's design criteria doesn't match my own recollection of what Google was actually doing. The main project Pike was known for at Google in that era was Sawzall, a Go-like language designed specifically for logs processing, which Google phased out years ago (except in one last project where it's used for scripting purposes and where I heard the team has now taken over maintenance of the Sawzall runtime; that project was written by me, lol sorry guys). So maybe his primary experience of Google was actually writing languages for batch jobs rather than web servers and this explains his divergent views about what was common practice back then?

I agree with your assessment of Go's success outside of Google.

unscaled
2 replies
7h5m

In 2009 most Google web servers (by unique codebase I mean, not replica count) were written in Java. A few of the oldest web servers were written in C++ like web search and Maps. C++ still dominated infrastructure servers like BigTable. However, most web frontends were written in Java, for example, the Gmail and Accounts frontends were written in Java but the spam filter was written in C++.

Thank you. I don't know much about the breakdown of different services by language at Google circa 2009, so your feedback helps me put things in focus. I knew that Java was more popular than the way Rob described it (in his 2012 talk[1], not this essay), but I didn't know by how much.

I would still argue that replacing C and C++ in server code was the main impetus for developing Go. This would be a rather strange impetus outside of a big tech company like Google, which was writing a lot of C++ server code to begin with. But it also seems that Go was developed quite independently of Google's own problems.

Rob also makes the very strange claim that Google wasn't using threads in its software stack, or that threads were outright banned. That doesn't match my memory at all.

I can't say anything about Google, but I also found that statement baffling. If you wanted to develop a scalable network server in Java at that time, you pretty much had to use threads. With C++ you had a few other alternatives (you could develop a single-threaded server using an asynchronous library such as Boost ASIO, for instance), but that was probably harder than dealing with deadlocks and race conditions (which are still very much a problem in Go, the same way they are in multi-threaded C++ and Java).

Google didn't use any of those frameworks. It also didn't use regular Java build systems or dependency management.

Yes, I am aware of that part, and it makes it clearer to me that Go wasn't trying to solve any particular problem with the way Java was used within Google. I also don't think Go won over many experienced Java developers who already knew how to deal with Java. But it did offer a simpler build-deployment-and-configuration story than Java, and that's why it attracted many Python and Node.js developers where Java failed to do so.

Many commentators have mentioned better performance and fewer errors with static typing as the main attraction for dynamic language programmers coming to Go, but that cannot be the only reason, since Java had both of these long before Go came into being.

At the time Go was developed Java had both the throughput-optimized parallel GC, and also the latency optimized CMS collector. Two years after Go was developed Java introduced the G1 GC which made the tradeoff more configurable.

Frankly speaking, GC was a more minor problem for people coming from dynamic languages. But the main issue for this type of developer is that the GC in Java needs to be configured. In practice most of the developers I've worked with (even seasoned Java developers) do not know how to configure and benchmark the Java GC, which is quite an issue.

JVM warmup was and still is a major issue in Java. New features like AppCDS help a lot to solve this issue, but they require some knowledge, understanding and work. Go solves that out of the box by foregoing a JIT (of course, it loses other important optimizations that a JIT enables, like monomorphic dispatch).

[1] https://go.dev/talks/2012/splash.article

mike_hearn
1 replies
6h51m

The Google codebase had the delightful combination of both heavily async callback oriented APIs and also heavy use of multithreading. Not surprising for a company for whom software performance was an existential problem.

The core libraries were not only multi-threaded, but threaded in such a way that there was no way to shut them down cleanly. I was rather surprised when I first learned this fact during initial training, but the rationale made perfect sense: clean shutdown in heavily threaded code is hard and can introduce a lot of synchronization bugs, but Google software was all designed on the assumption that the whole machine might die at any moment. So why bother with clean shutdown when you had to support unclean shutdown anyway. Might as well just SIGKILL things when you're done with them.

And by core libraries I mean things like the RPC library, without which you couldn't do anything at all. So that I think shows the extent to which threading was not banned at Google.

smarterclayton
0 replies
3h26m

As an aside:

This principle (always shutdown uncleanly) was a significant point of design discussion in Kubernetes, another one of the projects that adapted lessons learned inside Google on the outside (and had to change as a result).

All of the core services (kubelet, apiserver, etc) mostly expect to shutdown uncleanly, because as a project we needed to handle unclean shutdowns anyway (and could fix bugs when they happened).

But quite a bit of the software run by Kubernetes (both early and today) doesn’t always necessarily behave that way - most notably Postgres in containers in the early days of Docker behaved badly when KILLed (where Linux terminates the process without it having a chance to react).

So faced with the expectation that Kubernetes would run a wide range of software where a Google-specific principle didn't hold and couldn't be enforced, Kubernetes always (modulo bugs or helpful contributors regressing under-tested code paths) sends TERM, waits a few seconds, then KILLs.

And the lack of graceful Go http server shutdown (as well as it being hard to do correctly in big complex servers) for many years also made Kube apiservers harder to run in a highly available fashion for most deployers. If you don’t fully control the load balancing infrastructure in front of every server like Google does (because every large company already has a general load balancer approach built from Apache or nginx or haproxy for F5 or Cisco or …), or enforce that all clients handle all errors gracefully, you tend to prefer draining servers via code vs letting those errors escape to users. We ended up having to retrofit graceful shutdown to most of Kube’s server software after the fact, which was more effort than doing it from the beginning.
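For reference, net/http did eventually gain graceful shutdown via Server.Shutdown (added in Go 1.8). A minimal sketch of the TERM-then-drain pattern it enables (illustrative only, not code taken from Kubernetes):

    package main

    import (
        "context"
        "log"
        "net/http"
        "os"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        srv := &http.Server{Addr: ":8080"}

        go func() {
            // ErrServerClosed is the expected result after Shutdown.
            if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
                log.Fatalf("listen: %v", err)
            }
        }()

        // Wait for SIGTERM (what Kubernetes sends before the KILL).
        stop := make(chan os.Signal, 1)
        signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
        <-stop

        // Stop accepting new connections and give in-flight requests
        // a bounded amount of time to drain.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        if err := srv.Shutdown(ctx); err != nil {
            log.Printf("forced shutdown: %v", err)
        }
    }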

In a very real sense, Google’s economy of software scale is that it can enforce and drive consistent tradeoffs and principles across multiple projects where making a tradeoff saves effort in multiple domains. That is similar to the design principles in a programming language ecosystem like Go or orchestrator like Kubernetes, but is more extensive.

But those principles are inevitably under communicated to users (because who reads the docs before picking a programming language to implement a new project in?) and under enforced by projects (“you must be this tall to operate your own Kubernetes cluster”).

foobazgt
1 replies
6h48m

This. I worked at Google around the same time. Adwords and Gmail were customers of my team.

I remember appreciating how much nicer it was to run Java servers, because best practice for C++ was (and presumably still is) to immediately abort the entire process any time an invariant was broken. This meant that it wasn't uncommon to experience queries of death that would trivially shoot down entire clusters. With Java, on the other hand, you'd just abort the specific request and keep chugging.

I didn't really see any appreciable attrition to golang from Java during my time at Google. Similarly, at my last job, the majority of work in golang was from people transitioning from ruby. I later learned a common reason to choose golang over Java was confusion about the Java tooling / development workflow. For example, folks coming from ruby would often debug with log statements and process restarts instead of just using a debugger and hot patching code.

mike_hearn
0 replies
6h41m

Yeah. C++ has exceptions but using them in combination with manual memory management is nearly impossible, despite RAII making it appear like it should be a reasonable thing to do. I was immediately burned by this the first time I wrote a codebase that combined C++ and exceptions, ugh, never again. Pretty sure I never encountered a C++ codebase that didn't ban exceptions by policy and rely on error codes instead.

This very C oriented mindset can be seen in Go's design too, even though Go has GC. I worked with a company using Go once where I was getting 500s from their servers when trying to use their API, and couldn't figure out why. I asked them to check their logs to tell me what was going wrong. They told me their logs didn't have any information about it, because the error code being logged only reflected that something had gone wrong somewhere inside a giant library and there were no stack traces to pinpoint the issue. Their suggested solution: just keep trying random things until you figure it out.

That was an immediate and visceral reminder of the value of exceptions, and by implication, GC.
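For context, what idiomatic Go offers instead is explicit error wrapping, which at least pins down where a failure came from, though it still gives no stack trace unless you add one yourself. A small sketch (mine, not from the codebase being described):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func loadConfig(path string) error {
        if _, err := os.ReadFile(path); err != nil {
            // %w wraps the underlying error so callers can still inspect it.
            return fmt.Errorf("loadConfig(%q): %w", path, err)
        }
        return nil
    }

    func main() {
        err := loadConfig("/nonexistent/app.conf")
        fmt.Println(err) // loadConfig("/nonexistent/app.conf"): open /nonexistent/app.conf: no such file or directory

        // errors.Is walks the wrap chain, so the root cause stays detectable.
        fmt.Println(errors.Is(err, fs.ErrNotExist)) // true
    }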

pjmlp
0 replies
10h25m

Systems software are either operating systems or OS-adjacent software such as device drivers, debuggers, hypervisors or even complex beasts like a web browser. The closest software that Go can claim in this category is Docker, but Docker itself is just a complex wrapper around Linux kernel features such as namespaces and cgroups. The actual containerization is done by these features which are implemented in C.

Android GPU debugger, USB Armory bare metal unikernel firmware, Go compiler, Go linker, bare metal on maker boards like Arduino and ESP32

Popular static languages like Java and C# were also inconvenient, but in a different way. Both of these languages were fully capable of writing high-performance servers, but they were not properly tuned for this use case by default. The common frameworks of the day (Spring, Java EE and ASP.net) introduced copious amounts of overhead, and the GC was optimized for high throughput, but it had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build and deployment was also an afterthought. Java had Maven and Ivy and .Net had NuGet (in 2010) and MSBuild, but these where quite cumbersome to use. Deployment was quite messy, with different packaging methods (multiple JAR files with classpath, WAR files, EAR files) and making sure the runtime on the server is compatible with your application. Most enthusiasts and many startups just gave up on Java entirely.

Usually a problem only for those that refuse to actually learn about Java and .NET ecosystems.

Still doing great after 25 years, now being copied with the VC ideas to sponsor Kubernetes + WASM selling startups.

fishywang
6 replies
16h17m

its creators explicitly said their goal was to replace C++

so nowadays when we say "c++" we mostly mean the work that should be replaced by Rust, but back then, it wasn't like that.

I would argue that go successfully replaced c++ in specific domains (network, etc.), and changed your perspective on what "c++" means.

zozbot234
5 replies
16h13m

That's nothing new, Java successfully replaced C++ in enterprise code in the mid-to-late 1990s. Because it was safe from memory bugs.

lanstin
4 replies
15h53m

Mid 2000s in my experience. And not because it was safe from memory bugs so much as safe from memory leaks. Still had plenty of NPEs.

justinclift
1 replies
11h35m

Java kind of gets around the memory leak problem by allocating all of the leak up front for the JVM. ;)

cryptos
0 replies
11h8m

I'm a JVM guy, but this is a good one :-)

pjmlp
0 replies
10h29m

NPE isn't a memory corruption bug.

kaba0
0 replies
5h46m

Those are safe.

And it’s not like Go didn’t just copy nulls (plus even has shitty initialization problems now, e.g. with make!)

randomdata
5 replies
15h2m

Except it did replace C++ in the domains it claimed it would replace C++ in. It made clear from day one that you wouldn't write something like a kernel in it. It was never trying to replace every last use of C++.

You may have a point that Python would have replaced C++ in those places instead if Go had never materialized. It was clear C++ was already on the way out, with Python among those starting to push into its place around the time Go was conceived. Go found its success in the places where Python was also trying to replace C++.

jeswin
1 replies
14h31m

You may have a point that Python would have replaced C++ in those places instead if Go had never materialized.

I don't think Python was starting to occupy C++ space; they have entirely different abilities. Of course, I am also glad it didn't happen.

randomdata
0 replies
14h14m

I don't think so either, but as we move past that side tangent and return to the discussion, there was the battle of the 'event systems'. Node.js was also created in this timeframe to compete on much the same ground. And then came Go, after which most contenders, including Python, backed down. If you are writing these kinds of programs today, it is highly likely that you are using Go, Node.js, or some language that is even newer than Go (e.g. Rust or Elixir). C++ isn't even on the consideration list anymore.

bcrosby95
1 replies
14h41m

What domains are those? It seems to mostly be an alternative to what people have use(d) Java or C# for.

randomdata
0 replies
14h40m

The original Go announcement spells it all out pretty nicely.

unscaled
0 replies
12h7m

I'm not sure what you were meaning by "it".

The main domain the original team behind Go were aiming at was clearly network software, especially servers.

But there was no consensus on whether kernels could be a goal one day. Rob Pike originally thought Go could be a good language for writing kernels, if they made just a few tweaks to the runtime[1], but Ian Lance Taylor didn't see real kernels ever being written in Go[2]. In the pre-release versions of Go, Russ Cox wrote an example minimalistic kernel[3] that could directly run Go (the kernel itself was written in C and x86 assembly) - it never really went beyond running a few toy programs and eventually became broken and unmaintained, so it was removed.

[1]: https://groups.google.com/g/golang-nuts/c/6vvOzYyDkWQ/m/3T1D...

[2]: https://groups.google.com/g/golang-nuts/c/UgbTmOXZ_yw/m/NH0j...

[3]: https://github.com/golang/go/tree/weekly.2011-01-12/src/pkg/...

threeseed
4 replies
14h38m

(and an order of magnitude faster) alternative to Python

Python is increasingly an easy to use wrapper over low-level C/C++ code.

So in many use cases it is faster than Go.

justinclift
3 replies
11h33m

Than pure Go code, sure. But not really faster than Go code that's a wrapper over the same low-level C/C++ code.

Merovius
1 replies
10h21m

That depends. C function call overhead for Go is quite large (it needs to allocate a larger stack, put it on its own thread and prevent pre-emption) and possibly larger than for CPython, which relies on calling into C for pretty much everything it does, so obviously has that path well-optimized.

So I wouldn't be surprised if, for some use cases, Python calling C in a tight loop could outperform Go.
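A minimal sketch of how one could measure that per-call overhead with cgo (illustrative only; the actual numbers vary a lot by platform and Go version):

    // Build with cgo enabled (the default): go run cgocost.go
    package main

    /*
    static int add_c(int a, int b) { return a + b; }
    */
    import "C"

    import (
        "fmt"
        "time"
    )

    //go:noinline
    func addGo(a, b int) int { return a + b }

    func main() {
        const n = 5_000_000

        start := time.Now()
        for i := 0; i < n; i++ {
            _ = addGo(i, 1)
        }
        goPerCall := time.Since(start) / n

        start = time.Now()
        for i := 0; i < n; i++ {
            // Each call here crosses the Go/C boundary (stack switch, scheduler bookkeeping).
            _ = int(C.add_c(C.int(i), C.int(1)))
        }
        cgoPerCall := time.Since(start) / n

        fmt.Println("pure Go:", goPerCall, "cgo:", cgoPerCall)
    }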

gwd
0 replies
7h55m

So I wouldn't be surprised if, for some use cases, Python calling C in a tight loop could outperform Go.

I don't have experience with Python, but I can definitely say switching between Go and C is super slow. I'm using a golang package which is a wrapper around SQLite: at some point I had a custom function written as a call-back to a Go function; profiling showed that a huge amount of time was spent in the transition code marshalling stuff back and forth between Go and C. I ended up writing the function in C so that the C sqlite3 library could call it directly, and it sped up my benchmarks significantly, maybe 5x. Even though sqlite3 is local, I still end up trying to minimize requests and data shipped out of the database, because transferring data in and out is so expensive.

(And if you're curious, yes I have considered trying to use one of the "pure go" sqlite3 packages; in large part it's a question of trust: the core sqlite3 library is tested fantastically well; do I trust the reimplementations enough not to lose my data? The performance would have to be pretty compelling to make it worth the risk.)

I think in general discouraging CGo makes sense, as in the vast majority of cases a re-implementation is better in the long run; so de-prioritizing CGo performance also makes sense. But there are exceptions, particularly for libraries where you want functionality to be identical, like sqlite3 or Qt, and there the CGo performance is a distinct downside.
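For illustration, the kind of Go-callback registration described above looks roughly like this; the package isn't named in the comment, so this sketch assumes the common github.com/mattn/go-sqlite3 driver, where every invocation of the registered function re-enters Go across the cgo boundary:

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "strings"

        sqlite3 "github.com/mattn/go-sqlite3"
    )

    func main() {
        sql.Register("sqlite3_with_udf", &sqlite3.SQLiteDriver{
            ConnectHook: func(conn *sqlite3.SQLiteConn) error {
                // SQLite calls back into this Go function for every row that
                // uses normalize(), paying the cgo transition cost each time.
                return conn.RegisterFunc("normalize", func(s string) string {
                    return strings.ToLower(strings.TrimSpace(s))
                }, true)
            },
        })

        db, err := sql.Open("sqlite3_with_udf", ":memory:")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        var out string
        if err := db.QueryRow(`SELECT normalize('  Hello ')`).Scan(&out); err != nil {
            log.Fatal(err)
        }
        fmt.Println(out) // "hello"
    }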

iainmerrick
0 replies
10h21m

Do you have an example of that? What I’ve heard over and over in comments here is that a) C interop in Go is slow, and b) Go devs discourage using it.

(Java is a similar story in my experience.)

In Python, (b) at least is definitely not true.

bborud
4 replies
8h26m

It may help to understand the context. At the time Go was created you could choose between three languages at Google: Python, C++ and Java.

Well, to be honest, if you chose Python you were kind of looked down on as a bit of a loser(*) at Google, so there were really two languages: C++ and Java.

Weeeell, to be honest, if you chose Java you would not be working on anything that was really performance critical so there was really just one language: C++.

So we wrote lots and lots of servers in C++. Even those who strictly speaking didn't have to be very fast. That wasn't the nicest experience in the world. Not least because C++ is ancient and the linking stage would end up being a massive bottleneck during large builds. But also because C++ has a lot of sharp edges. And while the bro coders would never admit that they were wasting time looking over their shoulder, the grown ups started taking this seriously as a problem. A problem major enough to warrant having some really good people look into remedying that.

So yes, at Google, Go did replace lots of C++ and did so successfully.

(*) Yes, that sentiment was expressed. Sometimes by people whose names would be very familiar to you.

sebastianz
2 replies
7h27m

At the time Go was created you could choose between three languages at Google: Python, C++ and Java.

Out of curiosity, what languages can you currently choose from at Google?

ncruces
0 replies
3h15m

Just you guess: Python, C++ and Java… and Go.

Or JavaScript, Dart, Objective-C, Swift, Rust; even C#. But then it depends on the problem domain. Google is huge, so it depends. And even if you pick Python, C++, Java or Go, your team will already have it decided for you.

bborud
0 replies
3h8m

I haven't worked there for a long time so I wouldn't know. I don't even know if they are still as disciplined about what languages are allowed and how those languages are used.

Can someone still at Google chime in on this?

bborud
0 replies
1h52m

(sorry about the asterisk causing everything to be in italics. forgot about formatting directives when adding the footnote)

Thaxll
2 replies
14h58m

And yet most popular tools now written in Go used to be written in C++ - Kubernetes, databases and the like.

lmm
1 replies
12h59m

Kubernetes mostly displaced tools written in Ruby (Puppet, Chef, Vagrant) or Python (Ansible, Fabric?). While a lot of older datastores are written in C++, new ones that were started post-2000ish tended to be written in Java or similar.

Thaxll
0 replies
3h25m

Kubernetes has nothing to do with the Ruby / Python tools from your example; it's far more complex and needs performance. What you described is not what k8s is doing.

Kubernetes is the equivalent of Borg/Omega at Google, which is written in C++.

usrusr
1 replies
17h31m

Go appears to be made with radical focus on a niche that isn't particularly well specified outside the heads of its benevolent directorate for life. Opinionated to the point of "if you use Go outside that unnamed niche you got no-one to blame but yourself". Could almost be called a solution looking for a problem. But it also appears to be quite successful at finding problem-fit, no doubt helped by the clarity of that focus. They've been very open about what they consider Go not to be, or to ever become. That's unlike practically every other language; they all seem to eventually fall into the trap of advertising themselves with what boils down to "in a pinch you could also use it for everything else".

It's quite plausible that before Go, its creators would have chosen C++ for problems they consider in "The Go Niche". That would be perfectly sufficient to declare it a C++ replacement in that niche. Just not a universal C++ replacement.

pjmlp
0 replies
5h44m

Consider this, the authors have fixed some of the Plan 9 design errors including the demise of Alef, by creating Inferno and Limbo (yeah it was a response to Java OS, but still).

Where C is only used for the Inferno kernel, Limbo VM (with a JIT) and little else like Tk bindings, everything else in Inferno is written in Limbo.

Replace Limbo with AOT compiled Go, and that is what systems programming is in the minds of UNIX, Plan 9 and Inferno authors.

kaba0
0 replies
5h47m

So, it’s Java 1.2, but worse. Cool contribution!

Merovius
0 replies
10h26m

its creators explicitly said their goal was to replace C++

I think that is a far clearer goal if you look at C++ as it is used inside Google. If you combine the Google C++ style guide and Abseil, you can see the heritage of Go very clearly.

wredcoll
12 replies
18h13m

Man, arguments about the definition of "systems programming" are almost as much fun as the old "dynamic" vs "static" language wars.

fractalb
5 replies
11h53m

IIRC, Google tried to use Go for their Fuchsia TCP stack and then backtracked. Not a systems programming language for sure.

pjmlp
4 replies
10h30m

Sure it backtracked, because the guy pushing for Go left the team, and the rest is history.

Is writing compilers, linkers, IoT and bare metal firmware systems programming?

raggi
3 replies
8h24m

I worked on Fuchsia for many years, and maintained the Go fork for a good while. Fuchsia shipped with the gVisor-based (Go) netstack to Google Home devices.

The Go fork was a pain for a number of reasons, some were history, but more deeply the plan for fixing that was complicated due to the runtime making fairly core architectural assumptions that the world has fd's and epoll-like behavior. Those constraints cause challenges even for current systems, and even for Linux where you may not want to be constrained by that anymore. Eventually Fuchsia abandoned Go for new software because folks hired to rewrite the integration ran out of motivation to do so, and the properties of the runtime as-written presented atrocious performance on a power/performance curve - not suitable for battery based devices. Binary sizes also made integration into storage constrained systems more painful, and without a large number of components written in the language to bundle together the build size is too large. Rust and C++ also often produce large binaries, but they can be substantially mitigated with dynamic linking provided you have a strong package system that avoids the ABI problem as Fuchsia does.

The cost of crossing the cgo/syscall boundary remains high, and got higher over the time that Fuchsia was in major development due to the increased cost of spectre and meltdown mitigations.

The cgo/syscall boundary cost shows up in my current job a lot too, where we do things like talk to sqlite constantly for small objects or shuffle small packets of less than or equal to common mtu sizes. Go is slow at these things in the same way that other managed runtimes are - for the same reasons. It's hard to integrate foreign APIs unless the standard library already integrated them in the core APIs - something the team will only do for common use cases (reasonably so, but annoying when you're stuck fighting it constantly). There are quite a few measures like this where Go has a high cost of implementation for lower level problems - problems that involve high frequency integration with surrounding systems. Go has a lower cost of ownership when you can pass very large buffers in or out of the program and do lots of work on them, and when your concurrency models fit the channel/goroutine model ok. If you have a problem that involves higher frequency operations, or more interesting targets, you'll find the lack of broader atomics, the inability to cheaply or precisely schedule work problematic.

pjmlp
2 replies
8h13m

All valid reasons; however, as proven by the USB Armory's bare metal Go unikernel, had the people behind Go's introduction stayed on the team, battling for it, maybe those issues would have been sorted out with Go still in the picture, instead of a rewrite.

Similar to Longhorn/Midori versus Android, on one side Microsoft WinDev politics managed to kill any effort to use .NET instead of COM/C++, on the other side Google teams collaborated to actually ship a managed OS, nowadays used by billions of people across the world.

In both cases, politics and product management vision won over the relevance of the related technical stacks.

I always take with a grain of salt why A is better than B, only on technical matters.

raggi
1 replies
2h9m

I see you citing usb armory a lot, but I haven’t yet seen any acknowledgement that it too is a go fork. Not everything runs on that fork, some things need patching.

It’s interesting that you raise collaboration points here. When Russ was getting in go modules design he reached out and I made time for him giving him a brain dump of knowledge from working on ruby gems for many years and the bundler introduction into that ecosystem and forge deprecation/gemcutter transition, plus insights from having watched npm and cargo due to adjacencies. He took a lot of notes and things from that showed up in the posts and design. When fuchsia was starting to rumble about dropping go I reached out to him about it, hoping to discuss some of the key points - he never got back to me.

pjmlp
0 replies
1h2m

It is written in TamaGo, originally developed by people at F-Secure.

I don't see the issue with it being a fork; plenty of languages have multiple implementations, with various degrees of plus and minus.

As for the rest, thanks for sharing the experience.

kibwen
4 replies
17h52m

> the old "dynamic" vs "static" language wars.

We used to argue about dynamic vs static languages. We still do, but we used to, too.

ysofunny
1 replies
16h30m

then there's this https://en.wikipedia.org/wiki/Dynamic_programming

which has nothing to do with types nor variables but with algorithm optimization

jfoutz
0 replies
16h3m

The trick is to throw memory at it. If memoization helps, that'll work without the memory hit!

tharne
0 replies
3h2m

RIP Mitch, you were one of the greats.

klausnrooster
0 replies
12h34m

:) ...Mitch

pohl
0 replies
13h34m

I don’t recall anything but a single definition of the term until Google muddied the waters.

opportune
10 replies
16h29m

Seconding this. Go also has some opinionated standard libraries (want to differentiate between header casings in http requests because your clients/downstream services do? Go fuck yourself!) and shies you away from doing hacky, ugly, dangerous things you need in a systems language.

It’s absolutely an applications language.

aatd86
8 replies
14h3m

Headers are case-sensitive?

opportune
4 replies
11h45m

Only per the HTTP spec, and this is the same misunderstanding that the golang developers have. Because it's so common to preserve header casing as requests traverse networks in the real world, many users' applications or even developers' APIs depend on header casing whether intentionally or not. So if you want to interact with them, or proxy them, you probably can't use Go to do so (ok, actually you can, but you have to go down to the TCP level and abandon their http request library).

Go makes the argument that they can format your headers in canonicalized casing because casing shouldn't matter per the HTTP spec. That's fine for applications I guess, though still kind of an overreach given they have added code to modify your headers in a particular way you might not want to spend cycles on - but unacceptable for a systems language/infrastructure implementation.
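A quick sketch of the behavior in question, based on my reading of net/http rather than anything in this thread: the Header methods canonicalize names, and the usual escape hatch is writing to the underlying map directly, which the HTTP/1.1 code sends verbatim.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        h := http.Header{}

        h.Set("x-api-key", "secret")    // stored under the canonical key "X-Api-Key"
        fmt.Println(h.Get("X-API-KEY")) // "secret" - Get canonicalizes its argument too

        // Writing to the map directly bypasses canonicalization, so the
        // HTTP/1.1 transport sends the name with exactly this casing.
        h["x-WEIRD-case"] = []string{"kept"}
        fmt.Println(h) // map[X-Api-Key:[secret] x-WEIRD-case:[kept]]
    }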

unscaled
2 replies
10h24m

I think you wanted to say that headers are not case-sensitive according to the HTTP spec, but some clients and servers do treat header names as case-sensitive in practice.

What Go does here is kinda moot nowadays, since HTTP/2.0 and HTTP/3.0 force all header names into lower-case, so they would also break non-conformant clients and servers.

opportune
1 replies
9h57m

That is in fact what I meant to say, and I thought I said it. Anyway, HTTP/1.1 is still in use a lot of places.

I think most people here don’t have any experience building for the kind of use cases I’m considering here (imagine a proxy like Envoy, which btw does give you at least the option to configure header casing transformations). When you have customers that can’t be forced to behave in a certain way up/down stream, you have to deal with this kind of stuff.

unscaled
0 replies
6h32m

The Go standard library is probably being too opinionated here, but it's in line with the general worse-is-better philosophy behind Go: simplicity of implementation is more important than correctness of interface. In this case, the interface can even be claimed to be correct (according to the spec), but it cannot cover all use-cases.

If my memory serves me right, we did use Traefik at work in the past, and I remember having this issue with some legacy clients, which didn't expect headers to be transformed. Or perhaps the issue was with Envoy (which converts everything to lowercase by default, but does allow a great deal of customization).

aatd86
0 replies
6h46m

Wait, are the headers canonicalized if you retrieve them from r.Header where r is a request?

I mean, if the safest option is to conform to the HTTP spec, shouldn't there be an escape hatch for the rarer cases that's easier than going all the way down to the TCP level?

akoboldfrying
1 replies
13h40m

No, and that wasn't the claim being made. The claim being made was that there can be engineering value in preserving the case of existing headers.

Example: An HTTP proxy that preserves the case of HTTP headers is going to cause less breakage than one that changes them. In a perfect world, it would make no difference, but that isn't the world we live in.

aatd86
0 replies
9h8m

Are you sure they are discarded and unrecoverable? Can't that be simply recovered by using textproto.MIMEHeader and iterating over the header map?

Seems that it could be a middleware away, I don't see the big deal if so.

jrockway
0 replies
13h45m

They aren't, but because you can send Foo-Bar as fOo-BaR on the wire, someone somewhere depends on it. People don't read the specs, they look at the example data, and decide that's how their program works now.

Postel's Law allows this. A different law might say "if anything is invalid or weird, reject it instantly" and there would be a lot less security bugs. But we also wouldn't have TCP or HTTP.

troupo
0 replies
13h29m

and shies you away from doing hacky, ugly, dangerous things you need in a systems language.

But... You end up doing hacky and ugly things all the time because Go is such a restricted language with so many opinions about what should and should not be done. Generics alone...

randomdata
6 replies
18h12m

> It certainly isn't viable as systems programming language

It is perfectly viable as a systems programming language. Remember, systems are the alternative to scripts. Go is in no way a scripting language...

You must be involved in Rust circles? They somehow became confused about what systems are, just as they became confused about what enums are. That is where you will find the odd myths.

leoc
4 replies
17h39m

It’s all admittedly a somewhat handwaving discussion, but in ‘systems programming’ ‘systems’ is generally understood to be opposite to ‘applications’, not ‘scripts’.

randomdata
3 replies
17h34m

All software is application. That’s what software is for!

bmicraft
2 replies
16h59m

I wouldn't consider a driver an application

randomdata
0 replies
16h56m

application: the action of putting something into operation

What's a driver if not something that carries out an action of putting something (an electronic device, typically) into operation?

kazinator
0 replies
11h27m

We live in an age in which a PC running an OS, which has drivers in it, is something that can be done by Javascript in a browser.

cdogl
0 replies
18h6m

Indeed - I’ve seen this refrain about “systems programming” countless times. I’m not sure how one can sustain the argument that a “system” is only an OS kernel, network stack or graphics driver.

ysofunny
5 replies
16h31m

let's just pretend that when go lang people say "systems programming" they mean something closer to "network (systems) programming", which is where Go shines the brightest

kernal
4 replies
14h33m

And yet Google replaced the go based network stack in Fuchsia with rust for performance reasons.

pjmlp
1 replies
10h23m

After the guy responsible for it left the team.

ikiris
0 replies
9h42m

This sounds more like for perf reasons than for performance reasons.

kramerger
0 replies
4h55m

Hold on a minute.

You are confusing the network stack (as in OS development) and network applications. Go is the undisputed king of backend, but no reasonable person has ever claimed it's a good choice for OS development.

howenterprisey
0 replies
13h45m

I understand ysofunny's comment to have meant basically microservices/contemporary web backend.

AnimalMuppet
5 replies
15h10m

For people of Pike's generation, "systems programming" means, roughly, the OS plus the utilities that would come with an OS. Well, Go may not be useful for writing the OS, but for the OS-level utilities, it works just fine.

jayd16
4 replies
10h45m

Has it found success in OS-level utilities? What popular utilities are written in Go?

ziotom78
0 replies
9h45m

Not sure these are really popular, but I cannot resist advertising a few utilities written in Go that I regularly use in my daily workflow:

- gdu: a NCDU clone, much faster on SSD mounts [1]

- duf: a `df` clone with a nicer interface [2]

- massren: a `vidir` clone (simpler to use but with fewer options) [3]

- gotop: a `top` clone [4]

- micro: a nice TUI editor [5]

Building this kind of tool in Go makes sense, as the executables are statically compiled and are thus easy to install on remote servers.

[1]: https://github.com/dundee/gdu

[2]: https://github.com/muesli/duf

[3]: https://github.com/laurent22/massren

[4]: https://github.com/xxxserxxx/gotop

[5]: https://github.com/zyedidia/micro

pjmlp
0 replies
10h22m

Being self hosted?

Gokrazy userspace?

gVisor?

nolist_policy
0 replies
6h49m

Docker and Podman.

konart
0 replies
7h16m

Not sure what should be counted as OS-level.

Is docker cli and OS level? What about lazygit? chezmoi? dive? fzf?

Actually many popular utilities are written in Go

zozbot234
4 replies
18h6m

Early versions of Rust were a lot like Golang with some added OCaml flavor. Complete with general GC, green threading etc. They pivoted to current Rust with its focus on static borrow checking and zero-overhead abstractions very late in the language's evolution (though still pre-1.0 obviously) because they weren't OK with the heavy runtime and cumbersome interop with C FFI. So there's that.

masklinn
1 replies
8h39m

Complete with general GC, green threading etc.

AFAIK there was never "general GC". There was a GC'd smart pointer (@), and its implementation never got beyond refcounting, it was moved behind a feature gate (and a later-removed Gc library type) in 0.9 and removed in 0.10.

Ur-Rust was closer to an "applications" language for sure, and thus closer to Go's (possibly by virtue of being closer to OCaml), but it was always focused much more strongly on type safety and lifting constraints to types, as well as more interested in low-level concerns: unique pointers (~) and move semantics (if in a different form) were part of Rust 0.1.

That is what the community glommed onto, leading to "the pivot": there were application languages aplenty, but there was a real hunger for a type-heavy and memory-safe low level / systems programming language, and Rust had the bones of it.

tialaramex
0 replies
6h44m

a real hunger for a type-heavy and memory-safe low level / systems programming language, and Rust had the bones of it.

I didn't know I wanted this, but yes, I did want this and when I got it I was much more enthusiastic than I'd ever been about languages like Python or Java.

I bounced off Go, it's not bad but it didn't do anything I cared about enough to push through all the usual annoyances of a new language, whereas Rust was extremely compelling by the time I looked into it (only a few years ago) and has only improved since.

kibwen
1 replies
17h48m

Both Rust and Go are descendants of Limbo, Pike's prior language, although while Limbo's DNA remains strong in Go it's much more diffuse in Rust.

sapiogram
0 replies
5h17m

While those influences are important to Rust's history, they were mostly removed from the language before 1.0, notably green threads and the focus on channels as a core concurrency primitive. Channels still exist as a library in stdlib, but they're infinitely buffered by default, and aren't widely used.

veqq
2 replies
16h4m

runtime ... GC ... not viable as systems programming language

A GC can work fine. At the lower levels, people want to save every flop, but at the higher levels uncounted millions are wasted by JS, Electron apps etc. etc. We can sacrifice a little on the bottom (in the kernel) for great comfort, without a difference. But you don't even have to make sacrifices. A high performance kernel only needs to allocate at startup, without freeing memory, allowing you to e.g. skip GC completely (turn it off with a compiler flag). This does require the kernel to implement specific optimizations though, which aren't typically party to a language spec.
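(In current Go, for what it's worth, the off switch is a runtime knob rather than a compiler flag: GOGC=off, or the equivalent call below. A tiny sketch of mine:)

    package main

    import (
        "fmt"
        "runtime/debug"
    )

    func main() {
        // A negative percentage disables the collector entirely until
        // SetGCPercent is called again (or a GOMEMLIMIT soft limit is hit).
        old := debug.SetGCPercent(-1)
        fmt.Println("previous GOGC:", old)

        // ... allocate-only workload goes here; nothing is ever reclaimed ...
    }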

Anyway, some OSes implemented with a GC: Oberon/Bluebottle (the Oberon language was designed specifically to implement the Oberon OS), JavaOS, JX, JNode, Smalltalk (which was the OS for the first Smalltalk systems), Lisp on the old Lisp machines... Interval Research even worked on a real-time OS written in Smalltalk.

Indeed, GC can work in hard real-time systems, e.g. the Aonix PERC Ultra, an embedded real-time Java used for missile control (but Go's current runtime's GC pauses are unpredictable...).

Particularly when we consider modern hardware problems (basic OS research already basically stopped in the 90s, yay RISC processor design...), with minimal hardware support for high-speed context switching because of processor speed vs. memory access latency... Well, it's not like we can utilize such minuscule operations anyway. Why don't we just have sensible processors which don't encourage us to unroll loops, and which have die space to store context...

There were Java processors [2] which implemented the JVM in hardware, with Java bytecode as machine code. Before LLVM gained dominance, there were processors optimized for many languages (even Forths!).

David Chisnall, an RTOS and FreeBSD contributor, recently went into quite a bit of depth [1], ending with:

everything that isn’t an allocator, a context-switch routine, or an ISR, can be written in a fully type-safe GC’d language

[1] https://lobste.rs/s/e6tz0r/memory_safety_is_red_herring#c_gf...

[2] https://www.electronicdesign.com/technologies/embedded/artic...

davedx
1 replies
7h5m

The nice thing about Java is you can choose which GC to use

pjmlp
0 replies
5h37m

Not only the GC, the JIT compiler, the AOT compiler, the full implementation even.

0xpgm
1 replies
12h39m

Maybe the better term for Go would be server-systems programming.

bborud
0 replies
8h23m

Server programming?

The term "systems programming" seems to be interpreted very differently by different people which in practice renders it useless. It is probably best to not use it at all to avoid confusion.

pjmlp
0 replies
10h35m

Niklaus Wirth, rest his soul, would disagree.

Like would the folks at WithSecure, selling the USB Armory with Go written firmware.

https://www.withsecure.com/en/solutions/innovative-security-...

Back in my day, writing compilers and OS services were also systems programming.

kazinator
0 replies
12h57m

The shell scripts that bring up a machine are also "systems programming".

Matl
0 replies
10h25m

"systems" can mean "distributed systems", "network systems" etc. both of which Go is suitable for. It's obviously not a great choice for "operating systems" which is well known.

zozbot234
31 replies
19h55m

Go was probably the key technology that migrated server-side software off Java bloatware to native containers

Interesting point of view - Golang might be pithily described as "Java done right". That has little to do with "systems programming" per se but can be quite valuable in its own terms.

grumpyprole
27 replies
19h39m

Java has a culture of over-engineering, to the point where even a logging library contains a string interpolator capable of executing remote code. Go successfully jettisoned this culture, even if the language itself repeated many of the same old mistakes that Java originally did.

zimpenfish
12 replies
19h11m

Java has a culture of over-engineering [which] Go successfully jettisoned

[looks at the code bases of several recent jobs] [shakes head in violent disagreement]

If I'm having to open 6-8 different files just to follow what a HTTP handler does because it calls into an interface which calls into an interface which calls into an interface (and there's no possibility that any of these will ever have a different implementation) ... I think we're firmly into over-engineering territory.

(Anecdata, obviously)

bb88
5 replies
18h45m

But don't those exist primarily for unit testing? That was my understanding of why interfaces were there.

If you wanted a mock, you needed an interface, even if there would ever only be one implementation of it in production.
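
A minimal sketch of the pattern being described (names are hypothetical): a single-implementation interface introduced mostly so that tests can inject a fake.

    package store

    import (
        "database/sql"
        "errors"
    )

    // User is a hypothetical domain type.
    type User struct {
        ID   string
        Name string
    }

    // UserStore has exactly one production implementation, but the interface
    // exists so that tests can substitute a fake without a real database.
    type UserStore interface {
        GetUser(id string) (User, error)
    }

    // sqlUserStore is the only implementation that ever runs in production.
    type sqlUserStore struct{ db *sql.DB }

    func (s sqlUserStore) GetUser(id string) (User, error) {
        var u User
        err := s.db.QueryRow("SELECT id, name FROM users WHERE id = ?", id).Scan(&u.ID, &u.Name)
        return u, err
    }

    // fakeUserStore is what a test injects in place of the real store.
    type fakeUserStore struct{ users map[string]User }

    func (f fakeUserStore) GetUser(id string) (User, error) {
        u, ok := f.users[id]
        if !ok {
            return User{}, errors.New("not found")
        }
        return u, nil
    }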

tormeh
3 replies
15h17m

I hate this pattern. Needless indirection for insignificant benefit.

bb88
1 replies
13h40m

Golang "punishes" you for wanting to write a unit test around code.

You need to refactor it to use an interface just to unit test it.

sidlls
0 replies
12h26m

No, you don’t, unless you’re of the opinion that actual data structures with test data should not be used in a unit test.

akoboldfrying
0 replies
13h25m

It's indeed horrible when debugging. OTOH, there's merit to the idea that better testing means less overall time spent (on either testing or debugging), so design choices that make testing easier provide a gain -- provided that good tests are actually implemented.

zimpenfish
0 replies
9h31m

But don't those exist primarily for unit testing?

I believe that's why people insert them everywhere, yes, but in the codebases I'm talking about, many (I'd say the majority, to be honest) of the interfaces aren't used for testing because they've just been cargo-culted rather than actually considered.

(Obviously this is with hindsight - they may well have been considered at the time but the end result doesn't reflect that.)

dharmab
2 replies
18h16m

Why would those require 6 separate files?

zimpenfish
0 replies
9h22m

They don't; you could put it all in one file but people tend to separate interfaces from implementation from types from ...

theshrike79
0 replies
10h33m

Interface, actual implementation, Factory, FactoryImpl, you get the idea.

Java lends itself to over-engineering more than most languages. Especially since it seems that every project has that one committer who must be getting paid per line and creates the most complex structures for stuff that should've been a single static function.

grumpyprole
0 replies
19h7m

I stand corrected!

evantbyrne
0 replies
17h16m

Java is a beautiful and capable language. I have written ORMs in both Java and Go, and the former was much easier to implement. Java has a culture problem though where developers seem to somehow enjoy discovering new ways to complicate their codebases. Beans are injected into codebases in waves like artillery at the Somme. Errors become inscrutable requiring breakpoints in an IDE to determine origin. What you describe with debugging a HTTP handler in your Go project is the norm in every commercial Java project I have ever contributed to. It's a real shame that you are seeing these same kinds of issues in Go codebases.

davedx
0 replies
6h55m

Agree. Open source OAuth go libraries have this too. It's like working with C++ code from the bad old days when everyone thought inheritance was the primary abstraction tool to use.

dekhn
11 replies
19h25m

I asked Rob why he didn't like Java (Gosling was standing nearby) and he said "public static void main"

grumpyprole
5 replies
19h14m

Nice. My reply would have been something like: it combines the performance of Lisp with the productivity of C++. These days Java the language is much better though, thanks to Brian Goetz.

shakow
3 replies
16h23m

it combines the performance of Lisp with the productivity of C++

Is that supposed to be a jab? Because IME SBCL Lisp is in the same ballpark as Go (albeit offering a fully interactive development environment), and C++ is far from being the worst choice when it comes down to productivity.

grumpyprole
2 replies
12h3m

Hopefully you agree Lisp is more productive than C++? Lisp is however not quite fast or efficient enough to displace C++ completely, mainly because, like Java and Go, it has a garbage collector. C++ was very much the language in Java's crosshairs. Java made programming a bit safer, nulls and threads notwithstanding, but was certainly not as productive as Lisp. Meanwhile Lisp evolved into Haskell and OCaml, two very productive languages which thankfully are inspiring everyone else to improve. Phil Wadler (from the original Haskell committee) has even been helping the Go team.

shakow
0 replies
9h23m

I agree; my point is simply that C++ is still OK-ish w.r.t. productivity, and Lisp is OK when it comes down to performance.

pjmlp
0 replies
10h15m

Common Lisp is also among the GC-based languages that offer mechanisms for GC-free allocation, and it has value types.

The only reason it fell out of use is the AI winter, and companies moving away from Lisp.

didntcheck
0 replies
8h32m

The performance of the JVM was definitely a fair criticism in its early years, and still is when writing performance-critical applications like databases, but it's still possibly the fastest managed runtime around, and is often only a margin slower than native code on hot paths. It seems the reputation has stuck though, to the point that I've seen young programmers make stock jokes about Java being slow when their proposed alternative is Python.

blibble
2 replies
18h0m

still better than case deciding visibility

nyolfen
1 replies
14h31m

i love this feature to be honest

cryptos
0 replies
10h33m

I consider that a bad practice, because it doesn't make things obvious. I guess it works so well in Go because the language itself is small, so you don't have to remember many of these "syntax tricks". Making things explicit, but not too verbose, is the best way in my opinion. JetBrains has done amazing work in this area with Kotlin.

    for (item in collection) {
        ...
    }

    list.map { it + 1 }

    fun printAll(vararg strings: String)

    constructor(...)

    companion object
I like the `for..in`, which reads like plain English. Or `vararg` is pretty clear - compare that to "*" and the like in other languages. Or `constructor` cannot be more explicit; there is no need to teach anyone that the name must be the same as the class name (and changed accordingly). The same is true for `companion object` (compare with Scala).

pron
0 replies
18h28m

Well, that's taken care of: https://openjdk.org/jeps/463 (well you need `void main`)

didntcheck
0 replies
8h29m

I've always found it eye rolling how often this is given as some sort of "mic drop" against Java. Yeah it's a little weird having to have plain functions actually be "static methods", but it's a very minor detail. And I really hope people aren't evaluating their long-term use of a language based on how tersely you can write Hello World

smitty1e
0 replies
18h37m

It seems the goal of Java is to have one executable line of code per file.

Thus, the Java exception trace in the log file is almost like an interactive debug trace.

Whether that is a bug or a feature is an exercise for the reader.

pjmlp
0 replies
10h18m

Go code is equally over-engineered when it gets into the hands of enterprise architects.

You just happen to be looking at the wrong spot: see Kubernetes, YAML spaghetti, and plenty of other stuff originating from Go in the enterprise space.

unscaled
0 replies
6h12m

From looking at what the Go team had to say about Go in its earliest days, Go had very little to do with Java, and they weren't very concerned with fixing Java's issues.

The "Bloated Abstractions" issue in Java is more of a cultural thing than an issue of the language. You could even say it's partially because early Java (especially before Java 1.5) was too much like Go!

Java used to have the same philosophy around abstractions, and Sun/Oracle were pretty conservative about adding new language features. To compensate for the lack of good language-level abstractions, Java developers used complicated techniques and design patterns, for example:

1. XML configuration, because there were no attributes.
2. Attributes, because there were no generics and closures.
3. Observers/Visitors/Strategies/etc., because there weren't any closures.
4. Heavy inheritance, because there was no delegation.
5. Complicated POJOs and beans, since Java didn't have properties or immutable records.

sigzero
0 replies
15h54m

No, not even "pithily" is it "Java done right".

kaba0
0 replies
2h29m

More like Java 1.2 sold in a worse package.

liampulles
20 replies
19h33m

I was a Java dev and love using Go now, but I have to say I'm not sure if many of my Ex-Java-Colleagues would like Go. Go is kind of odd in that even when it was new, it was kind of boring.

I think a lot of people in the Java world (not least myself) enjoy trying to refactor a codebase for new Java features (e.g. streams, which are amazing). In the Go world, the enjoyment comes from trying to find the simplest, plainest abstractions.

groestl
9 replies
18h43m

it was kind of boring.

As a Java dev, I love boring. That's why I picked Java. Boring means less outages.

find the simplest, plainest abstractions.

Not sure I'd give that medal to Go.

techdragon
7 replies
17h45m

When did go get abstractions? (Only half joking)

Isn't the entire language designed explicitly to prevent programmers from building their own sophisticated abstractions that could confuse other programmers who don't understand another person's code? As I understand it, if you can read Go and understand basic programming you should be competent with Go, and if you know your algorithms you should be proficient.

I hated old Java, but the modern language isn't as bad now: people have added some better syntax shortcuts, the libraries are nearly twenty years more polished, and the IDE can nearly half-write my code for me, so the boilerplate and mind-numbing aspects aren't so bad… I loathe Go because using it feels like programming with my hands tied behind my back, typing on a keyboard with sandpaper keycaps. Despite that, I didn't bother "learning" Go; I could just read it based on my Python/C/Basic/Java/C# experience instead of needing any extra learning.

plorkyeran
3 replies
16h17m

My experience with reading Go is that the language not giving tools to build good abstractions has failed to stop developers from trying to do so anyway. There's never a line of code where I just plain don't know what's even going on syntactically as some languages can have, but understanding what it's actually doing can still require hopping through several files.

valenterry
2 replies
13h3m

In short: a simple (programming) language does mean that every small part/line is simple. But it doesn't mean that the combination of all parts/lines is simple. Rather the opposite.

cryptos
1 replies
10h51m

Very true! I think a lot of the accidental complexity of early Java systems was rooted in the not-so-powerful language. If the language is too powerful (like Scala 2), developers do insane things with it. If the language is not powerful enough, developers create their own helpers and tricks everywhere and have to write a lot of additional code to do so.

Just compare Java streams with how collections are handled in Go and scratch your head at how someone can come up with such a restricted language in this century.
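
For concreteness, a minimal sketch of the Go side of that comparison (User is a hypothetical type): where Java would use a stream pipeline, Go's standard library leaves you with an explicit loop.

    package main

    import (
        "fmt"
        "strings"
    )

    // User is a hypothetical type for illustration.
    type User struct {
        Name   string
        Active bool
    }

    // activeNamesUpper does what a filter/map pipeline would do elsewhere:
    // loop, test, append.
    func activeNamesUpper(users []User) []string {
        var names []string
        for _, u := range users {
            if u.Active {
                names = append(names, strings.ToUpper(u.Name))
            }
        }
        return names
    }

    func main() {
        users := []User{{"ada", true}, {"bob", false}}
        fmt.Println(activeNamesUpper(users)) // [ADA]
    }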

groestl
0 replies
10h30m

and have to write a lot of additional code to do so.

And most importantly: you have to read a lot of code like this, and understand its assumptions, failure modes, runtime behavior and bugs, which are different every time. Instead of just reading "ConcurrentHashMap" and being done with your day.

zozbot234
1 replies
17h34m

K8s just so happens to be coded in Golang. A quick look at that overall codebase should be enough to disabuse people of this notion that Golang developers cannot possibly come up with confusing or overly sophisticated abstractions.

vasac
0 replies
2h5m

Maybe because k8s was originally written in Java ;)

sidlls
0 replies
12h29m

Eh, not really. Go's philosophy around abstractions is quite poor. Duck typing begs engineers to create poor abstractions, such that simply reading a codebase does not necessarily lead to understanding. The bolted-on generics implementation actually makes this worse.

theshrike79
0 replies
11h39m

As a Java dev, I love boring. That's why I picked Java. Boring means less outages.

This is why I personally love Go too :)

There's very little room for fancy tricks, in most cases there is just one way to do things. It might be verbose, but writing code is the least time consuming part of my job anyway.

Someone
4 replies
18h41m

Go is kind of odd in that even when it was new, it was kind of boring.

Java was designed to be boring, too. That’s why, for example, it doesn’t have unsigned integers: it means programmers need not spend time choosing between signed and unsigned integers.

It evolved away from that.

brabel
2 replies
10h24m

Yeah, Java has been trying to add every feature under the Sun recently and it's definitely not a boring language anymore (since Java 21 at least, it's impossible to claim otherwise with things like pattern matching being in the language).

As a Java guy, I think this is looking like a desperate attempt to remain relevant while forgetting why the language succeeded in the first place.

kaba0
1 replies
2h30m

That's a bad take. Java is still very, very conservative with every change, and new features almost always have only local behavior, so not knowing them still leaves you able to understand a program.

Records, for example, are a very easy concept, fixing the similar feature in, say, C#, where they are mutable. Sealed classes/interfaces are a natural extension of the already existing final logic. It just puts a middle option between no other class and all other classes being able to inherit from a superclass.

neonsunset
0 replies
2h14m

C# records default to immutability. However, struct records, being a lower-level construct, default to mutable (which can be changed with the readonly keyword):

    record User(string Name, DateOnly DoB); // immutable
    record struct Cursor(int X, int Y); // mutable
    readonly record struct Point(int X, int Y); // immutable

mqus
0 replies
8h3m

no unsigned integers gave us signed bytes. Not sure if that made things simpler.

resource0x
2 replies
17h51m

Almost every golang program I've seen was ugly. It's strange given that they designed the language from scratch with all the ugliness ingrained in its structure from day one.

geodel
1 replies
17h37m

Different people different taste, different context, different standards of beauty.

The only strange thing here is that you are presenting opinion as some kind of fact.

resource0x
0 replies
16h51m

Different people different taste, different context, different standards of beauty.

Every statement about aesthetics is subjective, you don't have to remind me of that. BTW, what did YOU write about Amazon shows 6 days ago? No one was pontificating about your opinion, right?

Please repent :-)

davewritescode
1 replies
17h31m

Working with deeply nested data structures in Go is still frustrating; it's one place where Java still wins, thanks to the streams API.

kaba0
0 replies
2h30m

There is no area where Java would fare worse than Go.

0cf8612b2e1e
20 replies
19h49m

Not sure I would agree with the community leading aspect. It still feels like Google decides.

My particular point would be versioning. At first Go refused to acknowledge the problem. Then, when there was finally growing community consensus, Go said forget everything else, now we are doing modules.

I also recall the refusal to make monotonically-increasing time a public API until Cloudflare had a leap-second outage.

kevingadd
17 replies
19h40m

Personally their handling of versioning, generics and ESPECIALLY monotonic time (in all 3 cases, seemingly treating everyone raising concerns about the lack of a good solution as if they were cranks and/or saying fix it yourself) definitely soured me on Go and I would never choose it for a project or choose to work for a company that uses it as language #1 or language #2.

It just left a bad taste in my mouth to see the needs and expertise of actual customers ignored by the Go team that way since the people in charge happened to be deploying in environments (i.e. Google) where those problems weren't 'real' problems

Undeniable that people have built and shipped incredible software with it, though.

Groxx
7 replies
18h41m

Package management is a very blatant entry for this list too.

okanat
6 replies
17h50m

Backend people use Go in my company. They do great things with it. It works well enough when the interface between a Go program and another one is a socket kind of thing.

But we also have a couple of system utilities for embedded computers written in Go. I still get frustrated that I have to go and break my git configuration to enable ssh-based git clones and set a bunch of environment variables for private repos. Then there is the CGO stuff, like reading comments as code interfaces. Those things are an incredible waste of the embedded developers' time, and they make onboarding people needlessly harder. Go generally spits out cryptic errors when building private repos like those.

I always wanted, and still want, to create a wrapper that launches a container, applies whatever "broken" configuration makes the Go compiler happy, figures out file permissions, and runs the compiler. The wrapper should be the only Go executable on my host system, and each repo should come with a config file for it.

starttoaster
4 replies
17h18m

I still get frustrated that I have to go and break my git configuration to enable ssh-based git clones and

Just curious.. But why would you disable ssh-based git authentication? It's significantly more convenient when interacting with private repositories than supplying a username and password to https git endpoints.

set a bunch of environment variables for private repos.

Set up a private Go module proxy. Use something like Athens. The module proxy can handle authentication to your private module repositories itself, then you just add a line in your personal bashrc that specifies your proxy.

In general I don't have complaints with the things you take issue with so I'll digress on those.

shakow
2 replies
16h28m

But why would you disable ssh-based git authentication?

Ask the Go developers. AFAIK it's the only package manager where I have to change my global git configuration to make it work. Even the venerable CPAN and tlmgr behave better.

https://stackoverflow.com/questions/27500861/whats-the-prope...

starttoaster
1 replies
14h44m

Yeah, this isn't at all necessary. It might work for you but it's not how we accomplish the same thing. See my original comment for what we do.

shakow
0 replies
11h11m

I'm sorry, but I didn't catch it; how exactly do you `go get` private github repos?

okanat
0 replies
10h12m

Just curious.. But why would you disable ssh-based git authentication?

I don't disable it. However, not every git repo requires ssh to pull. When working with other languages, if there is a library that I simply depend on, it is perfectly fine to use https only, and I do.

However, to use private repos with Go, one has to modify their global git configuration to reroute all https traffic to ssh, because Go's module system uses only https and the private repos are ssh-authenticated. There is no way to specify which repo is ssh and which is https. The last time I used Go was 1.19.

Set up a private Go module proxy. Use something like Athens. The module proxy can handle authentication to your private module repositories itself, then you just add a line in your personal bashrc that specifies your proxy.

Why should we put more things on our stack just to make a language, which claims to be modern, work? Why do we have to change the global configuration of a build server to make it work? Rust doesn't require this. Heck, our Yocto BitBake recipes for C++ can do crazy things with URLs, including automatically fetching submodules.

Maybe it would make sense to make that change if we used Go every day, but we don't.
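
For reference, the global workaround being described usually boils down to two settings (the org name here is hypothetical): GOPRIVATE to keep the public module proxy and checksum database out of the way, plus a git URL rewrite so the direct https fetches the go command makes end up authenticated over ssh.

    # skip the public module proxy / checksum database for these module paths
    export GOPRIVATE=github.com/yourorg/*

    # rewrite the https URLs used by `go get` into ssh ones
    git config --global url."git@github.com:yourorg/".insteadOf "https://github.com/yourorg/"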

masklinn
0 replies
8h30m

Then there is CGO stuff like reading comments as code interfaces.

That's not exactly novel, and while I agree that it's meh, what really grinds my gears is the claim that Go doesn't have pragmas or macros while they're over there using specially formatted comments as exactly that, like it's 2001-era Java.

hnarn
4 replies
18h51m

the needs and expertise of actual customers

What is an ”actual customer” in the context of the Go programming language?

kevingadd
3 replies
14h39m

Anyone who's run an open source project is used to getting feature requests or complaints from groups like:

* people who are merely interested but have no plans to use your project

* people with strong opinions not backed by actual experience

* people with a specific interest (like a new API or feature) who want to integrate it into as many projects as possible

From a naive perspective, it makes sense to treat a request like 'we need monotonic time' as something that doesn't necessarily have any merit. The Go team are very experienced and opinionated, and it seems like it was a request that ran against their own instincts. The design complication probably was distasteful as well.

The problem is, the only reason they never needed monotonic time in the past was that many of them spent all their time working in special environments that didn't need it (Google doesn't do leap seconds). In practice other people shipping software in the wider world do need it, and that's why they were asking for it. Their expertise was loudly disregarded even though the requests came with justification and problem scenarios.

FridgeSeal
2 replies
10h29m

For anyone not familiar with the monotonic time issue, the implementation was found to be incorrect, and the Go devs basically closed it and went "just use Google's smeared time like we do lol, not an issue, bye".

It did eventually get fixed I believe, but it was a shitty way of handling it.

tubthumper8
0 replies
7h20m

For reference, the GitHub thread is: https://github.com/golang/go/issues/12914

masklinn
0 replies
8h15m

Even the "fix" is... ugh: instead of exposing monotonic time, time.Time contains both a wallclock time and an optional monotonic time, and operations update and use "the right one(s)".

Also it's not that the implementation was incorrect, it's that Go simply didn't provide access to monotonic time in any way. It had that feature internally, just gave no way to access it (hence https://github.com/golang/go/issues/16658).
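
Concretely, the hybrid design looks like this in use (a minimal sketch; the Round(0) trick for stripping the monotonic reading is the documented escape hatch):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now() // carries both a wall-clock and a monotonic clock reading

        time.Sleep(10 * time.Millisecond)

        // Subtraction uses the monotonic readings when both operands carry one,
        // so a wall-clock jump can't make the result negative.
        elapsed := time.Since(start)

        // Round(0) strips the monotonic reading, leaving only the wall clock.
        wallOnly := start.Round(0)

        fmt.Println(elapsed, wallOnly)
    }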

pa7ch
2 replies
18h17m

I mostly loved what they did with modules/package mgmt. I think SIV was a mistake, but modules were miles better than the previous projects, even with SIV. Some people seemed to take it very personally that the Go team didn't adopt their prior solution, but idk why they expected the Go team to use their package manager. I think the SAT-style package manager that was proposed would have created a lot more usability problems for developers and would have been much harder to maintain.

Groxx
1 replies
10h26m

They took it personally because Go led the community and those project leaders on for years that it would be looking, learning, and communicating...

And then dropped the module spec and implementation and mandated it all in about two days. With no warning or feedback rounds or really any listening at all, just "here it is, we're done", out of nowhere.

They have every right to be personally insulted by that.

kitd
0 replies
9h48m

2 days? ISTR discussions going on for at least 6 months comparing dep with what the 'official' one would do.

starttoaster
0 replies
17h22m

It just left a bad taste in my mouth to see the needs and expertise of actual customers ignored by the Go team that way since the people in charge happened to be deploying in environments (i.e. Google) where those problems weren't 'real' problems

I feel for this, but only to an extent. It's hard to work in any service industry and retain any notion of, "the customer intelligently knows what they want," as a part of your personal beliefs. At the end of the day, you had an idea for a product, and you have to trust your gut on that product's direction, which is going to make some group of people feel a little unheard.

voidhorse
1 replies
15h48m

I think Go's language leadership is one of the worst if not the worst I've ever seen when it comes to managing a language community/PR. Both Ian and Rob come off as dismissive of the community and sometimes outright abrasive in some of the interactions I've seen. Russ Cox seems like a good person, though.

They probably think being hardheaded "protects" the language from feature creep and bad design, but it has also significantly delayed progress (see generics) and generally turned me off from participating in language development or the community in any meaningful way, even though I actually like the language. I think there are ways to prevent feature creep and steer the language well without being dismissive or a jerk.

aatd86
0 replies
14h1m

Not my experience (except for the Russ bit :o)

I've actually been quite impressed by Ian's patience.

People are at different levels of understanding and sometimes it's hard to communicate.

closeparen
9 replies
19h26m

Restricting the language to its target use: systems programming, not applications or data science or AI...

Go is used extensively in server-side business applications. Arguably it shouldn't be, but it is.

hnarn
8 replies
18h49m

Why shouldn’t it be?

eweise
5 replies
18h38m

Too low level and lacks the power to cleanly model the business domain.

dharmab
3 replies
18h14m

"Too low level" "lacks the power" - I don't understand what this means. What are things that are hard to do in Go business applications that other languages do better?

za3faran
0 replies
16h58m

It lacks modeling capability that you'd find even in languages like Java and C#. Enums, records, pattern matching, switch expressions, and yes even inheritance where it makes sense.

shakow
0 replies
16h21m

What are things that are hard to do in Go business applications that other languages do better?

Streams, meta-programming, enums, "modern" switches, generators, portable types in CGo, streamlined error handling, safer concurrency primitives, nil, partial struct initialization, ...

rowls66
0 replies
17h8m

Here is an example. Go lets structs be passed by value or by reference. The programmer needs to decide, and that adds complexity that is largely irrelevant for modeling complex business logic. Java does not provide a choice, which keeps it simple.
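
A minimal sketch of the choice being described (Account is a hypothetical type): the same one-line body behaves differently depending on whether the struct is passed by value or by pointer.

    package main

    import "fmt"

    // Account is a hypothetical business entity.
    type Account struct{ Balance int }

    // The copy is modified; the caller's value is untouched.
    func depositByValue(a Account, amount int) { a.Balance += amount }

    // The caller's value is modified through the pointer.
    func depositByPointer(a *Account, amount int) { a.Balance += amount }

    func main() {
        acct := Account{Balance: 100}
        depositByValue(acct, 50)
        fmt.Println(acct.Balance) // 100: the deposit was silently lost
        depositByPointer(&acct, 50)
        fmt.Println(acct.Balance) // 150
    }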

closeparen
0 replies
18h36m

Go has pretty powerful composition, reuse, higher-order functions etc. for dealing with byte arrays and streams. Not so much for business domain entities.

wavemode
1 replies
18h13m

I believe even the language's own designers would agree with that sentiment. There's just generally a lot of things about Go that are great for low-level microservices but not great for 1M+ line of code business applications maintained by large teams.

I can't speak for others, but personally if I'm writing software with complex business logic, I'd want null safety, better error handling, a richer type system, easier testing/mocking... I've also never liked that a panic in one goroutine crashes the whole application (you can recover if it's your own code, sure, but not if it happened in a goroutine launched by some library).

dx034
0 replies
9h30m

I'd disagree with most of that, but the panic in goroutines really hits home. It's so annoying to have to remember to implement recover in every goroutine you start to avoid crashing your application. I don't get why there's no global recover option that can recover from panics in goroutines as well.
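
For reference, this is the per-goroutine boilerplate being complained about (riskyWork is a hypothetical stand-in): the recover has to sit in a deferred function inside the same goroutine, otherwise the panic takes down the whole process.

    package main

    import (
        "log"
        "time"
    )

    // riskyWork stands in for library or business code that may panic.
    func riskyWork() { panic("boom") }

    func main() {
        go func() {
            // Without this deferred recover, the panic below would terminate
            // the entire process, not just this goroutine.
            defer func() {
                if r := recover(); r != nil {
                    log.Printf("goroutine recovered: %v", r)
                }
            }()
            riskyWork()
        }()

        time.Sleep(100 * time.Millisecond) // crude wait so the goroutine gets to run
    }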

scythe
5 replies
19h38m

Restricting the language to its target use: systems programming, not applications or data science or AI...

The only Go I ever touched in industry was the backend of a web-app at Salesforce. I'm not sure this counts as "systems programming".

https://engineering.salesforce.com/einstein-analytics-and-go...

WatchDog
4 replies
19h31m

Rob describes go as a language for writing server applications, and I think that is a much more applicable term than systems programming.

bb88
2 replies
18h40m

Drew DeVault called it an "internet" language back in 2021. And to that I more or less agree.

Read footnote 1 for context.

https://drewdevault.com/2021/04/02/Go-is-a-great-language.ht...

cryptos
1 replies
10h30m

But then again the internet is everywhere now: desktop, servers, watches, washing machines, industrial systems, sensors ... So "internet language" is a somewhat pointless term.

iainmerrick
0 replies
9h55m

That term isn’t meant to include mobile apps, desktop apps, web apps (even though those all use the internet, of course). Nobody is using Go for any of those, as far as I know.

So I think it is a useful term, and captures the things Go is good at surprisingly well.

sanderjd
0 replies
18h20m

Yep, I'd say one of the things they could have done better is in making this distinction more clear to people. I spent multiple years being confused about what made go a "systems" language, when it didn't seem very good for that at all. When all the devops / infrastructure tooling started being written in it, its niche suddenly became more clear to me.

munificent
4 replies
14h25m

> It's interesting to compare Dart, which has zero uptake outside Flutter

Caveat: I work on Dart.

I don't see that that's a very damning critique of Dart. Every language needs libraries/frameworks to be suited for a domain. Flutter is a framework for writing client apps in Dart. Without Flutter, no matter how much you like Dart the language, you'd be spending a hell of a lot of time just figuring out how to get pixels on screen on Android and iOS. Few people have the desire or time for that.

Anyone writing applications builds on a stack of libraries and frameworks. The only difference I see between Go and Dart with Flutter is that Go's domain libraries for networking, serialization, crypto and other stuff you need for servers are in the standard library.

Dart has a bunch of basics in the built in libraries like collections and asynchrony, but the domain-specific stuff relies on external packages like Flutter.

That's in large part because Dart has had a robust package management story from very early on: many "core" libraries written and maintained by the Dart team are still shipped using the package manager instead of being built-in libraries because it makes them much easier to evolve.

I prefer that Flutter isn't baked into Dart's standard libraries, because UI frameworks tend to be shorter-lived than languages. Flutter is wonderful, but I wouldn't be surprised if twenty years from now something better comes along. When that happens, it will be easier for Dart users to adopt it and not use Flutter because Flutter isn't baked in.

iainmerrick
3 replies
10h3m

I don’t disagree with all that but it seems tangential to the point being made, that people just aren’t using Dart except for Flutter apps. So compared to Go it’s very much a niche language (although maybe it’s a really big niche, I don’t know).

copx
2 replies
9h2m

One might as well say "People just aren't using Go except for server apps."

iainmerrick
1 replies
8h35m

Go is used for command-line tools too, e.g. esbuild.

It's a question of whether the tail is wagging the dog. Flutter is more important than Dart, unless Dart finds a way to expand into another niche. I don't think any one Go framework is bigger than the language itself (even if you were to include the standard library networking utils as a framework).

munificent
0 replies
54m

By that same token, Go's standard library and runtime is more important than Go.

Is that an indictment of Go, or just an observation that a usable platform is an interconnected set of tools?

sjwhevvvvvsj
3 replies
16h44m

They gutted the key FOSS teams during the layoffs, the c-suite hates real FOSS, it doesn’t look good on Ruth’s spreadsheet.

Of course, pretend FOSS like Android they strategically tolerate, but beyond that unless it results in an ad click it’s useless.

LispSporks22
2 replies
16h1m

Who is Ruth?

vajrabum
1 replies
15h49m

Google CFO

sjwhevvvvvsj
0 replies
15h36m

Most employees think she’s the CEO to be fair.

kaba0
1 replies
5h53m

Go was probably the key technology that migrated server-side software off Java bloatware to native containers

Bloatware? That’s already an uninformed and loaded term, and Google still has and writes more Java than Go, if I’m not mistaken.

pjmlp
0 replies
5h36m

Java is so bad, and .NET as well, that now WASM startups are re-inventing application servers on top of Kubernetes + WASM.

On one side the ecosystems get bashed, on the other, the same ones complaining get to re-invent them, badly.

pjmlp
0 replies
10h33m

Where I stand, we keep enjoying Java and .NET bloatware, with powerful programming languages, thank you very much.

Go is only used in DevOps scenarios where there is no way around it.

The only place I advocate for it is as a C replacement, for the same role as Limbo in Inferno, or Oberon in 1992, not for the Java or .NET ecosystems.

jacquesm
0 replies
15h44m

What they really got right in my opinion: show, don't tell and modesty.

dekhn
69 replies
20h40m

This is a retrospective written by Rob Pike, one of the creators of the Go language.

I worked at Google at the time Go was created and had coffee with Rob from time to time and got to understand the reasons Go was created. Rob hates Bjarne Stroustrup and everything C++ (this goes back decades). C++-as-used at Google (which used far more threads than he says) had definitely reached a point where it was extremely cumbersome to work with.

I can think of some other things that they got wrong.

For example, when I first started talking to Rob and his team about Go, I pointed out that scientific computing in FORTRAN and C++ was a somewhat painful process and they had an opportunity to make a language that was great for high performance computing. Not concurrent/parallel servers, but the sorts of codes that HPC people run: heavily multi-threaded, heavily multi-machine, sophisticated algorithms, hairy numerical routines, and typically some sort of interface to a scripting language with a REPL.

The answers I got were: Go is a systems programming language, not for scientific computing (I believe they have now relaxed this opinion but the damage was already done).

And a REPL wasn't necessary because Go compiled so quickly you could just write a program and run it (no, that misses the point of a repl, which is that it builds up state and lets you call new functions with that built-up state).

And scripting language integration was also not a desirable goal, because Go was intended for you to write all your code in Go, rather than invoking FFIs.

A number of other folks who used Go in the early days inside Google complained: it was hard to call ioctl, which is often necessary for configuring system resources. They got a big "FU" from the Go team for quite some time. IIUC ioctls are still fairly messy in Go (but I'm not an expert).
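
For a sense of what that looks like today, a minimal sketch using the golang.org/x/sys/unix wrappers (the terminal-size ioctl is just a convenient example; anything the package doesn't wrap still means juggling raw fds and uintptrs):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    func main() {
        // TIOCGWINSZ asks the tty driver for the terminal dimensions.
        ws, err := unix.IoctlGetWinsize(int(os.Stdout.Fd()), unix.TIOCGWINSZ)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Printf("terminal is %d rows x %d cols\n", ws.Row, ws.Col)
    }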

I think Go spent a lot of time implying that goroutines were some sort of special language magic, but I think everybody knows now that they are basically a really good implementation of green threads that can take advantage of some internal knowledge to do optimal scheduling to avoid context switches. It took me a while to work this out, and I got a lot of pushback from the go team when I pointed this out internally.

In short, I think Go could have become a first-class HPC language, but the Go team alienated that community early on and lost the opportunity to take a large market share at a time when Python was exploding in ML.

randomdata
17 replies
20h34m

> I believe they have now relaxed this opinion

Or is it that scientific computing is starting to realize that it can benefit from a systems programming language?

Scripting languages are great for exploratory work, but if you want to put that work into production, scripting starts to really show its limitations. There is a reason systems languages exist, and good reason why both kinds exist. They are different tools for different jobs.

coldtea
8 replies
20h20m

Or is it that scientific computing is starting to realize that it can benefit from a systems programming language?

Can't be it, since a huge percentage of scientific computing is done with a scripting language.

A systems programming language is still good for the backend libraries of scientific computing - but Go has zero share of that.

randomdata
7 replies
20h18m

A huge percentage of scientific computing is scripting in nature. It would be silly to use anything other than a scripting language.

But not all. Whether or not it is Go that gets the job, a systems language of some kind would be beneficial in those circumstances.

aragilar
3 replies
12h8m

But Go lacks a good FFI story (see CGo discussions), so Go has no hope here.

randomdata
2 replies
12h0m

What are you referring to?

The only active, related discussion I am aware of is about the high call overhead imposed by the gc compiler. Of course, other Go compilers have different calling conventions. tinygo, for example, can call C functions from Go about as fast as C can call C functions. So that isn't really a Go issue, just a specific compiler implementation issue. And as you know (it's in the link!), the Go team themselves maintain two different compilers and pride themselves on Go not being defined by one compiler. To equate gc and Go as being one and the same would be quite faulty.

So obviously you are not talking about that one. What else are people discussing?
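
For readers who haven't touched it, the cgo path under discussion looks roughly like this (a minimal sketch); with the standard gc toolchain, each call across the boundary goes through runtime bookkeeping, which is the per-call overhead being debated:

    package main

    /*
    #cgo LDFLAGS: -lm
    #include <math.h>
    */
    import "C"

    import "fmt"

    func main() {
        // Each C.sqrt call crosses the Go/C boundary; that transition is where
        // gc's cgo overhead lives, and what alternative toolchains reduce.
        x := C.sqrt(C.double(2))
        fmt.Println(float64(x))
    }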

coldtea
1 replies
9h17m

So obviously you are not talking about that one.

Obviously? None of the above rings obvious to me.

A "specific compiler implementation issue", when said compiler is the one that 99% of Go users use, can just as well be called a Go issue.

Whether or not "the Go team themselves maintain two different compilers and pride themselves on Go not being defined by one compiler", it is basically irrelevant in practice, since people using/interested in Go predominantly mean and use a specific compiler (unlike with C++, where they might use one of several available compilers with roughly equal likelihood).

To equate gc and Go as being one and the same would be quite faulty.

No, it would be the most pragmatic thing to do. De jure and de facto and all that.

randomdata
0 replies
4h46m

I appreciate your dedication to reminding us that your original comment was posted without any research or thought, but it remains that the Go project itself, along with other third-parties, provide different FFI solutions so that you can pick the one that best suits your circumstances. There is no one-size-fits-all solution.

99% of Go users don't need an FFI story at all. They can choose a compiler on different merits. If you have an FFI story to consider, then it is logical that you will need to evaluate your choices on different attributes, and you very well might find that the compiler chosen by the 99% of users with different problems won't match your own needs. gc is not ideally suited to FFI. But Go offers compilers that are. This is not a Go problem. Go provides solutions. What you speak of is only a gc thing.

What story do you have for us next? That your neighbourhood restaurant, with a full menu, has no hope because the one dish you tried wasn't to your existing taste – not having it occur to you that other items on the menu might be exactly what you are looking for?

galaxyLogic
2 replies
19h40m

There are 2 conflicting goals: 1) Having a language in which it is easy to express and try out ideas and 2) Producing fast and safe programs.

A scripting language would seem to be good for exploratory scientific research because of that, whereas when you need to create a performant library that can do heavy crunching with reproducible results on any platform, you need the other. The question is: do you know what you want to implement, or is that still an open question?

randomdata
0 replies
18h38m

No doubt it starts as an open question and slowly moves towards knowing.

Which isn't really much of a conflict. You can prove out your thoughts in a scripting language, and after the dust has settled you can move the workload to a systems language. Different jobs, different tools.

I guess if you're one of those weird religious types that insist there is only one true God (read: programming language) you might feel conflict, but nobody else cares.

coldtea
0 replies
9h14m

There are 2 conflicting goals: 1) Having a language in which it is easy to express and try out ideas and 2) Producing fast and safe programs.

Why are they "conflicting"?

Common Lisp could do both quite well.

Guvante
7 replies
19h54m

Except what does Go get for ditching the REPL?

It already has a substantial runtime (the usual pain of a REPL)

randomdata
3 replies
19h51m

What would it gain? REPLs make perfect sense for scripting problems, but when would you ever use one for systems problems?

dekhn
1 replies
18h32m

often when I am developing systems code or RPC client code, I sit in a REPL and make repeated ad-hoc calls to various functions, building up various bits of state (I started using python long before Jupyter). I find this much more intuitive than writing code, compiling it, executing it, going back to the editor, changing the code, re-running it with something that loads a bunch of data, just so I can answer a question about the runtime behavior of the system I'm working with.

randomdata
0 replies
18h26m

There is certainly something to be said about being able to answer questions about another system by poking at it, but that is a scripting task. You would be better served by a scripting language. And, as it happens, most scripting languages come with REPLs. That is a solved problem that was solved long before Go was ever imagined.

Just because you are building a particular systems program does not mean everything you do has to be a systems problem. And, really, if you don't exactly know what you're building, it is probably too soon to consider any of it a systems problem.

Guvante
0 replies
11h14m

You say systems problems like Go is for embedded work.

Half the things I have seen replaced Python scripts. Seeing what your internal state is after taking X action is exactly what REPLs are good at...

ncruces
2 replies
16h48m

Not being able to create new code at runtime is a pretty huge free pass for the compiler to implement stuff a certain way.

You can bet the Go compiler and runtime leverage that assumption.

Guvante
1 replies
11h15m

Even without optimizations?

ncruces
0 replies
4h53m

Yes. There are long-standing feature requests for (e.g.) the reflect package that simply don't get done because they'd break this assumption and/or force further indirection in hot paths to support "no code generation at runtime, ever".

Packages like Yaegi (which offers an interpreted Go REPL) have "known limitations, won't be addressed" also because of these assumptions.

https://github.com/golang/go/issues/4146

https://github.com/golang/go/issues/16522

https://github.com/traefik/yaegi?tab=readme-ov-file#limitati...

tejohnso
15 replies
20h9m

Rob hates Bjarne Stroustrup

That sounds extreme. What did Bjarne do to him?

el-dude-arino
8 replies
19h59m

99% of programmers, but especially the brilliant/well known ones, are insufferable egotists. Many cannot have a technical disagreement without despising the person they disagree with.

Professionalism and the tech industry are just starting to get acquainted.

1attice
5 replies
19h39m

Most professions, quietly, are currently like this. Architects and scientists and doctors and surgeons and lawyers all develop strong opinions about each other based on their positions.

For instance: ask a lawyer what they think about the overturning of Roe v Wade. Now ask them what they think about their colleagues who disagree. Don't forget to duck.

Which isn't to say that we wouldn't all be much better off with more distance between our opinions and our identities. But the thing you're looking for is a cultivated practice that is often not at odds with 'egotism' (what is that, precisely, anyway?) but _entailed by_ it: the not-wanting-to-be-the-kind-of-person-who-does-XYZ.

Not all vanities are risible.

What you're looking for is a long-cultivated, inward practice that can be supported or hindered by all the usual forces, local context (culture, practice, etc) chief among them.

Put another way: your claim isn't that most programmers are unprofessional. It's that they're _uncollegial_. And you're right. So are a lot of other contemporary professionals. It's a shouty era.

zik
3 replies
17h51m

That kind of behaviour leads to a lot of valuable people abandoning workplaces because they can't stand the abusive atmosphere. In the end all that's left are the shouty arseholes and that doesn't do anyone any favors.

geodel
1 replies
13h3m

So where do those valuable people end up then?

1attice
0 replies
12h39m

Well, a shockingly high percentage drop out of their profession, and get stuck somewhere doing something like CSR.

1attice
0 replies
16h3m

Correct. But my original post is not normative, but rather descriptive. Despite how things should be, this is how they are.

I've found that it's extraordinarily valuable to separate these two cognitive modes wherever possible.

rramadass
0 replies
12h15m

To paraphrase more generally:

99% of [people], but especially the brilliant/well known ones [in any domain], are insufferable egotists.

IMHO this is normal. When somebody puts a lot of effort into mastering something, they intrinsically know that they are better than most people at that something. As social animals, getting "noticed" is a form of being conferred "status" in the group. Thus you tend to "act out" to acknowledge/confirm that recognition. It is fine as long as it is within acceptable social bounds and not out of touch with reality.

anotherevan
0 replies
18h53m

99% of programmers, […] are insufferable egotists.

As I read this while sipping my turmeric latte my monocle popped right out!

dekhn
2 replies
19h22m

Bjarne took the wonderful thing that was C and made C++. Rob is not a fan of C++: he thinks the language evolved badly and added poor concepts from the beginning (IIRC iostreams and templates were two of the concepts), and it embedded a number of design decisions that led to extremely slow compiles and links (like, 45 minutes to link a Google binary). Ian Taylor even wrote a better linker (gold) for Google to deal with that.

formerly_proven
1 replies
18h51m

Compare the designs of iostreams (Stroustrup) and the STL (Stepanov), conclusions left as an exercise for the reader.

dekhn
0 replies
17h56m

When I discovered STLport around 2001, it was a real revelation and very convenient, because I was finally able to compile the code my coworkers wrote on "real UNIX" with "real C++ compilers" (lol cfront). In researching my answer I came across http://www.stlport.org/resources/StepanovUSA.html# : "putting it simply, STL is the result of a bacterial infection."

aragonite
2 replies
20h3m

Don't know about Rob Pike in particular but Ken Thompson, who probably had the same reasons for "hating" Stroustrup, had this to say about him (from Coders at Work):

Seibel: You were at AT&T with Bjarne Stroustrup. Were you involved at all in the development of C++?

Thompson: I'm gonna get in trouble.

Seibel: That's fine.

Thompson: I would try out the language as it was being developed and make comments on it. It was part of the work atmosphere there. And you'd write something and then the next day it wouldn't work because the language changed. It was very unstable for a very long period of time. At some point I said, no, no more.

In an interview I said exactly that, that I didn't use it just because it wouldn't stay still for two days in a row. When Stroustrup read the interview he came screaming into my room about how I was undermining him and what I said mattered and I said it was a bad language. I never said it was a bad language. On and on and on. Since then I kind of avoid that kind of stuff.

Seibel: Can you say now whether you think it's a good or bad language?

Thompson: It certainly has its good points. But by and large I think it's a bad language. It does a lot of things half well and it's just a garbage heap of ideas that are mutually exclusive. Everybody I know, whether it's personal or corporate, selects a subset and these subsets are different. So it's not a good language to transport an algorithm—to say, “I wrote it; here, take it.” It's way too big, way too complex. And it's obviously built by a committee.

Stroustrup campaigned for years and years and years, way beyond any sort of technical contributions he made to the language, to get it adopted and used. And he sort of ran all the standards committees with a whip and a chair. And he said “no” to no one. He put every feature in that language that ever existed. It wasn't cleanly designed—it was just the union of everything that came along. And I think it suffered drastically from that.

Seibel: Do you think that was just because he likes all ideas or was it a way to get the language adopted, by giving everyone what they wanted?

Thompson: I think it's more the latter than the former.

zozbot234
0 replies
16h31m

Interesting opinion, it certainly shows the broad mindset behind Golang (and its predecessors Alef and Limbo). Also let's face it, it really took Cyclone and Rust to prove that a broadly C++ish language could be made both safe for large-scale systems and developer-friendly. If your only point of reference is C++ itself, these remarks are not wrong per se.

aragonite
0 replies
17h4m

But also see previous discussion of these remarks on HN:

https://news.ycombinator.com/item?id=27938122

mattbee
10 replies
20h31m

I think Java's "green" threads only ran on a single core, a stop-gap for 90s machines that only had one.

Goroutines use OS threads, but only create one OS thread per core. Go does the scheduling internally on top of those few "real" threads.

Java itself now provides a goroutine-type model with Virtual Threads, but as the programmer you've got to ask for it.

Well, I just felt like "green" is a confusing moniker in that context.

zozbot234
6 replies
20h6m

Goroutines use thread-per-core but run stackful fibers on top of that. (A similar model is sometimes known as "Virtual Processors" or "Light-weight processes".) This is unlike the use of stackless async-await in other languages. This peculiar use of fibers in Go is also what gets in the way of conventional C FFI and leads to quirks like cgo.

dekhn
4 replies
19h26m

In the future, I think that's what I'll say. Thanks.

pcwalton
3 replies
19h24m

I usually just say that Go implements "userspace threading", since that's really what it is. Some early, pre-pthreads, implementations of Linux threads worked the same way as Go does and they usually called such implementations "M:N", to indicate M userspace threads mapped onto N kernel threads, so "M:N" is a good descriptor too.

dekhn
2 replies
18h56m

but that's not really accurate. It uses system threads. Userspace threading doesn't make a transition to kernel (IIUC).

pcwalton
1 replies
18h45m

Go doesn't need to transition to the kernel to switch threads, does it? It just saves and loads the register state in userland.

dekhn
0 replies
18h35m

IIUC it can, sometimes, avoid a kernel transition when it knows it can schedule the recipient of a message, but I believe that golang creates a threadpool for running goroutines on platforms that use thread primitives.

From https://github.com/golang/go/blob/c7d6c6000a84b61ac8bb2e38e8... I believe "worker threads" are OS threads.

jtasdlfj234
0 replies
19h46m

Excellently described.

I'd love to see a resource that highlights these all on a table across programming languages as well as the associated strengths and weaknesses of such threading & concurrency models.

Perhaps a ByteByteGo graphic, if you will.

unscaled
1 replies
10h51m

Back when Java had green threads (the late 1990s), there was no such thing as "multi-core machines". Some top-of-the-line servers had SMP (i.e. two or more physical processors running together on the same bus and sharing the same memory), but very few programs were built to take advantage that option yet.

So Java's green threads were not a stop-gap for 90s machines which only had one core. That's preposterous. Does Go need to disable its goroutines to support the Raspberry Pi Zero that has only one core? Obviously not! The reason Java didn't support multi-core scheduling is that multi-core processors still weren't a thing, and SMP was too high-end for them to bother (and by the time they did start caring about high-end systems, they'd already moved to kernel threads).

Nothing prevents green threads from supporting multi-core (Java 21's Virtual Threads obviously do that, but Erlang's processes also had SMP support well before Go).

I think the terms "green threads" or "user-space threads" are really not that confusing. Definitely not confusing enough to warrant inventing a new term like "goroutines". THAT is confusing. I'm happy the Project Loom team resisted the urge to give the Virtual Threads a fun name like "Jorutines".

kragen
0 replies
9h21m

sun, where java was written, was shipping smp computers with 64 processors in 01997, two years after they first released java https://en.wikipedia.org/wiki/Sun_Enterprise

they bought that off cray, but in 01995 a few months after they released java, they (a different division) released the dual-processor ultra 2 https://en.wikipedia.org/wiki/Sun_Ultra_series

but that wasn't sun's first smp machine; the sparcserver 630mp, among others, supported four cpus in 01991 https://en.wikipedia.org/wiki/SPARCstation#Server_systems

i had a dual-processor smp pentium pro under my desk by 01998, running windows nt 4.0 and occasionally java

really tho the bigger issue with java performance was that there was no supported native-code compiler until hotspot; though gcj was available, sun didn't support it, and i don't recall its performance as being that great. so by writing your code in java you were usually wasting 95% of your cpu on the bytecode interpreter, like python today. it wasn't something you'd do for things that required a lot of cpu

Thaxll
0 replies
18h33m

The Go runtime can create more than one thread per core, especially when there is some blocking syscall.

pphysch
5 replies
20h26m

This seems misdirected. Go clearly wasn't designed for scientific computing, and that's okay. I've successfully written some multi-node MPI codes in Go, but there's not much advantage over C, and likely some disadvantages relating to the Go runtime and linking behavior.

Python (largely) is the present and future of scientific computing because people realized you can write the mathy kernels in something low-level and just orchestrate it with ergonomic Python and its bountiful ecosystem/mindshare. Python adequately checks all the boxes and I don't see how a newcomer like "Scientific Go" or R or Julia will ever unseat it. Not to mention curmudgeonly researchers have little desire to learn new tricks.

But I do use Go when needed as a systems language, and it is fantastic for whipping out the occasional necessary microservice (due to network boundaries, etc).

theLiminator
2 replies
20h6m

I think the main issue with Python tends to not be performance (though it can be hard to speed up certain bottlenecks), but rather that there's a point where maintaining it goes from very easy to very difficult due to the lack of static typing. Where this occurs can be pushed further back with very careful programming discipline (or by adopting mypy et al. from the start). But I could see a world where something Go-like with a little more expressivity could've become that glue language instead of Python.

pphysch
1 replies
19h59m

This is a software engineering practices problem rather than a Python problem. Python has great tooling and language support for type annotations. I work on large Python codebases with ease because I leverage these things. My IDE is able to do static analysis and catch potential typing errors before runtime.

The problem is we have researchers with no SWE expertise writing huge codebases in Jupyter notebooks inside a bespoke, possibly unreproducible Anaconda environment. That is going to be a maintenance disaster no matter which language is being used.

And if you force your researchers, who are used to being productive prototyping in Python, R, etc., to use a statically-typed language, they are going to complain and be a lot less productive at their job.

theLiminator
0 replies
19h55m

I find mypy and the other type checkers I've used kinda painful due to false positives.

I think with proper type inference, there would be a lot less pain around static typing.

Ideally users should be forced to type their function signatures and everything else can be inferred.

I definitely agree with you that the situation today is much better than even just 4-5 years ago.

truculent
0 replies
17h38m

newcomer like ... R

R is only 2 years younger than Python (32 years old)

lamontcg
0 replies
19h57m

I don't see how a newcomer like "Scientific Go" or R or Julia will ever unseat it.

Julia's support for Autodiff

rdtsc
3 replies
19h51m

Go is a systems programming language,

I remember that being a meme of sorts. Most people understood that as a C/C++ replacement, with operating systems and drivers being written in Go. System programmers laughed, of course. Eventually, when it became clear that wasn't going to happen, the token reply from Go devs became "Well not those kind of systems, we always meant a different kind of systems programming language, not what you all thought".

kragen
1 replies
9h36m

rob pike's definition of 'systems software' from 02000 is in http://doc.cat-v.org/bell_labs/utah2000/utah2000.html

Systems: Operating systems, networking, languages; the things that connect programs together. Software: As you expect. (...) What is Systems Research these days? Web caches, Web servers, file systems, network packet delays, all that stuff. Performance, peripherals, and applications, but not kernels or even user-level applications.

he goes into a lot more detail there on the things he sees in 'systems software research', and it goes pretty far beyond kernels and drivers. this is not a definition he retconned onto golang in 02014 or, i would claim, a definition unique to him

rdtsc
0 replies
4h47m

Then which languages aren’t systems languages? Python, BASIC, Java, JavaScript can all connect things together. I guess only some GUI DSL languages might not be.

sevagh
0 replies
15h12m

From the same team that came up with the unique, searchable, and identifiable word "Go" for a programming language.

qaq
2 replies
9h43m

Are there a lot of HPC workloads at Google compared to concurrent/parallel servers?

kragen
1 replies
9h35m

well, now there are. but pagerank was one from the beginning

dekhn
0 replies
2h13m

pagerank was implemented as an iterative mapreduce on classic hardware (and sibyl later adopted this model, using MR as an engine to do what is really an HPC job). Not sure I really consider it HPC, more like high throughput. However, the MR approach worked really well when google was scaling super-fast in the early days; if they'd chosen to solve the problem using MPI and infiniband on expensive SGIs, they probably wouldn't have become the company they are today.

leetrout
2 replies
20h33m

I dream of a world where we have Go or something similar with the same / similar UX of Jupyter Notebooks. I keep an eye on Julia but I never make the leap to use it and still pickup Python (or Go).

zozbot234
0 replies
20h12m

https://github.com/evcxr/evcxr can run Rust in a Jupyter notebook. It's not Golang but close enough.

Art9681
0 replies
18h46m

https://github.com/gopherdata/gophernotes

I've had this bookmarked for some time and just haven't gotten around to it.

tgv
0 replies
9h24m

I can totally agree on excluding scientific computing. A lot of that simply is more cobbling something together than software engineering. And HPC wouldn't be happy with Go anyway.

Python was exploding in ML

It's very easy to start in Python. So it's taught everywhere, and everyone knows it. Until 10 years after the majority of colleges have replaced Python with another language, Python will stay dominant. From that point of view, Mojo is quite clever.

sapiogram
0 replies
3h57m

I think Go spent a lot of time implying that goroutines were some sort of special language magic, but I think everybody knows now that they are basically a really good implementation of green threads that can take advantage of some internal knowledge to do optimal scheduling to avoid context switches. It took me a while to work this out, and I got a lot of pushback from the go team when I pointed this out internally.

This comment deeply fascinates me, because I get the same feeling every time I go back and read/watch early Go resources from Rob Pike and others, but I've never actually heard it articulated before. I've always thought that surely they weren't that ignorant about PL theory and history? Or maybe Rob Pike himself was, but surely his team knew better?

It really feels like they thought they were making some special hybrid between threads and coroutines, combining the advantages of both. Like this 2011 talk[1] which demonstrated a classic coroutine code pattern implemented with goroutines and channels. But as time went on, goroutines became fully pre-emptive, and their programming model is now identical to threads. Using them as coroutines only leads to misery (read: data races) in my experience. They're faster than OS threads, sure, but that's an implementation detail at this point.

When do you think they eventually realized they were re-inventing the same green threads Java had created and scrapped 10 years earlier?

[1] https://www.youtube.com/watch?v=HxaD_trXwRE

npalli
0 replies
20h25m

Good historical perspective, thanks for it. Your comments on scientific computing/HPC are interesting. Golang could indeed have solved the two-language problem and taken off like a rocket in comparison to where it is (hovering in the top 15). However, I think it would have had to tackle some concepts very orthogonal to the instincts of systems language creators like Rob and others on the go team - vectorization as a first-class concept, parallelism (not green threads), etc. - which might have limited some of the initial implementation efficiencies, not sure. There is still room for such a language (Julia is getting there...); perhaps some disgruntled FORTRAN elder who is sick of C++ will create such a new language :-)

norir
0 replies
17h5m

And a REPL wasn't necessary because Go compiled so quickly you could just write a program and run it (no, that misses the point of a repl, which is that it builds up state and lets you call new functions with that built-up state).

He'd have had a point if the go compiler could reliably compile programs in under 50-100ms. The claims that go is a fast compiler always seem to be relative to c++ or something. Last time I checked, just compiling hello world took over 200ms, which was shocking to me given how I'd heard that one of the language's claims to fame was a fast compiler.

liampulles
0 replies
20h34m

Interesting context. I use Go a lot for enterprise backend type work and I have to say I'm glad it's not geared towards being an HPC language, but to each their own.

leeoniya
0 replies
20h35m

And scripting language integration was also not a desirable goal, because Go was intended for you to write all your code in Go, rather than invoking FFIs.

"just avoid cgo" is a something i've heard many times from all Go devs

google234123
0 replies
20h31m

There’s engineers at Google trying to get things done, and ones trying to create new languages… they don’t talk to each other too much

chaxor
0 replies
20h0m

It's ok that Go didn't work out for scientific crowds, since Julia works better for the scientific community as a replacement for hacky Matlab/c++/Fortran cobbled-together scripts.

knorker
46 replies
20h30m

I think two big failures were:

1. Nil pointers (two types of them, even!). We knew better even then.

2. Insisting that the language doesn't have exceptions, when it does. User code must be exception safe, yet basically never use exceptions. The standard library swallows exceptions (fmt and http)

Those are the biggest day to day concrete problems. There are many more that are more abstract, but also hurt.

randomdata
32 replies
20h26m

> Insisting that the language doesn't have exceptions

Insist in what way? The Go website insists that Go has actual exceptions, unlike the pretend exceptions that are actually errors passed around using goto like you find in Java and other languages inspired by it.

monocasa
18 replies
19h58m

Given how they bring up how fmt and http swallow them, I believe the parent is referring to panics rather than the errors returned via standard control flow.

randomdata
17 replies
19h56m

Yes, that's what we're talking about (exceptions, or panics if that's what you want to call them). That's what exceptions are.

monocasa
16 replies
19h51m

I guess I'm confused, since panics are equally errors passed around by gotos as much as java exceptions are. Probably more so, since at least with java it ends up being part of the function type signature the vast majority of the time.

everforward
14 replies
19h30m

It creates 2 disparate types of error handling that don't neatly mesh together. You have to handle error return values, but you also have to handle exceptions (panics) because they still exist.

My issue is mostly implementing both ways of bubbling up an error to somewhere it can be handled. I think having either error return values or exceptions is preferable to having both. I don't think exceptions are perfect, but if panic() absolutely has to exist then I'd rather have an entirely exception-based language than a language that uses both systems simultaneously.

E.g. if I write a function that accesses an element of an array without bounds-checking, it could panic and I have to handle that exception. Bounds-checking basically just becomes finding things that would throw exceptions and converting them to errors so we can pretend that exceptions don't exist.
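
Roughly what I mean, as a sketch (safeGet is a made-up helper; the recover-to-error conversion is the point):

    package main

    import "fmt"

    // safeGet converts an out-of-range panic into an ordinary error,
    // so callers can stay in the error-return world.
    func safeGet(s []int, i int) (v int, err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("index %d out of range: %v", i, r)
            }
        }()
        return s[i], nil
    }

    func main() {
        if _, err := safeGet([]int{1, 2, 3}, 10); err != nil {
            fmt.Println("handled as error:", err)
        }
    }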

randomdata
9 replies
19h18m

> It creates 2 disparate types of error handling

They are disparate conditions. Errors happen in response to conditions that occur during the execution of the application. Exceptions happen in response to conditions that occurred when the code was written. Very different things.

It is highly unlikely that you want to handle an exception. It's the runtime equivalent of a compiler error. Do you also want to handle compiler errors so that your faulty code still compiles? Of course not, so why would you want to do the same when your coding mistakes are noticed at runtime?

There are, uh, exceptions to that when it is necessary to handle exceptions, but if you see it as routine you're doing something wrong. If you overloaded that with errors, forcing it to be routine, you'd have a nightmare on your hands (like in those other languages that have tried it).

groestl
7 replies
17h42m

Exceptions happen in response to conditions that occurred when the code was written.

Huh? Stack overflow? Out of memory?

It is highly unlikely that you want to handle an exception

It is very likely that I want to handle an exception. In fact, I want to handle all exceptions and keep my process and all other concurrent requests to it running. And don't tell me, that's not possible, because I've been doing that for decades. In Java that is.

randomdata
6 replies
17h12m

> Stack overflow?

Exception. The minimum available stack space is a known quantity. Exceeding it means you made a mistake.

> Out of memory?

Error. The available heap is typically not predictable. Your allocation function should provide an error state; and, indeed, malloc and friends do.

> And don't tell me, that's not possible

It is perfectly possible. Probably not a good idea, though, as you have proven that your code is fundamentally broken. Would you put your code in production if there was a way to ignore compiler failure?

groestl
3 replies
11h3m

The minimum available stack space is a known quantity.

It is. Tracking and erroring out on it to avoid the exception means replicating your runtime environment's mechanism for tracking and erroring out on stack overflow (system in a system / inner platform anti-pattern). Your runtime environment's implementors know that, so it's unlikely you'll find the APIs necessary to avoid an exception (i.e. a maxRecursion param and equivalent error result).

Exceeding it means you made a mistake.

No, it can be just a part of processing a request. Depending on the particular runtime environment, it does not have any impact on other parts of the process.

randomdata
2 replies
4h25m

> so it's unlikely you'll find the APIs necessary to avoid an exception

Lacking a needed API is programmer error. Better programming can avoid that kind of exception. A hypothetical, sufficiently smart compiler could fail at compile time, warning you are missing code to handle certain states in the absence of such an API.

To reiterate, exceptions are faults which come as a result of incorrect programs. Errors are faults which come as a result of external conditions. A program that overflows the stack is an incorrect program. The stack size is known in advance. If it is overflown, a programmer didn't do proper accounting and due diligence.

groestl
1 replies
4h1m

Lacking a needed API is programmer error.

Whoa, easy there. We're talking about standard libraries, and the designers of those are not complete morons. The API is lacking because the runtime environment already provides a safe and defined environment for the observed behavior. It just happens to not fit your mental model, which I find too strict and off wrt reality on one hand, and infeasible on the other (Gödel wants to have a talk with you).

randomdata
0 replies
3h56m

Don't let perfect be the enemy of good. It is quite pragmatic to make such an error.

We're ultimately talking about engineering here. Engineering is all about picking your battles and accepting tradeoffs. You go into it knowing that you will have to settle on making some mistakes. Creating an ideal world is infeasible.

Indeed, it is your mental model that is too strict. To err is fine. To err is human!

quaunaut
1 replies
16h23m

Would you put your code in production if there was a way to ignore compiler failure?

What compiler failure? Go literally does not warn you until it hits the error at runtime for these exceptions.

randomdata
0 replies
16h12m

> What compiler failure?

Pick something. I don't care. Let's say failure for reasons of having no return statement in a function that declares itself to return something. If you could flip a switch to see that code still compile somehow, knowing that the program is not correct, would you deploy it to production?

> Go literally does not warn you until it hits the error at runtime for these exceptions.

True, but only because the Go compiler isn't very smart. It trades having a simpler compiler for allowing some programmer faults to not be caught until runtime. But if there was such a thing as an ideal Go compiler, those exceptions would be caught at compile time.

When it comes to exceptions, the fault is in the code itself, unlike errors where the fault is external to the program. Theoretically, those faults could be found before runtime. But it is a really hard problem to solve; hence why we accept exceptions as a practical tradeoff. We are just engineers at the end of the day.

troupo
0 replies
13h14m

Errors happen in response to conditions that occur during the execution of the application. Exceptions happen in response to conditions that occurred when the code was written.

wat.

You have code that ends up dividing by zero, and boom, you have an exception while the app is running.

It is highly unlikely that you want to handle an exception.

You always want to handle an exception. That is how actual resilient systems are written

abtinf
2 replies
19h13m

I cannot conceive of the scenario where it makes sense to recover a bounds-checking induced panic. The process should crash; the alternative is to continue operating in an unknown, irrecoverable, and potentially security compromised state.

steveklabnik
0 replies
18h54m

Rust shares Go's "errors as values + panics" philosophy. Rust also has a standard library API for catching panics. Its addition was controversial, but there are two major cases that were specifically enumerated as reasons to add this API: https://github.com/rust-lang/rfcs/blob/master/text/1236-stab...

It is currently defined as undefined behavior to have a Rust program panic across an FFI boundary. For example if C calls into Rust and Rust panics, then this is undefined behavior. Being able to catch a panic will allow writing C APIs in Rust that do not risk aborting the process they are embedded into.

Abstractions like thread pools want to catch the panics of tasks being run instead of having the thread torn down (and having to spawn a new thread).

The latter has a few other similar examples, like say, a web server that wants to protect against user code bringing the entire system down.

That said, for various reasons, you don't see catch_unwind used in Rust very often. These are very limited cases.

everforward
0 replies
18h51m

I cannot conceive of the scenario where it makes sense to recover a bounds-checking induced panic.

A bog-standard HTTP server (or likely any kind of request-serving daemon). If a client causes a bounds-checking panic, I do not want that to crash the entire server.

It's not even really particular to bounds-checking. If I push a change that causes a nil pointer dereference on a particular handler, I would vastly prefer that it 500's those specific requests rather than crashing the entire server every time it happens.

The Go HTTP server does this internally (though there is talk about not doing it, deferred til Go 2 https://github.com/golang/go/issues/5465).

The process should crash; the alternative is to continue operating in an unknown, irrecoverable, and potentially security compromised state.

The goroutine should probably crash, but that doesn't necessarily imply that the entire program should crash. For some applications the process and the goroutine are one and the same, but that's not universally true. A lot of applications have some kind of request scope where it's desirable to be able to crash the thread a request is running on without crashing the entire server.
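
A rough sketch of the per-request recovery I mean (a hypothetical recoverMW middleware, not the stdlib's built-in behavior):

    package main

    import (
        "log"
        "net/http"
    )

    // recoverMW turns a panic in a handler into a 500 for that one
    // request instead of letting it take down the whole process.
    func recoverMW(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            defer func() {
                if rec := recover(); rec != nil {
                    log.Printf("panic serving %s: %v", r.URL.Path, rec)
                    http.Error(w, "internal server error", http.StatusInternalServerError)
                }
            }()
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/boom", func(w http.ResponseWriter, r *http.Request) {
            var xs []int
            _ = xs[3] // out-of-range panic, recovered per request
        })
        log.Fatal(http.ListenAndServe(":8080", recoverMW(mux)))
    }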

the_gipsy
0 replies
16h14m

There are two types of errors and Java's mistake was making them all the same.

erik_seaberg
0 replies
19h16m

Only at the level of Java source. The JVM (and several other languages) doesn’t actually care or enforce which exceptions a method might throw, which is what makes tricks like https://projectlombok.org/features/SneakyThrows possible.

stonemetal12
7 replies
19h39m

Other than the fact that they spelled "throw" "panic", "catch" "recover", and "finally" "defer" how are go exceptions different than what you find in java?

I get that Go devs like to claim they are completely different because you are supposed to use them differently, but under the hood they are identical as far as I can tell.

eweise
3 replies
18h48m

Not sure why people downvote instead of answering the question.

randomdata
2 replies
18h43m

The comment answers its own question.

As to why press a button that does nothing? For the same reason fidget spinners were all the rage a few years back: Bored people like to do something with their hands.

Perhaps if the comment had a question that was left unanswered, people wouldn't have become so bored?

eweise
1 replies
18h15m

"how are go exceptions different than what you find in java?" They are not the same. Thought someone might be helpful and list the differences.

randomdata
0 replies
18h6m

Why not lead by example?

tubthumper8
2 replies
5h51m

Disclaimer: just trying to directly answer the question, but also may be wrong in many ways. Please correct me.

I thought one difference was that a panic in a goroutine kills the whole process vs. an exception in a Java thread would just kill that thread. That could be more of a consequence of "Goroutines are not threads" rather than "panics are not exceptions".

Java Checked exceptions are certainly quite different than Go panics in terms of compile-time checks and what code the user of the language must write.

I thought there are some differences with how stack traces are accessed on caught/recovered exceptions? It's been a while now but I thought you needed something special to get the Go stack trace out. Fairly minor detail though.

Error is an interface in Go vs. a base class that's extended. Probably more of a result of other language design decisions rather than a decision in this particular area.

I haven't really seen the catch-and-rethrow paradigm for Go panics, but it's kinda different because `panic` only accepts a string argument whereas you can rethrow an "object" in Java (side note - sometimes Go error handling ends up having a lot of string concatenation ex. errors.Wrap because of the focus on "errors are strings"). The lack of catch-and-rethrow is more of a usage difference than a design difference, to your main point.

Probably others but can't think of them right now.

knorker
1 replies
2h31m

I thought one difference was that a panic in a goroutine kills the whole process

Only if not caught. But in any case this still means that you need to write exception safe code, because you don't know if the function you call will throw, and you don't know if the function calling you will catch.

`panic` only accepts a string argument

No: https://go.dev/play/p/ZDKpybtxABL

catch-and-rethrow paradigm for Go panics

Probably because "you're not supposed to". You can.

tubthumper8
0 replies
2h8m

Thanks for the clarification on the panic function signature - I missed it when checking the spec but it's definitely in there and defined as taking `interface{}`

https://go.dev/ref/spec#Handling_panics

lokar
3 replies
20h17m

So many people confuse errors with exceptions

groestl
2 replies
17h49m

Errors are just checked exceptions where you get the chance to introduce bugs, by unrolling the stack manually.

randomdata
1 replies
17h0m

Errors are things that can fail after the program is written (hard drive crash, network failure, etc.).

Exceptions are things that were already broken when you wrote the code (null pointer access, index out of bounds, etc.)

To put it another way, exceptions are failures that a sufficiently smart compiler would have been able to catch at compile time. Of course, creating a compiler that smart is a monumental task, so we accept catching some programmer mistakes at runtime as a reasonable tradeoff.

groestl
0 replies
10h57m

exceptions are failures that a sufficiently smart compiler would have been able to catch at compile time

Would love to see your proof of this for stack overflow exceptions. You could become very famous.

knorker
0 replies
2h16m

Not sure what your question is.

"Insist in what way": starts with things like the Go FAQ having a question called "Why does Go not have exceptions?".

The answer does elaborate, so it's not like they're lying, exactly. But anything and everything Go says about this also applies to C++. There's no relevant technical difference between Go and C++ exceptions, nor is there a difference in how the standard library uses them.

... except the Go standard library swallows exceptions in some cases, which is like the biggest no-no you can do.

But nobody would say that C++ doesn't have exceptions.

tptacek
8 replies
19h54m

Go has been my daily driver for over a decade. I was in the past a C++ programmer. In what ways am I writing exception-safe code when I write ordinary Go code?

monocasa
6 replies
19h47m

I've run into issues where panics cause half of what should be a multistep-but-atomic transaction to occur, putting the system into a goofy state that required fairly manual intervention. In my case it was a system daemon that required someone to manually fix up system state on the CLI and restart the system.

tptacek
2 replies
19h44m

That's like, strong-form exception safety, a problem in most mainstream languages. But when C++ people talk about "exception safety", they're talking about basic or weak-form exception safety: not leaving dangling pointers and resources as a result of unexpected control transfer. That style of defensiveness is not common in Go code.

monocasa
1 replies
19h34m

Well that's the thing, I am talking about resources left 'open' since they didn't complete their lifecycle due to the unexpected control flow. Yes, it's not common in go code, but I think that's more a combo of the GC making dangling memory not a problem, and the environment that most go code lives in (ie. kubernetes clusters or some equivalent) where the other resources leaked are eventually reclaimed by the autoscaler and other devops automation.

The GC is ubiquitous, and definitely a point in favor for go for the vast majority of use cases, but I've found it more difficult than anticipated to write go code that manipulates resources other than memory that the environment you're running in won't clean up for you. And that's coming from C++ code originally including the exception safety issues.

Smaug123
0 replies
18h57m

(By the way, [the Austral spec](https://austral-lang.org/spec/spec.html#rationale-errors) discusses in great detail the available tradeoffs in this area of language design.)

Thaxll
2 replies
18h41m

Well panic = very serious problem, what do you expect? You can catch them and handle them but it does not mean the system is stable.

Also defer() will run even if there is a panic.

skitter
0 replies
16h25m

panic = very serious problem, what do you expect?

Even the Go standard library itself panics (and recovers) when you try to e.g. json-encode a NaN, which doesn't seem like something that should make the system unstable.

groestl
0 replies
17h54m

but it does not means the system is stable.

This is a direct result of Go's lacking error handling, and would not be necessary if they'd learned from C++ and Java's mistakes, instead of just repeating them.

knorker
0 replies
2h36m

The simplest example is that you're probably writing this function:

    func Foo() error {
      m.Lock()
      defer m.Unlock()
      return fooWithLockHeld()
    }
But your less experienced colleague was told by Rob Pike himself that the language doesn't have exceptions, so writes:

    func Foo() error {
      m.Lock()
      err := fooWithLockHeld()
      m.Unlock()
      return err
    }
That code is not exception safe. It may be a contrived example, but in code review I've many times seen real code that is not exception safe, for basically the same reason.

A held lock is fairly benign (only causes a deadlock, not corrupted data) if the net/http.Server swallows it in your handler, or fmt.Print swallows it. But some other errors are not as benign.

__turbobrew__
1 replies
19h44m

Panicking on nil pointers is definitely the thing which I have seen cause the most pain.

silvestrov
0 replies
19h34m

The ?. (optional chaining) operator in Javascript is really a godsend for this.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

tialaramex
0 replies
15h48m

Right, people really need to take Tony's Billion Dollar Mistake more seriously, which means no you can't have "null" or "nil" or whatever you're calling it. We know that's a bad mistake so you shouldn't repeat it.

Just as I can excuse using a fat pointer to some bytes as your "string" type in a very close to the metal language. I can excuse the possibility of a null pointer or reference in such a language, just above the level where we're doing machine code. It's not nice, but you're banging rocks together and there's a zero value in this register and so fine, let's have a "null" pointer. This is not something to be proud of, it shouldn't make its way into code that can avoid it, but it will need to exist in the very heart of the fire.

Go is far above that level, so it needs to just not expose Go programmers to Tony's mistake at all. It should have been defined out of existence.

015a
0 replies
12h0m

I tend to feel, just error handling in general. Its not even something I'd care so much about, if it didn't seem like everyone else felt like the way Go does errors is great.

You can't/shouldn't do custom error types, even though its an enticing sexy interface, because of things related to the massive nil/nil mistake you've covered. We had errors-as-values in popular, large languages (Javascript callbacks?), and by the mid-10s everyone recognized that they're kinda whatever, mostly just a different way of doing the same thing less conveniently, and that community got rid of them (as a side-effect of the more general push to get rid of callback hell, but certainly no effort was made to keep errors-as-values around). We say "being forced to handle errors is great in Go", but (1) you don't have to handle them, you just have to acknowledge them with `_`, and (2) Java has had checked exceptions for years, and everyone also recognizes that those are ish. And, as you say, Go has two fundamentally different kinds of errors functions can throw (what color is your function?), except the facilities for handling panics are essentially a goto (which, I love pointing out lest we all forget, go also has literal goto). Sure, working code shouldn't panic; but all that asserts is that Go wasn't designed to be fault tolerant.

To be clear, I don't feel as much passion for hating on Ruby, because I don't use Ruby. I use Go. I don't wish to hate on the language for the purpose of hate; I wish that more people would agree that the situation is quite poor, rather than good, and that we could make meaningful positive change to the language.

moomin
37 replies
20h19m

He’s claiming go popularised concurrency and invented interfaces? I’m sorry but that’s laughable.

I mean, congratulations, you created a new popular programming language, and that’s not nothing, but let’s not rewrite history here.

tomp
23 replies
20h16m

What other languages of comparable popularity do you know that had concurrency similar to Go's, and interfaces?

Actually, name just one that has either, even today!

dwattttt
7 replies
19h56m

Java has had interfaces for a _long_ time, and it definitely meets the popularity requirement.

EDIT: Since its 1.0 release in 1996

tomp
6 replies
19h24m

There's nothing similar about Java's and Go's interfaces, except the name.

Java interfaces must be implemented explicitly (they're nominal).

Go interfaces are automatically satisfied by any type that declares matching methods (similar to protocols in other languages) (they're structural).
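
A quick illustration of the difference (toy types):

    package main

    import "fmt"

    type Stringer interface {
        String() string
    }

    // Point never mentions Stringer; it satisfies the interface
    // purely by having a method with the matching signature.
    type Point struct{ X, Y int }

    func (p Point) String() string { return fmt.Sprintf("(%d,%d)", p.X, p.Y) }

    func describe(s Stringer) { fmt.Println(s.String()) }

    func main() {
        describe(Point{1, 2}) // compiles with no "implements" declaration anywhere
    }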

dwattttt
2 replies
18h48m

Go's structural interface feature is definitely interesting, but that section doesn't talk about how amazing it was to implement interfaces implicitly:

That notion quickly grew into the idea that value types had behaviors, defined as methods, and that sets of methods could provide interfaces that functions could operate on. Go's interfaces arose pretty much right away.

If you replace Go with Java, this would've accurately described Java ~30 years ago.

tomp
1 replies
18h46m

You missed this:

> Making interfaces dynamic, with no need to announce ahead of time which types implement them

moomin
0 replies
8h11m

Yeah, they managed to recreate the advantages and disadvantages of Modula-3's typing system within three decades. Groundbreaking.

twic
0 replies
18h49m

They are certainly different, but to say there is nothing similar is plainly untrue - apart from nominal vs structural, they are pretty much the same.

I'm not sure how significant the nominal vs structural distinction even is. In Go, a struct can implement an interface without declaring it, but the programmer still needs to deliberately write the struct to conform to the definition of the interface, so they're still coupled, just not explicitly [1]. Yes, it is possible to define a new interface which fits existing structs which weren't designed for it - but how common is that? That is, how common is it for two or more structs to have a meaningful overlapping set of methods without being designed to conform to some pre-existing interface?

[1] which is obviously a bad thing

marwis
0 replies
18h10m

Scala had it 2 years before golang, OCaml more than a decade earlier. https://en.wikipedia.org/wiki/Structural_type_system

Izkata
0 replies
17h34m

So they're like python's Abstract Base Classes? (No idea which came first)

scythe
4 replies
19h40m

Fibers were implemented by Microsoft in Win32 API in 1996:

https://devblogs.microsoft.com/oldnewthing/20191011-00/?p=10...

As the review Chen links discusses, it turns out that M:N threading (i.e. goroutines) and good C compatibility are mutually exclusive. Go went one way, every other language went the other way. The most common alternative is stackless coroutines, which are much more widely implemented than the Go model.

tomp
1 replies
19h21m

Even stackless coroutines aren't very popular. No other popular language specification has them (except maybe Haskell/GHC, but it's not that popular).

iainmerrick
0 replies
18h22m

JavaScript?

zozbot234
0 replies
19h24m

That review is by Gor Nishanov: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2018/p136... The most relevant quote:

DO NOT USE FIBERS!
magicalhippo
0 replies
19h15m

And COM interfaces. They could be clunky in C/C++ but Delphi implemented interfaces quite nicely[1] back in 1999[2].

And being for Windows, Delphi had full access to Win32 API.

[1]: https://docwiki.embarcadero.com/RADStudio/Sydney/en/Using_In...

[2]: https://en.wikipedia.org/wiki/History_of_Delphi_(software)#B...

ninepoints
4 replies
20h12m

Erlang

phinnaeus
1 replies
20h10m

of comparable popularity
throwawaymaths
0 replies
19h26m

It was more popular than go in 2009

tomp
0 replies
19h22m

Erlang's processes might have similar semantics to Go's goroutines (green threads) but Erlang is a much simpler language, because it doesn't have shared state.

A lot of work went into optimizing Go's GC to be able to cope with concurrency.

Thaxll
0 replies
20h10m

Erlang is not a popular language.

easton
1 replies
19h56m

C#? It has channels and good concurrency stuff. I don’t know if it was as mature in 2007 when they did this work.

edit: Concurrency, not the language itself. That was mature.

moomin
0 replies
8h9m

C# not only has channels, it has async/await, which he concedes is the form of concurrency that actually became popular, in that others have adopted it.

neonsunset
0 replies
18h5m

Go's concurrency isn't even that good. It just looks good for anyone coming from the languages which don't have that (which are the majority).

Some of the earliest high-level languages with powerful concurrency and parallelism APIs are C# and F# (TPL and Parallel/PLINQ, some of which was available back in 2010).

moomin
0 replies
8h13m

To the extent that Go has features that no other popular language has, it is not influential. To the extent that other languages do have those things, Go didn't invent them. And that's why he didn't make that claim, he made a much broader one. The only problem is, if you make the broader one, it's obvious it's F#/C# that's been influential.

OkayPhysicist
0 replies
17h39m

Elixir and Erlang have both, and Go's concurrency model is just a poor imitation of what you get from Erlang, that's been around since the 80's.

liampulles
12 replies
20h9m

I don't think you read the Interfaces section properly. Rob is not claiming go invented interfaces.

If you want to level such a charge that he is rewriting history, you need to take special care yourself in representing his views accurately.

dwattttt
9 replies
19h49m

I came away with the impression they were saying they'd invented interfaces, so I reread it:

That idea was exciting for us, and the possibility that this could become a foundational programming construct was intoxicating.

Talking about interfaces as an idea that could become a foundational programming construct definitely sounds like they're saying they invented it.

PH95VuimJjqBqy
4 replies
18h17m

go's version of interfaces is fairly unique, if you read that as them inventing interfaces you misread. If you read it as them having a unique twist on interfaces they felt was powerful, then you read it correctly.

It never even occurred to me that someone would suggest they're claiming to have invented interfaces, mostly because obviously they didn't.

dwattttt
3 replies
17h36m

I was careful to try to read what was said; the language might be a bit loose because it's a transcript of a live talk. That said, the example they gave, and their motivating problem of the qsort API in C, don't show anything specific to structural interfaces, and instead look like a normal use of interfaces, combined with language about being wowed by how powerful they could be.

PH95VuimJjqBqy
2 replies
12h18m

The text is recreated below:

It's clear that interfaces are, with concurrency, a distinguishing idea in Go. They are Go's answer to object-oriented design, in the original, behavior-focused style, despite a continuing push by newcomers to make structs carry that load.

Making interfaces dynamic, with no need to announce ahead of time which types implement them, bothered some early critics, and still irritates a few, but it's important to the style of programming that Go fostered. Much of the standard library is built upon their foundation, and broader subjects such as testing and managing dependencies rely heavily on their generous, "all are welcome" nature.

I feel that interfaces are one of the best-designed things in Go.

Other than a few early conversations about whether data should be included in their definition, they arrived fully formed on literally the first day of discussions.

And there is a story to tell there.

On that famous first day in Robert's and my office, we asked the question of what to do about polymorphism. Ken and I knew from C that qsort could serve as a difficult test case, so the three of us started to talk about how our embryonic language could implement a type-safe sort routine.

Robert and I came up with the same idea pretty much simultaneously: using methods on types to provide the operations that sort needed. That notion quickly grew into the idea that value types had behaviors, defined as methods, and that sets of methods could provide interfaces that functions could operate on. Go's interfaces arose pretty much right away.

That's something that is not often acknowledged: Go's sort is implemented as a function that operates on an interface. This is not the style of object-oriented programming most people were familiar with, but it's a very powerful idea.

That idea was exciting for us, and the possibility that this could become a foundational programming construct was intoxicating. When Russ joined, he soon pointed out how I/O would fit beautifully into this idea, and the library took shape rapidly, based in large part on the three famous interfaces: empty, Writer, and Reader, holding an average of two thirds of a method each. Those tiny methods are idiomatic to Go, and ubiquitous.

_THE WAY INTERFACES WORKED became not only a distinguishing feature of Go, they became the way we thought about libraries, and generality, and composition. It was heady stuff.

emphasis at the end there is mine.

how the hell _anyone_ reads that and comes away with the idea that they're claiming they invented the idea of interfaces is beyond me.

my only guess here is that many people are not familiar with the sort problem they're describing.

very famously, C++'s sort is more performant than C's sort because C uses a void pointer and C++ uses templates. The extra type information allows C++ to optimize the sort in a way that C cannot, so while C is generally more performant than C++ in a lot of ways, this is one particular area where C++ shines.

So it's no surprise that Rob Pike, et al, paid close attention to sort as something to improve over C and the way to do that is having more type information available (C++ very clearly has shown this).
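
For anyone who hasn't seen it, the pattern being described looks roughly like this, using the stdlib's sort.Interface:

    package main

    import (
        "fmt"
        "sort"
    )

    // byLen satisfies sort.Interface just by having the three methods;
    // sort.Sort is a plain function operating on that interface.
    type byLen []string

    func (s byLen) Len() int           { return len(s) }
    func (s byLen) Less(i, j int) bool { return len(s[i]) < len(s[j]) }
    func (s byLen) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }

    func main() {
        words := byLen{"banana", "fig", "apple"}
        sort.Sort(words)
        fmt.Println(words) // [fig apple banana]
    }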

moomin
1 replies
7h48m

That's something that is not often acknowledged: Go's sort is implemented as a function that operates on an interface. This is not the style of object-oriented programming most people were familiar with, but it's a very powerful idea.

I feel like most of this is just ignoring the prior art. C#'s sort works the same way. Admittedly, this isn't obvious because of the way it's implemented. But if you're a language designer you don't have much of an excuse there.

PH95VuimJjqBqy
0 replies
4h11m

it would behoove you to quote the entire context

Robert and I came up with the same idea pretty much simultaneously: using methods on types to provide the operations that sort needed. That notion quickly grew into the idea that value types had behaviors, defined as methods, and that sets of methods could provide interfaces that functions could operate on. Go's interfaces arose pretty much right away.

That's something that is not often acknowledged: Go's sort is implemented as a function that operates on an interface. This is not the style of object-oriented programming most people were familiar with, but it's a very powerful idea.

What he actually said is that being able to pass a value type into a sort function and have it work without needing to explicitly define and implement an interface was not a style that most people are familiar with.

and indeed, C# absolutely does NOT work that way.

It probably would have been better for you to claim that Ruby had prior art, but even that's implemented in an entirely different way; it's just that the behavior is closer to Go's behavior than C#'s is.

twic
3 replies
18h56m

In his defence, their interfaces do work somewhat differently to Java's, because they don't need to be explicitly implemented. I don't know if that matters enough to make them a novel invention, though.

za3faran
0 replies
16h35m

Java's interfaces are nominal. What you're describing is called structural interfaces, and they exist in other languages like Scala.

shakow
0 replies
16h1m

I don't know if that matters enough to make them a novel invention, though.

That's called structural typing, which is at least as old as OCaml, i.e. 25+ y.o.

moomin
0 replies
8h17m

Yes, they work the same way as Modula-3's did in 1988. Still not exactly innovation.

throwawaymaths
1 replies
19h29m

It sounded like it to me. I kept thinking didn't java have interfaces to prevent multiple inheritance? All go did was replace all inheritance with interfaces

moomin
0 replies
19h0m

Which I’m pretty sure CLU did first (honestly, I’m pretty sure nearly every good idea was in CLU first).

cangeroo
32 replies
19h25m

(rant)

That's nice, but also rather self-congratulatory. I was expecting some kind of acknowledgment of the deeper issues in the language. But perhaps that's the central issue, that the language is perfect in their eyes. I'm the problem.

Well, okay then.

I can't recommend the language, because of its type system, the error handling, the unsafe concurrency, the simplistic syntax, nil, default zero values, and the fact that a large number of mainstream packages are abandoned.

I now use Rust as my main language. It has a flourishing ecosystem and is visionary in so many ways that Go is not.

Put more pointedly, I'm sure Go had its day, when it was competing with PHP as a backend language.

zik
8 replies
17h43m

I don't think this comment really contributes to the discussion. It comes across as Rust advocacy without having any tangible points to make.

First paragraph: "I don't like it". Second paragraph: snide comment. Third paragraph: "I don't like it". Fourth paragraph: "Rust is better". Fifth paragraph: snide comment.

ncruces
6 replies
16h30m

There's one thing in there worth discussing IMO: the focus on zero values (incl. nil).

That's the Go mistake, the one that causes most of the issues for the intended audience, the one that can't really be fixed. It's a shame Pike doesn't really discuss this, even if it's hopeless now.

The rest is just people projecting and self-selecting outside of the intended audience. Don't like it, don't use it, we don't all need to agree with you.

campbel
5 replies
13h33m

Zero values are fine, some benefits, some drawbacks. Workarounds exist for when you need to identify the difference between unset and zero.
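
For example, the usual pointer trick (a sketch with a made-up Config type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // A *int field lets us tell "never set" (nil) apart from
    // "explicitly set to zero" (non-nil pointer to 0).
    type Config struct {
        Retries *int `json:"retries"`
    }

    func main() {
        var unset, explicit Config
        _ = json.Unmarshal([]byte(`{}`), &unset)
        _ = json.Unmarshal([]byte(`{"retries": 0}`), &explicit)
        fmt.Println(unset.Retries == nil)   // true: field was never provided
        fmt.Println(*explicit.Retries == 0) // true: provided, and zero on purpose
    }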

FridgeSeal
1 replies
10h18m

No they’re not, they’re a horrible decision, and some of the “solutions” I’ve seen for working around them are band-aid-code at best.

The design decisions around zero values infect protobufs too, and they suck to work around. The fact that an empty message can successfully deserialise into any valid protobuf is an insane decision and should have been thrown out long ago.

znkr
0 replies
8h31m

The fact that an empty message can successfully deserialise into any valid protobuf is an insane decision and should have been thrown out long ago

The reason for this is that the protobuf wire format is designed for very high entropy: It contains only a minimal amount of metadata and consists mostly of data. This means you can deserialize most wire messages as a different message. This is a tradeoff: smaller message size for loss of schema information. This just means that schemas need to be handled at a higher level. This tradeoff makes some sense if you process millions of protos per second.

BTW: Dismissing a tradeoff like this as insane is derogatory. You can do better

orangeboats
0 replies
8h52m

I don't see how they are fine, when NULL is widely known (even though it might not be agreed by some) as the billion-dollar mistake.

One could have easily added an Optional type that forces programmers to check for nil every time the optional variable is accessed.

ncruces
0 replies
6h4m

Sorry, but no.

Interfaces and how they ended up limiting generics (parametric polymorphism) are a tradeoff. Structural interfaces (duck-typed in an otherwise static-typed, with composition-over-inheritance, language) are innovative, interesting, and offer many benefits, enough to compensate any drawbacks. This is mentioned in the talk.

But the fact that anyone can just conjure a zero value out of thin air, and that this is considered fine because it's zero-initialized (a decade after Java had proved this was really not good enough), is pretty inexcusable. And this is not just a "default": it's actually impossible by design to enforce initialization in any way.

Then they did this in a language with pointers, which by necessity are zero initialized to nil, and simply added some timid steps to make nils more useable/useful (like nil receivers being valid). Which unfortunately, in the end, only further complicates static analysis and tooling that might ameliorate the issue.

Finally, if this wasn't enough of a problem, nil panics (any panics, in fact) are a hard crash if a goroutine doesn't handle them, and: it's impossible to add a global handler, it's impossible to prevent goroutines from being created that don't handle panics. So any code that you call can crash your program, and this is considered good form.

If you really feel that zero values are useful enough to justify all this, please explain. Because I just don't see it. This isn't a wildly innovative feature that shapes idiomatic programming in an amazing way. The standard library is full of awkward hacks to make zero values useful (esp. in the face of backwards compatibility), where simple enforced construction would be much better.

I love Go. It's my favourite programming tool. I wish I could use it more professionally. But not acknowledging this error, or being dismissive, and doing nothing about it, helps no one.

euroderf
0 replies
9h7m

when you need to identify the difference between unset and zero.

Use pointer variables? Works 4 me.

cangeroo
0 replies
6h41m

I was trying to say that:

by paragraph:

1) The article should have acknowledged the issues that a wide part of the community are experiencing.

2) I accept that I'm the problem. But then I must leave.

3) The consequence of their attitude is that I cannot recommend the language anymore. I used to be excited about it, and that disappointment makes me angry.

4) If only Go was trying to be better. Rust is just an example of the visionary leadership that I expect from Go. I want go to be visionary! Because they have some things right, like fast compilation, cross compilation, simple syntax, and a focus on simple concurrency. But it's like those ideas never developed.

5) Rust is a counterexample, that a language can be visionary, without giving up on its fundamentals.

6) Acknowledgment that Go was the best solution at a time. But also that the time seems to have passed.

satvikpendem
6 replies
16h20m

I agree, I've used Go before I learned Rust and seeing the differences really changed my mind. I used to use OCaml before so I understood the value of Option and Result types over try/catch and `nil`, but I used Go because it was easy. However, that easiness comes at a cost, namely maintenance over time. You want to get it right the first time around and not have to face challenges later on.

Not to be flippant, but I've often heard Go described as taking the programming language advances over the last 50 years and throwing them away.

bb88
4 replies
13h48m

but I've often heard Go described as taking the programming language advances over the last 50 years and throwing them away.

That's by design, right? Go is very opinionated. They looked at other languages that they hated, e.g., C++/Java and didn't want to replicate them. But then adding their own mistakes along the way.

The brand new mistake that surprised me was that nil does not always equal nil. So just checking for nil is not enough sometimes; one has to "cast" nil to the type you're expecting. And goland doesn't catch it. In C++/Java/Python/C, null always equals null. But not in golang. shrug
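
The classic shape of it, as a rough sketch (made-up myErr type):

    package main

    import "fmt"

    type myErr struct{}

    func (*myErr) Error() string { return "boom" }

    // Returns a nil *myErr wrapped in the error interface: the interface
    // value now carries a concrete type, so it is not itself nil.
    func doWork() error {
        var e *myErr
        return e
    }

    func main() {
        err := doWork()
        fmt.Println(err == nil)           // false, surprisingly
        fmt.Println(err == (*myErr)(nil)) // true when compared as the concrete type
    }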

The primary issue I have is that go doesn't make it very easy to write unit tests. You have to use interfaces everywhere just to inject your mocks.

I feel like that's the big mistake they made. Any new language needs to make it super easy to write unit tests without forcing major design decisions that affects development.

earthboundkid
2 replies
12h36m

I’ve never seen a real bug from the interface nil != concrete nil thing. When does it come up?

lenkite
1 replies
12h2m

People have raised dozens of issues on the Go tracker because they got stumped by this. You don't really need to look hard on a search engine to see thousands of questions/issues.

If you don't believe me, take a project like k8s and, starting from its inception, you can see dozens of issues where people have to explain the difference between a nil interface and a non-nil interface holding a nil value. This is explained a stupendous number of times.

Now repeat this for thousands of Go projects.

Of-course you can claim this is not an issue in the same way that anyone can claim that buffer overflows are not an issue in C/C++ for "real" programmers.

earthboundkid
0 replies
3h3m

I see people say that they're confused by it a lot, sure. But do the bugs get to production? That was my question. Buffer overflows get to production all the time! But do the concrete vs interface nils get to production, or are they caught in dev and then someone opens an issue to ask why it doesn't work? I suspect it's more the latter.

Obviously, it would be better if Go didn't have this problem. If I made Go 2, it would have blank interface be "none" and blank pointer be "null". Even better, I would add a reference type that can't be null and sum types or something like that. But these things are all relative.

In Python, people using "is" for numbers is a problem. But in practice only very junior developers do that and it mostly gets caught in dev. There was the Digg outage caused by a default argument being [] instead of None https://lethain.com/digg-v4/ and that I see as a slightly more serious bug that can slip by without good code review.

Every language has pitfalls and the question is whether they outweigh the other benefits.

FridgeSeal
0 replies
10h16m

That's by design, right? Go is very opinionated. They looked at other languages that they hated, e.g., C++/Java and didn't want to replicate them.

It’s a pity there’s no other languages in the world that they could have taken good design from, rather than looking at a handful of languages they didn’t like and basically throwing the baby out with the bath water.

didntcheck
0 replies
8h16m

Yep. Every now and again I look into contemporary Go and try to give it the benefit of the doubt, but it just looks like intentionally going back to Java 5 [1] and touting the lack of features as a feature. What I find surprising is its apparent popularity among people who also like languages like Typescript and Kotlin, which go in the opposite direction, with high expressibility

To be more controversial, its flaws remind me of COBOL, both in the flawed belief that a lack of expressive power makes your programs easier, rather than harder, to read, and in the sense that both languages seemed completely ignorant of any programming language design ideas outside of their bubble, and just stayed stuck in the past

[1] As of 2022. Prior to that it famously didn't even have generics

demizer
4 replies
15h59m

All these plan9 scientists love their own brand. I started using Go in 2012, but after they killed deps.dev I gave it up. Some years later, when I wanted to get work done at work, I tried to introduce it to my team, and another engineer spent a good amount of time looking into the language and listed all the reasons why it sucked, and he was right. The main takeaway was: yeah, it's simple, but it does silly things that make it a pain to use (error handling and unused imports, to name a few). I personally like the error handling but hated the type system.

cwbriscoe
3 replies
15h17m

Just run goimports on save and there's no issue with unused imports. I would take Go's error handling over try/catch any day of the week.

cangeroo
2 replies
6h53m

You then have to reimport, every time you comment out code.

I guess the compiler authors don't comment out code?

And it's intentionally not optional in the compiler.

So you have to modify the source and build your own compiler to disable it. It's ridiculously sadistic to its users.

scns
0 replies
5h54m

Is it possible to comment out the imports?

cwbriscoe
0 replies
3h12m

If you use goimports (which also runs gofmt) after commenting out code, you just have to save your file and it will remove any unused imports. There is no reason to go to the extreme extent of compiling your own modified version of go just for this. The tooling is already there.

hardwaregeek
3 replies
10h37m

I agree with the programming language sentiments, but in its defense, Go is very much a language for and by people who don't care about programming languages. I mean this in the nicest way. Rob Pike doesn't care that the concurrency is unsafe or that nil is the billion-dollar mistake. Neither do most users of Go. Does that make Go a good language? No. But that's beside the point. It's a convenient, good enough language combined with a compelling set of tools that make it easy to use. It's the Crocs of programming languages.

bborud
2 replies
7h49m

Well, some of us kind of care, but not enough to pay the price of ending up using a language where people care more about language design than writing programs. Go is very much a language for getting things done rather than winning beauty contests. And it has proven itself as a productive language.

I've worked with perhaps half a dozen real "language enthusiasts" in my time. People who spend lots of time obsessing over "perfect" language design and non-mainstream languages. People who never stick to a language for more than a year or three, who insist everyone else accommodate their current fascination with some language, and who leave behind codebases full of code that will be hard to maintain because they are a patchwork of languages. Not caring that the organization then has to take the time both to train people in those languages and to ensure they have enough people experienced with the language to be able to use a portion of their time to train newbies.

On a few occasions I actually researched their job history and found that these people had a tendency to make a lot of noise, but produce very little of consequence. They'd find jobs at the outskirts of projects where it would be easier to indulge in their interests without clashing with the principals. Most of their code would be gone just months after they left the position.

My advice is that if you care about building stuff, don't hire language enthusiasts.

jtasdlfj234
1 replies
4h25m

Okay, circling back here. So your position is that nil safety would reduce the value of Go's "getting things done"? As someone who uses Go in production, I don't agree at all.

bborud
0 replies
3h18m

No, I'm saying that nil safety is a lower priority for me than overall productivity. If Go had nil safety that would be very nice. But it doesn't. And it doesn't bother me all that much. Go set out to be a practical language for writing servers - not to tick all the boxes.

In the 8 or so years I've used Go for production software, it hasn't actually turned out to be a big problem compared to my experience with C and C++. If it was a huge problem it should have manifested as such. But in my experience, nil errors are extremely rare in our production code.

If this hadn't been the case, we would probably have invested in Rust. But I can't deny that when I look at those of my friends who use Rust, I'm not exactly impressed with what they accomplish. Even a couple of years in, I see them spending a lot of time fighting with the compiler, having to backtrack and re-think what they are doing or, sometimes, having to rewrite/replace code (sometimes third party) that isn't as strict as they wish to be.

Is it worth the extra effort?

That being said, sure, I think Go would be less of a "getting things done" language if the maintainers had felt an obligation to add every checklist item to the language from the start. As someone else pointed out, the fact that they had the guts to say no to a bunch of things was a really good thing. Just throwing all your favorite ingredients in the pot and stirring it doesn't guarantee you'll end up with a delicious dinner. There's a lot more to it than that.

wg0
2 replies
8h28m

Not a rhetorical question. Genuine question. I don't know so I'm asking question.

nil/null is really problematic, true. But how do other languages handle this? Is it that the program must be statically analyzed to ensure that no nil/null path exists, or are there other solutions as well?

Would be thankful for any pointers.

tubthumper8
1 replies
6h47m

The core problem with null/nil in Go (and Java) is that it is not modeled in the type system. In Java, any reference (which is most types) can be secretly null which is not tracked by the compiler. Go one-ups this and has the same concept for pointers but also introduces a second kind of nil (so nil doesn't always equal nil [0]).

All approaches come down to modeling it in the type system instead of it being invisible.

One approach is modeling it as a sum type [1]. A sum type is a type that models when it's one thing OR another thing. Like it's "a OR b". "null OR not null". So a lot of languages have a type called `Maybe<T>` which models it's a "T OR Nothing". Different languages use different names (like `Option` [2]) but it's the same idea. The key is that null is now modeled using normal / non-special / user-defined types so that it's no longer invisible.
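
As a rough approximation of that idea in Go (not how Go itself models nil; assumes Go 1.18+ generics and a hypothetical Option type):

  package main

  import "fmt"

  // Option models "a T, or nothing", making absence explicit in the type.
  type Option[T any] struct {
    value T
    ok    bool
  }

  func Some[T any](v T) Option[T] { return Option[T]{value: v, ok: true} }

  func None[T any]() Option[T] { return Option[T]{} }

  // Get forces the caller to check whether a value is actually present.
  func (o Option[T]) Get() (T, bool) { return o.value, o.ok }

  func main() {
    if v, ok := Some(42).Get(); ok {
      fmt.Println(v) // 42
    }
  }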

Another approach is using so-called "nullable types" and flow typing [3]. For example, `Foo` is a normal type and `Foo?` is a nullable version of `Foo`. You're not allowed to pass a `Foo?` to a function that requires a `Foo` (the other way is fine). When doing a null check, the compiler narrows a `Foo?` to a `Foo` inside the non-null branch of the check. This is one capability of a more general compiler/language technique sometimes referred to as "narrowing" [4] or "static flow analysis" [5]

[0] https://yourbasic.org/golang/gotcha-why-nil-error-not-equal-...

[1] https://en.m.wikipedia.org/wiki/Tagged_union

[2] https://doc.rust-lang.org/std/option/index.html

[3] https://kotlinlang.org/docs/null-safety.html

[4] https://www.typescriptlang.org/docs/handbook/2/narrowing.htm...

[5] https://learn.microsoft.com/en-us/dotnet/csharp/nullable-ref...

wg0
0 replies
2h37m

Great and articulate explanation! I learned something today. Thank you!

qaq
0 replies
9h50m

I am pretty sure Go is not going anywhere. Pretty much anyone can read Go and understand what is going on which is def. not true for Rust. It's very possible that if Mojo pans out it might be the "mass market" lang. that brings a lot of the Rust goodness to the avg. dev.

jonathanstrange
0 replies
9h3m

I've learned Rust but Rust's user community makes it impossible for me to like the language. This hasn't changed over the years, if at all it has gotten worse. If I need high integrity and safety, I'll use Ada or even Ada/Spark with formal verification. For anything else, Go leads to much more productivity than Rust.

In my opinion, Rust is a prime example of overengineering (just like C++).

SPBS
0 replies
12h3m

First, what's good and bad in a programming language is largely a matter of opinion rather than fact, despite the certainty with which many people argue about even the most trivial features of Go or any other language.

Also, there has already been plenty of discussion about things such as where the newlines go, how nil works, using upper case for export, garbage collection, error handling, and so on. There are certainly things to say there, but little that hasn't already been said.

AnimalMuppet
0 replies
15h1m

I think you're misreading the article. Pike doesn't say that the language is perfect at all. He says that they did better on the community aspects, but they admit the flaws in the language.

leetrout
19 replies
20h36m

I picked up Go in 2012 as a Python dev in need of doing some bit twiddling over the wire for Modbus. I never shipped that code but it blew my mind how easy it was to just twiddle those bits and bytes and it just worked.

A decade later and a couple almost full time Go jobs under my belt and it still surprises me how well most things Just Work™.

I love the Go language and I love the Go community.

I appreciate what Rob, Ian, Russ and the others do for Go and I appreciate that this talk / blog is honest about the "bumps in the road" working with the community. There's not much point in beating a dead horse around this but having lived through it I find it very hard to believe they didn't know exactly how they were behaving, especially in regards to the package management debacle. Nevertheless, the blog is also correct that we have landed at a very good solution (Drew's legitimate complaints aside).

Here's to another 10 years of Go and the inspired / similar languages (Zig, Deno, etc) and hoping we continue to grow as a healthy community.

biomcgary
15 replies
19h51m

My favorite thing about the core Go team is their willingness to say "No" (to all sorts of stuff) and "Wait for the right implementation" (for generics).

I'm a computational biologist rather than a programmer, so my use of Go waxes and wanes, but when I come back to Go, my code compiles and the language works the way I expect.

That being said, I do appreciate Rob Pike's willingness to admit mistakes on the learning curve on community engagement without capitulating on adding all the shiny objects.

zozbot234
12 replies
19h48m

Wrt. generics they followed in Java's footsteps: they asked the PLT community to come up with a reasonably elegant model that would mesh well with the rest of the language, and then largely stuck to that.

erik_seaberg
10 replies
19h22m

It took years to root out and torch flawed APIs from the JVM ecosystem. After that example, it’s hard to defend neglecting the problem again and launching with no solution.

philosopher1234
7 replies
17h40m

The idea that it would have been better if Go had waited an additional year or two to launch with generics is laughable. That extra year probably made the difference for the language's success.

erik_seaberg
6 replies
16h52m

If you mean that another language with more planning might have caught on, I think that would have been a better outcome.

philosopher1234
5 replies
16h22m

Why would a better planned language win and not one that rushed and beat go to the finish?

valenterry
4 replies
13h1m

That's not what he said. He said IF that HAD happened, it would have been better for everyone except the Go designers.

philosopher1234
3 replies
12h31m

My point is that if you delayed the release of Go, there’s no reason to believe you end up with a better world. So his argument is bunk.

valenterry
2 replies
12h23m

Why not? I think his reasoning makes sense. Unless you mean that having a language with better design (but being released later) is not an improvement.

philosopher1234
1 replies
10h41m

My argument is that timing is very important to adoption. Unix is far from the best OS ever designed, but it was the first free one. If Go had been delayed, something else may have filled the slot, and there's no reason to believe it would have been a better something else. I.e., if Go had released 2 years later, but with generics, it'd have been too late and no one would have cared.

bborud
0 replies
8h13m

I'm not sure the creators of Unix would have agreed. Unix was a step in a very different direction at the time. It was a reaction to baroque operating systems that did a lot of stuff and were quite complex. The key to Unix is simplicity and, if you shave it down, that it was really a system interface definition - which enabled other people to create Unixen by offering the same system call interface with the same semantics.

It was neither free, nor do I think the timing played much of a role since there wasn't any comparable OS being made at the time. The important bit of Unix was a set of key ideas.

zdragnar
1 replies
18h46m

No language launches perfectly.

If the cost of adding generics later is "existing code still works, and now this can work too" versus "the old version of generics is flawed, burn everything down and use this version instead" or "generics didn't work out the way we wanted, use this new thing that's totally-not-generics-wink-wink"...

Well, I'd say waiting is the right call. You're already used to adding three extra lines of boilerplate after every single line of code for error handling, living without generics wasn't that hard.

Hell, I recall half the go community was convinced they weren't needed at all by the time they came around.

masklinn
0 replies
8h8m

No language launches perfectly.

There's a gulf between "not launching perfectly" and launching with obvious, blatant deficiencies.

Go did the latter. Generics were always going to have to be implemented eventually, at least two languages in basically the space Go was targeting had had to bite that bullet in the decade before it was published. Support for third-party external dependencies only made that more dire.

Instead of doing the work up-front and launching with generics, they decided to launch with ad-hoc generic builtins and "lalala can't hear you" for a decade, then finally implement half-assed generics.

thayne
0 replies
13h11m

I think that generics is really difficult to do well when added to a language later. At the very least you will have a lot of pre-existing code, including the standard library, that would have benefitted from generics, but doesn't because it was written before generics existed. And you will almost certainly have cases where generics don't mesh well with other features. I think that if go was designed from the beginning with generics, then generics probably would have worked better in go (and similarly for java).

za3faran
1 replies
16h56m

"Wait for the right implementation" (for generics).

Ironically, they still don't have it right. Last time I checked you couldn't have generics on "methods".

randomdata
0 replies
15h21m

There shouldn't be anything in the chosen implementation that prevents generics on methods. The work just hasn't been done yet. There are only so many hours in the day. Feel free to jump in if you have some to spare.

A wrong implementation would hamstring making such improvements in the future. It is possible that will still happen in some unforeseen way anyway, but the earlier proposals visibly exhibited problems from the get-go. They were clearly wrong. So far, this one does look right.
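
For anyone wondering what the limitation looks like in practice, a minimal sketch with a hypothetical Set type: methods can't declare their own type parameters today, so the usual workaround is a top-level generic function.

  package main

  import "fmt"

  type Set[T comparable] struct{ m map[T]struct{} }

  // func (s Set[T]) Map[U any](f func(T) U) []U { ... } // not allowed today

  // Map is the usual workaround: a free function instead of a method.
  func Map[T comparable, U any](s Set[T], f func(T) U) []U {
    out := make([]U, 0, len(s.m))
    for v := range s.m {
      out = append(out, f(v))
    }
    return out
  }

  func main() {
    s := Set[int]{m: map[int]struct{}{1: {}, 2: {}}}
    fmt.Println(Map(s, func(v int) string { return fmt.Sprint(v * 10) }))
  }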

zozbot234
1 replies
19h53m

Zig and Deno are not very comparable to Golang I think. Elixir and ReasonML would be more like it. From Google themselves there's also Dart.

antod
0 replies
18h13m

Being that Deno is tooling rather than a language, I think it is safe to say it is inspired by / comparable to Go tooling. To me it feels like writing JS/TS with Go tooling.

theshrike79
0 replies
10h8m

A Python convert here too.

Currently I've been using different AI tools (Bard, GPT-4) to just straight-up convert my old python utilities to Go.

There are a few that worked right out of the box, for a few I've had to adjust stuff mostly because of the AI model's information about APIs being a bit out of date.

But the fact that I can just scp a program to a server and it Just Works is amazing compared to the "what's the current venv system du jour" dance I had to do with Python every time.

softirq
18 replies
19h29m

The fact that he doesn't mention context as a huge failing of Go is very suspect...

I also find the post a little too self-congratulatory for what was essentially a reinvention of C with a GC at the right time, and not just C the language, but C's philosophy on programming.

I think as Go has become more popular, the core of C has been drowned out by people coming from other languages who insist on too many libraries, too many abstractions, generic solutions at every level, and more features. Go today has essentially moved much closer to Java, and some projects like Kubernetes are without a doubt just Java projects with a slightly different syntax.

Concurrency and interfaces I think are also a big fail in Go.

Interfaces, because they failed to add enough of them to the standard library for simple things like logging, filesystem access, etc., causing numerous incompatible implementations. That wouldn't have happened if they hadn't been so gung-ho on interfaces being defined where they are used instead of having community interfaces.

Concurrency is harder to summarize, but day to day you still get locking issues, libraries that don't expose interfaces that are easy to work with via coroutines (which is ironic given rob's finger pointing at async/await coloring), and as I said context is really a prime example of why CSP IS a bad model compared to mailboxes and the Erlang model of concurrency. Every function has to take an extra noisy argument, every function has to wait for a ctx cancellation, instead of just baking the semantics of cancellation into the language itself.
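
A minimal sketch of the context-threading pattern being criticized, using only the standard library: every function in the call chain has to accept and forward ctx for cancellation to work.

  package main

  import (
    "context"
    "fmt"
    "io"
    "net/http"
    "time"
  )

  // fetch must accept and forward ctx; so must everything that calls it.
  func fetch(ctx context.Context, url string) ([]byte, error) {
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    if err != nil {
      return nil, err
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
      return nil, err
    }
    defer resp.Body.Close()
    return io.ReadAll(resp.Body)
  }

  func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()
    body, err := fetch(ctx, "https://example.com")
    if err != nil {
      fmt.Println("error:", err)
      return
    }
    fmt.Println(len(body), "bytes")
  }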

OkayPhysicist
11 replies
17h58m

The fact that Erlang has existed since the 80's really begs the question "Why do language designers keep fucking up concurrency?". It's a feature that's really painful to tack onto to a language after the fact (looking at you, Python and JavaScript), but is absolutely necessary for any programming language.

I see goroutines as a solid "next best thing" after BEAM Processes, both of which are miles ahead of async/await, which is admittedly an improvement over any lower-level thread manipulation.

techdragon
4 replies
17h30m

Because Erlang is “weird” and not enough programmers learn it for it to have breached the cultural ramparts regarding what concurrency is in a programming language. I learned Erlang and Elixir and I’ve never been happy with any other languages’ concurrency mechanisms and primitives since then. Between multitasking features that let you avoid concurrent tasks causing CPU starvation, message passing allowing proper decoupling of concurrent tasks in both asynchronous and synchronous ways, and all the other little ways it does concurrent programming right… nothing holds a candle to it. There are some nice libraries in other languages but it’s not the same because it’s not built into the language at the same level.

Async/Await, asynchronous IO, none of this is really the same kind of “concurrency”… it’s why I wish I had more chance to use the Erlang/Elixir+Rust combo… low level safety and high level concurrency are a match made in heaven.

zozbot234
3 replies
17h23m

Rust is getting async-in-traits in its newest release, which can also be seen as implementing this "message passing" model in an async context. It's super impressive how the language keeps evolving so fast and improving its relevance over time.

OkayPhysicist
2 replies
17h17m

A huge amount of the value-add of BEAM languages is the fact that support for message-passing concurrency is built in at a very fundamental level, which means that the "pretty path", the comfortable way to write code in those languages, simply supports the concurrency model.

You cannot achieve this by tacking on half-baked support onto an already designed language. You can tack on Async/await, which is exactly why it's so popular.

aragilar
1 replies
12h27m

What's the BEAM languages' FFI situation like? I've got the impression that the more higher-level concepts a language supports, the worse it is to do FFI.

masklinn
0 replies
2h43m

Complicated. There are essentially 5 possibilities:

- ports are subprocesses acting as erlang processes

- erl_interface is a more efficient version of the same as the communication uses BEAM's external term format

- C nodes are what it sounds like, basically a separate program acting as a node in a beam cluster

- port drivers have a shared library acting as a process, it's way more efficient but jettisons safety as a crash in the library kills the runtime

- finally NIFs are synchronous functions called in the context of an existing process. They are the fastest option, but a crash not only kills the emulator, it can also corrupt VM state, and they can't be managed by the scheduler without their cooperation (which can severely degrade system stability). BEAM has a concept of "dirty NIFs", which are long-running, can't be suspended, and can't be threaded; if those are appropriately flagged they are run on dedicated "dirty schedulers", which have more overhead but fewer restrictions. In essence cgo is a more impactful version of that (although I believe dirty schedulers were invented a while after cgo existed; previously "dirty NIFs" would just see you drawn and quartered)

sesm
3 replies
16h37m

With async/await you can describe your logic with loops and conditions intermixed with async IO. In Erlang you’ll have to design a state machine, split the logic between event handlers and redesign the state machine each time the logic changes.

zozbot234
2 replies
16h27m

Async-await compiles down to a state machine anyway. It's the same thing.

metaltyphoon
0 replies
13h24m

Yes but you’re not the one doing it, the compiler is.

WorldMaker
0 replies
15h55m

But that's also the gorgeous benefit. Of course it is state machines all the way down, this is Turing Completeness at its finest where various forms of Universal Turing Machines are plentiful. The beauty is that compilers are great at taking what look like iterative step-by-step, higher level descriptions and rewriting (monadically transforming, to use the fun words for it) that to state machines for us, taking most of the mental overhead out of the process of building a state machine tiny, dumb lego by tiny, dumb lego.

(That's partly why I find all the discussions about "async/await" "colors" silly: they aren't colors, they are types. You build types for other state machines, right? You don't complain about Regular Expressions [which are also often rewritten to simpler state machines] having their own types and that they can only embed other Regular Expressions and that consequent state complexity as having "colors", do you? About the only real difference between async/await and Regular Expressions is that the types for async/await implementations are all guaranteed to be monads in the async/await languages and might not be for Regular Expressions. Though monad Regular Expressions libraries exist.)

za3faran
0 replies
17h8m

Fortunately, Java has virtual threads now, with structured concurrency in the works, making it superior to golang's approach.

whateveracct
0 replies
17h3m

ML is from the 70's and Go doesn't have sum types with exhaustive matching. It has iota which is sugar for .. number constants lol.

cangeroo
3 replies
18h43m

Interestingly, several posts have already been flagged for having a similar perspective.

I'd love to know why we're not allowed to discuss and elaborate on "what we got wrong" in fairly neutral technical terms.

tptacek
1 replies
17h41m

It's language-war drama, which is deeply disfavored on HN, because it's boring and spreads like kudzu.

cangeroo
0 replies
11h50m

Fair enough.

And the world would be a better place with a positive-bias.

Except that this article purports to be self-critical, so arguably the very topic of this thread should cover both the positive and negative of the language, and our comments are merely expanding on it.

But thanks for the explanation, I appreciate it.

sesm
0 replies
16h33m

Slightly off-topic, but the most honest programming language book that describes “what we got wrong” is “Effective Java” by Josh Bloch. It also used to be the best book to learn Java before 1.7 (maybe still the best today, but I’m not following Java anymore).

voidhorse
0 replies
16h34m

It really is just C with a GC and slightly better support for generic programming, which is why I like it quite a lot as a default language for writing basic programs.

I like Go, but I didn't like the post for somewhat similar reasons. I felt it pats itself on the back for several language design wheel reinventions that are basically lesser versions of counterparts in other languages. For example, interfaces are touted as some brilliant solution for polymorphism as though Haskell wasn't already doing the "sets of methods" (but with type arguments from the start) idea via type classes as early as 1988... sure, interfaces are a bit different since they are implicitly implemented, but the basic idea is the same. The whole Erlang being a good prior in the concurrency space is another example.

In general it left a bad taste in my mouth since it sort of felt like the language was designed in ignorance of the vast field of ideas already present in programming language theory. It really comes off as though the team thought that Java, C, and C++ were the only languages that existed before Go. Wadler's (eventual) involvement suggests otherwise (but then again, maybe they only knew about his work in relation to Java), and I realize there are constraints to giving live talks that force one to trim things down, but I really dislike this tendency to think in a vacuum, celebrate your own (re)discoveries as brilliance, and then present it all with hardly a mention of the large body of research that went before you, and that you should have consulted during your creative process. Given the access we have to research today, there's little excuse for it. I doubt that Rob Pike actually falls into this camp or lacks this knowledge, but the talk (in essay form) really makes the team of language designers seem like a team of people that knew little about language design and happened to stumble onto analogues of good ideas that already existed in more mature form.

I will say, though, that I like that the writing of a spec is touted as one of their best decisions, as I really wish more novel languages would bother to define specs these days.

the_gipsy
0 replies
18h25m

I also find the post a little too self-congratulatory

Yes. I mean, the language is a huge success if you measure how far adoption has come. But I expect more from a "what we got right and what we got wrong" post.

IMHO what went right is compiler/tooling speed. The async story ultimately is just crappy, the "no colored functions" is a lie that blows up in production. Go's interfaces also sound great in theory but don't really work so great when using them. They're mostly used for DI in UTs. Sometimes I wonder if I'd use any interfaces at all if there was a way to monkeypatch deps just for UTs (yes I would but 99% less).

google234123
16 replies
20h36m

The value it’s brought to Google has definitely not been worth the cost. It did not really replace other languages. If you join Google now as a new engineer, you will likely be writing C++, Java, or maybe a web language

paulddraper
4 replies
20h15m

You won't be writing Go?

rrdharan
3 replies
19h43m

You might be, but as the parent said, it’s not likely. I think that’s a fair statement, out of 100K engineers or whatever it is, I’d estimate less than 10K of them are writing primarily golang code.

iainmerrick
2 replies
18h34m

What do you base that estimate on?

rrdharan
0 replies
15h0m

I didn’t pull any stats but I’ve been working at Google for most of the last 10 years (in two stints).

And actually I think even 10K is a very generous upper bound, really if I were betting money I’d peg it at like maybe 2000 engineers that use golang as their primary language and a big chunk of those are SWE-SRE?

There's just _way_ more Java and C++ than there is golang…

bananapub
0 replies
13h31m

on the fact that most of the code in prod isn't Go. anyone working there can look at the CL statistics dashboard to see what's being written, and a different dashboard to see the distribution of binary sources.

Go is very popular in some niches (a semi-mandate in SRE produced a bunch of it) and not used at all elsewhere.

scythe
3 replies
19h42m

A friend of mine who works for Google noted that they hardly use Go internally, and quipped, "Google invented Go to sabotage its competitors."

erik_seaberg
1 replies
16h54m

That used to be the joke about open sourcing GCL and borgmon, I guess some of which ended up in k8s. Good luck out there!

bananapub
0 replies
13h33m

or as actually happened, jsonnet and prometheus!

I still wonder how many potential competitors the mapreduce and bigtable papers eliminated. perhaps it doesn't matter since the people that blindly adopted such things without enough thought might have got mired in some other nonsense anyway.

mrkeen
0 replies
10h20m

I thought the same thing about Facebook and "move fast and break things" !

leetrout
1 replies
20h31m

Can you say more about this?

Was Go inspired by / derived from the use of Sawzall, and it just ended up serving its intended uses in other parts of the company?

iainmerrick
0 replies
18h37m

Go was designed specifically as a better C++ (that is, a better successor to C). Its use as a Sawzall replacement was a lucky accident, I think -- that just happened to be the first niche where it took hold.

Sawzall was not a widely-liked language, so people had already been trying to replace it with Python, but Go had better performance and stronger typing so it was a better fit. Go really found its footing as a better Python (for some use cases) rather than a better C++.

dummyvariable
1 replies
19h58m

This is incorrect; I have been at Google for more than 5 years and the work has been in Go. Most of the teams around me use Go as well. But this is not to say that there is less Java/C++ use.

rrdharan
0 replies
19h39m

Your experience doesn’t invalidate the likeliness part of the parent comment.

You just happen to be in the subset / cluster / areas that write in golang. Probabilistically speaking across the entire Google engineering population both you and the teams around you are outliers. That doesn’t mean golang is insignificant.

As to whether golang was worth it I disagree with the parent, I think it was probably worth the resources invested, even if e.g. usage has leveled off internally, but at any rate this is a hard thing to measure so we’re all just opining.

yegle
0 replies
18h41m

I think this is easy to verify w/o resort to anecdotes. There's an (easy to find) internal dashboard with language metrics, showing the number of CLs (changelists) breaking down by languages and the team.

nadermx
0 replies
20h28m

That really depends on how beneficial the things they use golang for are. Hard to quantify if it allowed for large benefit in say YouTube or such

Thaxll
0 replies
20h15m

How do you know about the cost? Do you have numbers? It's the root of many core and used services.

VirusNewbie
13 replies
20h26m

I’m surprised they didn’t mention the horrible error handling as something they regret…

Thaxll
6 replies
20h20m

It's not horrible, it's ok and a bit repetitive. Really the whole thing about error handling in Go is overblown.

righthand
3 replies
20h14m

It’s a bit repetitive but so is writing `for i, v := range n {}` everywhere. Code loops and repeats syntax all the time, so what? Argument doesn’t hold water for me (not attacking).

lmm
2 replies
17h36m

Repetitive code is a language flaw IMO. There are certainly more concise alternatives to the common case of that loop construct in many languages.

righthand
0 replies
16h44m

Then people should use those languages. From the beginning Go has been about simplicity and utility, not about providing multiple patterns and alternate syntax.

I would also argue that repetitive code is a failure of the engineer who implemented it. If writing code is repetitive then people may want to look into generating it instead.

aatd86
0 replies
13h35m

It's an interesting opinion. I tend to believe there can be value in repetitive code patterns for recognition.

It can make a language easier to read and understand although a bit more tedious to write.

It's especially interesting nowadays when LLM models can write parts of it. It alleviates the writing part.

eweise
1 replies
19h57m

Because it's repetitive it leads to incorrect code. I've seen more swallowed errors in Go than in other languages.

13415
0 replies
19h39m

Honestly, I've seen way more problems with global exception handlers, which often lead to incomprehensible end user messages.

righthand
5 replies
20h23m

Because it’s neither horrible nor regrettable, it just doesn’t cater to your perfect idea of what it should be. It’s a smart way to encourage people to handle errors.

bananapub
3 replies
13h36m

you think it's "smart" to encourage a large fraction of function calls to have

  if err != nil {
    return nil, err
  }
after it? really? you don't think it's just a very simple way to do it without having to do a lot of work in the language? and then make everyone use a linter to avoid bugs? oh and you can't always literally do 'return nil, err' because strings have to be ""?

it's certainly defensible as a minimalist approach, claiming it's actually good or smart or ideal is a pretty weird take.
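
Concretely, the shape in question once the return type isn't a pointer; a minimal sketch with a hypothetical function, returning a zero value plus a wrapped error:

  package main

  import (
    "fmt"
    "os"
    "strings"
  )

  // readName returns the zero value for string ("") alongside a wrapped error.
  func readName(path string) (string, error) {
    b, err := os.ReadFile(path)
    if err != nil {
      return "", fmt.Errorf("read name: %w", err)
    }
    return strings.TrimSpace(string(b)), nil
  }

  func main() {
    name, err := readName("/no/such/file")
    if err != nil {
      fmt.Println(err)
      return
    }
    fmt.Println(name)
  }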

righthand
2 replies
11h14m

You think it’s “a lot of work” to write an if statement? The proposed variations have all been shown to be either just different syntax (a weak, valueless change) or to allow people to ignore handling errors.

You can’t always return nil? Is that a complaint that you are required to do things differently for data types that require it, inside of a complaint about doing things the same way?

It’s not a weird take, things have tradeoffs, this is a tradeoff of using Go.

unscaled
1 replies
10h40m

It's not a lot of work; you can just let the IDE spew it out for you and feel like a certified J2EE programmer circa 2003 who lets the IDE write everything for them. But yeah, writing the code is not the problem. Reading the code is.

I think having 60% of your lines of code being:

  if err != nil {
    return err
  }
does make the code a tad bit less readable.

righthand
0 replies
10h33m

So then it’s less readable because you have to process a simple error return? I won’t even comment on 60% of the code being error handling; that sounds like code that needs refactoring.

unscaled
0 replies
10h45m

More precisely because "they" are Rob Pike, and Rob Pike doesn't seem to think error handling in Go is horrible.

Other members of the Go Team (Robert Griesemer, Russ Cox) trialed out several solutions for this problem, but this issue was just too controversial within the community.

tptacek
9 replies
19h47m

Interesting bit here about the decision to use Ken Thompson's C compiler rather than LLVM, something that people grumbled about and that resulted in (especially in earlier versions) less optimal generated code. The flip side of that decision is that they were able to do segmented stacks quickly; they might not have done them at all if they'd had to implement them in LLVM and fit the LLVM ABI.

(He cites this as an example of the benefit of that decision, not the only benefit).

pcwalton
7 replies
19h31m

That part of the interview is incorrect about LLVM. I implemented segmented stacks via LLVM for Rust. It's actually pretty easy, because there is support for them in X86FrameLowering already (and it was there at the time of Go's release too). If you enable stack segmentation, then LLVM emits checks in the function prolog to call into __morestack to allocate more stack as needed. (The Windows MSVC ABI needs very similar code in order to support _chkstk, which is a requirement on that platform, so the __morestack support goes naturally together with it.)

Actually, getting GDB to understand segmented stacks was harder than any part of the compiler implementation. That's independent of the backend.

What I think the author might be confusing it with is relocatable stacks. That was hard to implement at the time, because it requires precise GC, though Azul has implemented it now in LLVM. Back then, the easiest way to implement precise GC would have been to spill all registers across function calls, which requires some more implementation effort, though not an inordinate amount. (Note that I think the Plan 9 compiler does this anyway, so that wouldn't be a performance regression over 6g/8g.) In any case, Azul's GC support now has the proper implementation which allows roots to be stored in registers.

akira2501
6 replies
18h22m

is incorrect about LLVM.

He didn't say it was not possible, but that it would have required too much effort in modifying the ABI and the garbage collector.

because it requires precise GC

Which is why they avoided LLVM for a much smaller and easier to manipulate existing compiler. Their point is that it would have slowed things down too much to even try it inside someone else's project. Sometimes "roll your own" is the best idea.

pcwalton
5 replies
18h10m

He didn't say it was not possible, but that it would have required too much effort in modifying the ABI and the garbage collector.

__morestack doesn't really have an ABI. It's just a call to an address emitted in the function prolog. LLVM and 6g/8g emit calls to it the exact same way. I suppose you could consider the stack limit part of the ABI, but it's trivial: it's just [gs:0x18] or something like that (also it is trivial to change in LLVM).

The garbage collector is irrelevant here as the GC only needs to be able to unwind the stack and find metadata. Only the runtime implementation of __morestack has any effect on this; the compiler isn't involved at all.

Which is why they avoided LLVM for a much smaller and easier to manipulate existing compiler. Their point is that it would have slowed things down too much to even try it inside someone else's project.

I was suggesting that Rob Pike possibly confused segmented stacks with relocatable stacks. Segmented stacks have only minimal interaction with the GC, while relocatable stacks have major interaction.

Assuming good faith, either (1) Rob misremembered the problem being relocatable stacks instead of segmented stacks; or (2) the Go team didn't realize that LLVM had segmented stack support, so this part of the reasoning was mistaken. (Not saying there weren't other reasons; I'm only talking about this specific one.)

akira2501
4 replies
18h1m

Segmented stacks have only minimal interaction with the GC, while relocatable stacks have major interaction.

Okay.. this is where I'm losing your argument. Can you quantify the difference here between 'minimal' and 'major' from the 2012 perspective this was framed in?

pcwalton
3 replies
17h35m

The GC needs to unwind the stack to find roots. The only difference between segmented stacks and contiguous stacks as far as unwinding the stack is concerned is that in segmented stacks the stack pointer isn't monotonically increasing as you go up. This is usually not a problem. (The only time I've seen it be a problem is in GDB, where some versions have a "stack corruption check" that ensures that the stack pointer is monotonically increasing and assumes the stack has been corrupted if it isn't. To make such versions of GDB compatible with segmented stacks, you just need to remove that check.)

Relocatable stacks are a different beast. With relocatable stacks, pointers into the stack move when the stack resizes. This means that you must be able to find those pointers, which may be anywhere in the stack or the heap, and update them. The garbage collector already knows how to do that--walking the heap is its job, after all--so typically resizable stacks are implemented by having the stack resizing machinery call into the GC to perform the pointer updates.

Note that, as an alternative implementation of relocatable stacks, you can simply forbid pointers into the stack. This means that your GC doesn't need to be moving. I believe that's what Go does, as, as far as I'm aware, Go doesn't have moving GC (though I'm not up to date and very much could be wrong here). This doesn't help Pike's argument, though, because in that scenario the impact of relocatable stacks on the GC is much less.

As an aside, in my view the legitimate reasons to not use LLVM would have been (1) compile times and (2) that precise GC, which Go didn't ship with but which was added not too long thereafter, was hard in LLVM at the time due to SelectionDAG and lower layers losing the difference between integer and pointer.

zozbot234
1 replies
17h14m

SelectionDAG and lower layers losing the difference between integer and pointer

Doesn't pointer-provenance support address this point nowadays? AIUI, a barebones version of what amounts to provenance ("pointer safety" IIRC) was even included in the C++ standard as a gesture towards GC support. It's been removed from the upcoming version of the standard, having become redundant.

pcwalton
0 replies
17h5m

I think CHERI addresses the issue, but I don't know how much of that is in upstream LLVM. Pointer provenance as used for optimization mostly affects the IR-level optimizations, not CodeGen ones.

In any case, Azul's work addresses the GC metadata problem nowadays.

typical182
0 replies
14h6m

you can simply forbid pointers into the stack. This means that your GC doesn't need to be moving. I believe that's what Go does

I might have misunderstood your comment, but FWIW, Go does allow pointers into the stack from the stack.

When resizing/moving/copying a stack, the Go runtime does indeed find those pointers (via a stack map) and adjust them to point to the new stack locations. For example:

https://github.com/golang/go/blob/b25f5558c69140deb652337afa...

(The growable stacks I think replaced the segmented stacks circa Go 1.3 or so; I can't speak to whether they were contemplating growable stacks in the early days whilst considering whether to start their project with the Plan 9 toolchain, LLVM, or GCC, but to your broader point, they were likely considering multiple factors, including how quickly they could adapt the Plan 9 toolchain).

loeg
0 replies
19h27m

They eventually moved away from segmented stacks, right? In Go 1.3, released 2014. (Due to the "hot spot" issue.[1]) So while the ability to experiment was valuable, this specific example is not, like, perfect.

[1]: https://go.dev/doc/go1.3#:~:text=Go%201.3%20has%20changed%20...

gregwebs
8 replies
15h48m

This seems like a more personal account of the ACM article they published [1]. In both they recognize that they didn't make a great new programming language in terms of a language specification but instead did a great job building up all the things around programming languages that may end up being even more important.

In the submitted article they talk about inventing an approach to using interfaces and also an approach to concurrency. Goroutines are identical to Haskell threads and interfaces are very similar to Haskell typeclasses (now that they support generic arguments). Haskell's preceded Go's; it's interesting to see procedural programmers independently discover the power of ideas from functional programming.

Go's one language innovation is to not require an interface implementation to declare the interface it implements. This is awful from a safety perspective but in practice it causes few issues and gets rid of awful circular dependency issues experienced in Haskell and now Rust.

[1] https://cacm.acm.org/magazines/2022/5/260357-the-go-programm...

hota_mazi
3 replies
13h27m

Go's one language innovation is to not require an interface implementation to declare the interface it implements.

Uh? How is that innovation? 100% of mainstream languages that I can think of that predate Go do this.

Can you name one programming language that we should care about which, once you define an interface, FORCES YOU to provide an implementation of said interface?

lmm
1 replies
11h40m

The point is that in most languages something doesn't implement an interface unless it declares that it does so; in Java or C# if you don't explicitly write "extends Writer" then your type doesn't implement Writer, even if you implemented all the methods of Writer. Whereas Go offers something similar to e.g. Python's behaviour where things are "duck typed": you don't have to explicitly reference a particular interface, you just implement the right methods. Of course in (traditional) Python that works because the language doesn't have real ("static") types at all. Having "static duck typing" is pretty rare - TypeScript now has it (and Python itself sort of has it), but when Go did it it was something that was pretty much new for mainstream languages.

(IMO it's a misfeature; having explicit interfaces communicates intent and allows you to do things like AutoCloseable vs Closeable in Java - but that's a matter of judgement)
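
A minimal sketch of that implicit ("structural") satisfaction in Go, with hypothetical types:

  package main

  import "fmt"

  // Writer is satisfied by any type with a matching Write method;
  // the implementing type never names the interface.
  type Writer interface {
    Write(p []byte) (int, error)
  }

  type ConsoleLogger struct{}

  func (ConsoleLogger) Write(p []byte) (int, error) {
    fmt.Print(string(p))
    return len(p), nil
  }

  func main() {
    var w Writer = ConsoleLogger{} // accepted: method set matches, no "implements" clause
    w.Write([]byte("hello\n"))
  }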

masklinn
0 replies
5h21m

The point is that in most languages something doesn't implement an interface unless it declares that it does

But that's not a requirement. Haskell did nominative "interfaces" (typeclasses) and post-creation conformance 20 years before Go happened.

mb7733
0 replies
12h12m

I think you misread the sentence you quoted.

munificent
2 replies
14h23m

I think you're understating how important the fact that interfaces are structurally typed is to the overall effect of the feature on the language and—even more—its idioms and ecosystem.

Go would be a deeply different language if types had to declare the interfaces they implement ahead of time. It's one of Go's main distinguishing features (or at least was until TypeScript came out and also had structurally typed inferfaces, for different reasons).

masklinn
1 replies
8h3m

Go would be a deeply different language if types had to declare the interfaces they implement ahead of time.

GP literally mentions haskell which uses nominative typing and does not require types declaring the interfaces they implement (typeclasses they instantiate) ahead of time.

If anything, Go's solution is worse because you have to conform the interface you declare to whatever already exists on the type. Type classes make the "type class interface" independent from the underlying type. And then it turns out Go's structural interfaces also play hell with generics, leading to the current... thing.

munificent
0 replies
55m

My understanding is that in Haskell, all instances of type classes are explicitly declared. They don't have to be declared at the type declaration, but they must be explicitly declared. Unlike Rust, Haskell does allow orphan instances, so you can approximate some of the flexibility of structural typing, but it's still not structural like interfaces are in Go.

That's a significant difference in the design space. And, in particular, it makes generics harder. With explicit instance declarations, you have a natural place to define the type parameters of the instance itself and then specify how those map to the type arguments of the type class being implemented. With implicit interface implementations, there's no place for association to be authored.

I'm not saying Go's solution is better or worse, just that it's not them half-assed cribbing type classes. It's a deliberately designed different feature.

(My actual opinion is that I think interfaces in Go are more intuitive for users and easier to learn and use, at the expense of losing some expressiveness for more complex cases. Whether that's the right trade-off depends a lot on the kinds of programs and users the language is targeting. In the case of Go, I think that trade-off makes sense for their goals.)

I get so tired of functional programmers claiming to have invented everything first and assuming that what other languages do are just failed imitations instead of deliberate differences with important subleties they are overlooking. In particular, in these kinds of discussions, the "Haskell/Lisp/Smalltalk did it first" folks rarely take into account trade-offs, usability, or context when evaluating features.

shakow
0 replies
11h5m

Go's one language innovation

Structural typing is already more than 25 y.o., and was already used e.g. in OCaml and Scala.

kitsune_
7 replies
19h55m

I know I sound salty here, but 10 years ago I got ridiculed on go-nuts, with dismissive comments from Rob Pike, because I dared to suggest that the way go get and module imports over the wire 1. worked, 2. were advertised in all their docs for beginners, and 3. how they were subsequently used throughout the community was ultimately harmful / shortsighted.

treyd
2 replies
19h51m

The way Go's package system works, especially before modules, really feels like a hack over an earlier, even more limited system that was designed to be used entirely inside the Google monorepo and was later made to work outside it. The weird global namespace tree makes sense there, and the emphasis on checked-in codegen also makes sense when you consider that Google includes build artifacts in their monorepo.

capital_guy
1 replies
16h39m

This was exactly what happened. Rob Pike mentioned in another talk that they overfitted the pre-module system to how Google deals with packages. So I think he/they have conceded this was a mistake

p_l
0 replies
11h6m

If one used Plan9, it becomes quite clear how the module system happened (it also matches nicely with google monorepo).

shp0ngle
2 replies
19h39m

It's interesting that what they came up with is better than what's out there for other languages.

Yeah, you have the "v2" / forever v0 problem. But it's still better than what I need to deal with when using npm or (doing sign of the cross) anything with python.

djha-skin
1 replies
15h15m

Russ Cox's "minimum version selection" was a complete reinvention of Apache Ivy's "Latest" version resolver.

They basically reinvented maven, if you examine the version resolution plumbing of both tools.

nulltype
0 replies
12h11m

Looking at https://ant.apache.org/ivy/history/latest-milestone/ivyfile/... I don't see how this is the same as minimal version selection.

hota_mazi
0 replies
13h26m

Just Rob Pike expressing to the world how ignorant he is of programming language theory that was discovered after 2000.

vrnvu
6 replies
15h0m

The biggest win for Go is its approach based on composition rather than inheritance.

There isn’t any “architect engineer” building cathedrals with interfaces and abstract classes. There’s no cult demanding you follow DDD in an event-driven architecture powered by a hexagonal architecture in all projects, or else be tagged as a bad engineer. We don’t have thousands of traits to define every possible code interaction, yes. From a type system point of view, Go is lacking compared to an HM-based type system, yes. Yes, it’s all pros and cons with this decision. We can agree on that.

I’ve seen that the predominant enemy for a software project is software engineers. Go keeps them in line for the sake of the project.
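
A minimal sketch of the composition-over-inheritance style being described, with hypothetical types: AuditedStore reuses Store's behavior by embedding it, with no class hierarchy involved.

  package main

  import "fmt"

  type Store struct{ data map[string]string }

  func (s *Store) Get(key string) string { return s.data[key] }

  // AuditedStore embeds *Store; Get is "promoted" onto AuditedStore.
  type AuditedStore struct {
    *Store
    log []string
  }

  func (a *AuditedStore) GetLogged(key string) string {
    a.log = append(a.log, "get "+key)
    return a.Get(key)
  }

  func main() {
    s := &AuditedStore{Store: &Store{data: map[string]string{"a": "1"}}}
    fmt.Println(s.GetLogged("a")) // 1
  }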

lmm
2 replies
11h37m

There isn’t any “architect engineer” building cathedrals with interfaces and abstract classes. There’s no cult behind needing to follow DDD in an event-driven architecture powered by a hexagonal architecture in all projects, or you are tagged as a bad engineer.

Isn't Go the big driver of Kubernetes? It feels like overarchitecturing is still there in Go projects, they've just made it distributed.

liampulles
0 replies
9h59m

Kubernetes is a complex behemoth, and would be even if it was written in another language.

FridgeSeal
0 replies
10h5m

(Anecdote up to the eyeballs)

I’ve read comments from people before about how go’s lack of generics has caused significant amounts of extra code to be written in the K8s codebase. I do wonder if we could snap our fingers and magically have a K8s written in Rust, a lisp, Zig tomorrow, what that would look like, what it would be like to maintain and build and what the codebases would be like in comparison. Would make an interesting intellectual exercise.

munificent
0 replies
14h9m

> There isn’t any “architect engineer” building cathedrals with interfaces and abstract classes. There’s no cult behind needing to follow DDD in an event-driven architecture powered by a hexagonal architecture in all projects, or you are tagged as a bad engineer.

My experience is that you can also write simple non-over-engineered code in other languages too. Yes, it can require pushing against the wind of the prevailing culture sometimes but it's not, like, impossible.

campbel
0 replies
13h32m

I’ve seen that the predominant enemy for a software project is software engineers. Go keeps them in line for the sake of the project.

Brilliant

bb88
0 replies
13h43m

If one chunk of code depends upon one and only one other chunk of code, then forcing the programmer to put an interface between them for unit testing does a disservice to the programmer.

I really do prefer languages that make it easy to write unit tests.

norir
5 replies
17h10m

Also, writing a compiler in its own language, while simultaneously developing the language, tends to result in a language that is good for writing compilers, but that was not the kind of language we were after.

I have seen this sentiment a few times recently. First of all, it raises the question: is a language that is not compiled in itself a bad language for writing compilers? My intuition is usually yes. Secondly, the implication is that a good language for compilers will not be good for other applications. I really don't understand this, because a compiler will use most of the same building blocks that are used in other programs.

I would really like more context into what the author is trying to say though.

tmerr
1 replies
15h2m

The ideal set of building blocks depends on the problem.

If the building blocks make it easy to write concurrent code (Go, Erlang), then it becomes easier to write a server. If they make it easy to represent "A or B or C" and pattern match on trees (ML-like languages), then it becomes easier to write a compiler.

Add to that: if you are trying to make an easy to onboard language, you want to look at how beginners use it, not experts. Someone writing a compiler for language X is certainly an expert in X.

tubthumper8
0 replies
6h16m

To me, being able to represent "A or B or C" is a bare minimum of a type system.

munificent
1 replies
14h6m

For what it's worth, Dart's intended domain (client UI apps) is much farther from compilers than Go's intended domain (servers and "systems programming") but we write almost all of our tools and compilers in Dart.

Dart isn't always the best language for compilers (we have a lot of Visitor patterns floating around, which are always tedious), but it's plenty good enough and it keeps the whole team working in our own language eight hours a day, which I think is invaluable.

Also, it means that when we make our implementations or compilers faster, we get a compounding effect because our tools get faster too.

jtasdlfj234
0 replies
12h44m

Dart evolved into a nice language with null-safety and exhaustive patterns.

Hopefully in Go 3, we could see these features added as well.

TheDong
0 replies
16h14m

Writing compilers is mostly aided by having a robust type-system and elegant tooling for parsers and AST transformation and so on.

Writing a compiler requires computer science knowledge, requires thought.

Haskell I think is a perfect example. It is a language that is well suited for writing compilers, but also very well suited for building services, backend applications, really anything if your developers are of average intelligence.

Go, however, wants to optimize for developers who think a for loop is easier to understand than an applicative functor, who think that generics are unnecessarily complex.

If you're trying to build a language for "the lowest common denominator, the average googler", that's the opposite of building a language for compiler writers, so in that case building a language that can represent such a hard CS problem well is counter productive.

leafmeal
5 replies
17h8m

Maybe I'm misunderstanding here, but it sounds like he's claiming they invented "interfaces". The Go interfaces seem like the same thing as a Haskell typeclass which predates them by a long shot. Either way a great invention that should be in more languages.

plorkyeran
2 replies
16h8m

The early days of Go appeared to be the work of a group of people who had not ventured out of their bubble in a very long time and were unaware of several decades of PL research, so it would be somewhat surprising if any of them knew what a typeclass was at the time.

This is significantly less true now.

LispSporks22
1 replies
15h57m

As I recall the Go GC is a primitive 70s design as well. No idea if it still is.

geodel
0 replies
13h12m

No, it went back to the '30s now.

philosopher1234
1 replies
14h43m

Go interfaces are unique in that they are implicit. Duck typed, if you will. That is not present in Haskell.
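
A minimal sketch of what that looks like in practice (types made up): the concrete type never names the interface, yet the assignment is still checked at compile time. In Haskell, by contrast, a typeclass instance has to be declared explicitly.

    package main

    import "fmt"

    type Greeter interface {
        Greet() string
    }

    // Robot never mentions Greeter anywhere; it satisfies the interface
    // purely by having the right method set.
    type Robot struct{ ID int }

    func (r Robot) Greet() string { return fmt.Sprintf("beep boop %d", r.ID) }

    func main() {
        var g Greeter = Robot{ID: 7} // structural check happens here, at compile time
        fmt.Println(g.Greet())
    }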

masklinn
0 replies
3h0m

That is not unique, OCaml had structural (sub)typing in the 90s.

synergy20
4 replies
18h32m

I really wanted to love Go: I spent time learning it, bought a few books, read them, etc.

It did not work out for me in the embedded field: binaries are too large, there's no real-time behavior due to the GC, and I still need to interface with C code here and there.

In the end I'm back to C and C++, and I consider my time on Go to have been largely a waste.

Go has its own uses, e.g. cloud-native work, though even there lots of alternatives exist.

It's pretty hard to replace existing popular languages, as those languages are also evolving fast.

nmz
1 replies
16h35m

Did you try tinygo?

synergy20
0 replies
15h39m

I don't do MCU level embedded, so no tinygo for me, but yes I'm aware of it.

zik
0 replies
18h1m

Anything with a GC is an immediate killer for small embedded projects. Anything STM32-sized or smaller is going to struggle with GC due to lack of memory.

But in the intervening years the average embedded device has gotten larger and more powerful, so now I'm doing embedded programming on devices running Linux in languages like Python and Go. So maybe just wait and your Go learning will be useful again?

starttoaster
0 replies
18h0m

I'd say Go shines more on server application work. I'd be surprised to find it in an embedded system.

oconnor663
4 replies
13h0m

The post mentioned a few things that Go has done that seem to be "obvious" choices today:

- automatic formatting

- unified tooling

- module/library support

I'd add more items to that list, which aren't unique to Go of course, but where Go has clearly contributed to the new consensus:

- composition over inheritance

- compiling to native binaries

- error values over exceptions (see the sketch after this list)

- array slicing

- no automatic integer type conversions
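
For the error-values item, a minimal sketch of the convention (the function and path are arbitrary): errors are ordinary return values, and each call site decides whether to handle, wrap, or propagate them.

    package main

    import (
        "fmt"
        "os"
    )

    // readConfig returns the error as a value instead of throwing an exception.
    func readConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("reading config %q: %w", path, err)
        }
        return data, nil
    }

    func main() {
        if _, err := readConfig("/nonexistent/app.conf"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }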

There's plenty I don't like about Go, and I rant about it sometimes, but I also respect it. It moved the art forward.

...now can I please compile my code with unused variables when I'm deliberately trying to make my tests fail? :-D

pitaj
3 replies
12h54m

Interestingly, all of those (except for automatic formatting) also apply equally to Rust. Maybe that's why there's such a "Go vs Rust" culture.

tubthumper8
2 replies
6h31m

Rust also has automatic formatting [0]

I don't think the tooling is why there is a perceived "Go vs Rust" culture, I think that's due to (somewhat) overlapping use cases, or more probably that they were developed and came out around roughly the same time. There really doesn't need to be a "Go vs Rust" culture though.

I think it should be obvious to most people that Go had a big influence on Rust and other modern languages for the benefit of having unified tooling, formatter, linter, etc.

[0] https://www.rust-lang.org/tools

pitaj
1 replies
3h10m

It's my understanding that the Go compiler will format your code every time you compile. For Rust, it's a separate tool invocation.

masklinn
0 replies
3h2m

It's my understanding that the Go compiler will format your code every time you compile.

I do not believe that is the case. You have to invoke `gofmt` or `go fmt` on the project.

You may hook it to precommit, or your editor might be configured to automatically run it on save, but afaik neither `go build` nor `go run` will auto-format.

munificent
4 replies
14h12m

> Critics often complained we should just do generics, because they are "easy", and perhaps they can be in some languages, but the existence of interfaces meant that any new form of polymorphism had to take them into account.

I've been noodling on a statically typed hobby language and one of the things I'm trying to tackle is something like interfaces plus generics. And I have certainly found first-hand that Rob is right. It is really hard to get them to play nicely together.

I still think it's worth doing. Personally, I'd find it pretty unrewarding to use a statically-typed language that doesn't let me define my own generic types. I used to program in BASIC where you had GOSUB for subroutines but there was no way to write subroutines where you passed arguments to them. I don't care to repeat that experience at the type system level.

But I can definitely sympathize with the Go team for taking a long time to find a good design. Designing a good language is hard. Designing a good language with a type system is 10x harder. Designing a good type system with generics is 10x harder than that.

FridgeSeal
1 replies
10h13m

All the ML and functional languages don't seem to have this problem, and a lot of them have type systems that are far more sophisticated and capable than Go's.

munificent
0 replies
2h50m

SML actually has a very simple, unsophisticated type system that isn't anywhere near as expressive as generics in most other languages (Java, C#, Go, etc.).

In SML, there's no way to define a generic hash table that works with any type that implements a hashing operation and uses that hash function automatically. Type parameters are entirely opaque types that you can't really do anything with. To make a generic hash table, the user has to explicitly pass in a hash function each time they create it.

In other languages, you can place a bound on the type parameter that defines what operations are supported and then the operations are found (either at runtime or monomorphization time) based on the type argument.

If you don't have bounds, lots of things get easier for the language designer. But lots of things get much more tedious for the language user. It's probably not a coincidence that every language newer than ML with parametric polymorphism has some sort of bound/trait/constraint/type class thing. But bounds are where most of the complexity comes from.
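
To illustrate with today's Go generics (Hasher, Lookup, and UserID are made-up names): the constraint names the required operation, and the compiler resolves it from the type argument, rather than the caller passing a hash function by hand as in SML.

    package main

    import "fmt"

    // Hasher is the bound: any type argument must provide Hash().
    type Hasher interface {
        Hash() uint64
    }

    // Lookup finds the hash operation from the type argument K.
    func Lookup[K Hasher, V any](buckets map[uint64]V, key K) (V, bool) {
        v, ok := buckets[key.Hash()]
        return v, ok
    }

    type UserID int

    func (u UserID) Hash() uint64 { return uint64(u) * 2654435761 }

    func main() {
        buckets := map[uint64]string{UserID(42).Hash(): "alice"}
        fmt.Println(Lookup(buckets, UserID(42))) // alice true
    }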

hota_mazi
0 replies
13h31m

But I can definitely sympathize with the Go team

I don't.

The hardest part in implementing generics is when you support inheritance of implementation.

Go doesn't.

Go had the easiest job in implementing generics.

The only reason why they didn't was not technical: it was ideological and purely based on ignorance, and on the fact that most Go designers stopped paying attention to the field of PLT in the late '90s.

62951413
0 replies
14m

Keep in mind that Scala had been released before golang was, and became popular around the time golang was released, without GOOG-scale resources behind it.

cogman10
3 replies
18h40m

I gotta say, reading this tells me exactly why Go struggles to evolve as a language.

Go ahead, skim the article and tell me what Go got wrong. Stumped? Yeah, well, I was too. That's because a lead designer can certainly write paragraphs upon paragraphs of "look at how amazing and cool we are for being so smart" but doesn't seem capable of writing more than half a sentence of "we got this wrong".

I saw 2 things Pike believes Go got wrong: documentation and packaging. The rest is a lot of self-congratulatory "look at how smart we are and how dumb our critics are".

No language is perfect; every language has its own weaknesses (and Go has plenty). Yet for some reason Pike can't help but gush over how perfect it is. Even with the 2 faults listed, the documentation fault very much has an undertone of "our functions are so simple, but the dumb users weren't smart enough to understand the amazingness of our elegant code and libraries".

The key missing piece was examples of even the simplest functions. We thought that all you needed to do was say what something did; it took us too long to accept that showing how to use it was even more valuable.

Go is a language designed in the last 20 years that decided "You know what wasn't bad about C? Pointers and null." Yet the only part Pike can really fault is "well, we didn't know how to do package management, so we sort of messed that up".

He even goes so far as to talk about how interfaces are so good and generics are dumb, even though Go has the notorious `interface{}` used to try, in a type-unsafe way, to pass around random objects. (But hey, Pike hand-waves that away with "the dumb community just isn't smart enough to accept the brilliance of our type system".)

How can a language evolve when the lead designer puts on blinders to the community raising faults? When a "what we got right, what we got wrong" post is filled with how awesome the language is with scant reference to what is wrong? Even just openly dismissing concerns with the language?

Contrast that with a lead language architect I really like, Brian Goetz (seriously, go watch his talks about evolving Java). His approach is nearly the polar opposite. It's "OK, we've seen that a lot of people using Java run into problem X, so we are going to work hard to find language or library solutions which help improve things for developers". They very rarely just dismiss problems, and when they do it's more along the lines of "well, we'd love to have it, but ultimately there's not a good way we can think of to do this which is also backwards compatible".

And you can really see that in the way Java has evolved from 8 to 21: adopting a faster release process, a ton of new language features, and a LOT of active development on the most painful parts of dev work. Virtual threads are a shining example of this. Java now has Go-like concurrency; even though it took several iterations and trials to bring that in, they worked hard to evolve the language in a way that fits perfectly. Valhalla is another great example of the language working hard to solve real problems: a 10-year project, open to the community, working to fundamentally redefine the JVM memory model because the one designed in the '90s doesn't fit modern hardware.

But hey, if you like Go, that's great. Just don't expect even an iota of evolution. Barring a change in leadership, generics is almost certainly the last new language feature (at least in this decade). A feature that landed primarily because of over a decade of articles and community pressure about how badly the lack of generics impacts everything.

unscaled
0 replies
10h35m

To be fair, Go pointers do not allow pointer arithmetic and unsafe casting. They are really not much different than Java's references.

The main issue is null. And you can't even say Rob Pike is not deeply familiar with Tony Hoare, and yet the billion-dollar mistake was very eagerly repeated.

geodel
0 replies
2h33m

Brian is great in his own way and doesn't need to be compared to Pike at all. And he has to admit a lot of problems because Java is doing things quite opposite to what was done a decade or so back. Go's leads are not doing this because Go is not changing course; when they do, they may admit more.

The Valhalla and Loom examples you gave show how important lightweight concurrency and object memory layout are. Go had these from day 1. As for faster releases: after Go 1.1/1.2 they always had a 6-month release schedule, which Java adopted only a few years back. Meanwhile, Java has been running impressive projects for the last 10 years to add these features.

Before Go came along, Java never really published or committed to performance numbers for GC. It was always "here are a few GCs, use whatever works for you." Now they give far more details, after Go pushed for numbers.

Barring a change in leadership, generics is almost certainly the last new language feature

Pike already retired many years ago; he is not in any official position in the Go project.

For sure, you like Java, and I have been working in Java practically forever. But you seem to be losing all sense of perspective here.

booleandilemma
0 replies
14h21m

Just don't expect even an iota of evolution. Barring a change in leadership, generics is almost certainly the last new language feature (at least in this decade)

Great! :)

aldousd666
3 replies
19h3m

I was following the mailing list as this was being developed way back then, but it always frustrated me that the team chose such a trite, un-googleable name for the language. I mean, if you want to find the community and the docs, you have to search for 'golang', which, let me tell you, was not an automatic solution to the problem. That's actually my only complaint.

chuckadams
1 replies
12h52m

As opposed to the language whose name is literally a single letter?

tubthumper8
0 replies
3h59m

Are you referring to C?

I think the comment's point would be that when C came out there were no search engines, so nobody would've considered that. When Go came out, not only were search engines a critical everyday software, the Go language designers worked for a company whose original and flagship product is a search engine, so they should've known.

(I also think the same criticism could be directed to other languages that are English words such as Swift, Rust, etc. but to a lesser extent)

xpressvideoz
0 replies
17h15m

100% agreed. In addition to this, due to some people not capitalizing the first letter of Go, distinguishing between the verb go and the noun go takes more time than usual when reading articles related to Go. Maybe to native speakers it doesn't matter?

robaato
2 replies
19h3m

Curious as to no mention of the choices around interoperability and C FFI.

"Rewrite in go" as the answer closes off a whole chunk of options.

randomdata
0 replies
15h30m

Perhaps because there isn't just one choice? The Go team maintains two compilers, and each treats that interoperability differently. You have even more options if you reach out into the larger Go ecosystem (e.g. tinygo does things differently again).

bb88
0 replies
18h28m

That was the Java solution as well, circa the late 1990s. But in many cases that's true of Python and other languages.

It's just easier if you can install a native library rather than one with a cumbersome build process.

jtasdlfj234
2 replies
20h35m

I would have more respect if they at least admitted to the flawed type system, but instead they say it is not a problem. It is disappointing to see past mistakes repeated in a new programming language. Even the Java language creator was humble enough to admit fault for the null pointer problem. The Go devs do not have such humility.

https://github.com/uber-go/nilaway

grumpyprole
0 replies
19h49m

It's interesting that they brought in Phil Wadler to help retrofit polymorphism; it literally is history repeating itself (Wadler did the generics retrofit for Java over 20 years ago).

coolgoose
0 replies
20h28m

The nil handling (or lack thereof) in Go is right now my biggest annoyance with the language :)

einpoklum
2 replies
19h39m

The structure of that post is weird, in that it's quite difficult to figure out where are the parts which were done right and where are those done wrong.

Can someone tl;dr the done-wrong items?

tubthumper8
0 replies
4h3m

There's a (possibly non-exhaustive) list from a Reddit comment here. I can paste it if needed, otherwise will just link it:

https://www.reddit.com/r/golang/s/wt1vpPtLgc

iainmerrick
0 replies
18h44m

It’s a talk transcript, not originally written as a blog post.

bsaul
2 replies
19h52m

I'm surprised that the fact they managed to keep the language small and minimal isn't mentioned as a huge success. To me that is the number one reason to use this language: it forces you to not be distracted by language constructs (there aren't enough of them for that) and to focus on what exactly you're trying to build. Even as an educational tool, this is excellent. Maybe they don't realize it because they come from C, but when you come from more recent languages that include everything and the kitchen sink, this is a godsend.

It's now to the point that whenever I develop a feature in a language, I ask myself "how would I do that in Go?" to make sure I don't get fancy with the type system for no good reason.

troupo
1 replies
13h21m

and focus on what is it exactly you’re trying to build.

Instead, you're focusing on fighting the language limitations on your way to build what you want ;)

bsaul
0 replies
8h49m

In my personal experience, Go's "limitations" have always forced me to clarify and simplify my design. In the end, my code is way better than my original intent.

Animats
2 replies
9h22m

the existence of a solid, well-made library with most of what one needed to write 21st century server code was a major asset.

Yes. Go was funded by Google because Google had a problem. They needed to write a lot of server-side application code. Python is too slow, and C++ is too brittle. Go does very well in that niche. A big bonus was that Google people wrote the libraries you need for that sort of thing, and uses them internally. So, when you used a library, it was something where even the error cases were heavily exercised.

I have some technical arguments with the language, mainly around the early emphasis on using queues for everything, including making mutexes out of queues. They got the "green thread" thing right for their use case. The "colors of functions" thing is a problem, and the arcane tricks needed to make async and threads play together in Rust are just too much. Go gives up a little performance for great simplicity.

I'm amused at the old hostility to threads. I started out on UNIVAC mainframes, which had threads starting in 1967. (They were called "activities") By 1972, there were not only user-space threads, but they ran on symmetrical multiprocessors. There was async I/O, with user-space completion functions. There were built-in instructions for locks. The operating system was threaded internally.

I thought of threads and multiprocessors as normal, and felt the loss of them when moving to UNIX. It was decades before UNIX/Linux caught up in this area. Several generations of programmers had no clue how to use a shared-memory multiprocessor. The early concurrency theory from Dijkstra was re-invented, with different terminology and often worse design than the original. The Go people understood Dijkstra's bounded buffers, and understood why bounded buffers of length 0 and 1 are useful. It was nice seeing that again. With the right primitives, concurrency isn't that hard. If you try to do it by sprinkling locks around, which was the classic pthreads mindset, it will never work right. It didn't help that UNIX/Linux had an early tradition of really crappy CPU dispatchers, so that unblocking a thread worked terribly.
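
To make the bounded-buffer point concrete, a minimal sketch (not from the talk): a length-0 channel is a rendezvous, and a length-1 channel works as a mutex, which is the "mutexes out of queues" style mentioned above.

    package main

    import "fmt"

    func main() {
        // Length-0 buffer: a send blocks until a receiver is ready (a rendezvous).
        done := make(chan struct{})
        go func() {
            fmt.Println("worker finished")
            done <- struct{}{}
        }()
        <-done

        // Length-1 buffer used as a mutex: acquire by sending, release by receiving.
        mu := make(chan struct{}, 1)
        counter := 0
        finished := make(chan struct{})
        for i := 0; i < 4; i++ {
            go func() {
                mu <- struct{}{} // lock
                counter++
                <-mu // unlock
                finished <- struct{}{}
            }()
        }
        for i := 0; i < 4; i++ {
            <-finished
        }
        fmt.Println("counter =", counter)
    }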

mike_hearn
0 replies
7h39m

> Yes. Go was funded by Google because Google had a problem. They needed to write a lot of server-side application code. Python is too slow, and C++ is too brittle.

I've said this elsewhere on the thread but will repeat: this doesn't match my memory of working at Google during this time period. This belief feels a bit like "go was for systems programming", a popular meme that doesn't make sense when examined.

In 2009 Python wasn't really used for servers at all at Google outside of a few internal-facing utilities, so that idea can be disposed of immediately (exception: the YouTube acquisition). The biggest Python server developed by Google itself was iirc Mondrian, the code review tool, written by Guido van Rossum himself. It was later replaced by Critique, written in Java.

At the time Google had a fairly strict three languages policy, designed to kill programming language fights. You could use C++, Java or Python. Python was used for scripts, C++ for infrastructure and Java for web servers. There was some overlap: Java was introduced to Google later than C++, and some web servers were still written in C++ because it didn't make sense to try and rewrite them, even though there were some initiatives around trying to do incremental ports to Java that hadn't really taken off. Also some infrastructure servers were written in Java (e.g. a database called Megastore). Core libraries were mostly C++ with JNI and Python bindings.

There was also a loophole in that policy, in that some teams invented their own languages for internal infrastructure, so in practice the Google codebase also had some use of custom config languages and in particular Sawzall, also by Rob Pike. It looked syntactically a bit like Go.

But overall people had a lot of freedom to choose, and new web servers were being mostly written in Java, with new database engines and similar being mostly written in C++.

Writing async code did suck, and Go definitely got that right. But Go wasn't a project initiated by the senior management to solve a problem Google had. That's not because management was afraid of initiating such projects. Huge numbers of internal infrastructure projects were created and staffed by the relevant teams to directly solve developer pain points (e.g. the giant networking upgrades they embarked on at that time). But they were telegraphed in advance.

I don't recall much complaining about this state of affairs. Build times were an issue until they got Bazel/Blaze working with remote build clusters, and at that point you could compile giant codebases in seconds because everything was cached remotely. Local compiler speed became largely irrelevant, especially as javac was very fast anyway. I don't recall exactly when this was, but I developed C++ servers around that era and Pike's 45 minute compile wasn't a common occurrence for me after Blaze caching appeared. I can imagine that would happen if you changed a core library and then recompiled something very high up the stack.

The announcement of Go came as something of a surprise. If it had actually been developed to solve Google's problems it'd have been launched internally first and then developed with internal users for years before being exposed publicly, as was normal for Google. But this one launched to the public first. I recall people in my neck of the woods wondering what it was for: not something you'd expect if it'd really been driven by internal demand.

anthk
0 replies
9h17m

Go is not just a Google project, it's basically Plan9's compiler philosophy (and CSP) for Unix/Linux.

tialaramex
1 replies
16h16m

This mentions gofmt as a "what we got right" and I think that's especially worth underscoring.

This seems to many language inventors and proponents like a small thing, but it delivers huge value because it eliminates one common bike-shedding opportunity entirely from day zero of a Go project. I've seen several newer languages embrace this, either copying it quite intentionally or just figuring "hey, Go has one, so we should make one as well."

I've seen some pretty weird formatting rules but I have very rarely seen rules I couldn't get used to, whereas I have worked on plenty of codebases without enforced formatting rules where as a result it was harder to understand the code.

onionisafruit
0 replies
16h13m

I have a tendency to futz around with code formatting. I like that go fmt makes that moot for the most part.

notpachet
1 replies
19h31m

I enjoy Go as a language, but I have always hated the gopher mascot. It's so derpy.

jtasdlfj234
0 replies
19h24m

It's so derpy.

That's why I love it.

MaKey
1 replies
16h40m

I haven't seen someone mentioning Go's issues with crypto yet. After OpenSSH deprecated SHA1, the Go team took a year (!) to add support for SHA2 to x/crypto/ssh [1]. Gitea was one famous victim [2]. Furthermore it doesn't instill confidence to see a crypto maintainer bashing on GnuPG [3] and trying to discredit Dan Bernstein [4].

[1]: https://github.com/golang/go/issues/49269

[2]: https://github.com/go-gitea/gitea/issues/17798

[3]: https://twitter.com/FiloSottile/status/1127643698676797441

[4]: https://twitter.com/FiloSottile/status/1555669786826244096

raggi
0 replies
8h8m

The x/ packages languish a lot; the problem is that they follow a development model that slows the rate at which new contributors succeed in landing patches.

The stdlib crypto package I would describe as another of Go's big successes. OpenSSL has been a disaster for a very long time, and Go (and perhaps most significantly agl) managed to build and ship a very broadly exercised alternative implementation of very high average quality, particularly from the perspective of having significantly fewer footguns in the public API.

4death4
1 replies
17h40m

The anecdote about writing the compiler in C is very interesting. LLVM is obviously very popular these days, so it's refreshing to see a counterexample. I also love that the compiler was decidedly mediocre. It just goes to show that the user (or developer) experience is often more important than the technical merits of a product.

sapiogram
0 replies
4h24m

It just goes to show that the user (or developer) experience is often more important than the technical merits of a product.

I think Go still won on technical merits, because it never really competed with other compiled languages, instead mostly converting Python and Java programmers. Compared to those, the startup time and memory usage of Go programs are leagues ahead, and the quality of the codegen doesn't change that picture much.

t43562
0 replies
6h3m

I made some videos introducing Go. They're not important and the view count isn't impressive at all, but it interested me that the one with the most hits is about "Go Project Structure".

I have to say I'm not at all a fan of how modules get imported in Go, and how it works with GitHub. It's an extremely confusing and complicated area, and my bête noire was forking a library. The way things are exported, the paths you need to use to access them... the whole area is far worse than the problems I've ever had with C/C++ and certainly Java or Python.

statquontrarian
0 replies
19h38m

I was surprised at the poor quality of serviceability given its enterprise deployment with k8s. No thread dumps without killing the process (or writing a SIGUSR1 handler). No heap-dump reader, so you have to use the memory sampler and hope you catch the problem (and that requires adding code), and viewcore is broken in new versions (and it doesn't work with a stripped binary, which is what most production binaries are).
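
For what it's worth, the SIGUSR1 workaround mentioned above is only a few lines; a minimal Unix-only sketch (the function name is arbitrary):

    package main

    import (
        "os"
        "os/signal"
        "runtime/pprof"
        "syscall"
    )

    // dumpGoroutinesOnSIGUSR1 writes a goroutine dump to stderr whenever the
    // process receives SIGUSR1, without killing it.
    func dumpGoroutinesOnSIGUSR1() {
        c := make(chan os.Signal, 1)
        signal.Notify(c, syscall.SIGUSR1)
        go func() {
            for range c {
                pprof.Lookup("goroutine").WriteTo(os.Stderr, 2)
            }
        }()
    }

    func main() {
        dumpGoroutinesOnSIGUSR1()
        select {} // stand-in for the real server
    }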

slowhadoken
0 replies
14h3m

I’ve never used golang in production but I toyed with it when it was first released and I enjoyed it. I just wish Google didn’t control it.

pharmakom
0 replies
9h46m

I am not a Go user and the language has never appealed to me. On paper, it offers less than more established ecosystems for generalist backend development, such as C# and Java.

Perhaps some Go users can weigh in?

msie
0 replies
18h57m

I appreciate that he talked about examples in api documentation. I can't believe people are still writing documentation nowadays without examples.

liampulles
0 replies
20h0m

Something that I really like about go is how easy it is to make a monorepo, and how quick and easy it is to build all of the contained apps (go build ./...).

I also find it really easy to make CLI tools in Go that can form part of unix pipelines, since: you just need a single go file and app-named folder to get started, it gives you a self-contained binary as output, and the Reader/Writer interfaces make it easy to stream and handle data line-by-line. We have a couple of CLIs at work that analyze multi-gig logs in a couple of seconds for common error patterns - Go is very handy for such things.
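
As a rough sketch of that kind of pipeline tool (the hard-coded "ERROR" pattern is just for illustration; a real tool would take it as a flag):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // A toy pipeline filter: reads stdin line by line and prints lines
    // containing "ERROR".
    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // allow long log lines
        for sc.Scan() {
            if strings.Contains(sc.Text(), "ERROR") {
                fmt.Println(sc.Text())
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "read:", err)
            os.Exit(1)
        }
    }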

justinclift
0 replies
11h37m

And of course, today there is an LLVM-hosted compiler for Go, and many others, as there should be.

Isn't that the dead llgo effort?

https://github.com/go-llvm/llgo (now archived)

Its readme points to a dead link on the LLVM website, and it looks like there's no matching Go project under the LLVM org.

Does anyone know if there really is still a working LLVM based Go toolchain (other than TinyGo)?

jmyeet
0 replies
17h7m

A key point was the use of a plain string to specify the path in an import statement, giving a flexibility that we were correct in believing would be important.

Wrong.

Importing a string is like a touchscreen UI in a car: it's deferring the problem. It's lazy.

But we didn't have enough experience using a package manager with lots of versions of packages and the very difficult problems trying to resolve the dependency graph.

No one on the team had ever used Java? Really? Maven was ~8 years old when Go was released. It came from previous learnings and errors with Ant. Maven did other things too that aren't necessarily relevant or necessary (e.g. a standardized directory structure). But the dependency system was really solid, even if it was verbose because it was XML.

Second, the business of engaging the community to help solve the dependency management problem was well-intentioned

This feels ahistorical. It felt more like the problem wasn't understood and/or not thought important. This fits in with importing a string: it's a way of not solving the problem, of kicking the can down the road.

jayd16
0 replies
10h16m

First, he was generalizing beyond the domain he was interested in [...]

And then they proceed to dump on async/await. It's not a target concern for Go but often you want to run code specifically on a UI thread or call into a foreign function on some specific OS thread. AFAICT that's most easily done with async/await.

chewxy
0 replies
19h28m

GopherConAU organizer here. Here's the whole playlist. I am not sure why I cannot make it public. https://www.youtube.com/playlist?list=PLN_36A3Rw5hFsJqqs7olO...

HackerThemAll
0 replies
7h36m

Why on earth this date/datetime formatting rule?

01/02 03:04:05PM '06 -0700

Why all the awkward syntax?
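
(For context, and not a defense of the choice: Go layouts aren't printf-style verbs; a layout is written as a rendering of one fixed reference time, Mon Jan 2 15:04:05 MST 2006, whose fields count 1 through 7 for month, day, hour, minute, second, year, and time zone. A minimal sketch:)

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        t := time.Date(2023, time.November, 10, 21, 30, 0, 0, time.UTC)

        // Layouts are written as the reference time itself:
        // Mon Jan 2 15:04:05 MST 2006 (i.e. 01/02 03:04:05PM '06 -0700).
        fmt.Println(t.Format("2006-01-02 15:04:05"))        // 2023-11-10 21:30:00
        fmt.Println(t.Format("01/02 03:04:05PM '06 -0700")) // 11/10 09:30:00PM '23 +0000
    }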