I really, really appreciate key people taking the time for retrospectives. It makes a huge difference to people now who want to make a real difference.
But I'm not sure Rob Pike states clearly enough what they got right (IMO): they managed the forces on the project as well as the language, by:
- Restricting the language to its target use: systems programming, not applications or data science or AI...
- Defining the language and its principles clearly. This avoids eons of waste in implementing ambiguity and designing at cross-purposes.
- Putting quality first: it's always cheaper for all concerned to fix problems before deploying, even if it's harder for the community or OS contributors or people waiting for new features.
- Sharing the community. They maintained strict control over the language and release and core messaging, but they also allowed others to lead in many downstream aspects.
Stated but under-appreciated is the degree to which Google itself didn't interfere. I suspect it's because Go actually served its objectives and is critical to Google. I wonder if that could be true today for a new project. It's interesting to compare Dart, which has zero uptake outside Flutter even though there are orders of magnitude more application code than systems code.
Go was probably the key technology that migrated server-side software off Java bloatware to native containers. It dominates back-end infrastructure and underlies most of the web application infrastructure of the last 10 years. The benefit to Google and the community from that alternative has been huge. Somehow amidst all that growth, the team remained small and kept all its key players.
Will that change?
Go has a GC and a very heavy runtime with green threads, leading to cumbersome/slow C interop.
It certainly isn't viable as a systems programming language, which is by design. That's an odd myth that has persisted ever since the language described itself as such in the beginning. They removed that wording years ago, I think.
It's primarily a competitor to Java et al, not to C or Rust, and you see that when looking at the domains it is primarily used in, although it tends to sit a bit lower on the stack due to the limited nature of the type system and the great support for concurrency.
I totally agree that Go is best suited outside of systems programming, but to me that always seemed like a complete accident - its creators explicitly said their goal was to replace C++. But somehow it completely failed to do so, while simultaneously finding enormous success as a statically typed (and an order of magnitude faster) alternative to Python.
It's not "an accident", and Go didn't "somehow" fail to replace C++ in its systems programming domain. The reason Go failed to replace C and C++ is not a mystery to anyone: mandatory GC and a rather heavyweight runtime.
Where the performance overhead of having a GC is less significant than the cognitive overhead of dealing with manual memory management (or the Rust borrow checker), Go has been quite successful: command-line tools and network programs.
Around the time Go was released, it was certainly touted by its creators as a "systems programming language"[1] and a "replacement for C++"[2], but re-evaluating the Go team's claims, I think they didn't quite mean them in the way most of us interpreted them.
1. The Go Team members were using "systems programming language" in a very wide sense, one that includes everything that is not scripting or the web. I hate this definition with a passion, since it relies on nothing but pure elitism ("systems languages are languages that REAL programmers use, unlike those 'scripting languages'"). Ironically, this usage seems to originate from John Ousterhout[3], who is himself famous for designing a scripting language (Tcl).
Ousterhout's definition of "system programming language" is: Designed to write applications from scratch (not just "glue code"), performant, strongly typed, designed for building data structures and algorithms from scratch, often provide higher-level facilities such as objects and threads.
Ousterhout's definition was outdated even back in 2009, when Go was released, let alone today. Some dynamic languages (such as Python with type hints, or TypeScript) are more strongly typed than C or even Java (with its type erasure). Typing there is optional, but so it is in Java (Object) and C (void*, casting). When we talk about the archetypal "strongly typed" language today we would refer to Haskell or Scala rather than C. Scripting languages like Python and JavaScript were already commonly used for writing applications from scratch back in 2009, and far from being ill-adapted for writing data structures and algorithms from scratch, Python became the most common language universities use for teaching data structures and algorithms! The most popular dynamic languages nowadays (Ruby, Python, JavaScript) all have objects, and 2 out of 3 (Python and Ruby) have threads (although the GIL makes using threads problematic in the mainstream runtimes). The only real differentiator that remains is raw performance.
The widely accepted definition of a "systems language" today is "a language that can be used to write systems software". Systems software means either operating systems or OS-adjacent software such as device drivers, debuggers, hypervisors or even complex beasts like a web browser. The closest thing Go can claim in this category is Docker, but Docker itself is just a complex wrapper around Linux kernel features such as namespaces and cgroups. The actual containerization is done by these features, which are implemented in C.
During the first years of Go, the Go language team was confronted on golang-nuts by people who wanted to use Go for writing systems software, and they usually evaded directly answering these questions. When pressed, they would admit that Go was not ready for writing OS kernels, at least not yet[4][5][6], but that the GC could be disabled if you wanted to[7] (of course, there isn't any way to free memory then, so it's kinda moot). Eventually, the team came to the conclusion that disabling GC is not meant for production use[8][9], but that was not apparent in the early days.
Eventually the references for "systems language" disappeared from Go's official homepage and one team member (Andrew Gerrand) even admitted this branding was a mistake[10].
In hindsight, I think the main "systems programming task" that Rob Pike and other members at the Go team envisioned was the main task that Google needed: writing highly concurrent server code.
2. The Go Team members sometimes mentioned replacing C and C++, but only in the context of specific pain points that made "programming in the large" cumbersome with C++: build speed, dependency management and different programmers using different subsets. I couldn't find any claim that Go was meant as a general replacement for C and C++ anywhere from the Go Team, but the media and the wider programming community generally took Go as a replacement language for C and C++.
When you read between the lines, it becomes clear that the C++ replacement angle is more about Google than it is about Go. It seems that in 2009, Google was using C++ as the primary language for writing web servers. For the rest of the industry, Java was (and perhaps still is) the most popular language for this task, with some companies opting for dynamic languages like Python, PHP and Ruby where performance allowed.
Go was a great fit for high-concurrency servers, especially back in 2009. Dynamic languages were slower and lacked native support for concurrency (if you put aside Lua, which never got popular for server programming for other reasons). Some of these languages had threads, but they were unworkable due to the GIL. The closest thing was frameworks like Twisted, but they were fully asynchronous and quite hard to use.
Popular static languages like Java and C# were also inconvenient, but in a different way. Both of these languages were fully capable of writing high-performance servers, but they were not properly tuned for this use case by default. The common frameworks of the day (Spring, Java EE and ASP.NET) introduced copious amounts of overhead, and the GC was optimized for high throughput, but it had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build and deployment were also an afterthought. Java had Maven and Ivy, and .NET had NuGet (in 2010) and MSBuild, but these were quite cumbersome to use. Deployment was quite messy, with different packaging methods (multiple JAR files with a classpath, WAR files, EAR files) and making sure the runtime on the server was compatible with your application. Most enthusiasts and many startups just gave up on Java entirely.
The mass migration of dynamic language programmers to Go was surprising for the Go team, but in hindsight it's pretty obvious. They were concerned about performance, but didn't feel like they had a choice: Java was just too complex and enterprisey for them, and eking performance out of Java was not an easy task either. Go, on the other hand, had the simplest deployment model (a single binary), no need for fine tuning, and a lot of built-in tooling from day one ("gofmt", "godoc", "gotest", cross compilation); other important tools ("govet", "goprof" and "goinstall", which was later broken into "go get" and "go install") were added within one year of its initial release.
The Go team did expect server programs to be the main use for Go and this is what they were targeting at Google. They just missed that the bulk of new servers outside of Google were being written in dynamic languages or Java.
The other "surprising use" of Go was for writing command-line utilities. I'm not sure if the original Go team were thinking about that, but it is also quite obvious in hindsight. Go was just so much easier to distribute than any alternative available at the time. Scripting languages like Python, Ruby or Perl had great libraries for writing CLI programs, but distributing your program along with its dependencies and making sure the runtime and dependencies match what you needed was practically impossible without essentially packaging your app for every single OS and distro out there or relying on the user to be a to install the correct version of Python or Ruby and then use gem or pip to install your package. Java and .NET had slow start times due to their VM, so they were horrible candidates even if you'd solve the dependency issues. So the best solution was usually C or C++ with either the "./configure && ./make install" pattern or making a static binary - both solutions were quite horrible. Go was a winner again: it produced fully static binaries by default and had easy-to-use cross compilation out of the box. Even creating a native package for Linux distros was a lot easier, so all you add to do is package a static binary.
[1]: https://opensource.googleblog.com/2009/11/hey-ho-lets-go.htm...
[2]: https://web.archive.org/web/20091114043422/http://www.golang...
[3]: https://users.ece.utexas.edu/~adnan/top/ousterhout-scripting...
[4]: https://groups.google.com/g/golang-nuts/c/6vvOzYyDkWQ/m/3T1D...
[5]: https://groups.google.com/g/golang-nuts/c/BO1vBge4L-o/m/lU1_...
[6]: https://groups.google.com/g/golang-nuts/c/UgbTmOXZ_yw/m/NH0j...
[7]: https://groups.google.com/g/golang-nuts/c/UgbTmOXZ_yw/m/M9r1...
[8]: https://groups.google.com/g/golang-nuts/c/qKB9h_pS1p8/m/1NlO...
[9]: https://github.com/golang/go/issues/13761#issuecomment-16772...
[10]: https://go.dev/talks/2011/Real_World_Go.pdf (Slide #25)
I'm impressed. That's the most thorough and well-researched comment I've seen on Hackernews, ever. Thank you for taking the time and effort in writing it up.
It compares NuGet with Maven, calling the former cumbersome. That's a tell of gaps in the research, but also a showcase of the overarching problem where C# is held back by people bundling it together with Java and the issues of Java's ecosystem (because NuGet is excellent and on par with Cargo/crates.io).
NuGet was only released in 2010, so I wasn't really referring to it. I was referring to Maven the build system (the build tool part, not the Maven/Ivy dependency management part, which was quite a breeze) and MSBuild. Both required wrangling with verbose XML and understanding a lot of syntax (or letting the IDE spew out everything for you and then getting totally lost when you need to fix something or go beyond what the IDE UI allows you to do). If anything, MSBuild was somewhat worse than Maven, since the documentation was quite bad, at least back then.
That being said, I'm not sure if you used NuGet in its early days of existence, but I did, and it was not a fun experience. I remember that the NuGet project used to get corrupted quite often and I had to reinstall everything (and back then, there was no lockfile if my memory serves me right, so you'd be getting different versions).
In terms of performance, ASP.NET (not ASP.NET Core) was as bad as contemporary Java EE frameworks, if not worse. You could make a high performance web server by targeting OWIN directly (like you could target the Servlet API with Java), but that came later.
I think you are the one who is bundling things together here: you are confusing the current C#/.NET Core ecosystem with the way it was back in the .NET 4.0/Visual Studio 2008 era. Windows-centric, very hard to automate through the CLI, XML-obsessed and rather brittle tooling.
C# did have a lot of good points over Java back then (and certainly now): less verbose language, better generics (no type erasure), lambda expressions, extension methods, LINQ, etc. Visual Studio was also a better IDE than Eclipse. I personally chose C# over Java at the time (when I could target Windows), but I'm not trying to hide the limits it had back then.
Fair enough. You are right, and I apologize for the rather hasty comment. .NET in 2010 was a completely different beast and an unlikely choice in the context. It would be good for the industry if the perception of that past was not extrapolated onto the current state of affairs.
Thank you! I really appreciate it, since it did take a while writing this ;)
I agree. As someone unfamiliar with Go's history, that was incredibly well written. It felt like I cathartically followed Go's entire journey.
Unfortunately I have to quibble a bit, although bravo for such a high effort post.
> When you read through the lines, it becomes clear that the C++ replacement angle is more about Google than it is about Go. It seems that in 2009, Google was using C++ as the primary language for writing web servers
I worked at Google from 2006-2014 and I wouldn't agree with this characterisation, nor actually with many of the things Rob Pike says in his talk.
In 2009 most Google web servers (by unique codebase I mean, not replica count) were written in Java. A few of the oldest web servers were written in C++ like web search and Maps. C++ still dominated infrastructure servers like BigTable. However, most web frontends were written in Java, for example, the Gmail and Accounts frontends were written in Java but the spam filter was written in C++.
Rob's talk is frankly somewhat weird to read as a result. He claims to have been solving Big Problems that only Google had, but AFAIK nobody in Google's senior management asked him to do Go despite a heavy investment in infrastructure. Java and C++ were working fine at the time, and issues like build times were essentially solved by Blaze (a.k.a. Bazel) combined with a truly huge build cluster. Blaze is a command line tool written in ... drumroll ... Java (with a bit of C++ iirc).
Rob also makes the very strange claim that Google wasn't using threads in its software stack, or that threads were outright banned. That doesn't match my memory at all. Google servers were all heavily multi-threaded and async at that time, and every server exposed a /threadz URL on its management port that would show you the stack traces of every thread (in both C++ and Java). I have clear memories of debugging race conditions in servers there, well before Go existed.
> The common frameworks of the day (Spring, Java EE and ASP.net) introduced copious amounts of overhead, and the GC was optimized for high throughput, but it had very bad tail latency (GC pauses) and generally required large heap sizes to be efficient. Dependency management, build and deployment was also an afterthought.
Google didn't use any of those frameworks. It also didn't use regular Java build systems or dependency management.
At the time Go was developed Java had both the throughput-optimized parallel GC, and also the latency optimized CMS collector. Two years after Go was developed Java introduced the G1 GC which made the tradeoff more configurable.
I was on-call for Java servers at Google at various points. I don't remember GC being a major issue even back then (and nowadays modern Java GC is far better than Go's). It was sometimes a minor issue requiring tuning to get the best performance out of the hardware. I do remember JITC being a problem because some server codebases were so large that they warmed up too slowly, and this would cause request timeouts when hitting new servers that had just started up, so some products needed convoluted workarounds like pre-warming before answering healthy to the load balancer.
Overall, the story told by Rob Pike about Go's design criteria doesn't match my own recollection of what Google was actually doing. The main project Pike was known for at Google in that era was Sawzall, a Go-like language designed specifically for logs processing, which Google phased out years ago (except in one last project where it's used for scripting purposes and where I heard the team has now taken over maintenance of the Sawzall runtime; that project was written by me, lol sorry guys). So maybe his primary experience of Google was actually writing languages for batch jobs rather than web servers, and this explains his divergent views about what was common practice back then?
I agree with your assessment of Go's success outside of Google.
Thank you. I don't know much about the breakdown of different services by language at Google circa 2009, so your feedback helps me put things in focus. I knew that Java was more popular than the way Rob described it (in his 2012 talk[1], not this essay), but I didn't know by how much.
I would still argue that replacing C and C++ in server code was the main impetus for developing Go. This would be a rather strange impetus outside a big tech company like Google, which was writing a lot of C++ server code to begin with. But it also seems that Go was developed quite independently of Google's own problems.
I can't say anything about Google, but I also found that statement baffling. If you wanted to develop a scalable network server in Java at that time, you pretty much had to use threads. With C++ you had a few other alternatives (you could develop a single-threaded server using an asynchronous library, Boost ASIO for instance), but that was probably harder than dealing with deadlocks and race conditions (which are still very much a problem in Go, the same way they are in multi-threaded C++ and Java).
Yes, I am aware of that part, and it makes it clearer for me that Go wasn't trying to solve any particular problem with the way Java was used within Google. I also don't think Go won over many experienced Java developers who already knew how to deal with Java. But it did offer a simpler build-deployment-and-configuration story than Java, and that's why it attracted many Python and Node.js developers where Java failed to do so.
Many commentators have mentioned better performance and fewer errors with static typing as the main attraction for dynamic language programmers coming to Go, but that cannot be the only reason, since Java had both of these long before Go came to being.
Frankly speaking, GC was a more minor problem for people coming from dynamic languages. But the main issue for this type of developer is that the GC in Java is configurable. In practice, most of the developers I've worked with (even seasoned Java developers) do not know how to configure and benchmark the Java GC, which is quite an issue.
JVM Warmup was and still is a major issue in Java. New features like AppCDS help a lot to solve this issue, but it requires some knowledge, understanding and work. Go solves that out of the box, by foregoing JIT (Of course, it loses other important optimizations that JIT natively enables like monomorphic dispatch).
[1] https://go.dev/talks/2012/splash.article
The Google codebase had the delightful combination of both heavily async callback oriented APIs and also heavy use of multithreading. Not surprising for a company for whom software performance was an existential problem.
The core libraries were not only multi-threaded, but threaded in such a way that there was no way to shut them down cleanly. I was rather surprised when I first learned this fact during initial training, but the rationale made perfect sense: clean shutdown in heavily threaded code is hard and can introduce a lot of synchronization bugs, but Google software was all designed on the assumption that the whole machine might die at any moment. So why bother with clean shutdown when you had to support unclean shutdown anyway. Might as well just SIGKILL things when you're done with them.
And by core libraries I mean things like the RPC library, without which you couldn't do anything at all. So that I think shows the extent to which threading was not banned at Google.
As an aside:
This principle (always shutdown uncleanly) was a significant point of design discussion in Kubernetes, another one of the projects that adapted lessons learned inside Google on the outside (and had to change as a result).
All of the core services (kubelet, apiserver, etc) mostly expect to shutdown uncleanly, because as a project we needed to handle unclean shutdowns anyway (and could fix bugs when they happened).
But quite a bit of the software run by Kubernetes (both early and today) doesn’t always necessarily behave that way - most notably Postgres in containers in the early days of Docker behaved badly when KILLed (where Linux terminates the process without it having a chance to react).
So faced with the expectation that Kubernetes would run a wide range of software where a Google-specific principle didn’t hold and couldn’t be enforced, Kubernetes always (modulo bugs or helpful contributors regressing under tested code paths) sends TERM, waits a few seconds, then KILLs.
And the lack of graceful Go http server shutdown (as well as it being hard to do correctly in big complex servers) for many years also made Kube apiservers harder to run in a highly available fashion for most deployers. If you don’t fully control the load balancing infrastructure in front of every server like Google does (because every large company already has a general load balancer approach built from Apache or nginx or haproxy for F5 or Cisco or …), or enforce that all clients handle all errors gracefully, you tend to prefer draining servers via code vs letting those errors escape to users. We ended up having to retrofit graceful shutdown to most of Kube’s server software after the fact, which was more effort than doing it from the beginning.
In a very real sense, Google’s economy of software scale is that it can enforce and drive consistent tradeoffs and principles across multiple projects where making a tradeoff saves effort in multiple domains. That is similar to the design principles in a programming language ecosystem like Go or orchestrator like Kubernetes, but is more extensive.
But those principles are inevitably under communicated to users (because who reads the docs before picking a programming language to implement a new project in?) and under enforced by projects (“you must be this tall to operate your own Kubernetes cluster”).
This. I worked at Google around the same time. Adwords and Gmail were customers of my team.
I remember appreciating how much nicer it was to run Java servers, because best practice for C++ was (and presumably still is) to immediately abort the entire process any time an invariant was broken. This meant that it wasn't uncommon to experience queries of death that would trivially shoot down entire clusters. With Java, on the other hand, you'd just abort the specific request and keep chugging.
I didn't really see any appreciable attrition to golang from Java during my time at Google. Similarly, at my last job, the majority of work in golang was from people transitioning from Ruby. I later learned a common reason to choose golang over Java was confusion about the Java tooling / development workflow. For example, folks coming from Ruby would often debug with log statements and process restarts instead of just using a debugger and hot patching code.
Yeah. C++ has exceptions but using them in combination with manual memory management is nearly impossible, despite RAII making it appear like it should be a reasonable thing to do. I was immediately burned by this the first time I wrote a codebase that combined C++ and exceptions, ugh, never again. Pretty sure I never encountered a C++ codebase that didn't ban exceptions by policy and rely on error codes instead.
This very C oriented mindset can be seen in Go's design too, even though Go has GC. I worked with a company using Go once where I was getting 500s from their servers when trying to use their API, and couldn't figure out why. I asked them to check their logs to tell me what was going wrong. They told me their logs didn't have any information about it, because the error code being logged only reflected that something had gone wrong somewhere inside a giant library and there were no stack traces to pinpoint the issue. Their suggested solution: just keep trying random things until you figure it out.
That was an immediate and visceral reminder of the value of exceptions, and by implication, GC.
Android GPU debugger, USB Armory bare metal unikernel firmware, Go compiler, Go linker, bare metal on maker boards like Arduino and ESP32
Usually a problem only for those that refuse to actually learn about Java and .NET ecosystems.
Still doing great after 25 years, now being copied with the VC ideas to sponsor Kubernetes + WASM selling startups.
So nowadays when we say "C++" we mostly mean work that should be replaced by Rust, but back then it wasn't like that.
I would argue that Go successfully replaced C++ in specific domains (networking, etc.), and changed your perspective on what "C++" means.
That's nothing new, Java successfully replaced C++ in enterprise code in the mid-to-late 1990s. Because it was safe from memory bugs.
Mid 2000s in my experience. And not because it was safe from memory bugs so much as safe from memory leaks. Still had plenty of NPEs.
Java kind of gets around the memory leak problem by allocating all of the leak up front for the JVM. ;)
I'm a JVM guy, but this is a good one :-)
NPE isn't a memory corruption bug.
Those are safe.
And it’s not like Go didn’t copy nulls too (plus it even has shitty initialization problems now, e.g. with make!)
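For context, here's a small sketch of the kind of make pitfall presumably being alluded to (the example itself is my own illustration, not from the comment above):

```go
package main

import "fmt"

func main() {
	// make([]T, n) creates n zero values; if you meant "capacity n" and then
	// append, the zero values silently stay at the front.
	xs := make([]int, 3) // [0 0 0], probably intended as make([]int, 0, 3)
	xs = append(xs, 1, 2, 3)
	fmt.Println(xs) // [0 0 0 1 2 3], not [1 2 3]

	// And the nulls part: a nil map reads fine but panics on write.
	var m map[string]int
	fmt.Println(m["missing"]) // 0
	// m["k"] = 1 // would panic: assignment to entry in nil map
}
```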
Except it did replace C++ in the domains it claimed it would replace C++ in. It made clear from day one that you wouldn't write something like a kernel in it. It was never trying to replace every last use of C++.
You may have a point that Python would have replaced C++ in those places instead if Go had never materialized. It was clear C++ was already on the way out, with Python among those starting to push into its place around the time Go was conceived. Go found its success in the places where Python was also trying to replace C++.
I don't think Python was starting to occupy C++ space; they have entirely different abilities. Of course, I am also glad it didn't happen.
I don't think so either, but as we move past that side tangent and return to the discussion, there was the battle of the 'event systems'. Node.js was also created in this timeframe to compete on much the same ground. And then came Go, after which most contenders, including Python, backed down. If you are writing these kinds of programs today, it is highly likely that you are using Go, Node.js, or some language that is even newer than Go (e.g. Rust or Elixir). C++ isn't even on the consideration list anymore.
What domains are those? It seems to mostly be an alternative to what people have use(d) Java or C# for.
The original Go announcement spells it all out pretty nicely.
I'm not sure what you were meaning by "it".
The main domain the original team behind Go were aiming at was clearly network software, especially servers.
But there was no consensus whether kernel could be a goal one day. Rob Pike originally thought Go could be a good language for writing kernels, if they made just a few tweaks to the runtime[1], but Ian Lance Taylor didn't see real kernels ever being written in Go[2]. In the pre-release versions of Go, Russ Cox wrote an example minimalistic kernel[3] that can directly run Go (the kernel itself is written in C and x86 Assembly) - it never really went beyond running a few toy programs and eventually became broken and unmaintained so it was removed.
[1]: https://groups.google.com/g/golang-nuts/c/6vvOzYyDkWQ/m/3T1D...
[2]: https://groups.google.com/g/golang-nuts/c/UgbTmOXZ_yw/m/NH0j...
[3]: https://github.com/golang/go/tree/weekly.2011-01-12/src/pkg/...
Python is increasingly an easy to use wrapper over low-level C/C++ code.
So in many use cases it is faster than Go.
Than pure Go code, sure. But not really faster than Go code that's a wrapper over the same low-level C/C++ code.
That depends. C function call overhead for Go is quite large (it needs to allocate a larger stack, put it on its own thread and prevent pre-emption) and possibly larger than for CPython, which relies on calling into C for pretty much everything it does, so obviously has that path well-optimized.
So I wouldn't be surprised if, for some use cases, Python calling C in a tight loop could outperform Go.
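For the curious, here is a rough way to see the Go side of that overhead (a sketch only; the trivial C function and loop count are my own, and the absolute numbers vary a lot by platform and Go version):

```go
package main

// A tiny cgo micro-benchmark: each call to C.noop crosses the cgo boundary,
// which is far more expensive than a plain (non-inlined) Go call.

/*
static void noop(void) {}
*/
import "C"

import (
	"fmt"
	"time"
)

//go:noinline
func goNoop() {}

func main() {
	const n = 1_000_000

	start := time.Now()
	for i := 0; i < n; i++ {
		C.noop() // crosses into C every iteration
	}
	cgoPerCall := time.Since(start) / n

	start = time.Now()
	for i := 0; i < n; i++ {
		goNoop() // stays in Go
	}
	goPerCall := time.Since(start) / n

	fmt.Printf("cgo call: ~%v per call, Go call: ~%v per call\n", cgoPerCall, goPerCall)
}
```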
I don't have experience with Python, but I can definitely say switching between Go and C is super slow. I'm using a golang package which is a wrapper around SQLite: at some point I had a custom function written as a call-back to a Go function; profiling showed that a huge amount of time was spent in the transition code marshalling stuff back and forth between Go and C. I ended up writing the function in C so that the C sqlite3 library could call it directly, and it sped up my benchmarks significantly, maybe 5x. Even though sqlite3 is local, I still end up trying to minimize requests and data shipped out of the database, because transferring data in and out is so expensive.
(And if you're curious, yes I have considered trying to use one of the "pure go" sqlite3 packages; in large part it's a question of trust: the core sqlite3 library is tested fantastically well; do I trust the reimplementations enough not to lose my data? The performance would have to be pretty compelling to make it worth the risk.)
I think in general discouraging CGo makes sense, as in the vast majority of cases a re-implementation is better in the long run; so de-prioritizing CGo performance also makes sense. But there are exceptions, particularly for libraries where you want functionality to be identical, like sqlite3 or Qt, and there the CGo performance is a distinct downside.
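For anyone curious what that callback setup looks like, here is a minimal sketch assuming the mattn/go-sqlite3 driver (the function name and query are invented for illustration); every evaluation of the registered function calls back from C into Go, which is where the marshalling cost shows up in profiles:

```go
package main

import (
	"database/sql"
	"fmt"
	"strings"

	sqlite3 "github.com/mattn/go-sqlite3"
)

func main() {
	sql.Register("sqlite3_with_udf", &sqlite3.SQLiteDriver{
		// ConnectHook runs once per connection and registers the Go function.
		ConnectHook: func(conn *sqlite3.SQLiteConn) error {
			return conn.RegisterFunc("upper_go", strings.ToUpper, true)
		},
	})

	db, err := sql.Open("sqlite3_with_udf", ":memory:")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	var out string
	// The C sqlite3 library calls the Go implementation of upper_go here.
	if err := db.QueryRow(`SELECT upper_go('hello')`).Scan(&out); err != nil {
		panic(err)
	}
	fmt.Println(out) // HELLO
}
```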
Do you have an example of that? What I’ve heard over and over in comments here is that a) C interop in Go is slow, and b) Go devs discourage using it.
(Java is a similar story in my experience.)
In Python, (b) at least is definitely not true.
It may help to understand the context. At the time Go was created you could choose between three languages at Google: Python, C++ and Java.
Well, to be honest, if you chose Python you were kind of looked down on as a bit of a loser (*) at Google, so there were really two languages: C++ and Java.
Weeeell, to be honest, if you chose Java you would not be working on anything that was really performance critical so there was really just one language: C++.
So we wrote lots and lots of servers in C++. Even those who strictly speaking didn't have to be very fast. That wasn't the nicest experience in the world. Not least because C++ is ancient and the linking stage would end up being a massive bottleneck during large builds. But also because C++ has a lot of sharp edges. And while the bro coders would never admit that they were wasting time looking over their shoulder, the grown ups started taking this seriously as a problem. A problem major enough to warrant having some really good people look into remedying that.
So yes, at Google, Go did replace lots of C++ and did so successfully.
(*) Yes, that sentiment was expressed. Sometimes by people whose names would be very familiar to you.
Out of curiosity, what languages can you currently choose from at Google?
Just you guess: Python, C++ and Java… and Go.
Or JavaScript, Dart, Objective-C, Swift, Rust; even C#. But then it depends on the problem domain. Google is huge, so it depends. And that's even if you pick Python, C++, Java or Go: your team will already have it decided for you.
I haven't worked there for a long time so I wouldn't know. I don't even know if they are still as disciplined about what languages are allowed and how those languages are used.
Can someone still at Google chime in on this?
(sorry about the asterisk causing everything to be in italics. forgot about formatting directives when adding the footnote)
And yet most popular tools now written in Go used to be written in C++: Kubernetes, databases and the like.
Kubernetes mostly displaced tools written in Ruby (Puppet, Chef, Vagrant) or Python (Ansible, Fabric?). While a lot of older datastores are written in C++, new ones that were started post-2000ish tended to be written in Java or similar.
Kubernetes has nothing to do with the Ruby/Python tools from your example; it's far more complex and needs performance. What you described is not what k8s is doing.
Kubernetes is the equivalent of Borg/Omega at Google, which is written in C++.
Go appears to be made with radical focus on a niche that isn't particularly well specified outside the heads of its benevolent directorate for life. Opinionated to the point of "if you use Go outside that unnamed niche you have no one to blame but yourself". Could almost be called a solution looking for a problem. But it also appears to be quite successful at finding problem-fit, no doubt helped by the clarity of that focus. They've been very open about what they consider Go not to be or ever become, unlike practically every other language; the rest all seem to eventually fall into the trap of advertising themselves with what boils down to "in a pinch you could also use it for everything else".
It's quite plausible that before Go, its creators would have chosen C++ for problems they consider in "The Go Niche". That would be perfectly sufficient to declare it a C++ replacement in that niche. Just not a universal C++ replacement.
Consider this: the authors fixed some of the Plan 9 design errors, including the demise of Alef, by creating Inferno and Limbo (yeah, it was a response to Java OS, but still).
Where C is only used for the Inferno kernel, Limbo VM (with a JIT) and little else like Tk bindings, everything else in Inferno is written in Limbo.
Replace Limbo with AOT compiled Go, and that is what systems programming is in the minds of UNIX, Plan 9 and Inferno authors.
So, it’s Java 1.2, but worse. Cool contribution!
I think that is a far clearer goal if you look at C++ as it is used inside Google. If you combine the Google C++ style guide and Abseil, you can see the heritage of Go very clearly.
Man, arguments about the definition of "systems programming" are almost as much fun as the old "dynamic" vs "static" language wars.
IIRC, Google tried to use Go for their Fuchsia TCP stack and then backtracked. Not a systems programming language for sure.
Sure it backtracked, because the guy pushing for Go left the team, and the rest is history.
Is writing compilers, linkers, IoT and bare metal firmware systems programming?
I worked on Fuchsia for many years, and maintained the Go fork for a good while. Fuchsia shipped with the gvisor based (go) netstack to google home devices.
The Go fork was a pain for a number of reasons, some were history, but more deeply the plan for fixing that was complicated due to the runtime making fairly core architectural assumptions that the world has fd's and epoll-like behavior. Those constraints cause challenges even for current systems, and even for Linux where you may not want to be constrained by that anymore. Eventually Fuchsia abandoned Go for new software because folks hired to rewrite the integration ran out of motivation to do so, and the properties of the runtime as-written presented atrocious performance on a power/performance curve - not suitable for battery based devices. Binary sizes also made integration into storage constrained systems more painful, and without a large number of components written in the language to bundle together the build size is too large. Rust and C++ also often produce large binaries, but they can be substantially mitigated with dynamic linking provided you have a strong package system that avoids the ABI problem as Fuchsia does.
The cost of crossing the cgo/syscall boundary remains high, and got higher over the time that Fuchsia was in major development due to the increased cost of spectre and meltdown mitigations.
The cgo/syscall boundary cost shows up in my current job a lot too, where we do things like talk to sqlite constantly for small objects or shuffle small packets of less than or equal to common mtu sizes. Go is slow at these things in the same way that other managed runtimes are - for the same reasons. It's hard to integrate foreign APIs unless the standard library already integrated them in the core APIs - something the team will only do for common use cases (reasonably so, but annoying when you're stuck fighting it constantly). There are quite a few measures like this where Go has a high cost of implementation for lower level problems - problems that involve high frequency integration with surrounding systems. Go has a lower cost of ownership when you can pass very large buffers in or out of the program and do lots of work on them, and when your concurrency models fit the channel/goroutine model ok. If you have a problem that involves higher frequency operations, or more interesting targets, you'll find the lack of broader atomics, the inability to cheaply or precisely schedule work problematic.
All valid reasons. However, as proven by the USB Armory's bare metal Go unikernel, had the people behind Go's introduction stayed on the team, battling for it, maybe those issues would have been sorted out with Go still in the picture, instead of a rewrite.
Similar to Longhorn/Midori versus Android, on one side Microsoft WinDev politics managed to kill any effort to use .NET instead of COM/C++, on the other side Google teams collaborated to actually ship a managed OS, nowadays used by billions of people across the world.
In both cases, politics and product management vision won over the relevance of the respective technical stacks.
I always take with a grain of salt claims that A is better than B based only on technical matters.
I see you citing usb armory a lot, but I haven’t yet seen any acknowledgement that it too is a go fork. Not everything runs on that fork, some things need patching.
It’s interesting that you raise collaboration points here. When Russ was getting into the Go modules design he reached out, and I made time for him, giving him a brain dump of knowledge from working on Ruby gems for many years and the bundler introduction into that ecosystem and the forge deprecation/gemcutter transition, plus insights from having watched npm and cargo due to adjacencies. He took a lot of notes, and things from that showed up in the posts and design. When Fuchsia was starting to rumble about dropping Go I reached out to him about it, hoping to discuss some of the key points - he never got back to me.
It is written in TamaGo, originally developed by people at F-Secure.
I don't see the issue with it being a fork; plenty of languages have multiple implementations, with various degrees of pluses and minuses.
As for the rest, thanks for sharing the experience.
> the old "dynamic" vs "static" language wars.
We used to argue about dynamic vs static languages. We still do, but we used to, too.
then there's this https://en.wikipedia.org/wiki/Dynamic_programming
which has nothing to do with types nor variables but with algorithm optimization
The trick is to throw memory at it. if memoization helps, that’ll work without the memory hit!
RIP Mitch, you were one of the greats.
:) ...Mitch
I don’t recall anything but a single definition of the term until Google muddied the waters.
Seconding this. Go also has some opinionated standard libraries (want to differentiate between header casings in http requests because your clients/downstream services do? Go fuck yourself!) and shies you away from doing hacky, ugly, dangerous things you need in a systems language.
It’s absolutely an applications language.
Headers are case-sensitive?
Only per the HTTP spec, and this is the same misunderstanding that the golang developers have. Because it's so common to preserve header casing as requests traverse networks in the real world, many users' applications or even developers' APIs depend on header casing whether intentionally or not. So if you want to interact with them, or proxy them, you probably can't use Go to do so (ok, actually you can, but you have to go down to the TCP level and abandon their http request library).
Go makes the argument that they can format your headers in canonicalized casing because casing shouldn't matter per the HTTP spec. That's fine for applications I guess, though still kind of an overreach given they have added code to modify your headers in a particular way you might not want to spend cycles on - but unacceptable for a systems language/infrastructure implementation.
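For reference, a small sketch of the canonicalization behaviour in question (illustrative only, not taken from the parent comment): the http.Header methods fold names into canonical form via textproto.CanonicalMIMEHeaderKey, and only writing the map directly bypasses that.

```go
package main

import (
	"fmt"
	"net/http"
	"net/textproto"
)

func main() {
	h := http.Header{}
	h.Set("x-request-id", "123") // stored under "X-Request-Id"
	fmt.Println(h)               // map[X-Request-Id:[123]]

	fmt.Println(textproto.CanonicalMIMEHeaderKey("fOo-BaR")) // Foo-Bar

	// Writing the map directly skips canonicalization, so for an outgoing
	// HTTP/1.1 request this key goes on the wire as written here. Incoming
	// headers, however, are canonicalized by the parser before your handler
	// ever sees them.
	h["x-weird-CASING"] = []string{"preserved"}
	fmt.Println(h)
}
```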
I think you wanted to say that headers are not case-sensitive according to the HTTP spec, but some clients and servers do treat header names as case-sensitive in practice.
What Go does here is kinda moot nowadays, since HTTP/2.0 and HTTP/3.0 force all header names into lower-case, so they would also break non-conformant clients and servers.
That is in fact what I meant to say, and I thought I said it. Anyway, HTTP/1.1 is still in use a lot of places.
I think most people here don’t have any experience building for the kind of use cases I’m considering here (imagine a proxy like Envoy, which btw does give you at least the option to configure header casing transformations). When you have customers that can’t be forced to behave in a certain way up/down stream, you have to deal with this kind of stuff.
The Go standard library is probably being too opinionated here, but it's in line with the general worse-is-better philosophy behind Go: simplicity of implementation is more important than correctness of interface. In this case, the interface can even be claimed to be correct (according to the spec), but it cannot cover all use-cases.
If my memory serves me right, we did use Traefik at work in the past, and I remember having this issue with some legacy clients, which didn't expect headers to be transformed. Or perhaps the issue was with Envoy (which converts everything to lowercase by default, but does allow a great deal of customization).
Wait, are the headers canonicalized if you retrieve them from r.Header where r is a request?
I mean, if the safest thing is to conform to the HTTP spec, there should be an escape hatch for the rarer cases that's easier than going all the way down to the TCP level?
No, and that wasn't the claim being made. The claim being made was that there can be engineering value in preserving the case of existing headers.
Example: An HTTP proxy that preserves the case of HTTP headers is going to cause less breakage than one that changes them. In a perfect world, it would make no difference, but that isn't the world we live in.
Are you sure they are discarded and unrecoverable? Can't that be simply recovered by using textproto.MIMEHeader and iterating over the header map?
Seems that it could be a middleware away, I don't see the big deal if so.
They aren't, but because you can send Foo-Bar as fOo-BaR on the wire, someone somewhere depends on it. People don't read the specs, they look at the example data, and decide that's how their program works now.
Postel's Law allows this. A different law might say "if anything is invalid or weird, reject it instantly" and there would be a lot less security bugs. But we also wouldn't have TCP or HTTP.
But... You end up doing hacky and ugly things all the time because Go is such a restricted language with so many opinions about what should and should not be done. Generics alone...
> It certainly isn't viable as systems programming language
It is perfectly viable as a systems programming language. Remember, systems are the alternative to scripts. Go is in no way a scripting language...
You must be involved in Rust circles? They somehow became confused about what systems are, just as they became confused about what enums are. That is where you will find the odd myths.
It’s all admittedly a somewhat handwaving discussion, but in ‘systems programming’ ‘systems’ is generally understood to be opposite to ‘applications’, not ‘scripts’.
All software is application. That’s what software is for!
I wouldn't consider a driver an application
application: the action of putting something into operation
What's a driver if not something that carries out an action of putting something (an electronic device, typically) into operation?
We live in an age in which a PC running an OS, which has drivers in it, is something that can be done by Javascript in a browser.
Indeed - I’ve seen this refrain about “systems programming” countless times. I’m not sure how one can sustain the argument that a “system” is only an OS kernel, network stack or graphics driver.
let's just pretend that when golang people say "systems programming" they mean something closer to "network (systems) programming", which is where Go shines the brightest
And yet Google replaced the go based network stack in Fuchsia with rust for performance reasons.
After the guy responsible for it left the team.
This sounds more like for perf reasons than for performance reasons.
Hold on a minute.
You are confusing the network stack (as in OS development) and network applications. Go is the undisputed king of the backend, but no reasonable person has ever claimed it's a good choice for OS development.
I understand ysofunny's comment to have meant basically microservices/contemporary web backend.
For people of Pike's generation, "systems programming" means, roughly, the OS plus the utilities that would come with an OS. Well, Go may not be useful for writing the OS, but for the OS-level utilities, it works just fine.
Has it found success in OS-level utilities? What popular utilities are written in Go?
Not sure these are really popular, but I cannot resist advertising a few utilities written in Go that I regularly use in my daily workflow:
- gdu: a NCDU clone, much faster on SSD mounts [1]
- duf: a `df` clone with a nicer interface [2]
- massren: a `vidir` clone (simpler to use but with fewer options) [3]
- gotop: a `top` clone [4]
- micro: a nice TUI editor [5]
Building these kinds of tools in Go makes sense, as the executables are statically compiled and thus easy to install on remote servers.
[1]: https://github.com/dundee/gdu
[2]: https://github.com/muesli/duf
[3]: https://github.com/laurent22/massren
[4]: https://github.com/xxxserxxx/gotop
[5]: https://github.com/zyedidia/micro
Being self hosted?
Gokrazy userspace?
gVisor?
Docker and Podman.
Not sure what should be counted as OS-level.
Is the docker CLI OS-level? What about lazygit? chezmoi? dive? fzf?
Actually many popular utilities are written in Go
Early versions of Rust were a lot like Golang with some added OCaml flavor, complete with general GC, green threading, etc. They pivoted to current Rust, with its focus on static borrow checking and zero-overhead abstractions, very late in the language's evolution (though still pre-1.0, obviously) because they weren't OK with the heavy runtime and cumbersome interop with the C FFI. So there's that.
AFAIK there was never "general GC". There was a GC'd smart pointer (@), and its implementation never got beyond refcounting, it was moved behind a feature gate (and a later-removed Gc library type) in 0.9 and removed in 0.10.
Ur-Rust was closer to an "applications" language for sure, and thus closer to Go's (possibly by virtue of being closer to OCaml), but it was always focused much more strongly on type safety and lifting constraints to types, as well as more interested in low-level concerns: unique pointers (~) and move semantics (if in a different form) were part of Rust 0.1.
That is what the community glommed onto, leading to "the pivot": there were application languages aplenty, but there was a real hunger for a type-heavy and memory-safe low level / systems programming language, and Rust had the bones of it.
I didn't know I wanted this, but yes, I did want this and when I got it I was much more enthusiastic than I'd ever been about languages like Python or Java.
I bounced off Go, it's not bad but it didn't do anything I cared about enough to push through all the usual annoyances of a new language, whereas Rust was extremely compelling by the time I looked into it (only a few years ago) and has only improved since.
Both Rust and Go are descendants of Limbo, Pike's prior language, although while Limbo's DNA remains strong in Go it's much more diffuse in Rust.
While those influences are important to Rust's history, they were mostly removed from the language before 1.0, notably green threads and the focus on channels as a core concurrency primitive. Channels still exist as a library in stdlib, but they're infinitely buffered by default, and aren't widely used.
A GC can work fine. At the lower levels, people want to save every flop, but at the higher levels uncounted millions are wasted by JS, Electron apps etc. etc. We can sacrifice a little on the bottom (in the kernel) for great comfort, without a difference. But you don't even have to make sacrifices. A high performance kernel only needs to allocate at startup, without freeing memory, allowing you to e.g. skip GC completely (turn it off with a compiler flag). This does require the kernel to implement specific optimizations though, which aren't typically party to a language spec.
Anyway, some OS implemented with a GC: Oberon/Bluebottle (the Oberon language was designed specifically to implement the Oberon OS), JavaOS, JX, JNode, Smalltalk (was the OS for the first Smalltalk systems), Lisp in old Lisp machines... Interval Research even worked on a real time OS written in Smalltalk.
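As a side note on the "turn it off" part: in Go specifically the switch is a runtime call (or the GOGC=off environment variable) rather than a compiler flag. A minimal sketch of the "allocate up front, never collect" idea, with illustrative sizes:

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

func main() {
	debug.SetGCPercent(-1) // disable the garbage collector entirely

	// Pre-allocate the working memory "at startup", kernel-style; nothing is
	// collected afterwards unless the GC is re-enabled.
	arena := make([]byte, 64<<20) // 64 MiB, illustrative size
	_ = arena

	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("heap in use: %d MiB, GC cycles so far: %d\n", m.HeapInuse>>20, m.NumGC)
}
```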
Indeed, GC can work in hard real-time systems, e.g. the Aonix PERC Ultra, embedded real-time Java for missile control (but Go's current runtime's GC pauses are unpredictable...)
Particularly when we consider modern hardware problems (basic OS research already basically stopped in the 90s, yay risc processor design...), with minimal hardware support for high speed context switching because of processor speed vs. memory access latency... Well, it's not like we can utilize such minuscule operations anyway. Why don't we just have sensible processors which don't encourage us to unroll loops, which have die space to store context...
There were Java processors [2] which implemented the JVM in hardware, with Java bytecode as machine code. Before LLVM gained dominance, there were processors optimized for many languages (even Forths!)
David Chisnall, an RTOS and FreeBSD contributor, recently went into quite a bit of depth [1], ending with:
[1] https://lobste.rs/s/e6tz0r/memory_safety_is_red_herring#c_gf...
[2] https://www.electronicdesign.com/technologies/embedded/artic...
The nice thing about Java is you can choose which GC to use
Not only the GC, the JIT compiler, the AOT compiler, the full implementation even.
Maybe the better term for Go would be server-systems programming.
Server programming?
The term "systems programming" seems to be interpreted very differently by different people which in practice renders it useless. It is probably best to not use it at all to avoid confusion.
Niklaus Wirth, rest his soul, would disagree.
Like would the folks at WithSecure, selling the USB Armory with Go written firmware.
https://www.withsecure.com/en/solutions/innovative-security-...
Back in my day, writing compilers and OS services were also systems programming.
The shells scripts that bring up a machine are also "systems programming".
"systems" can mean "distributed systems", "network systems" etc. both of which Go is suitable for. It's obviously not a great choice for "operating systems" which is well known.
Interesting point of view - Golang might be pithily described as "Java done right". That has little to do with "systems programming" per se but can be quite valuable in its own terms.
Java has a culture of over-engineering, to the point where even a logging library contains a string interpolator capable of executing remote code. Go successfully jettisoned this culture, even if the language itself repeated many of the same old mistakes that Java originally did.
[looks at the code bases of several recent jobs] [shakes head in violent disagreement]
If I'm having to open 6-8 different files just to follow what a HTTP handler does because it calls into an interface which calls into an interface which calls into an interface (and there's no possibility that any of these will ever have a different implementation) ... I think we're firmly into over-engineering territory.
(Anecdata, obviously)
But don't those exist primarily for unit testing? That was my understanding of why interfaces were there.
If you wanted a mock, you needed an interface, even if there would ever only be one implementation of it in production.
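For readers who haven't worked in such a codebase, this is roughly the pattern being described (all names here are invented for illustration):

```go
package main

import (
	"context"
	"fmt"
)

// Charger has exactly one production implementation, but exists largely so
// tests can substitute a fake.
type Charger interface {
	Charge(ctx context.Context, customerID string, cents int64) error
}

// OrderHandler depends on the interface rather than the concrete client.
type OrderHandler struct {
	charger Charger
}

func (h *OrderHandler) ProcessOrder(ctx context.Context, customerID string, cents int64) error {
	return h.charger.Charge(ctx, customerID, cents)
}

// fakeCharger is what a _test.go file would pass in instead of the real
// payment gateway client.
type fakeCharger struct{ calls int }

func (f *fakeCharger) Charge(context.Context, string, int64) error {
	f.calls++
	return nil
}

func main() {
	fake := &fakeCharger{}
	h := &OrderHandler{charger: fake}
	_ = h.ProcessOrder(context.Background(), "cust-42", 1999)
	fmt.Println("charge calls:", fake.calls) // 1
}
```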
I hate this pattern. Needless indirection for insignificant benefit.
Golang "punishes" you for wanting to write a unit test around code.
You need to refactor it to use an interface just to unit test it.
No, you don’t, unless you’re of the opinion that actual data structures with test data should not be used in a unit test.
It's indeed horrible when debugging. OTOH, there's merit to the idea that better testing means less overall time spent (on either testing or debugging), so design choices that make testing easier provide a gain -- provided that good tests are actually implemented.
I believe that's why people insert them everywhere, yes, but in the codebases I'm talking about, many (I'd say the majority, to be honest) of the interfaces aren't used for testing because they've just been cargo-culted rather than actually considered.
(Obviously this is with hindsight - they may well have been considered at the time but the end result doesn't reflect that.)
Why would those require 6 separate files?
They don't; you could put it all in one file but people tend to separate interfaces from implementation from types from ...
Interface, actual implementation, Factory, FactoryImpl, you get the idea.
Java lends itself to over-engineering more than most languages. Especially since it seems that every project has that one committer who must be getting paid per line and creates the most complex structures for stuff that should've been a single static function.
I stand corrected!
Java is a beautiful and capable language. I have written ORMs in both Java and Go, and the former was much easier to implement. Java has a culture problem though where developers seem to somehow enjoy discovering new ways to complicate their codebases. Beans are injected into codebases in waves like artillery at the Somme. Errors become inscrutable requiring breakpoints in an IDE to determine origin. What you describe with debugging a HTTP handler in your Go project is the norm in every commercial Java project I have ever contributed to. It's a real shame that you are seeing these same kinds of issues in Go codebases.
Agree. Open source OAuth go libraries have this too. It's like working with C++ code from the bad old days when everyone thought inheritance was the primary abstraction tool to use.
I asked Rob why he didn't like Java (Gosling was standing nearby) and he said "public static void main"
Nice. My reply would have been something like: it combines the performance of Lisp with the productivity of C++. These days Java the language is much better though, thanks to Brian Goetz.
Is that supposed to be a jab? Because IME SBCL Lisp is in the same ballpark as Go (albeit offering a fully interactive development environment), and C++ is far from being the worst choice when it comes down to productivity.
Hopefully you agree Lisp is more productive than C++? Lisp is however not quite fast or efficient enough to displace C++ completely, mainly because, like Java and Go, it has a garbage collector. C++ was very much the language in Java's crosshairs. Java made programming a bit safer, nulls and threads notwithstanding, but was certainly not as productive as Lisp. Meanwhile Lisp evolved into Haskell and OCaml, two very productive languages which thankfully are inspiring everyone else to improve. Phil Wadler (from the original Haskell committee) has even been helping the Go team.
I agree; my point is simply that C++ is still OK-ish w.r.t. productivity, and Lisp is OK when it comes down to performance.
Common Lisp is also among the GC-based languages that offer mechanisms for GC-free allocation, and it has value types.
The only reason is the AI winter, and companies moving away from Lisp.
The performance of the JVM was definitely a fair criticism in its early years, and still is when writing performance-critical applications like databases, but it's still possibly the fastest managed runtime around, and is often only a margin slower than native code on hot paths. It seems the reputation has stuck, though, to the point that I've seen young programmers make stock jokes about Java being slow when their proposed alternative is Python.
still better than case deciding visibility
I love this feature, to be honest.
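For readers who haven't used Go, the feature being debated here is that identifier case alone controls visibility; a minimal sketch with invented names:

```go
package payment

import "errors"

var errInvalidAmount = errors.New("amount must be positive")

// Process is exported: the capital P alone makes it visible to other packages.
func Process(amount int) error {
	return validate(amount)
}

// validate is unexported: the lowercase v keeps it package-private.
func validate(amount int) error {
	if amount <= 0 {
		return errInvalidAmount
	}
	return nil
}
```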
I consider that a bad practice, because it doesn't make things obvious. I guess it works so well in Go, because the language itself is small, so that you don't have to remember much of these "syntax tricks". Making things explicit, but not too verbose, is the best way in my opinion. JetBrains has done amazing work in this area with Kotlin.
I like the `for..in`, which reads like plain English. Or `vararg` is pretty clear - compare that to "*" and the like in other languages. Or `constructor` cannot be more explicit; there is no need to teach anyone that the name must be the same as the class name (and changed accordingly). Same is true for companion objects (compare with Scala).
Well, that's taken care of: https://openjdk.org/jeps/463 (well, you need `void main`)
I've always found it eye rolling how often this is given as some sort of "mic drop" against Java. Yeah it's a little weird having to have plain functions actually be "static methods", but it's a very minor detail. And I really hope people aren't evaluating their long-term use of a language based on how tersely you can write Hello World
It seems the goal of Java is to have one executable line of code per file.
Thus, the Java exception trace in the log file is almost like an interactive debug trace.
Whether that is a bug or a feature is an exercise for the reader.
Go code is equally over-engineered when it falls into the hands of enterprise architects.
You just happen to be looking at the wrong spot: see Kubernetes, YAML spaghetti, and plenty of other stuff that originated from Go in the enterprise space.
From looking at what the Go team had to say about Go in its earliest days, Go had very little to do with Java, and they weren't very concerned with fixing Java's issues.
The "Bloated Abstractions" issue in Java is more of a cultural thing than an issue of the language. You could even say it's partially because early Java (especially before Java 1.5) was too much like Go!
Java used to have the same philosophy around abstractions, and Sun/Oracle were pretty conservative about adding new language features. To compensate for the lack of good language-level abstractions, Java developers used complicated techniques and design patterns, for example:
1. XML configuration, because there were no annotations.
2. Annotations, because there were no generics and closures.
3. Observers/Visitors/Strategies/etc., because there weren't any closures.
4. Heavy inheritance, because there was no delegation.
5. Complicated POJOs and beans, since Java didn't have properties or immutable records.
No, not even "pithily" is it "Java done right".
More like Java 1.2 sold in a worse package.
I was a Java dev and love using Go now, but I have to say I'm not sure if many of my Ex-Java-Colleagues would like Go. Go is kind of odd in that even when it was new, it was kind of boring.
I think a lot of people in the Java world (not least myself) enjoy trying to refactor a codebase for new Java features (e.g. streams, which are amazing). In the Go world, the enjoyment comes from trying to find the simplest, plainest abstractions.
As a Java dev, I love boring. That's why I picked Java. Boring means less outages.
Not sure I'd give that medal to Go.
When did go get abstractions? (Only half joking)
Isn't the entire language designed explicitly to prevent programmers from building their own sophisticated abstractions that could confuse other programmers who don't understand that other person's code? As I understand it, if you can read Go and understand basic programming you should be competent with Go, and if you know your algorithms you should be proficient.
I hated old Java, but the modern language isn't as bad now: people have added some better syntax shortcuts, the libraries are nearly twenty years more polished, and the IDE can nearly half-write my code for me, so the boilerplate and mind-numbing aspect isn't so bad… I loathe Go because using it feels like programming with my hands tied behind my back, typing on a keyboard with sandpaper keycaps. Despite that, I didn't bother "learning" Go; I could just read it based on my Python/C/Basic/Java/C# experience instead of needing any extra learning.
My experience with reading Go is that the language not giving tools to build good abstractions has failed to stop developers from trying to do so anyway. There's never a line of code where I just plain don't know what's even going on syntactically as some languages can have, but understanding what it's actually doing can still require hopping through several files.
In short: a simple (programming) language means that every small part/line is simple. But it doesn't mean that the combination of all parts/lines is simple. Rather the opposite.
Very true! I think a lot of the accidental complexity of early Java systems was rooted in the not-so-powerful language. If the language is too powerful (like Scala 2), developers do insane things with it. If the language is not powerful enough, developers create their own helpers and tricks everywhere and have to write a lot of additional code to do so.
Just compare Java streams with how collections are handled in Go and scratch your head how someone can come up with such a restricted language in this century.
And most importantly: you have to read a lot of code like this, and understand its assumptions, failure modes, runtime behavior, and bugs, which are different every time. Instead of just reading "ConcurrentHashMap" and being done with your day.
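To make the comparison concrete, here is a rough sketch of the hand-written loop Go requires for a filter-and-sum that a Java stream would express in one chained call (the Order type and field names are invented):

```go
package report

// Order is an invented type just to make the comparison concrete.
type Order struct {
	Region string
	Total  int
}

// TotalByRegion sums order totals for one region. A Java stream would
// express this as a single chained filter/mapToInt/sum call; in Go you
// write the loop out by hand each time.
func TotalByRegion(orders []Order, region string) int {
	sum := 0
	for _, o := range orders {
		if o.Region == region {
			sum += o.Total
		}
	}
	return sum
}
```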
K8s just so happens to be coded in Golang. A quick look at that overall codebase should be enough to disabuse people of this notion that Golang developers cannot possibly come up with confusing or overly sophisticated abstractions.
Maybe because k8s was originally written in Java ;)
Eh, not really. Go's philosophy around abstractions is quite poor. Duck typing begs engineers to create poor abstractions, such that simply reading a codebase does not necessarily lead to understanding. The bolted-on generics implementation actually makes this worse.
This is why I personally love Go too :)
There's very little room for fancy tricks, in most cases there is just one way to do things. It might be verbose, but writing code is the least time consuming part of my job anyway.
Java was designed to be boring, too. That’s why, for example, it doesn’t have unsigned integers: it means programmers need not spend time choosing between signed and unsigned integers.
It evolved away from that.
Yeah, Java has been trying to add every feature under the Sun recently and it's definitely not a boring language anymore (since Java 21 at least, it's impossible to claim otherwise with things like pattern matching being in the language).
As a Java guy, I think this is looking like a desperate attempt to remain relevant while forgetting why the language succeeded in the first place.
That's an absolutely bad take. Java is still very, very conservative with every change, and new features almost always have only local behavior, so not knowing them still lets you understand a program completely.
Like, records are a very easy concept, fixing the similar feature in, say, C#, where they are mutable. Sealed classes/interfaces are a natural extension of the already existing final logic. It just puts a middle option between none and all (other classes) being able to inherit from a superclass.
C# records default to immutability. However, struct records, being a lower-level construct, default to mutable (which can be changed with the readonly keyword).
Having no unsigned integers gave us signed bytes. Not sure if that made things simpler.
Almost every golang program I've seen was ugly. It's strange given that they designed the language from scratch with all the ugliness ingrained in its structure from day one.
Different people, different tastes, different contexts, different standards of beauty.
The only strange thing here is that you are presenting opinion as some kind of fact.
Every statement about aesthetics is subjective, you don't have to remind me of that. BTW, what did YOU write about Amazon shows 6 days ago? No one was pontificating about your opinion, right?
Please repent :-)
Working with deeply nested data structures in Go is still frustrating; it's one place where Java still wins, thanks to the streams API.
There is no area where Java would fare worse than Go.
Not sure I would agree with the community leading aspect. It still feels like Google decides.
My particular point would be versioning. At first Go refused to acknowledge the problem. Then, when there was finally growing community consensus, Go said forget everything else, now we are doing modules.
I also recall the refusal to make monotonically-increasing time a public API until Cloudflare had a leap-second outage.
Personally their handling of versioning, generics and ESPECIALLY monotonic time (in all 3 cases, seemingly treating everyone raising concerns about the lack of a good solution as if they were cranks and/or saying fix it yourself) definitely soured me on Go and I would never choose it for a project or choose to work for a company that uses it as language #1 or language #2.
It just left a bad taste in my mouth to see the needs and expertise of actual customers ignored by the Go team that way since the people in charge happened to be deploying in environments (i.e. Google) where those problems weren't 'real' problems
Undeniable that people have built and shipped incredible software with it, though.
Package management is a very blatant entry for this list too.
Backend people use Go in my company. They do great things with it. It works well enough when the interface between a Go program and another one is a socket kind of thing.
But we also have a couple of system utilities for embedded computers written in Go. I still get frustrated that I have to go and break my git configuration to enable ssh-based git clones and set a bunch of environment variables for private repos. Then there is the cgo stuff, like comments being read as code directives. Those things are an incredible waste of the embedded developers' time, and they make onboarding people harder for no reason. Go generally spits out cryptic errors when building private repos like those.
I always wanted and still want to create a wrapper that launches a container, applies whatever "broken" configuration makes the Go compiler happy, figures out file permissions, and runs the compiler. The wrapper should be the only Go executable on my host system, and each repo should come with a config file for it.
Just curious, but why would you disable ssh-based git authentication? It's significantly more convenient when interacting with private repositories than supplying a username and password to https git endpoints.
Set up a private Go module proxy. Use something like Athens. The module proxy can handle authentication to your private module repositories itself, then you just add a line in your personal bashrc that specifies your proxy.
In general I don't have complaints about the things you take issue with, so I'll leave those aside.
Ask the Go developers. AFAIK the only package manager where I have to change my global git configuration to make it work. Even the venerable CPAN and tlmgr behave better.
https://stackoverflow.com/questions/27500861/whats-the-prope...
Yeah, this isn't at all necessary. It might work for you but it's not how we accomplish the same thing. See my original comment for what we do.
I'm sorry, but I didn't catch it; how exactly do you `go get` private github repos?
I don't disable it. However, not every git repo requires ssh to pull. When working with other languages, if there is a library that I purely depend on, it is perfectly okay to use https only and I use https.
However to use private repos with Golang, one has to modify their global git configuration to reroute all https traffic into ssh because Golang's module system uses only https and the private repos are ssh-authenticated. There is no way to specify which repo is ssh and which repo is https. Last time I used Go, it was at 1.19.
Why should we put more things on our stack just to make a language, which claims to be modern, work? Why do we have to change the global configuration of a build server to make it work? Rust doesn't require this. Heck our Yocto bitbake recipes for C++ can do crazy things with url including automatically fetching the submodules.
Maybe it would make sense to make that change if we used Go everyday but we don't.
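For what it's worth, the commonly documented workaround is exactly the global rewrite described above, plus a GOPRIVATE setting so the toolchain skips the public proxy and checksum database; a sketch with a made-up organization name:

```
# Tell the go tool these modules are private (skips proxy + checksum DB).
go env -w GOPRIVATE='github.com/examplecorp/*'

# The global git change at issue: route HTTPS module fetches over SSH.
git config --global url."git@github.com:".insteadOf "https://github.com/"
```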
That's not exactly novel, and while I agree that it's meh, what really grinds my gears is the claim/assertion that Go doesn't have pragmas or macros, while they're over there using specially formatted comments as exactly that, like it's 2001-era Java.
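Those specially formatted comments are indeed real and load-bearing; a small illustrative sketch of a few directives the toolchain recognizes (the package, function, and file names are invented, and the embed assumes a version.txt file exists next to this source file):

```go
package assets

import _ "embed"

// A code-generation hook: `go generate ./...` runs the command below.
// (Real projects typically invoke tools like stringer or mockgen here.)
//go:generate echo "generate step would run here"

// Compiler directive: forbid inlining of this function.
//go:noinline
func hotPath() {}

// Embed version.txt (assumed to exist at build time) into the binary.
//go:embed version.txt
var version string
```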
What is an ”actual customer” in the context of the Go programming language?
Anyone who's run an open source project is used to getting feature requests or complaints from groups like:
* people who are merely interested but have no plans to use your project
* people with strong opinions not backed by actual experience
* people with a specific interest (like a new API or feature) who want to integrate it into as many projects as possible
From a naive perspective, it makes sense to treat a request like 'we need monotonic time' as something that doesn't necessarily have any merit. The Go team are very experienced and opinionated, and it seems like it was a request that ran against their own instincts. The design complication probably was distasteful as well.
The problem is, the only reason they never needed monotonic time in the past was that many of them spent all their time working in special environments that didn't need it (Google doesn't do leap seconds). In practice other people shipping software in the wider world do need it, and that's why they were asking for it. Their expertise was loudly disregarded even though the requests came with justification and problem scenarios.
For anyone not familiar with the monotonic time issue, the implementation was found to be incorrect, and the Go devs basically closed it and went "just use Google smeared time like we do lol, not an issue, bye".
It did eventually get fixed I believe, but it was a shitty way of handling it.
For reference, the GitHub thread is: https://github.com/golang/go/issues/12914
Even the "fix" is... ugh: instead of exposing monotonic time, time.Time contains both a wallclock time and an optional monotonic time, and operations update and use "the right one(s)".
Also it's not that the implementation was incorrect, it's that Go simply didn't provide access to monotonic time in any way. It had that feature internally, just gave no way to access it (hence https://github.com/golang/go/issues/16658).
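For context, this is roughly what the dual reading looks like with today's standard library (these are real APIs, nothing invented):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now() // carries both a wall-clock and a monotonic reading

	time.Sleep(10 * time.Millisecond)

	// Subtraction uses the monotonic reading, so elapsed time is immune
	// to wall-clock jumps (NTP steps, leap-second handling, ...).
	fmt.Println("elapsed:", time.Since(start))

	// Round(0) strips the monotonic reading, leaving wall clock only.
	fmt.Println("wall clock only:", start.Round(0))
}
```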
I mostly loved what they did with modules/package management. I think SIV (semantic import versioning) was a mistake, but modules were miles better than the previous projects even with SIV. Some people seemed to take it very personally that the Go team didn't adopt their prior solution, but idk why they expected the Go team to use their package manager. I think the SAT-style package manager proposed would have created a lot more usability problems for developers and would have been much harder to maintain.
They took it personally because Go led the community and those project leaders on for years that it would be looking, learning, and communicating...
And then dropped the module spec and implementation and mandated it all in about two days. With no warning or feedback rounds or really any listening at all, just "here it is, we're done", out of nowhere.
They have every right to be personally insulted by that.
2 days? ISTR discussions going on for at least 6 months comparing dep with what the 'official' one would do.
I feel for this, but only to an extent. It's hard to work in any service industry and retain any notion of, "the customer intelligently knows what they want," as a part of your personal beliefs. At the end of the day, you had an idea for a product, and you have to trust your gut on that product's direction, which is going to make some group of people feel a little unheard.
I think Go's language leadership is one of the worst if not the worst I've ever seen when it comes to managing a language community/PR. Both Ian and Rob come off as dismissive of the community and sometimes outright abrasive in some of the interactions I've seen. Russ Cox seems like a good person, though.
They probably think being hardheaded "protects" the language from feature creep and bad design, but it has also significantly delayed progress (see generics) and generally made me completely turned off from participating in language development or community in any meaningful way, even though I actually like the language. I think there are ways to prevent feature creep and steer the language well without being dismissive or a jerk.
Not my experience (except for the Russ bit :o)
I've actually been quite impressed by Ian's patience.
People are at different levels of understanding and sometimes it's hard to communicate.
Go is used extensively in server-side business applications. Arguably it shouldn't be, but it is.
Why shouldn’t it be?
Too low level and lacks the power to cleanly model the business domain.
"Too low level" "lacks the power" - I don't understand what this means. What are things that are hard to do in Go business applications that other languages do better?
It lacks modeling capability that you'd find even in languages like Java and C#. Enums, records, pattern matching, switch expressions, and yes even inheritance where it makes sense.
Streams, meta-programming, enums, "modern" switches, generators, portable types in CGo, streamlined error handling, safer concurrency primitives, nil, partial struct initialization, ...
Here is an example. Go lets structs be passed by value or by reference (see the sketch below). The programmer needs to decide, and that adds complexity that is largely irrelevant for modeling complex business logic. Java does not provide a choice, which keeps it simple.
Go has pretty powerful composition, reuse, higher-order functions etc. for dealing with byte arrays and streams. Not so much for business domain entities.
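As a concrete version of that value-vs-pointer decision, a minimal sketch (the Invoice type and function names are invented):

```go
package billing

// Invoice is an invented type just to show the choice being described.
type Invoice struct {
	Total int
}

// Passed by value: the function works on a copy, so the caller's
// Invoice is left unchanged.
func discountByValue(inv Invoice, pct int) {
	inv.Total -= inv.Total * pct / 100
}

// Passed by pointer: the caller's Invoice is updated in place.
func discountByPointer(inv *Invoice, pct int) {
	inv.Total -= inv.Total * pct / 100
}
```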
I believe even the language's own designers would agree with that sentiment. There's just generally a lot of things about Go that are great for low-level microservices but not great for 1M+ line of code business applications maintained by large teams.
I can't speak for others, but personally if I'm writing software with complex business logic, I'd want null safety, better error handling, a richer type system, easier testing/mocking... I've also never liked that a panic in one goroutine crashes the whole application (you can recover if it's your own code, sure, but not if it happened in a goroutine launched by some library).
I'd disagree with most of that, but the panic in goroutines really hits home. It's so annoying to have to remember to add recover to every goroutine you start just to avoid crashing your application. I don't get why there's no global recover option that can recover from panics in goroutines as well.
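For reference, this is the per-goroutine boilerplate being described; a minimal sketch (the helper name safego is made up, and it only helps for goroutines you start yourself, not ones launched inside libraries):

```go
package worker

import "log"

// safego starts fn in its own goroutine and converts a panic into a log
// line instead of letting it crash the whole process. This is the
// boilerplate you end up repeating for every goroutine you start.
func safego(fn func()) {
	go func() {
		defer func() {
			if r := recover(); r != nil {
				log.Printf("recovered from panic: %v", r)
			}
		}()
		fn()
	}()
}
```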
The only Go I ever touched in industry was the backend of a web-app at Salesforce. I'm not sure this counts as "systems programming".
https://engineering.salesforce.com/einstein-analytics-and-go...
Rob describes go as a language for writing server applications, and I think that is a much more applicable term than systems programming.
Drew DeVault called it an "internet" language back in 2021. And to that I more or less agree.
Read footnote 1 for context.
https://drewdevault.com/2021/04/02/Go-is-a-great-language.ht...
But then again the internet is everywhere now: desktop, servers, watches, washing machines, industrial systems, sensors ... So "internet language" is a somewhat pointless term.
That term isn’t meant to include mobile apps, desktop apps, web apps (even though those all use the internet, of course). Nobody is using Go for any of those, as far as I know.
So I think it is a useful term, and captures the things Go is good at surprisingly well.
Yep, I'd say one of the things they could have done better is in making this distinction more clear to people. I spent multiple years being confused about what made go a "systems" language, when it didn't seem very good for that at all. When all the devops / infrastructure tooling started being written in it, its niche suddenly became more clear to me.
> It's interesting to compare Dart, which has zero uptake outside Flutter
Caveat: I work on Dart.
I don't see that that's a very damning critique of Dart. Every language needs libraries/frameworks to be suited for a domain. Flutter is a framework for writing client apps in Dart. Without Flutter, no matter how much you like Dart the language, you'd be spending a hell of a lot of time just figuring out how to get pixels on screen on Android and iOS. Few people have the desire or time for that.
Anyone writing applications builds on a stack of libraries and frameworks. The only difference I see between Go and Dart with Flutter is that Go's domain libraries for networking, serialization, crypto and other stuff you need for servers are in the standard library.
Dart has a bunch of basics in the built in libraries like collections and asynchrony, but the domain-specific stuff relies on external packages like Flutter.
That's in large part because Dart has had a robust package management story from very early on: many "core" libraries written and maintained by the Dart team are still shipped using the package manager instead of being built-in libraries because it makes them much easier to evolve.
I prefer that Flutter isn't baked into Dart's standard libraries, because UI frameworks tend to be shorter-lived than languages. Flutter is wonderful, but I wouldn't be surprised if twenty years from now something better comes along. When that happens, it will be easier for Dart users to adopt it and not use Flutter because Flutter isn't baked in.
I don’t disagree with all that but it seems tangential to the point being made, that people just aren’t using Dart except for Flutter apps. So compared to Go it’s very much a niche language (although maybe it’s a really big niche, I don’t know).
One might as well say "People just aren't using Go except for server apps."
Go is used for command-line tools too, e.g. esbuild.
It's a question of whether the tail is wagging the dog. Flutter is more important than Dart, unless Dart finds a way to expand into another niche. I don't think any one Go framework is bigger than the language itself (even if you were to include the standard library networking utils as a framework).
By that same token, Go's standard library and runtime is more important than Go.
Is that an indictment of Go, or just an observation that a usable platform is an interconnected set of tools?
They gutted the key FOSS teams during the layoffs, the c-suite hates real FOSS, it doesn’t look good on Ruth’s spreadsheet.
Of course, pretend FOSS like Android they strategically tolerate, but beyond that unless it results in an ad click it’s useless.
Who is Ruth?
Google CFO
Most employees think she’s the CEO to be fair.
Bloatware? That’s already an uninformed and loaded term, and Google still has and writes more Java than Go, if I’m not mistaken.
Java is so bad, and .NET as well, that now WASM startups are re-inventing application servers on top of Kubernetes + WASM.
On one side the ecosystems get bashed, on the other, the same ones complaining get to re-invent them, badly.
Where I stand, we keep enjoying Java and .NET bloatware, with powerful programming languages, thank you very much.
Go is only used in DevOps scenarios where there is no way around it.
The only place I advocate for it is as a C replacement, for the same role as Limbo in Inferno or Oberon in 1992, not the Java or .NET ecosystems.
What they really got right, in my opinion: show, don't tell, and modesty.