
More powerful Go execution traces

Zuiii
38 replies
1d11h

Go's standard library is a shining example of what all standard libraries should strive for. Yet, we still have some languages whose developers refuse to include even a basic HTTP API in their standard libraries, in an age where even embedded systems have started to speak HTTP. Imagine if the same had happened with TCP and UDP...

Here's to the continued success of Go and other sanely-designed languages.

pjmlp
18 replies
1d9h

Python (1991), Java (1995), .NET (2001), Smalltalk (1972), Common Lisp (1984), Ruby (1995), Perl (1993).

Go is following, not leading.

To this day ISO C and ISO C++ still don't include TCP and UDP in their standard libraries; that comes from POSIX, the UNIX APIs that didn't make it into either ISO C or ISO C++.

jchw
12 replies
1d7h

Go is not quite the same but definitely on a similar level of abstraction to C, C++, Rust, etc. If that statement makes you raise an eyebrow, I'd suggest that while Go requires more runtime than C or Rust, it does still give you:

- Direct memory access through `unsafe`. Python kinda does through `ctypes`?

- Tightly integrated assembly language; just pop Go-flavored assembly into your package and you can link directly to it.

- Statically-compiled code with AOT; no bytecode, no interpreters, no JITs.

Therefore, what really sets Go apart is that it gives you all of these rich standard library capabilities in a relatively low-level language.
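To make the `unsafe` point above concrete, here's a minimal sketch of the kind of direct memory access it allows (this is essentially what math.Float64bits does under the hood):

    package main

    import (
        "fmt"
        "unsafe"
    )

    func main() {
        // Reinterpret the bytes of a float64 as a uint64 in place, with no
        // copy and no conversion function (the same trick math.Float64bits uses).
        f := 1.0
        bits := *(*uint64)(unsafe.Pointer(&f))
        fmt.Printf("%#x\n", bits) // 0x3ff0000000000000
    }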

Of course, I kind of understand why Rust and C++ don't put e.g. a PNG decoder in the standard library; I think this is partly an issue of mentality: those are things that are firmly considered the job of libraries. But honestly, I wish they did. It's not like the existence of things in the standard library prevents anyone from writing their own versions (it doesn't stop people from doing so in Go, after all), but when done well it provides a ton of value. I think nobody would claim that Go's flag package is the best CLI flag parsing library available. In fact, it's basically just... fine. But, on the other hand, it's certainly good enough for the vast majority of small utilities, meaning that you can just use it whenever you need something like that, and that's amazing. I would love to have that in Rust and C++.

And after experiencing OpenSSL yet again just recently, I can say with certainty that I'd love Go's crypto and TLS stack just about everywhere else, too. (Or at least something comparable, and in fairness, the rustls API looks totally fine.)

pjmlp
7 replies
1d6h

Direct memory access through `unsafe`. Python kinda does through `ctypes`?

Already available in ESPOL and NEWP (1961), Modula-2 (1978), Ada (1983), Oberon (1987), Modula-3 (1988), Oberon-2 (1991), C# (2001), D (2001) and plenty others I won't bother to list.

Tightly integrated assembly language; just pop Go-flavored assembly into your package and you can link directly to it.

Almost every compiler toolchain has similar capabilities.

- Statically-compiled code with AOT; no bytecode, no interpreters, no JITs.

Like most compiled languages since FORTRAN.

jchw
4 replies
1d6h

Huh? I think you misinterpreted what I said as suggesting that those features individually were unique or unusual. I was only using them to demonstrate that Go is on a similar level of abstraction to the underlying machine as C, C++ and Rust.

Like most compiled languages since FORTRAN.

Yes. But you didn't list "compiled languages since FORTRAN", you listed:

Python (1991), Java (1995), .NET (2001), Smalltalk (1972), Common Lisp (1984), Ruby (1995), Perl (1993).

pjmlp
3 replies
1d5h

First of all, I never claimed to have written an exhaustive list of compiled languages since the dawn of computing; rather, it was a reply to

"Go's standard library is a shining example of what all standard libraries should strive for. Yet, we still have some languages who's developers refuse to include even a basic http API in their standard libraries in an age where even embedded systems have started to speak http. Imagine if if the same had happened with TCP and UDP...

Here's to the continued success of Go and other sanely-designed languages."

You then moved the goal posts by talking about stuff that wasn't in that comment.

As such, I am also allowed to move my goalposts, mentioning that

"Already available in ESPOL and NEWP (1961), Modula-2 (1978), Ada (1983), Oberon (1987), Modula-3 (1988), Oberon-2 (1991), C# (2001), D (2001) and plenty others I won't bother to list."

are all languages that compile to native code.

"Ah but what about C#?!?", it has had NGEN since day one, Mono/Xamarin toolchain has supported AOT since ages, Windows 8 Store Apps used MDIL toolchain from Singularity, replaced by .NET Native for Windows 10 store apps, Unity compiles to native via their IL2CPP toolchain, and nowadays we have Native AOT as well.

And I will add that Java has had native AOT compilers since around 2000, even if only available as commercial products, with Excelsior JET, Aicas, Aonix, WebSphere Real Time, PTC, and the unsafe package as well, even if it never enjoyed officially supported status (nowadays replaced with Panama for exactly the same kind of unsafe, low-level system access).

jerf
1 replies
1d3h

I do not understand how you and some other people seem to read a claim of being first into every feature a language says it has. Who is claiming primacy or uniqueness for almost any feature in any language, ever? When has a Go designer ever claimed to be the first to implement a feature?

Programming languages have been largely just shuffling around features other languages have for the last 50 years now, and I can only go back that far because when you get back to the very first languages, they're unique and first by default. Even when a language is first to do something, it's generally only the first for a feature or two, because how would anyone even make a programming language that was almost entirely made out of new things anymore? Even if someone produced it, who would or could use it?

You seem to spend a lot of time upset about claims nobody is actually making.

pjmlp
0 replies
1d

It is the way people insist on writing such arguments.

jchw
0 replies
1d5h

I don't know what your goal in this discussion is. I don't think anyone is claiming Go invented having a nice standard library, nor is anyone claiming that Go invented compilers or anything weird like that. I think you misunderstood the entire discussion point.

On a similar note, iPhone did not invent cameras, MP3 players, cellular broadband modems, touchscreens, slide to unlock, or applications.

Thaxll
1 replies
23h34m

Statically-compiled code with AOT; no bytecode, no interpreters, no JITs

Except in reality it does not work; you can't easily create a single binary out of most C/C++ projects.

You're always going to fight with make / GCC / LLVM and other awful tools, with errors that no one understands. It doesn't matter whether the underlying tool / language is supposed to support it; what matters is whether a developer can make it work effortlessly or not.

In Go you download any repo, type `go build .`, and it just works. I can download a multi-million-line repo like Kubernetes and it's going to work.

pjmlp
0 replies
22h24m

Depends on the C compiler one decides to use.

If you believe that regarding Kubernetes you're in for a surprise regarding reproducible container builds.

jmaker
3 replies
1d3h

I think if the Rust team had the capacity they might have considered adding—and maintaining—more stdlib functionality. I never asked for details, but I guess that core Rust enjoys only a fraction of the funding Go and .NET have enjoyed. It’s not only a merits-based decision, I think.

Regarding C++, it’s based on a standard, so the situation is a bit different. You have a variety of implementations. Imposing anything beyond the typical use cases the compiler implementations have catered to would put an insurmountable burden on implementation coherence. Therefore, I believe it’s more reasonable to keep application-level logic in the realm of community-maintained libraries. In addition, C++ is huge syntactically and the stdlib is immense, but it’s more focused on algorithmic tasks than on quickly building mini-servers.

Besides, the Go community has a myriad of reinvented wheels, ranging from logging and caching to maps, and until recently HTTP server libraries. The logging story, for example, only recently converged on the patterns desired by the community, mature enough to bring structured logging into Go’s stdlib. Similarly for error handling. Robust and settled approaches that have turned into a de facto standard make total sense to include.

.NET is different again, having a center of gravity with Microsoft and the .NET Foundation, with typically one preferred library for a given task, contrary to Java and Go. Centralized and decentralized, the classical dichotomy.

norman784
2 replies
1d

I think the Rust team would still have a very small stdlib even if they had more funding. Here's a talk [0] where they say they aim to be relevant for the next 40 years; over that time span there will be a lot of changes, and to stay backward compatible you cannot have features in your stdlib that will be irrelevant or will change in the next 5, 10, 15, or 20 years.

[0] https://www.youtube.com/watch?v=A3AdN7U24iU

jmaker
0 replies
23h19m

That’s a great point indeed. In Java and C++, there have been quite a few deprecations over the years and decades. In the Rust community, there are unfortunately quite a lot of interesting but abandoned crates.

badrequest
0 replies
23h14m

What changes are set to be made to the file format of PNGs that would prevent it from being relevant in the standard library 40 years from now?

tptacek
4 replies
1d3h

I have deep respect for Python and like working in it, but Python's standard library has only one of the attributes listed in the comment you're responding to. The Go standard library differs in a variety of ways from Python's, and, to the original commenter's point, it is far more common to replace portions of the Python standard library with third-party packages than it is with Go's.

pjmlp
3 replies
1d

I love the "yes, but" kind of arguments when people get shown they aren't exactly right.

badrequest
1 replies
23h16m

I take it you're using urllib2 to make HTTP requests in Python?

pjmlp
0 replies
22h26m

When the standard library does the job, there is no need to look elsewhere, unless forced to by third-party library dependencies.

tptacek
0 replies
22h59m

They're really not comparable standard libraries. For the most part, the "batteries included" features in the Go standard library are the idiomatically accepted versions of those features in the community, which is not something you can say about Python's standard library. It's a major difference. Using something other than net/http would be a code smell in Go code, but using urllib would maybe almost be a code smell in Python.

nindalf
12 replies
1d10h

I think it’s a bit extreme to characterise it as sanity vs insanity. I think you need at least one of a large stdlib or good dependency management.

I love Go, used it since the 1.0 release and have used it at work for years. No complaints about the language. But for the longest time Go didn’t have quality dependency management and pulling in dependencies was annoying. Building your programs with the large, high quality stdlib was the path of least resistance.

Since Go 1.0 (2012) most new languages have coalesced on good dependency management. Rust, for example, copied Ruby’s bundler idioms since before 1.0 (2015). People who needed good quality libraries were able to pull them in with minimal hassle. That’s why they didn’t need a large standard library to succeed. I’ve written more about the tradeoff here - https://blog.nindalf.com/posts/rust-stdlib/

To you, I’d suggest using less charged language than "sanity". In this case there’s a good technical case for both approaches, and it’s not productive or nice to imply that people who choose a different path from you are insane.

norman784
3 replies
1d

I think there's a distinction between Rust and Go in that sense. Go has a clear objective: be a good (if not the best) language for writing web services (that's why Google developed it in the first place), which is why its stdlib is so rich for writing that kind of application, while Rust was developed to be the systems language for the next 50 years. Hence the difference in stdlibs.

The only downside of Rust is that you have to pull in more than 100 dependencies to build a simple hello world server (Tokio + Axum or Actix). Then you need a database driver, most likely something that depends on SQLx (not sure how many dependencies that adds); one of my projects got to nearly 500 transitive dependencies with only ~20 direct dependencies.

the_gipsy
0 replies
10h4m

Go's objective was "C but with GC and an async runtime". It got adopted for web services by chance, as a better alternative to nodejs or python in some aspects, but zero values end up being a huge footgun for serialization, which is the bread and butter of web stuff.

badrequest
0 replies
23h17m

Rust dependency management is the best parts of Go with the worst parts of NPM.

WuxiFingerHold
0 replies
12h54m

You hit the nail on the head with Go's initial purpose. That not only explains the high quality of the std lib in terms of HTTP, but also the lower quality in other areas.

By the way: You still need a DB driver with Go. They just provide an interface, which is cumbersome and error prone to use. That's why the community package sqlx exists.

To create a hello world server in Rust, one just needs to add two crates, which in turn pull in many other crates. So yes, too many total deps for my taste, but an Axum server is much more featureful than a server built with Go's std lib.

bheadmaster
3 replies
1d10h

To you, I’d suggest using less charged language like sanity.

To the grandparent, I'd suggest keeping the language as it is. Status quo in the current programming ecosystem is often insane, and when it is, we should call it out.

it’s not productive or nice to imply that people who choose a different path from you are insane.

In a general sense, I agree. However, when it comes to programming, insanity is so widely accepted as normality that I think it would be counterproductive to blunt the language used to describe it.

As an example of insanity for those who may (justifiably) think I'm talking out of my bottom - in the NodeJS ecosystem the standard library is so bad that it is considered acceptable to use dependencies for even the simplest tasks, which led to the `leftpad` incident that broke countless programs.

lifthrasiir
2 replies
1d7h

Leftpad is not appropriate here because Node.js didn't add any additional String methods anyway. Therefore it could have been prevented if JS proper had enough String methods beforehand, regardless of attitude towards dependencies. Grandparent's points in comparison are more about the scope of standard libraries. (And in terms of convenient String routines, Go is still a bit inferior to Python!)

bheadmaster
1 replies
1d2h

Therefore it could have been prevented if JS proper had enough String methods beforehand, regardless of attitude towards dependencies. Grandparent's points in comparison are more about the scope of standard libraries.

Leftpad could've just as well been a part of some standard library, like...

in terms of convenient String routines, Go is still a bit inferior to Python

... Go strings library :) https://pkg.go.dev/strings

lifthrasiir
0 replies
18h30m

Maybe I should've been more clear about "scope".

Standard libraries can pick their scope of functionalities and depth (or completeness) for each functionality. Nowadays every programming language is expected to come with a good string support, which is about the scope. But there are a lot of string operations you can support. PHP for example has `soundex` and `metaphone` functions for computing a phonetic comparison key. Should other languages support them as well? I don't think so, and that's about the depth or completeness because you can never support 100% of use cases with standard libraries alone. Ideally you want to cover (say) ~90% of use cases with a minimal number of routines.

Leftpad was clearly due to the lack of depth in JavaScript and Node.js standard libraries. JavaScript now has `String.prototype.padStart`, and an apparent name difference suggests a good reason that some standard library may want to avoid it: internationalization is complex. A common use case is to make it aligned by padding space characters, but that obviously breaks for many non-Latin scripts [1]. And yet many people tried to use it, so `left-pad` was born with a suboptimal interface, and we know the rest.

HTTP support is different. I totally agree that HTTP is something you want to support in a sorta native fashion, but a standard library is not the only way to do that. In fact it is not a good place to do that, because it is generally slower to change (Go is a notable exception, but only because its core team is very well funded and strongly minded). Python did support HTTP in its standard library for a long time, but it doesn't support HTTP/2 or HTTP/3 at all, and Requests or urllib3 are now the de facto standard in Python HTTP handling. Modern languages try to balance this issue by having a non-standard but directly curated set of libraries. Rust's `regex` is a good example, which may be surprising given that even C++ has native regex support. But by not being a part of the standard library, `regex` was able to move much faster and leverage a vast array of other Rust libraries, and it is now one of the best regex libraries across all languages. That's what nindalf wanted to say by different "ways".

[1] For example, `"한글".padStart(5)` will give you `" 한글"` but its visual width is larger than 5 "normal" characters. This is not merely a matter of fonts and Unicode has a dedicated database for the character width ("East Asian Width"). Some characters are still ambiguous even in this situation (e.g. ↑), and the correct answer with respect to internationalization would be: don't, use a markup language or (for terminals) a cursor movement instead.

begueradj
2 replies
1d2h

It's funny that your comment here applauds Go but your blog post praises Rust.

nindalf
1 replies
1d2h

It is possible for more than one language to be good. Both languages succeeded with their approach.

littlestymaar
0 replies
18h49m

“But how can you support two opposing sport teams, that doesn't make sense”

Too many people on HN when discussing programming languages, unfortunately…

trimethylpurine
0 replies
9h54m

sanity - Soundness of judgment or reason.

This doesn't sound extreme. The statement "designed with sound judgement" doesn't carry the implications you're defending against.

jsjohnst
3 replies
1d2h

Yet, we still have some languages whose developers refuse to include even a basic HTTP API in their standard libraries, in an age where even embedded systems have started to speak HTTP.

There’s also the philosophy that the language core should be as minimal as possible. I’m not 100% sold on it, but there’s definitely a valid argument to be made in favor of it.

norman784
1 replies
1d

As I wrote in another comment, it depends on the goal of the language. Go was built for writing web services; that's why it has most, if not everything, you need to write this type of app.

jsjohnst
0 replies
1h18m

Go was built for writing web services

Yes, I very much agree. The comment I was replying to however was referring to “other languages”.

WuxiFingerHold
0 replies
13h31m

My take as well. It's not black or white. At times I wish Rust's std lib were larger (by the way, it's not small either), but other times I'm blown away by the quality of some community crates.

WuxiFingerHold
0 replies
13h36m

I really like that Go has a great std lib, but it comes at the cost of quality, or at least featurefulness. Some examples:

- The csv package. Compare Go's csv package with Rust csv crate. The package in Go's std lib is close to useless compared to the Rust community crate.

- Logging. No tracing, limited providers, just recently added structured logging.

- The xml package is very slow and cumbersome to use compared to modern element-tree-like implementations. For larger docs you probably need to resort to a community package.

- If I'm not mistaken, Java has a built-in HTTP server, whose usage is probably very low. Instead people are using Jetty / Netty / Tomcat (? my Java days are long behind me)

To repeat myself: I personally like the std lib approach of Go. I disagree with any narrow view on this topic.

bsaul
34 replies
1d18h

I've sometimes heard that the JVM has best-in-class tooling for server troubleshooting. How does Go compare to it now?

Xeoncross
16 replies
1d18h

For good reason; the JVM has a ton more knobs that need adjusting. You can't just run Java code. The JVM has a lot of tricks you have to customize based on your workload.

For years, until 1.19, the Go GC had only one tuning parameter (GOGC).

rileymichael
9 replies
1d17h

I don't see the connection you're making between knobs that adjust runtime behavior and tooling. As an aside, "you can't just run Java code" is a bit hyperbolic; plenty of people "just run" Java apps and rely on the default ergonomics. The modern JVM also offers more automated options, such as ZGC, which is explicitly self-tuning.

iamcalledrob
8 replies
1d8h

With Go, you never spend hours/days debugging broken builds due to poorly documented gradle plugins. As an example :)

You really, truly, just run.

pjmlp
7 replies
1d6h

That is the problem right there, using Gradle without having learned why no one uses Ant any longer.

As for Go, good luck doing "just run" when a repo move breaks all those URLs hardcoded in source code.

philosopher1234
4 replies
1d3h

That’s not a real issue:

* the go module proxy ensures repos are never deleted, so everything continues to work

* changing to a new dep is as easy as either a replace directive or just find and replace the old url
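
For the replace-directive case, a minimal go.mod sketch (the module paths below are made up for illustration):

    // go.mod -- keep the old import path building by pointing it
    // at the new location (paths are hypothetical)
    module example.com/myapp

    go 1.21

    require github.com/olduser/somelib v1.2.3

    replace github.com/olduser/somelib => github.com/newowner/somelib v1.2.3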

pjmlp
3 replies
1d

It requires work to keep things working, exactly the same thing.

philosopher1234
2 replies
9h32m

The module proxy is used by default and requires no work. I don’t think what you’re saying makes much sense.

pjmlp
1 replies
5h57m

So there is some magic pixie dust that will fix URL relocations for the metadata used by the proxy, without anyone touching its configuration?

philosopher1234
0 replies
1h5m

I think I’m missing something, because I’m pretty sure you understand the go module proxy (having seen you around here before) but I really don’t understand what problem you’re talking about.

If a module author deletes or relocates their module, the old module is not deleted or renamed from the module proxy. It is kept around forever. Code that depends on it to not break does not break.

If they relocated and you want to update to the new location, perhaps for bug fixes, then you do have to do a bit of extra work (a find and replace, or a module-level replace directive), but it's a rare event and generally a minor effort, in my opinion, so I don't think this is a significant flaw.

For most users most of the time they don’t need to think about this at all.

badrequest
1 replies
23h6m

good luck doing "just run" when a repo move breaks all those URLs hardcoded in source code

You're on a tear in this thread being wrong about how Go works, but I'm really curious what extremely specific series of events you're imagining would have to happen to lead to this outcome. If I use a dependency, it gets saved in the module proxy, and I can also vendor it. You would have to, as a maintainer, deliberately try to screw over anybody using your library to accomplish what you describe.

pjmlp
0 replies
22h30m

Not when one git clones an existing project, only to discover the hardcoded imports are no longer valid.

Being a maintainer has nothing to do with some kind of gentlemen's code of conduct.

tsimionescu
4 replies
1d12h

Note that Go needs those knobs just as much as the JVM does, at least some of them. They just didn't want to expose them.

philosopher1234
3 replies
1d3h

Which knobs does Go need?

tsimionescu
2 replies
1d2h

Fine-grained control over various GC phases and decisions (such as the level of parallelism, max pause times). Until Go 1.19, it was even missing a max memory limit, and even now it only has a soft memory limit.

Additionally, of course it would be nice to have more GC options, such as choosing between a throughput-oriented GC design and a latency-oriented one, having the option of a compacting GC of some kind to avoid fragmentation, or even having a real time GC option.

Go has chosen a very old-fashioned GC design with very few tuning parameters possible, but even so it only exposes a very basic form of control.
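
For reference, roughly the entire tuning surface exposed today can be sketched with the runtime/debug equivalents of the GOGC and GOMEMLIMIT environment variables:

    package main

    import "runtime/debug"

    func main() {
        // Essentially the whole tuning surface: GOGC (here via SetGCPercent)
        // and, since Go 1.19, the soft memory limit (here via SetMemoryLimit).
        // Both can also be set with the GOGC and GOMEMLIMIT environment variables.
        debug.SetGCPercent(100)       // the GOGC default
        debug.SetMemoryLimit(4 << 30) // soft limit of 4 GiB, in bytes

        // ... rest of the program ...
    }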

kbolino
0 replies
2h52m

I agree with most of your points but how is it "old-fashioned"? To me, that means things like reference counting or long stop-the-world pauses, neither of which are true of Go.

cwbriscoe
0 replies
18h56m

I would rather just write code and trust the existing GC than mess around with knobs all day. I suppose there are < 1% of projects that may see some benefit in messing with the GC.

Ironlink
0 replies
1d12h

You can't just run Java code. The JVM has a lot of tricks you have to customize based on your workload.

This sounds like something you would hear 10 years ago in relation to the CMS garbage collector. Since Java 9, G1 has been the default GC for multi-core workloads and is self-tuning. The CMS GC was removed 4 years ago. If you do need to tune G1, the primary knob is the target pause time. While other knobs exist, you should think carefully before using them.

We run all of our workloads with vanilla settings.

Thaxll
12 replies
1d18h

Go is one of the best languages in terms of tooling: it has just one binary that contains everything. I don't know any other language that includes all of this out of the box:

- testing

- benchmark

- profiling

- cross compilation (that works from any platform, like compiling a Windows .exe from your Raspberry Pi, for example)

- some linting

- documentation

- package mgmt

- bug report

- code generation

- etc...

Java is probably more advanced in some fields (like tracing / profiling) but it lacks others.

preommr
9 replies
1d17h

Golang also has some of the worst tooling, because everything is based on what comes built in, and because these tools aren't specialized projects, they're very limited in capability and configuration.

Coming from ts, tooling like tsconfig has a lot of options, but sensible defaults can be set with a single flag like strict mode. If some org has some specialized needs, they can dive into the configuration and get what they need.

With Golang, not only would it be a lot for any single team to offer all those features at a decent level of polish, but the Go culture in particular is very, very resistant to small bits of comfort because of dogma like "worse is better". It's kind of similar to Haskell's "avoid success at all costs".

dilyevsky
3 replies
1d17h

I can’t believe we’re seriously comparing Go's polished tooling with the bloatware and half-baked crap in JS/TS land, especially when it comes to package management and such.

Also, there’s a lot of Go tooling that doesn’t come from the Go team itself, because the Go standard library exposes first-class code introspection utilities; go vet is an example of this.

icholy
2 replies
1d16h

Why spend your time coding when you could be fiddling with configuration files all day? I love re-learning how to make package.json, tsconfig, esbuild, eslint, prettier, mocha and webpack play nice every time I start a new project.

klabb3
1 replies
1d15h

Missing peer dependency. Have you tried nuking your node_modules?

mickael-kerjean
0 replies
1d6h

No I'm busy upgrading webpack and rewriting all my tests from jest to vitest

random_mutex
0 replies
1d12h

TS tooling excels at preventing you from getting work done

cdelsolar
0 replies
1d15h

No?

badrequest
0 replies
1d15h

I've been involved in the Go community for almost 10 years, and I've never once heard anybody say "worse is better". Comparing Go's tooling to Typescript feels like a farce, especially since you've neglected to mention the horror that is npm.

Thaxll
0 replies
1d17h

Which tools are you talking about?

BillyTheKing
0 replies
1d12h

This is a classic example of a 'contrarian' take for what feels like the sake of it. TS/JS tooling is a total and utter disaster at this point.

The CommonJS/module transition is a nightmare. The fact that something like 'prisma' exists - a c-written 'query engine' that turns Prisma JS into SQL... wut?

This ecosystem is on a highway to hell, literally. I really hope Bun works out, because I do like TypeScript, I do like programming in it - but I'm absolutely done with spending hours upon hours figuring out configs in tsconfig, jest, package.json, eslintrc, prettier, vstest and whatever the next 'new' abstraction is. In Go I can just focus on the code and forget about the rest.

pjmlp
0 replies
1d12h

There is really nothing in Go that the Java ecosystem lacks, in its 30 years of existence.

The only thing one could arguably say Go does better is value types, but even that requires careful coding so that escape analysis is triggered, and in that sense it is only a matter of using a JVM implementation like GraalVM, OpenJ9, Android ART, PTC, Aicas, or Azul.

Alifatisk
0 replies
1d3h

single binary

cross compilation (that works from any platform, like compiling a Windows .exe from your Raspberry Pi, for example)

This, I think, is one of Go's best selling points.

tptacek
1 replies
1d3h

I really like Go's tooling, and while I'm fluent in Java I've never been on a full-time Java team. I've remarked to Java friends in the past that I think Go has best-in-class tooling and had my ass handed to me, in detail and at length. By many accounts, Java is the gold standard for tooling in language ecosystems. It's a compliment to Go that you'd even consider the comparison. :)

lycopodiopsida
0 replies
1d3h

Java's tooling is indeed very good, but it comes with the huge downside of being bound to an IDE.

tsimionescu
0 replies
1d12h

I think Go is catching up, but it's still significantly behind. For example, Go memory profiles are much, much worse than Java's - they aren't even integrated with the GC to show the ownership graph (they can only show where each object was allocated, instead of which other objects are holding a reference to it). The CPU profiling parts seemed more up to par. This tracing thing is nice; I'm not as familiar with this area of Java. Also, I don't think Java has a built-in race detector (except perhaps for the detection of concurrent writes and iterations in collections?).

Also, the OpenJDK JVM supports live debugging and code hotswap, going so far as to de-optimize code you're debugging on the fly to make it readable. Go doesn't support live code reload even in local builds.

atombender
0 replies
22h27m

One of the weakest areas is analyzing heap dumps.

The current format has very limited tooling; "go tool" has some extremely rudimentary visualization. There is gotraceui [1], which is much better, though you need to use Go trace regions to get much useful context.
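
For anyone unfamiliar with regions, they're just annotations around phases of work; a minimal sketch of collecting a trace with one region (the region name here is made up):

    package main

    import (
        "context"
        "log"
        "os"
        "runtime/trace"
    )

    func main() {
        // Write an execution trace to a file; inspect it later with
        // `go tool trace trace.out`.
        f, err := os.Create("trace.out")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        if err := trace.Start(f); err != nil {
            log.Fatal(err)
        }
        defer trace.Stop()

        // Annotate a phase of work as a region so viewers can attach
        // context to the goroutine timeline.
        trace.WithRegion(context.Background(), "load-data", func() {
            // ... the work you want delimited in the trace ...
        })
    }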

There's a proposal to support Perfetto [2], but I don't know if anything has come of it.

[1] https://gotraceui.dev/

[2] https://perfetto.dev/

lagichikool
13 replies
1d19h

Really cool work. I'm still slightly paranoid about the overhead of leaving tracing enabled all the time but if it really is 1-2% that'd be totally worth it in many cases.

More awesome work from the Go team. Thanks!

heyoni
9 replies
1d16h

Go has a race detector that you can leave on for a while in, say, a dev environment, and it'll flag race conditions. There was documented overhead and at some point even a memory leak, but they spent months looking into it, eventually plugged the leak, and the saga is now part of their official docs. It's really interesting to see that kind of stuff laid bare, so I would trust that this feature will at least run reasonably well, and if it doesn't, they'll likely fix it: https://go.dev/doc/articles/race_detector https://go.dev/issue/26813

klabb3
5 replies
1d15h

The race detector is an invaluable complement to a language like Go that lacks compile-time safety. It meaningfully increases confidence in code, and it is anecdotally both good at finding races and good at giving straightforward error information to plug them. It won't find everything, though, since it's runtime analysis.

In general, Go tooling makes up for a lot of the intrinsic issues with the language, and even shines as best in class in some cases (like coverage, and perhaps tracing too). All out of the box, and improving with every release.

pjmlp
4 replies
1d9h

Only if we leave Smalltalk, Erlang, Java and .NET ecosystems out of the picture.

badrequest
3 replies
23h11m

Ah yes, the notoriously popular and actively contributed Smalltalk ecosystem.

pjmlp
2 replies
22h28m

Doesn't change the fact that it was there first, with a rich standard library, and was one of the ecosystems that drove the design of rich developer tooling, decades before Go was even an idea.

arp242
1 replies
18h20m

No one claims Go was "first"; I don't get why you're so obsessed by that. The previous poster just said "The race detector is an invaluable complement". And it is. There was no comment at all about any other language; it just says "this is a nice thing about Go".

If you don't like Go then fine, no problem. Everyone dislikes some things. No hard feelings. But why comment here? What's the value of stinking up threads like this with all these bitterly salty comments that barely engage with what's being said? Your comments alone make up 13% of this entire thread, and more if we count all the pointless "discussions" you started. It's profoundly unpleasant behaviour.

pjmlp
0 replies
5h53m

The Go community is quite deep into the idea that it was first with fast compilers, coroutines, slices, static compilation, and rich standard libraries.

The commenter said more than that.

"In general, Go tooling makes up for a lot of the intrinsic issues with the language, and even shines as best in class in some cases (like coverage, and perhaps tracing too). All out of the box, and improving with every release."

Best in class?!?

swatcoder
2 replies
1d16h

The cost of race detection varies by program, but for a typical program, memory usage may increase by 5-10x and execution time by 2-20x.

That's "let's run the race detector and get some coffee" overhead, not "let's leave it on in dev" overhead.

Still cool that they have it available!

cdelsolar
0 replies
1d15h

I am always running everything in race mode when developing locally and have caught stuff before.

TheDong
0 replies
1d14h

The main value of the race detector is enabling it for tests (go test -race), and then writing sufficiently exhaustive unit tests to cover all code paths.
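
A minimal, made-up example of the kind of test the detector is good at catching:

    package main

    import (
        "sync"
        "testing"
    )

    // Two goroutines increment the same variable without synchronization.
    // `go test -race` flags this; a plain `go test` will usually pass.
    func TestCounterRace(t *testing.T) {
        var counter int
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                counter++
            }()
        }
        wg.Wait()
        t.Log(counter)
    }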

I do think most gophers, instead of tests, use a combination of prayer and automatically restarting crashed processes when they inevitably panic from a race, which seems to work better than you'd expect!

felixge
2 replies
1d11h

but if it really is 1-2% that'd be totally worth it in many cases

As one of the people who worked on the optimizations mentioned in the article, I'm probably biased, but I think you can expect those claims to hold outside of artificially pathological workloads :).

We're using execution tracing in our very large fleet of Go services at Datadog, and so far I've not seen any of our real workloads exceed 1% overhead from execution tracing.

In fact, we're confident enough that we built our goroutine timeline feature on top of execution traces, and it's enabled for all of our profiling customers by default nowadays as well [1].

[1] https://blog.felixge.de/debug-go-request-latency-with-datado...

baq
1 replies
1d8h

I love datadog but you guys really give splunk a run for their money in the invoicing department :/

euph0ria
0 replies
1d3h

lol

the_gipsy
12 replies
1d10h

What about error stack traces? I find it crazy that you're supposed to grep error strings and pray they're all different.

pipe01
7 replies
1d9h

You can do it yourself if you want using `runtime.Stack` and a custom error type
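
A minimal sketch of that approach (the type and helper names here are made up):

    package main

    import (
        "errors"
        "fmt"
        "runtime"
    )

    // stackError wraps another error together with the stack captured at
    // the point the wrapper was created.
    type stackError struct {
        err   error
        stack []byte
    }

    func (e *stackError) Error() string { return e.err.Error() }
    func (e *stackError) Unwrap() error { return e.err }

    // withStack attaches the current goroutine's stack trace to err.
    func withStack(err error) error {
        if err == nil {
            return nil
        }
        buf := make([]byte, 4096)
        n := runtime.Stack(buf, false)
        return &stackError{err: err, stack: buf[:n]}
    }

    func main() {
        err := withStack(errors.New("something failed"))
        var se *stackError
        if errors.As(err, &se) {
            fmt.Printf("%v\n%s", se, se.stack)
        }
    }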

minzi
6 replies
1d6h

Embarrassingly, I’ve been writing Go for a while but never really thought about it. Now that it’s been mentioned I’m curious why this isn’t baked in by default for errors. Does anyone know?

icholy
2 replies
1d4h

Creating stack traces is expensive.

tbarbugli
1 replies
1d

you only do it when there is a non-nil err, and being able to have a stack trace is worth more than whatever it costs in CPU

kbolino
0 replies
3h42m

Good error-wrapping discipline works better than stack traces, because not only do you have a trace of the calling context, you also have the values that caused the problem. Granted, a stack trace "always works", but it will never have the values of important variables.

sapiogram
0 replies
1d5h

I’ve been writing Go for a while but never really thought about it.

Don't feel bad, I've tried to do this in some places, but I'm not sure it's worth it. It adds a ton of boilerplate to Go's already verbose error handling, since you need to wrap every error that gets returned from libraries.

kbolino
0 replies
3h55m

Well, how would that work?

Errors are just values. They don't have any special meaning nor is there any special ceremony to create them. A panic must start from calling panic(); there's no function or special statement to create or return an error.

It might be possible to augment every `error`-typed return such that the concrete type is replaced by some synthetic type with a stack trace included. However, this would only work because `error` is an interface, so any function returning a concrete-typed error wouldn't be eligible for this; it would also add a lot of runtime overhead (including, at the very least, a check for nil on every return); and it would cause issues with code that still does == comparisons on errors.

On the whole, I think error-wrapping solves the problem of tracing an error well enough as the language exists today. If errors are going to start having some magic added to them, I think the entirety of error-handling deserves a rethink (which may be due, to be fair).

closeparen
0 replies
1d3h

You’re supposed to prepend the context to an error before you return it, so the final error reads like a top-level stack trace: “handling request: fooing the bar: barring the baz: connecting to DB: timeout.”
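
A minimal sketch of that wrapping discipline with fmt.Errorf and %w (the function names are made up to match the example message):

    package main

    import (
        "errors"
        "fmt"
    )

    var errTimeout = errors.New("timeout")

    func connectDB() error { return errTimeout }

    func barTheBaz() error {
        if err := connectDB(); err != nil {
            return fmt.Errorf("connecting to DB: %w", err)
        }
        return nil
    }

    func fooTheBar() error {
        if err := barTheBaz(); err != nil {
            return fmt.Errorf("barring the baz: %w", err)
        }
        return nil
    }

    func main() {
        if err := fooTheBar(); err != nil {
            // Prints: handling request: fooing the bar: barring the baz: connecting to DB: timeout
            fmt.Println(fmt.Errorf("handling request: fooing the bar: %w", err))
            // errors.Is still sees the original cause through the %w chain.
            fmt.Println(errors.Is(err, errTimeout)) // true
        }
    }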

Xeoncross
2 replies
1d4h

Errors are different in every stdlib or 3rd party package I've ever used.

The ecosystem as a whole decided that errors should be useful, values, and properly bubbled up - and that your application should not be so hard to reason about that you need a stack trace to know where in the deep recesses of your code something went wrong.

I love stack traces in Java, but I don't miss them at all in Go.

the_gipsy
0 replies
1d2h

Sounds a bit apologetic. I don't think that the ecosystem decided. The language creators decided, and the ecosystem couldn't do much about it.

eweise
0 replies
23h14m

"your application should not be so hard to reason about that you need a stack trace to know where in the deep recesses of your code something went wrong."

Curious how Go is better than other languages in this regard.

tbarbugli
0 replies
1d

we wrap all errors with a stack.Wrap function that adds a stack trace to the error. This allows us to add stack traces to logs and error reporting tools like Sentry. Huge time saver.

imiric
4 replies
1d17h

This is really amazing! I look forward to giving it a try. Kudos to the Go team and contributors.

Say what you will about the language itself, the Go stdlib is one of the most useful and pleasant standard libraries I've ever worked with. It gets better with every release, and great care is taken to make it featureful, performant and well documented. These days for most general things I rarely have the need to reach for an external package.

throwaway894345
2 replies
23h49m

I remember hearing from people in the early days of Go that it was only nice because it hadn't aged. It didn't have the "barnacles" that Python and Java had acquired, but that it surely would after a decade. At the time, Python and Java were about 15 years old (since their 1.0 releases). Well, it's been 12 years and Go is still clean and minimalist. Maybe all of the cruft comes in years 12-15? :shrug:

kbolino
0 replies
3h9m

Go has many barnacles. Off the top of my head:

- context.Context is two different barnacles. The first is the thing itself; do I "need" context here? Probably should add it just to be safe. The second is the myriad of older libraries that don't use it or have tacked it on with a duplicate set of functions/methods. Library authors seem to not understand the point of major versions or else are afraid to pull that trigger.

- http.Client is a barnacle. First, there's the fact that you have to close the response.Body even if you don't use it. Then there's the difficulty of adding middleware or high-level configuration to it. Then there's the fact that they broke their own rules for context.Context and instead of taking it as a parameter in the methods, it's stored as part of a struct. You can work around many of these problems for your own code, but not when using third-party libraries.

- The entire log package is a barnacle. Thankfully we now have log/slog but log existed for so long that lots of third-party libraries use it. Even if the library authors knew to avoid the footguns that are Fatal/Fatalf/Fatalln and Panic/Panicf/Panicln, context and level support are spotty at best.

- The (crypto/rand).Read and (math/rand).Read debacle. Really, the entire math/rand package is a barnacle. Thankfully this has also been addressed with math/rand/v2 but the old one will live on for compatibility.

arp242
0 replies
18h27m

Go did acquire some of that though; a few packages are deprecated, some have different ways of doing the same thing (e.g. the addition of context and fs packages added new paradigms, and later netip and new JSON package added "urllib2"-like "v2" packages), and some things weren't really a good idea to start with.

I do think the situation is better than in Python and overall not bad after 12 years, but it's not as clean and minimalist as it could be.

traceroute66
0 replies
1d3h

I rarely have the need to reach for an external package

Agree 100%! And it gets better as time evolves.

For example, the recent `log/slog` introduction to the stdlib kills the need for 99.9999% of people to use third-party logging, because structured logging is now in the stdlib.

Same with the new http mux. Many people will now be able to migrate over to the stdlib because of the richer functionality, and only the small community of outliers who are doing stuff like regex route matching will need third-party libs, but with time no doubt regex matching will make its way into the stdlib too.
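
A minimal sketch of both, assuming Go 1.22+ (for the new ServeMux patterns) and Go 1.21+ (for log/slog):

    package main

    import (
        "log/slog"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        // Go 1.22+ ServeMux patterns support method matching and path wildcards.
        mux.HandleFunc("GET /items/{id}", func(w http.ResponseWriter, r *http.Request) {
            id := r.PathValue("id")
            // Structured logging from the stdlib log/slog package (Go 1.21+).
            slog.Info("fetching item", "id", id)
            w.Write([]byte(id))
        })
        http.ListenAndServe(":8080", mux)
    }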

Veserv
3 replies
1d18h

The usage of the term "execution trace" really threw me off. I am more familiar with that term meaning an instruction execution trace, or maybe just a function trace, so it made no sense why stack unwinding would be a bottleneck. Turns out it is actually a trace of goroutine events (with a stack trace at the time of each event) [1]. I guess Go does not have a code execution trace package?

[1] https://pkg.go.dev/runtime/trace

Veserv
0 replies
1d18h

I was talking about execution tracing specifically, not tracing as a general concept. Tracing as a general concept just means a non-sampled time-series event stream, and as commonly used these days it generally also includes "stacking" to distinguish it from generic event streams or logging.

When you add an additional term like "execution" you are specializing the term to mean an event stream of "execution". In areas I am familiar with, that would normally mean a trace of the complete execution of the program down to the instruction level, so you can trace the precise "execution" of the program through the code.

What is described here would, in the terminology I am familiar with, be more like... a thread status and system event trace, just applied to goroutines and the Go runtime, respectively, I guess? It does also include the stack trace at the time of the event, so it does have more data than that, but that is qualitatively different from an instruction execution trace that allows you to trace the exact sequence of execution of your program.

felixge
0 replies
1d10h

Go doesn't have an instruction (execution) or function call tracer. Go's tracer is primarily tracing scheduler events. So maybe the term scheduler tracer should have been used?

Anyway, using the term "execution tracer" in Go goes back to the initial design doc from 2014: https://docs.google.com/document/u/1/d/1FP5apqzBgr7ahCCgFO-y...

kjqgqkejbfefn
0 replies
18h6m

I've started development of something similar for Clojure's core.async library, which implements Go-like channels. This is something I cobbled together to trace the origin of a subtle sporadic bug that would happen once every 10,000 runs in a program that makes heavy use of channels. I'm not really working actively on it, so it only covers a subset of the functions in core.async but I keep on expanding it, adding debugging support for functions every now and then when I need it.

It represents the flow of data between channels/threads as a sequence diagram using PlantUML. Today I implemented core.async/onto-chan! (it spins a thread that will take items from a sequence and put them onto a channel). Here's what it looks like:

https://pasteboard.co/VhvDroREOvOQ.png

It's especially useful in Clojure, as the experience with channels is not as polished as in Go (or so I've heard): when a put or a take hangs, your only recourse is to sit and squint at your code. This tool will color them red, so you can immediately spot what's wrong. It also allowed me to spot a channel spaghetti plate I had not anticipated and wouldn't have noticed otherwise.

For now I've taken a "maximalist" approach which includes inspecting buffers, so it incurs a heavy runtime penalty that's only OK at dev time (plus it keeps track of the whole trace, so no flight recording in sight for now).

In the future I'd like to give it a proper interface (it's just an SVG file with hoverable elements for now), maybe using constraint-based graph layout to lay out the sequence diagram using cola.js/adaptagrams, or the sequence diagram from Kiel University's KIELER project when it's integrated into the Eclipse Layout Toolkit. Thesis from the developer of this module:

https://web.archive.org/web/20220519080528id_/https://macau....