
CS 6120: Advanced Compilers: The Self-Guided Online Course

mgaunard
32 replies
6h3m

Before writing compilers, I think it's best to understand computer architectures, and what needs to be generated by the compiler to yield the most efficient machine code.

Unfortunately, in my experience, computer architecture and even systems programming are domains that schools and universities appear to systematically de-prioritize, presumably because they are seen as too technical.

That knowledge however is instrumental in landing some of the best jobs in the industry.

zarathustreal
24 replies
5h51m

In my experience, no one tends to see it as “too technical.” The typical (and imo obvious, logical) reason not to emphasize low-level details is that they're implementation-specific, immediately outdated, and not particularly generalizable.

As an aside, I also tend not to get along with the “systems programmer” types because they tend to make knowing these kinds of extremely specific factoids their entire professional personality. You end up with people that can write assembly but have no idea what a functor is.

andai
11 replies
5h31m

What's the purpose of functors? The term appears to have a different meaning in different programming languages.

mgaunard
9 replies
5h21m

Functors come from category theory. They're relevant in programming languages that are based on it, e.g. OCaml, Haskell, and other sorts of overly academic and impractical esoteric stuff.

Like most ideas in programming based on theory, it's ultimately a very trivial thing, whose theoretical foundation is quite unimportant outside of helping design a minimal set of operations for your programming language.

It's not particularly important in the grand scheme of things. People even build and use monads with no understanding of the background behind it.

chongli
5 replies
4h58m

People even build and use monads with no understanding of the background behind it.

And then they miss the common pattern between them, so they miss the opportunity for abstraction and a common interface over the pattern. That’s the whole point of category theory: recognizing large classes (in the mathematical sense) of objects by their universal properties. It’s useful in programming language design because it gives you the ability to build useful, generic, water-tight abstractions.

A lot of abstractions built without any regard to the mathematics behind them wind up being leaky and difficult to apply appropriately because they rely on vague intuitions rather than simple mathematical properties.

quickthrower2
2 replies
4h38m

I feel there is an impedance mismatch between the mathematical category theory side and what is needed for programs. A case in point is the infamous complex diagram for the Lens library in Haskell. It abstracts the notion of getters and setters to an extreme and takes several hours of work to really fathom. Compare with Go, where the tutorial uses balanced tree comparisons as an intro example for goroutines; when the language gets out of your way, I feel it is much nicer. Haskell is more of a playpen for PL ideas. Some of those ideas get promoted into the mainstream when shown to be very useful! So Haskell is very valuable in that sense, but it can be difficult to write code in, especially if you want to use libraries built on complex category-theoretic machinery.

chongli
0 replies
4h33m

A case in point is the infamous complex diagram for the Lens library in Haskell.

That case is due to history and path-dependence. The theory and abstraction of the lens library was developed long after Haskell the language was designed. If Haskell were rebuilt today from the ground up with lens in mind you wouldn’t have that mess. Unfortunately, fixing it now would be too much of a breaking change.

agumonkey
0 replies
3h29m

It's also a common psychological pattern... you go from fuzzy experience and semi-defined patterns; later you see more rigorous and precise definitions, but they confuse you a bit, until one day you get it fully. Math, Haskell, etc. tend to live in that latter stage.

zarathustreal
0 replies
4h34m

A lot of abstractions built without any regard to the mathematics behind them wind up being leaky and difficult to apply appropriately because they rely on vague intuitions rather than simple mathematical properties.

Agreed. Likewise, you end up with different names for the same concepts across different facets of the industry, which actually makes the profession as a whole harder to learn and makes communication across different communities harder.

mgaunard
0 replies
4h18m

Programming practice shows that overly generic constructs aren't that useful and typically lead to over-engineering, maintenance difficulties, and reduced productivity.

There are enough complexities in the problem domain and the system itself, so KISS is king.

Good software is usually built by focusing on the actual problem being solved, and only generalizing once a sufficient number of specific solutions have been built and their commonalities identified, so they can be lifted into a generic implementation.

The most impact language purists tend to have is when some of their features end up adopted and adapted by practical languages (C++ or even C#).

fn-mote
2 replies
4h53m

[...] impractical esoteric stuff. [...] People even build and use monads with no understanding of the background behind it.

Concepts without names are difficult to reason about.

I think everyone who knows what they are would agree that the definition of "functor" can be learned in ten minutes. Recognizing the same concept being applied in different situations is the value. (As my sister comment says.)

mgaunard
0 replies
3h55m

A functor is a mapping between categories, just like a function is a mapping between sets.

What insights this yields for programming is open to interpretation, and depends on how you choose to formally describe a language's semantics (and most popular programming languages don't have formal semantics).

User23
0 replies
2h57m

The problem with category theory as it applies to programming is it's using a hydraulic press to crack eggs.

Telling people about the power of hydraulic presses when they just want to make omelets is pretty unhelpful.

gylterud
0 replies
5h3m

In programming, a functor is a structure with “elements” which you can apply a function to.

The idea is that the type of elements is a parameter, called A, say. Then you use functoriality to change all the elements to something of a different type (or to different elements of the same type).

For instance, List(A) is the type of lists of elements of type A. If you have a function f taking input of type A and giving output of type B, you can “apply the functorial action” of List to transform a List(A) into a List(B). For lists, the functorial action is simply mapping the function over each element. But being able to abstract over all functors can give very general implementations of interesting algorithms and patterns, which then apply to many situations.
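
To make that concrete outside a functional language, here is a minimal C++ sketch of the same pattern (the name fmap and both overloads are illustrative, not any standard API). C++ lacks higher-kinded types, so you cannot abstract over all functors at once the way Haskell's Functor class does, but the shared shape of the two overloads is exactly the functorial action described above:

    #include <iostream>
    #include <optional>
    #include <string>
    #include <vector>

    // Functorial action for std::vector: apply f to every element,
    // preserving the list structure (List(A) -> List(B)).
    template <typename A, typename F>
    auto fmap(const std::vector<A>& xs, F f) -> std::vector<decltype(f(xs[0]))> {
        std::vector<decltype(f(xs[0]))> ys;
        ys.reserve(xs.size());
        for (const A& x : xs) ys.push_back(f(x));
        return ys;
    }

    // Functorial action for std::optional: apply f to the element, if any.
    template <typename A, typename F>
    auto fmap(const std::optional<A>& x, F f) -> std::optional<decltype(f(*x))> {
        if (x) return f(*x);
        return std::nullopt;
    }

    int main() {
        std::vector<std::string> words{"ab", "cde"};
        auto lens = fmap(words, [](const std::string& s) { return s.size(); });
        for (auto n : lens) std::cout << n << ' ';  // prints: 2 3

        auto doubled = fmap(std::optional<int>{21}, [](int n) { return n * 2; });
        std::cout << '\n' << *doubled << '\n';      // prints: 42

        // Functor laws, informally: fmap with the identity function changes
        // nothing, and fmap(fmap(x, f), g) == fmap(x, g composed with f).
    }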

mgaunard
4 replies
5h39m

I don't think they are. So long as you were not trained before the 70s, any knowledge there is still very relevant today.

ISAs are still very similar, and so is the relative performance of the various operations.

Registers, caches, pipelines, superscalar execution... it's all still mostly the same, and still drives what the codegen should look like.

And yet you have so many people that don't know the considerations of what makes a simple loop fast or not.
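
As one concrete (and deliberately simplified) example of such a consideration, the two loops below compute the same sum over a row-major matrix. The first walks memory contiguously, so every cache line it fetches is fully used and the prefetcher can keep up; the second strides across rows and, for a large matrix, misses cache on almost every access. The contiguous loop is also the one an auto-vectorizer can profitably turn into SIMD code:

    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Matrix stored row-major: element (i, j) lives at m[i * cols + j].

    // Cache-friendly: consecutive iterations touch consecutive addresses,
    // so each fetched cache line is fully used before moving on.
    long sum_fast(const std::vector<long>& m, std::size_t rows, std::size_t cols) {
        long s = 0;
        for (std::size_t i = 0; i < rows; ++i)
            for (std::size_t j = 0; j < cols; ++j)
                s += m[i * cols + j];
        return s;
    }

    // Cache-hostile: the inner loop strides by `cols` elements per access;
    // for large matrices almost every load misses cache, and the loop can
    // easily run several times slower despite doing identical arithmetic.
    long sum_slow(const std::vector<long>& m, std::size_t rows, std::size_t cols) {
        long s = 0;
        for (std::size_t j = 0; j < cols; ++j)
            for (std::size_t i = 0; i < rows; ++i)
                s += m[i * cols + j];
        return s;
    }

    int main() {
        std::size_t rows = 4096, cols = 4096;
        std::vector<long> m(rows * cols, 1);
        std::cout << sum_fast(m, rows, cols) << ' '
                  << sum_slow(m, rows, cols) << '\n';  // same result,
                                                       // very different time
    }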

zarathustreal
3 replies
5h1m

Knowing "the considerations of what makes a simple loop fast or not" is irrelevant to solving a computational problem. We as programmers should not need to worry about details of the hardware or toolchain (e.g. compiler) we're using.

Granted, the reality is that most people are doing the equivalent of digital plumbing, not necessarily solving computational problems.

sakjur
1 replies
4h4m

Plumbers have done more to save lives and enable the luxuries we take for granted than most. I would be lucky if my work could compare to that.

I think we need software engineers coming from electrical engineering, mathematics, humanities, and all walks of life. If you’re not interested in compilers, maybe the comments for a link about compilers isn’t your place to be.

aswanson
0 replies
3h20m

I come from an EE background. I didn't understand the importance of computation in general until entering industry, and I want to understand compilers before I leave this earthly plane, just for the sake of understanding them.

mgaunard
0 replies
4h38m

The role of the compiler is to automatically translate a program in a given programming language into a program that runs on a computer, making the best possible use of that computer's resources.

Writing a compiler with a shallow understanding of the computer you're targeting is definitely possible, but ultimately wasteful.

anthomtb
2 replies
3h12m

You end up with people that can write assembly but have no idea what a functor is

New challenge to self: write a functor in pure assembly.

galdosdi
0 replies
2h56m

What, such as a linked list? This is a first week assignment in a systems course.

anthomtb
0 replies
2h1m

After some reading I have come to two conclusions:

1) I have written many functors in my life without describing them as such

2) "Pure assembly" functor doesn't make much sense. Functors, both in the CS and mathematics sense, require the concept of type. Which is absent once you are dealing with stacks and registers. My best idea is something when you have mixed-width registers - eg, say you've got some 128 bit and 256 bit registers. Your functor could apply a mapping from one to the other (and would probably be nothing more than a move instruction with the appropriate sign extension).

stevesimmons
0 replies
5h48m

Excellent final sentence!

sspiff
0 replies
4h58m

I have been programming since I was 11 years old and I've been professionally programming in various languages and domains for roughly 15 years, yet I had to Google what a functor was.

Attacking systems programmers for treating specific knowledge as holy, while pointing to their ignorance of another bit of knowledge you consider more relevant, seems kind of ironic to me.

hnthrowaway0328
0 replies
4h41m

I guess it's one man's meat, another man's poison. Seems to be my ideal job: I have always wanted to write C and assembly language programs in my career, but I don't care much about functional programming.

IshKebab
0 replies
5h45m

There are things that are generally true across almost all architectures and have been for a long time. Like cache lines, cache hierarchies, false sharing, alignment, branch prediction, pipelining, etc.
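
False sharing is a good example, since it can be demonstrated in a few lines and behaves the same way on essentially any cache-coherent multicore. A minimal sketch (the 64-byte line size is an assumption, though it holds on most current x86 and ARM cores):

    #include <atomic>
    #include <thread>

    // Two counters that share a cache line: each thread's writes keep
    // invalidating the other core's copy of the line (false sharing).
    struct Shared {
        std::atomic<long> a{0};
        std::atomic<long> b{0};
    };

    // Aligning each counter to its own 64-byte line removes the
    // contention; the same loops typically run several times faster.
    struct Padded {
        alignas(64) std::atomic<long> a{0};
        alignas(64) std::atomic<long> b{0};
    };

    template <typename Counters>
    void hammer() {
        Counters c;
        std::thread t1([&] { for (int i = 0; i < 10000000; ++i) c.a.fetch_add(1); });
        std::thread t2([&] { for (int i = 0; i < 10000000; ++i) c.b.fetch_add(1); });
        t1.join();
        t2.join();
    }

    int main() {
        hammer<Shared>();  // slow: the two threads fight over one line
        hammer<Padded>();  // fast: no shared line, no coherence traffic
    }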

But I agree it isn't really necessary to write a compiler. Compilers tend to be pretty much the worst case from a microarchitectural point of view anyway - full of trees of small objects etc.

sabas123
2 replies
5h1m

Before writing compilers, I think it's best to understand computer architectures, and what needs to be generated by the compiler to yield the most efficient machine code.

In my opinion, one of the biggest industry advancements we have had is that compilers can now be written without that level of low-level detail. There is still so much work to be done at the compiler level that really shouldn't concern itself with the (micro)architectural level of computers.

mgaunard
1 replies
4h42m

It seems the trend is toward increased diversity of hardware, with the dominance of x86 being reconsidered in light of smartphones, energy-optimal cloud, GPU computing, and ML accelerators.

LLVM allows hardware manufacturers to more easily provide mainstream language support for their platform than before, but the problem here was mostly that GCC was hostile to modular design, not really theoretical advances.

In terms of frontends, I guess we're seeing more languages reach C-level performance thanks to LLVM again.

But in terms of optimizations driven by theory? There were some significant advancements in generic auto-parallelization for imperative languages, about a decade ago I think. Yet it doesn't magically solve the codegen problem, and it remains hampered by language semantics that are not always parallelization-friendly.

There were a bunch of improvements driven by making languages more hardware-aware, e.g. the C++ concurrency model introduced in 2011, which was widely copied by other low-level programming languages.

We're also seeing more and more libraries that are specifically designed to target hardware features better.

So ultimately it looks like most of the advances are driven by better integration of how the hardware works throughout the compiler, language and community.

shrimp_emoji
0 replies
1h46m

I guess we're seeing more languages reach C-level performance thanks to LLVM again.

If they made JavaScript as fast as C, the software industry would become a cheap perversion of what it once was, and I'm picking up my toys and going home.

dmichulke
2 replies
4h31m

That knowledge however is instrumental in landing some of the best jobs in the industry.

Could you elaborate what you think are the best jobs in the industry and why?

JonChesterfield
1 replies
2h32m

Compilers are amplifiers.

Compiler optimisations give you faster/cheaper execution of every program.

Type systems and linters detect errors without writing or running tests.

Programming languages are tools for thought. Matching one to the domain makes the domain easier to reason about, lets people solve bigger problems.

Whether that correlates with "best jobs" is subjective. The value proposition of making other programmers more productive is really good.

If you think that's an exaggeration, consider writing a web browser in machine code.

bee_rider
0 replies
1h50m

I agree that “best jobs” is a bit ambiguous, but I think “landing the best jobs” is unambiguously getting a job that benefits the job-getter to an unusual extent (very well paying, or at least well paying and very stable). It is an expression.

The surprising thing about this, I think, is that hardware work is generally expected to be quite underpaid around here compared to programming.

seanmcdirmid
0 replies
2h39m

GPU-related tasks are still very much in need of computer architecture and compiler skills. Yes, you don't need to study x86 or MIPS, but CUDA presents an even weirder architecture than those.

simonebrunozzi
14 replies
10h0m

Compilers are one of those fields that didn't evolve much for ~30 years. I used to teach a course at Perugia University back in 2004-2006, and the material I could use was easily 15, 20, even 25 years old, no sweat.

Things seem to have changed recently.

grumpyprole
7 replies
9h31m

Probably because most programming languages haven't changed: imperative, loops, pervasive side effects, etc. Writing a compiler for a pure functional language would certainly need new material.

pjmlp
4 replies
7h23m

On top of the sibling comment, Simon Peyton Jones' seminal work is from 1987.

grumpyprole
3 replies
6h39m

He has spent the 30 years since then publishing more papers on compiling functional languages.

pjmlp
2 replies
6h13m

Mostly related to adding new features to Haskell and GHC, not to compiling functional languages in general.

Now functional logic languages like Verse, that is definitely new material.

grumpyprole
1 replies
5h34m

That's selling him very short. I would say most of his papers are applicable to functional programming in general and not just Haskell.

pjmlp
0 replies
3h0m

Only for those that understand functional programming as whatever Haskell does.

dunefox
1 replies
8h26m

Writing a compiler for a pure functional language would certainly need new material.

I mean, Haskell is also over 30 years old at this point.

grumpyprole
0 replies
6h38m

Yes, but it's changed a lot in 30 years.

pfdietz
2 replies
8h49m

There's been a substantial change in compiler testing since then, with high-volume randomized testing coming into vogue.

looofooo0
1 replies
7h50m

Recent papers on LLM fuzzing are appearing, e.g. Fuzz4All.

sjrd
0 replies
7h1m

I disagree. The landscape has significantly evolved toward incremental compilation techniques in the past decade. While theoretical developments are mostly confined to parsers, practical implementations apply incrementality across the entire pipeline, all the way to whole-program optimizations and code generation.

mr_foo
0 replies
6h25m

I'm not well versed in compiler design or implementation. But it seems to me that the target of compilers (machine code, assembly, etc.) has changed dramatically over time. The number of instructions and options on each successive processor generation seems to offer new optimizations or combinations for compilers.

jcranmer
0 replies
1h19m

That's just not true.

Just going by the new stuff that you could have taught back in 2004-2006 but likely didn't, there's SSA construction, SLP vectorization, and automatic peephole superoptimization, and that's just what I can name off the top of my head (they were papers for my qualifying exam :-P).

What hasn't changed is the compiler textbooks, which tend to have way too heavy a focus on how to build a parser generator and almost nothing on how you actually architect a compiler, let alone on designing for modern computer architectures. But this is a gripe that has been around for decades.

fancyfredbot
6 replies
6h37m

Compilers seem a great target for AI. Whole-program optimisation using reinforcement learning could be massive: there's plenty of data, and collecting more is relatively cheap. The course touches on superoptimisers, but these don't really use AI. I think a change is coming in this space.

layer8
4 replies
6h15m

How would you prove the resulting program is functionally still the same?

danybittel
1 replies
5h54m

You don't have to go that far. A compiler contains a lot of heuristics; I could see AI being useful for those.

sabas123
0 replies
4h57m

Those heuristics are about "when" to apply certain transformations in situations where two options are already proven equivalent. That is different from transforming correct code into possibly incorrect code.

mgaunard
0 replies
5h54m

AI could be used to choose which of the valid transformations to apply and in which order.

Current compilers do not even guarantee that the optimization pipeline is idempotent: re-running the passes a second time may still find new transformations. Every pass just does some "useful" transformations; it's best-effort.
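
A toy illustration of that best-effort behaviour, using an entirely hypothetical stack-machine IR: the pass below folds [push a, push b, add] into [push a+b] in a single left-to-right sweep. A fold can create a new match behind the sweep point, so one run of the pass is not a fixed point and the driver has to iterate; real pass pipelines show the same effect at a much larger scale.

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Ins { std::string op; long val = 0; };  // op is "push" or "add"

    // One best-effort sweep: fold each [push a, push b, add] window into
    // [push a+b]. The sweep resumes after the folded instruction, so a
    // newly created opportunity at or before position i is missed.
    bool fold_once(std::vector<Ins>& code) {
        bool changed = false;
        for (std::size_t i = 0; i + 2 < code.size(); ++i) {
            if (code[i].op == "push" && code[i + 1].op == "push" &&
                code[i + 2].op == "add") {
                code[i] = {"push", code[i].val + code[i + 1].val};
                code.erase(code.begin() + i + 1, code.begin() + i + 3);
                changed = true;
            }
        }
        return changed;
    }

    int main() {
        // Computes (1 + 2) + 3.
        std::vector<Ins> code = {
            {"push", 1}, {"push", 2}, {"add"}, {"push", 3}, {"add"}};
        int runs = 0;
        while (fold_once(code)) ++runs;  // iterate to a fixed point
        // Two runs are needed: the first leaves [push 3, push 3, add].
        std::cout << runs << " runs -> " << code[0].op << ' '
                  << code[0].val << '\n';  // prints: 2 runs -> push 6
    }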

fancyfredbot
0 replies
1h31m

As others have commented, it would be more like deciding which optimisation passes to run and in which order. Things like loop unrolling and inlining are not always helpful, and things like polyhedral optimization can be very slow. Running every optimization pass every time it might help is certainly far too slow.

bregma
0 replies
5h51m

AI is great for compiler optimization, as long as it's OK to break the rule that functionality can't be observably changed.

Drop in a modern AI as the optimizer and every single line of C++ code can be considered undefined behaviour.

jokoon
4 replies
7h21m

If you want to build a parser, I recommend Lexy, a parser combinator library.

It's a bit difficult to use because it makes extensive use of templates.

DinaCoder99
3 replies
7h6m

The C++ requirement seems to leave this dead in the water. In 2024 there is no shortage of better runtimes.

mgaunard
2 replies
5h52m

While I don't know anything about that project, your comment sounds a bit unwarrantedly anti-C++.

Most compilers and operating systems are built in C++ nowadays; it's the industry standard for any systems-level programming.

IshKebab
1 replies
5h42m

Yeah but only because Rust (and arguably Zig) didn't exist when those compilers and operating systems were written.

C++ is the industry standard for existing software, but it would be dubious at best to start a new project in C++, especially something that has essentially no dependencies, like a compiler. GUIs and games, sure.

mgaunard
0 replies
5h28m

ha, you're a Rust zealot; that explains it.

Believing that all new software should be written in Rust rather than C++, and that this situation is inevitable anyway, is one of the core tenets of this church.

dunefox
2 replies
8h25m

CS 6120 is a PhD-level Cornell CS course by Adrian Sampson on programming language implementation.

So, is it very advanced then? I don't think I'm at PhD level when it comes to CS.

f1shy
0 replies
6h59m

PhD does not mean more difficult than undergrad. Just give it a try.

__rito__
0 replies
8h2m

Never measure your level of preparedness using conventional academic degrees.

Just try it: if you can do it, you are ready, and if you can't, you aren't.

Delta231
2 replies
9h16m

I wonder if there's a predecessor course to this

mnafees
1 replies
6h27m

This course single-handedly became the reason I cleared interviews with all the compiler engineering teams at MANGA (Meta, Apple, NVIDIA, Google, Amazon) companies when I was searching for my first full-time job while completing my Bachelor's. My university's (University of Waterloo) curriculum around low-level computing systems was already par excellent, and of course I was also contributing to open-source compiler projects. But this course really helped me answer some of the toughest questions during my interviews!

kettleballroll
0 replies
6h20m

par excellent

It's "par excellence", and it feels weird in that sentence structure anyways. A better way to put it would've been "my university has the low level computing systems curriculum par excellence".

dfgdfg34545456
1 replies
5h12m

Is there a course like this but for pure functional languages?

throwaway17_17
0 replies
2h48m

TL;DR — Nope, but here is a list of what such a course would use for reading material.

As referenced in another comment, Simon Peyton Jones has a 1987 book on compiling functional languages, The Implementation of Functional Programming Languages. Follow that up with some of the referenced papers contained therein and you will have a base level to start at. Following the FP academic boom of the 80s, I would look to the following (in no particular order): papers by Stephanie Weirich, Simon Peyton Jones, Simon Marlow, Martin Odersky, Philip Wadler, and other authors you would find referenced in their papers; the webpages for the Koka language, Idris, the Granule project, Haskell, and OCaml all have references/publications sections that contain a wealth of material.

It is kind of a shame there is not a more 'one-stop shop' for functional language compilation, but the research is so broad it would be hard to condense. You aren't going to go wrong starting from SPJ's book as the basis. Jones has a talk on YouTube, 'Fitting Haskell into Nine Terms', based on a paper about GHC's compilation strategy and the push to keep the internal System FC language small and clean; it is a good watch, along with his talk on Compiling without Continuations, also on YouTube, which gives more internal views of GHC.

Sorry the list is a bit scattershot; if you need more specifics, I will certainly try to find a narrower selection.

tripdout
0 replies
10h27m

I favorited this last year and still didn't get around to it. But from what I saw of one of the videos, it looks really good, super interesting stuff.

terrelln
0 replies
2h43m

I went through this course online a few summers ago, and learned a ton. I highly recommend it!

It was very engaging to put up PRs for small issues in the Bril IR, and work with the professor to fix them.

rrgok
0 replies
6h38m

I'm glad this exists. Now I can follow a guided course into advanced topics at my own pace. I've always wanted to have a career as a Compiler Engineer, but sadly, where I live doesn't offer much in terms of education and job opportunities. Looking at the USA, the job market is overwhelmingly competitive, and I honestly don't know how to get into it. The only experience I have is a course I did during my Bachelor's, and I loved every bit of it.

riedel
0 replies
9h59m

It looks like this is still basically most of the stuff covered by the normal compiler construction course I attended, held by Gerhard Goos, 20 years ago. It links some newer papers, so I might have a look. I liked the book by Steven Muchnick, 'Advanced Compiler Design and Implementation'. I have to admit that after 18 years of not looking at compiler source code, I don't feel up to speed, particularly with a lot of the profiling and path-based optimisations. Also, I guess some more advanced SIMD stuff must be out there, looking at all the ML.

pjmlp
0 replies
9h35m

Great to see they also have "A Unified Theory of Garbage Collection"; at least these students will get the right picture on RC versus tracing GC.

kungfupawnda
0 replies
5h34m

Does this course come with the labs? If so, how can we test that our compiler is working?

fear91
0 replies
8h8m

There doesn't seem to be a good in-depth academic resource for advanced compiler optimization. I've searched a lot, and all the courses I found were introductory; the really interesting techniques require diving deep into the source code of popular OSS compilers. I found that quite surprising.