C Macro Reflection in Zig

skywal_l
80 replies
1d9h

@cImport is on the chopping block though [0]. You will still be able to import C files, but it will require a little more work. This is because they want this functionality out of the language so they can remove the libclang dependency.

[0]: https://github.com/ziglang/zig/issues/20630

jay-barronville
55 replies
1d8h

While I understand the reasoning, I think this is one of the most disappointing decisions by the Zig team.

One of the main reasons I took Zig seriously was their C interop story—as someone who loves C and dislikes almost every implementation of C interop and FFI I’ve used in other languages (Rust is a notable exception to this), I was pretty much sold on Zig when I was able to, in a total of < 3-ish hours, (1) implement a complete Zig wrapper over a C library I wrote without introducing any overhead, (2) build a Zig program using the wrapper, and (3) cross-compile the program, statically, producing single binaries for 5 different targets…Linux (amd64, arm64), macOS (amd64, arm64), and Windows (amd64), ALL from one laptop using a single Zig executable and some simple flags. This was C interop and productivity I’ve never experienced before.

I respect Andrew a lot (I think he’s pretty brilliant), so I hope this turns out to not be as bad as I think it’ll be for those of us who love Zig for what it brings to the table for programmers who are biased toward C.

ajnin
9 replies
1d4h

Not involved with Zig at all, but this comment is a bit concerning:

This is not the first controversial change I have made to the Zig project, and it won't be the last. I have a vision, and I know how to execute it. People are often surprised by what Zig has accomplished, and they wonder why other projects have not done what Zig does. Well, you are seeing the magic right now. I ignore the peanut gallery and do what I know is right, while taking care of my users' needs at the same time. If you don't understand now, you will understand once this plan unfolds. By that time it will seem obvious in hindsight.

He's probably brilliant and all, but this... feels like hubris.

acedTrex
2 replies
1d3h

Wow, that actually makes me want to look into Zig a lot more. So many projects, Rust included, get bogged down in design-by-committee and half-baked decisions that please no one.

nkozyra
0 replies
23h34m

I agree that letting the community completely dictate direction is a bad idea, but so too is being so dogmatic and idealistic that you ignore the feedback.

Rust definitely swayed more to the former than I'd have liked, but you also have Go as a counterexample where generics were dismissed for a decade+ in response to user feedback and then kind of :shrug: ok fine we'll add generics.

bunderbunder
0 replies
1d3h

Agreed. Andrew Kelley's ability to have, communicate and maintain a clear and consistent vision is one of the most refreshing things about Zig. I don't actually use it, but this might actually be the main thing attracting me to it.

Plenty has already been said about design-by-committee and relying overmuch on user feedback, but one thing that doesn't get mentioned as often is that this approach tends to transform additive bias[1] from a cognitive bias into an iron law. Once you let that happen, you're on a relatively short and slippery slope to having a kitchen sink language. And one refreshing thing about Zig is that it's clearly working very hard at not becoming a kitchen sink language. I'm not sure I can say the same about most other newer systems programming languages.

That doesn't mean having a BDFL is all kittens and rainbows, and I'm sure Andrew has made plenty of mistakes. But I've also never seen any indication that he's acted out of anything other than good faith. That last paragraph is possibly the closest I've ever seen to him saying something arrogant, and I see it as the exception that proves the rule. Finding such a tactful way to remind people that this is a BDFL project and he's the BDFL could not have been easy, and I imagine he put a lot of care into crafting that paragraph.

1: https://www.scientificamerican.com/article/our-brain-typical...

jay-barronville
0 replies
1d

He's probably brilliant and all, but this... feels like hubris.

I don’t think it’s hubris, because Andrew’s results speak for him, but it’s certainly alienating, to be honest.

Although I understand the struggle of having to prioritize opinions and perspectives, it comes across like Andrew only values the opinions and perspectives of very specific folks, whom he often calls out.

Here’s what I mean: I love Zig and I write a lot of Zig code, especially within the past year (almost daily), but none of the Zig code I’ve been working on is publicly available or open-source (although I hope I can open-source various components soon, fingers crossed). I’ve gained a lot of valuable experience with Zig—including successfully convincing folks (mainly C programmers) to use it who wouldn’t have tried it otherwise. When I read these interactions, even though I have thoughts I’d like to share as a committed user who wants to see the project succeed and gain mainstream adoption, I get the feeling that my thoughts aren’t welcome since I don’t have a huge Zig project or something, so I just keep my thoughts to myself. Andrew seems to mostly care about feedback from the creators of Bun, TigerBeetle, etc., which, if I’m correct, is fine (it’s his project and therefore his right), but I imagine there are plenty of users like me who aren’t just part of “the peanut gallery” yet staying out of it to avoid the drama.

eyelidlessness
0 replies
1d2h

I simultaneously understand why that comment gives you pause, and find comfort in seeing such a clear expression of vision.

As someone who often thinks several steps ahead about where I want to take a project, I find it’s just as often difficult to communicate that vision at the level of detail necessary to establish a shared understanding of what those steps mean, and how various apparent minutiae come together to make them all valuable together.

I would be lying if I said I don’t wish I shared this particular hubris, and the corresponding expectation that execution will be sufficient to bolster any trust challenged along the way.

dangets
0 replies
23h50m

This type of stance is what de-popularized the Elm language.

Don't get me wrong - I wish the best for both languages and am thoroughly impressed by the work of their creators. I can see that it must be a hard thing to balance.

carapace
0 replies
1d3h

Don Quixote or St. George? Only time will tell. Meanwhile the code works.

JasonSage
0 replies
1d4h

Reading that comment for the first time just now, and I love that. More power to them.

BiteCode_dev
0 replies
1d4h

Or very high confidence backed up by experience and skill, expressed through an honest personality.

In a world of humble bragging and cheap talk, I find it refreshing.

kasajian
7 replies
1d3h

Sounds like damage control. I have to say, for a brilliant guy, andrewrk knows nothing about marketing. The fact that the Zig project can make this significant a change to a core value because "trust me, I know what I'm doing" makes it impossible (for now) to rely on it in any type of widely-deployed enterprise setting. This just made the already risky move of adopting Zig darn near impossible.

What he should have done is announce, at the same time, a project that will maintain the current developer experience, even if that project is not part of the Zig foundation. Developers don't care about how he builds Zig. They want to 1) download and 2) use. It doesn't matter from where. If today it's directly from Zig, and tomorrow it's a combined package downloaded from elsewhere, that's all the devs needed to hear.

flohofwoe
4 replies
1d2h

Unfortunately there are a lot of people on the internet who haven't even used Zig, nor plan to use it anytime in the future, and who just enjoy creating drama by amplifying any decision with a hint of controversy around it (also see the 'unused variables are errors' drama, which turned out to be a non-issue after it was actually implemented).

Unrelated to Zig, it's the exact same thing with "WASM can't access the DOM" btw; it comes up every time WASM is in the news, but it's a complete non-issue in practice for anybody actually using WASM. I'm sure every popular software project has at least one such 'drama issue'.

The 'LLVM divorce' has been announced waaaay ahead (maybe years) of any actual steps to make that happen, for a language ecosystem that's still unstable anyway, and with the promise that a solution will be in place before LLVM is actually kicked out.

Not sure what else could have been done better, and as others have said, this sort of decision-making process is much preferable to a committee approach.

jeltz
1 replies
1d2h

The LLVM divorce has also gotten criticism since it was first announced, and we have so far not seen any complete solution. Maybe we will land on one, but if people did not voice their concerns there would be no reason to think such a solution would be found.

throwawaymaths
0 replies
1d1h

What is there to criticize though? What "solution" is necessary?

LLVM is not and was never going away as a compiler backend; it will just not be the default one, and you will be able to compile Zig without it (though in practice everyone will use it for prod releases).

throwawaymaths
0 replies
1d2h

You have to admit that "divorce" was not really a good metaphor, though it did get eyes on the drama.

kasajian
0 replies
1d2h

The fact that WASM requires JS is a marketing fail and misses what could have been a marketing opportunity to make the claim, "finally an alternative to JavaScript".

In this case, this is not a non-issue. If you look at the original GitHub issue, it does a lot of damage, followed by a lot of confusion, followed by a damage-control post that shouldn't have been required if the original issue had been written with more care. It wasn't, because the marketing aspect of Zig was not the focus -- the technical issue was. So it blew up. Hopefully it'll be a lesson learned, but I suspect it won't be. It will take 20 more such incidents before it sinks in.

As for it being "announced years ago", I don't see how that matters. The people who are seeing the issue now, and this discussion, weren't there to see the announcement years ago.

But I do understand what you're saying. Your point is that the onus is on the reader to do their research before overreacting. That's a perfectly fine position. I have a counter-opinion, which is that the person making the statement/claim/announcement should simply be understanding of the implications of their statements.

This thing blowing up should not have been a surprise to anyone. Sounds like it would have been to you, so the fact that it blew up is evidence that you would have misjudged it. Unless you also agree that the initial message could have been better worded, in which case, what exactly are you disagreeing with?

throwawaymaths
0 replies
1d3h

Wait, what is controversial here? Removing LLVM as a dependency and making it a (probably available by default) plugin instead? Seems "obviously good", if you ask me.

Weird machinations around projects that are and aren't (but are privileged because he's the creator) part of the Zig foundation would be more concerning, quite frankly.

pharrington
0 replies
1d

Zig's not at version 1.0.0 yet. Yes, it would be very irresponsible to use current-day Zig in a widely-deployed Enterprise setting. The release notes explicitly acknowledge Zig is currently immature, and is currently only suited to people willing to participate in the language development process.

jll29
13 replies
1d8h

This sounds amazing, and it's great that the tooling of some "new kids on the block" (Zig, Rust) is so strong, even better than C's.

With hindsight, it is strange that the C community, with all the people and money behind it (and what scale!), never even managed to build a proper package manager (okay, there's now Conan, but that came from the Python guys).

But then, there still isn't even a perfect C string library around (something that would combine GLib, SDS, and ICU, say, and then standardize it in C2038).

[1] Hanson's C: Interfaces & Implementations (CII) - Str: https://cii.s3.amazonaws.com/book/pdf/quickref.pdf

[2] ICU - https://icu.unicode.org

[3] SDS - https://github.com/antirez/sds

[4] GLib - https://docs.gtk.org/glib/struct.String.html

bluGill
6 replies
1d5h

With hindsight, it is strange that the C community, with all the people and money behind it (and what scale!), never even managed to build a proper package manager (okay, there's now Conan, but that came from the Python guys).

Package managers are a lot more complex than people realize. Every attempt I've seen in every language has significant gaps for common real-world cases.

Most commonly they assume all the world is their language. Very commonly they want to take over all package management but don't have a good, easy story for working with the OS package manager (I know Windows doesn't really have a package manager - but you still need to interoperate with it!). Those are big issues anyone in a complex project will face in the real world, and there are also lots of little warts that will bite you.

Yes, package managers are nice on your trivial project. However, they all get in the way in your non-trivial project - some more than others.

greenavocado
2 replies
1d4h

Weird take. Does pip have to interface with the Windows or iOS package managers? No. Yet it's wildly successful.

jordanozang
0 replies
1d3h

If you use pip without protection, it will gladly and malevolently mess up your system. Every Python user quickly learns about the virtual environment workaround. Various distributions (e.g. through Homebrew on OS X, Arch Linux, Ubuntu Linux) ship pip without the ability to make system-wide installations (externally-managed-environment). Even on Windows, where there is no standardized package management system, doing a pip install will place a bunch of DLLs in the system's PATH that could get in the way of any program on your system.

The anti-solution of ignoring the problem is what got us here.

bluGill
0 replies
1d3h

Does pip have to interface with the Windows or iOS package managers? No. Yet it's wildly successful.

Successful, I agree. But I have issues on any system I use it on, because now I have the package installed by pip and the package installed by my OS package manager - each slightly different.

Joker_vD
2 replies
1d3h

Why would anyone want to integrate with the OS package manager? Windows, as you've said yourself, doesn't even have one (and thank goodness for that), while on Linux, the distributed packages are normally about 2 to 4 years out of date — unless you discover and use specific 3rd-party repositories, at which point what's even the point? Just use the language's CPAN/PyPI/Hex/Crates.io/etc. analogue.

kitkat_new
0 replies
18h21m

Windows has winget

bluGill
0 replies
1d3h

Why would anyone want to integrate with the OS package manager

Because if they don't integrate, you end up with several different versions of the same package installed. When your program grabs the wrong version, how do you fix it? Now what if you are in support trying to help a customer?

on Linux, the distributed packages are normally about 2 to 4 years out of date

Maybe you need to find a different distribution. Some make it a point to be out of date. Some make it a point to be up to date.

flohofwoe
2 replies
1d8h

The interesting thing is that the Zig build system could be implemented in a C/C++ compiler without any language changes. All that's needed is for (for instance) Clang to have a new option, `clang build`, which looks for a build.c/.cpp file, compiles it into an executable, and runs it.

The actual build system functionality is 'just' an extension to the stdlib.

pjmlp
1 replies
1d4h

It could, but then it would be tied to that specific compiler and the platforms it supports.

Such is the mess of ISO-defined languages, where compiler toolchains are an abstract concept.

It's also worth noting that the Open Group and POSIX never bothered to standardise UNIX package management.

flohofwoe
0 replies
1d2h

Yeah, just Clang doing its own thing would be useless. It would have to go through the C and C++ committees, at least for the stdlib parts (and this is basically the showstopper for the idea, unfortunately).

fxtentacle
1 replies
1d7h

vcpkg + CMake?

It can compile almost any dependency on demand, even doing cross-compilation. And it's used by at least Microsoft and Google.

flohofwoe
0 replies
1d7h

Such a setup includes two complex tools from two different parties plus a different C/C++ compiler toolchain per platform and platform SDKs (at least GCC, Clang (plus its Apple flavour) and MSVC). All those tools working together flawlessly at any given time is a minor miracle.

In Zig all those things are in the same toolchain install, which is a single executable plus a bunch of platform system headers and libraries, all under a single version number and trivially downloadable as a zip archive which works right after unzipping, without setting up paths or running an 'installer'.

uecker
0 replies
23h47m

In the Unix world we use distribution package managers. This has many advantages, including security updates, some robustness against supply-chain attacks, and large-scale integration. All these language-level packaging systems are a mistake, in my opinion.

throwawaymaths
9 replies
1d4h

What's the problem? @cImport is becoming just @import.

kbolino
5 replies
1d2h

@cImport works without build.zig, and even with build.zig it requires no special configuration (ignoring linking).

As it is currently described, @import of C will require build.zig and specific configuration therein.

throwawaymaths
1 replies
1d2h

I don't think that's the case. In general you can set up @import modules from the command line, and I'm not 100% sure but I think build.zig generally just templates a command line call anyways.

kbolino
0 replies
1d1h

I don't think this DSL is finalized yet, but you can see on a linked issue what the build.zig to support @import of C looks like on nightly:

    const translate_c = b.addTranslateC(.{
        .root_source_file = b.path("src/c.h"),
        .target = target,
        .optimize = optimize,
    });
    exe.root_module.addImport("c", translate_c.createModule());
https://github.com/ziglang/zig/issues/20648

samatman
0 replies
1d2h

build.zig is on its way to being essential.

I consider that an appropriate development. Binaries need to be built, this should be integrated with the rest of the system in any modern language.

It's not incompatible with using something like CMake or Ninja either, it just puts certain responsibilities inside of build.zig. Where they belong.

mk12
0 replies
20h56m

It won't require build.zig; you'll just have to run `zig translate-c` on the C file. Andrew's comment here[1] says that @cImport is basically @import + the compiler implicitly running translate-c for you. There was some discussion of removing the translate-c subcommand as well (which would force you to use build.zig), but I don't think it's been decided to do that.

[1] https://github.com/ziglang/zig/issues/20630#issuecomment-225...
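As a rough sketch of that workflow (the exact flags can vary between Zig versions; foo.h and bar.zig are placeholder names, and foo_hello is a hypothetical function):

    // Step 1 (shell): translate the C header once, or as part of your build:
    //   zig translate-c -lc foo.h > foo_c.zig
    //
    // Step 2 (bar.zig): import the generated file like any other Zig source:
    const foo = @import("foo_c.zig");

    pub fn main() void {
        foo.foo_hello(); // hypothetical function declared in foo.h
    }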

jeltz
0 replies
1d2h

And from my experience with Rust, that is much less ergonomic. Maybe Zig will do it better than Rust, but who knows?

jay-barronville
2 replies
1d1h

What's the problem? @cImport is becoming just @import.

You’re oversimplifying what’s actually a significant change.

Right now, I can do the following:

1. Download a prebuilt Zig release [0], on a fresh computer (zero prerequisites).

2. Implement a C library foo via foo.h and foo.c.

3. Compile foo.c into an object file foo.o via `zig cc`, using standard GNU-compatible compiler flags, for virtually any of the common (and even not-so-common) compilation targets.

4. Implement a Zig program bar via bar.zig.

5. Directly import foo.h into bar.zig via `@cImport` and use the C API as if it was Zig code.

6. Compile bar.zig and foo.o into a statically-linked executable baz via `zig build-exe`, for virtually any of the common (and even not-so-common) compilation targets.

No knowledge of the Zig build system and its semantics necessary. This is one of the most attractive aspects of Zig for a C programmer. I’ve gone through these exact steps with C programmers who thought Zig seemed interesting in theory but just didn’t want to have to learn a new ecosystem, build system, etc. The moment I showed them how quickly they could get up and running with Zig (and test it out with C code), without even having to install anything, it suddenly was impressive enough to experiment with.

The build system requirement may seem minor if you’re already a Zig programmer, but it’s massive if you want to attract C systems programmers.

[0]: https://ziglang.org/download
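To make steps 5 and 6 concrete, here's a minimal sketch using the placeholder names from the list above (the target triple, the -I include path, and foo_hello are illustrative assumptions):

    // bar.zig (step 5): import foo.h directly and call into it
    const foo = @cImport({
        @cInclude("foo.h");
    });

    pub fn main() void {
        foo.foo_hello(); // hypothetical function declared in foo.h
    }

    // Step 6 (shell), statically cross-compiling for one of the targets:
    //   zig build-exe bar.zig foo.o -lc -I. -target x86_64-linux-musl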

throwawaymaths
0 replies
1d1h

I think someone who doesn't know what they are talking about is asserting a build.zig requirement. I don't know for sure, but probably it will work with the command line?

chvrchbvrner
0 replies
8h42m

The build system requirement may seem minor if you’re already a Zig programmer, but it’s massive if you want to attract C systems programmers.

You won't need knowledge of the build system nor a 'build.zig'.

After step 4 you would run 'zig translate-c' on 'foo.h'. Then use '@import' instead of '@cImport' in step 5 for the translated file.

'@cImport' is basically just doing that under the hood. It's an additional step for the user, that's a fair point, but I definitely wouldn't call it massive.

Cloudef
6 replies
1d8h

The way I see it, the only thing that changes is that you can't do @cImport anymore. You'll instead do something in build.zig that produces a module, which you can then addImport("my-c-lib", generated_module); and @import("my-c-lib"); in your Zig code as you would with @cImport.

This does not seem bad on paper. One thing that does worsen with this is that in @cImport you could also define preprocessor macros at comptime, which was really cool; now that would have to be handled by build.zig, I guess.
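For reference, this is the kind of comptime define that currently lives inside @cImport and would have to move into the build step (the library and macro names here are made up):

    const c = @cImport({
        @cDefine("MYLIB_USE_DOUBLE", "1"); // preprocessor macro defined at comptime
        @cInclude("mylib.h");
    });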

jay-barronville
3 replies
1d8h

This makes the experience more similar to Rust (which I don’t think is bad—it’s just not as unique, smooth, and impressive as the current Zig experience).

I’ve been able to convince C programmers to try out and use Zig just due to this unique ability alone, and to be clear, getting C programmers to seriously consider any language other than C is generally very difficult!

Having to consider another build system (with its own semantics), which the current Zig experience doesn’t require, changes the experience much more substantially than I think the Zig team realizes.

flohofwoe
2 replies
1d8h

This makes the experience more similar to Rust

The big difference to the Rust ecosystem (or rather the 'cc' crates.io package, which AFAIK is the current standard solution) is that there will be a Clang toolchain package with integrated cross-compilation headers and libraries that is used across all platforms, instead of relying on a "platform C/C++ compiler toolchain" - which is the actually brittle part (different compilers, linkers, and platform SDKs used on different platforms).

Ideally that same integrated Clang toolchain package used and provided by Zig could also be used by Rust to improve the C/C++/ObjC cross-compilation situation (similar to how the Zig toolchain is sometimes already used for this purpose).

steveklabnik
0 replies
1d2h

I truly wish that Rust would steal this from Zig, but I haven't heard any actual interest in it from the project. Oh well. I think it's a real missed opportunity.

https://crates.io/crates/cargo-zigbuild exists though.

jay-barronville
0 replies
1d8h

The big difference to the Rust ecosystem (or rather the 'cc' crates.io package, which AFAIK is the current standard solution) is that there will be a Clang toolchain package with integrated cross-compilation headers and libraries that is used across all platforms, instead of relying on a "platform C/C++ compiler toolchain" - which is the actually brittle part (different compilers, linkers, and platform SDKs used on different platforms).

Yes, this is a very good point. Zig remains unique and impressive in that sense. The fact that Zig compiles the correct libc and other system libraries on demand is actually another one of those "How come no other language considered doing this before Zig?!?" moments.

throwawaymaths
0 replies
1d4h

You will probably be able to do it from the command line too (for build-exe, run, etc).

jeltz
0 replies
1d6h

Having worked with Rust, I would say that is bad. @cImport is way better than the C interop in Rust.

brabel
2 replies
1d7h

as someone who loves C and dislikes almost every implementation of C interop and FFI I’ve used in other languages (Rust is a notable exception to this),

Have you tried D? It can import C files as if they were D modules: https://dlang.org/spec/importc.html

Basically, if there's a `hello.c` file next to your D file, you simply import it with:

    import hello;
And use the functions it provides. You can also import only a subset of it, of course:

    import hello: square;
Or rename the import:

    import hi = hello;

The C stdlib is exposed by D's stdlib as if it were a D library: https://dlang.org/phobos/core_stdc_assert_.html

C libraries (header files) can be compiled to D and then used as D modules as well, see https://github.com/jacob-carlborg/dstep

Is that as good as Rust?

jay-barronville
1 replies
1d7h

Have you tried D?

I tried D years ago (at least 5 years ago, I think), but I don’t remember experimenting with the C interop.

Your explanation sounds intriguing though. And a couple of pretty smart folks I respect have talked about D too. So thanks for bringing it up. I’m going to make some time to experiment with it again sometime soon.

Is that as good as Rust?

Frankly, although I don’t have firsthand experience with D, your explanation and example make the D C interop experience sound actually better than Rust’s. It seems more similar to Zig.

brabel
0 replies
1d7h

I think the feature that allows importing C, called "importC", is newer than 5 years. I think they may have "copied" it from Zig :D They definitely are trying not to "fall behind" Zig and keep making the language better. With DMD, you can cross-compile just as with Zig, but only to object files, as the linker is not multi-platform... so I am experimenting with using Zig's linker to "finish" the job. Unfortunately, Zig is not able to link Phobos, the D stdlib, for reasons I don't understand yet.

flohofwoe
0 replies
1d8h

The translate-c build system step and wrapping the output in a Zig module is currently about 10 lines in build.zig, and I guess this could be reduced further by merging those two steps into a single step.

I think that's an acceptable compromise.

Especially for C libraries which require configuration via preprocessor defines or compilation flags, the build.zig way is a lot cleaner than the @-builtins that are currently used (IMHO).

samatman
10 replies
1d2h

It will require a little more work in a trivial way, but not in a meaningful way.

What's happening is that C imports are moving to the build system, instead of being a compiler builtin. It's part of making LLVM and libclang optional for programs which don't use it, but the use case of building C programs, and integrating C libraries with Zig programs, remains a central design goal.

The build system is relatively new, and a lot of things which were originally independent subcommands are being consolidated into the build system.

There's a sort of ambient impression that "Zig won't support C anymore" floating around, I'm not sure from your post whether you have that impression or not, but it isn't even vaguely true.

It just means that importing C libraries will be a build step, and not something you write directly into the relevant source code. This is not a big deal.

jay-barronville
9 replies
1d1h

It just means that importing C libraries will be a build step, and not something you write directly into the relevant source code. This is not a big deal.

It 100% is a big deal. I explained this in another comment [0].

[0]: https://news.ycombinator.com/item?id=41111445

throwawaymaths
4 replies
1d1h

I think you are operating from a mistaken understanding, as responded to in the linked comment.

jay-barronville
2 replies
1d

I think you are operating from a mistaken understanding, as responded to in the linked comment.

No, I’m not. Respectfully, you’re responding to my point while admitting in your comment [0] that you don’t actually know.

If you read the relevant discussions, the conclusion, last time I checked, was that there’s going to be a build system requirement.

[0]: https://news.ycombinator.com/item?id=41111542

throwawaymaths
1 replies
1d

I "don't know" because zig 0.14 is not released yet and anything could happen but I DO know that imports do not currently require the build system. And I THINK that's because the core parts of the build system (including binding imports) are dependent on the command line anyways. So I am ASSUMING that property will be invariant when the c @import gets implemented, and my "don't know" is merely being explicit about that assumption. You are the one coming from a place of ignorance here, and your refusal to acknowledge that you might be wrong makes me suspect your argument is not being made in good faith.

jay-barronville
0 replies
1d

You are the one coming from a place of ignorance here, and your refusal to acknowledge that you might be wrong makes me suspect your argument is not being made in good faith.

You’re making a lot of “assumptions” when you could just read the GitHub issue regarding this change, written by Andrew himself, titled “move @cImport to the build system” [0].

By the way, please note that I not only write Zig code almost daily, I’ve personally contributed to the Zig build system.

[0]: https://github.com/ziglang/zig/issues/20630

TwentyPosts
0 replies
23h32m

Andrew (in the linked Github page) answered a question as follows:

Question: "So after this change, is there way I can still simply call zig run or do I have to use a build.zig file?"

Andrew's answer: "No, this use case will regress."

This in fact literally states that "just" calling "zig run" won't be possible anymore, and heavily implies you'll need a build.zig file.

samatman
3 replies
21h18m

We have different ideas of what a big deal is, clearly.

The idea that C programmers, C programmers, are going to start bailing en masse out of the top of the Zig funnel, because they have to copy-paste a build.zig gist to get started? This is risible.

The complexity has moved to a slightly different place. That's it.

jay-barronville
2 replies
19h27m

The idea that C programmers, C programmers, are going to start bailing en masse out of the top of the Zig funnel, because they have to copy-paste a build.zig gist to get started? This is risible.

What I find incredible about these takes is that they’re so removed from reality. It’s quite unfortunate how difficult it is to get a lot of programmers to put themselves in the shoes of others when evaluating things.

I have firsthand experience trying to get fellow C programmers to try Zig (and Rust too), and it’s already extremely difficult as is. The three things that have helped me sell Zig have been the combination of (1) how easy it is to get up and running with it (no prerequisites or installs necessary) [0], (2) the C interop story (which was a selling point for me [1]), and (3) the very simple yet impressive and complete build process [0].

What’s interesting is that you call my point risible when, based on your response, it’s clear to me you don’t understand the audience we’re talking about.

For example, you say:

[…] because they have to copy-paste a build.zig gist to get started […]

Professional C programmers generally favor actually understanding what’s happening under the hood, so unlike, e.g., JavaScript or Python developers, suggesting they get started by copying and pasting build code they don’t understand is a nonstarter for a lot of C programmers. And understanding Zig’s build system doesn’t happen as quickly [1] as being able to use Zig directly. Even after writing a decent amount of Zig build code, I’ve had to, on a number of occasions, go read the Zig build system code to understand how something is implemented (e.g., how it deals with certain paths, peculiarities of the dependency system, caching semantics, etc.)—if I had to figure out the build system before I was able to get started with Zig and be productive, I would’ve never taken it seriously.

There’s a reason why Zig has been marketed as a compiler that can also serve as a drop-in replacement for GCC/Clang. What you’re effectively saying is that this main selling point doesn’t matter even though that’s been one of the main things that makes Zig attractive to C programmers. Incredible.

[0]: https://news.ycombinator.com/item?id=41111445

[1]: https://news.ycombinator.com/item?id=41107515

samatman
0 replies
2h41m

I think there are several things going on here:

- You're used to the way it works now, and don't like that it's changing

- The build system is woefully underdocumented (understandable, since it's new and hasn't stabilized, but it's not a good thing), and you're conflating that with the system being hard to understand, which it isn't.

- You're combining these things, and your inflated sense of how well you know every C programmer in existence, into a claim which, again, I consider risible.

Because you're saying things like this:

There’s a reason why Zig has been marketed as a compiler that can also serve as a drop-in replacement for GCC/Clang. What you’re effectively saying is that this main selling point doesn’t matter even though that’s been one of the main things that makes Zig attractive to C programmers. Incredible.

Which is a bizarre thing to derive from what I said, and is also absurd. Andrew wants Zig to be the best way to build C, always has, and he's the one responsible for making it as good as it is. His goals there haven't changed, what's changed is how he intends to provide that.

Now, you can claim, and you have, that he's going to make it much worse in the process. But your reasoning for saying this is thin and I disagree with that premise completely.

mafuyu
0 replies
9h35m

As an embedded C developer who has since branched out to Rust and C++, I’m definitely curious about Zig. Reading about this, I’m a bit split. Losing some of the C interop “magic” sucks, but the proposal to use the Zig build system seems pretty reasonable.

Really, I think what C developers want is a modern, opinionated tooling solution a la cargo. If the Zig build system can deliver that (plus the much better ergonomics of the language itself), I think it could be very compelling.

I’m sick of Make, CMake, Nix flakes, Docker, manually bundling specific arm cross toolchains, etc. If I can send someone a Zig + C project and the build.zig Just Works, I’m all for it.

flohofwoe
4 replies
1d9h

It would actually be nice if the translate-c build system step would also allow some restricted symbol transformation (at least stripping the 'namespace prefix' from C library symbols).

For instance Odin allows to define a 'link_prefix' for C APIs which is then stripped from the imported symbols:

    @(default_calling_convention="c", link_prefix="sg_")
This causes name transformations like:

    sg_setup() => setup()
    sg_shutdown() => shutdown()
...maybe even convert from common case-conventions (like snake-, camel-, pascal- case etc...) to Zig's naming convention so that imported C interfaces don't look so alien relative to native Zig interfaces.
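For comparison, the closest thing in current Zig is hand-written aliasing, which is what a prefix-stripping option on translate-c would automate. A sketch (sokol_gfx.h and the sg_ names are just the example from the Odin snippet above):

    const c = @cImport({
        @cInclude("sokol_gfx.h");
    });

    // Manual re-exports under Zig-style names, done once per symbol today:
    pub const setup = c.sg_setup;
    pub const shutdown = c.sg_shutdown;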

o11c
2 replies
1d3h

common case-conventions

Unfortunately, we must not forget ABOMINATIONCase and NAMESPACED_PascalCase (we could also generalize this to any switch from one convention to another after the first word). I've found they usually are, in fact, identifiable and roundtrippable. I've found the following usually works (it fails for multi-word prefixes or non-leading exceptions, and of course single-word identifiers are ambiguous):

  count and strip all leading, then trailing, symbols (in some languages this is not limited to underscore but said languages usually need special handling anyway)
  for every chunk separated by underscore, space, or hyphen (this may be nothing):
    if the chunk either has no uppercase or has no lowercase, simply use it as a word. Otherwise:
    for every sliding-window pair of letters (c, d) in the chunk:
      if c is uppercase:
        if d is lowercase, or d is a digit and there is lowercase elsewhere:
          start a new word before c
Then for the words-to-convention direction:

  space and kebab case don't have any good answer for affixes AFAIK. Otherwise, restore at least the leading underscores (trailing underscores are usually keyword-avoiding)
  for snake and screaming case, be sure to prepend an underscore if the *result* would start with a digit
  for camel variants, prepend an underscore to each *word* that starts with a digit. But if there were originally more than 1 leading underscores, use those instead for the first word.

flohofwoe
1 replies
1d3h

I mean, there can always be a function on the TranslateC step which maps 'special case' names (or even translates them to an entirely different name, for instance when there are collisions with Zig reserved keywords):

    translateSpecialCaseNames(.{
        .{ .src = "ABOMINATIONCase", .dst = "case" },
        .{ .src = "NAMESPACED_PascalCase", .dst = "pascalCase" },
    });
...in my own language bindings generator, being able to define such special case mappings is actually quite important, mainly for handling that situation where an automatic mapping would result in a reserved keyword.

o11c
0 replies
1d

Reserved words really shouldn't require manual care; "list of keywords in X language" is really easy to handle so you can just append an underscore.

I do think that namespaces need to be semi-manually managed though (likely only at the project level), since often there are things that look like a namespace but shouldn't be treated like one, and it's not always obvious from a single identifier how many words should be treated as the namespace in some styles.

One special case is that sometimes C-ish libraries have class-likes like {Foo, FooBar, FooBaz} where the desired mapping is {foo.Foo, foo.Bar, foo.Baz}.

Cloudef
0 replies
1d9h

From the issue

As a consolation prize, the TranslateC build step can be enhanced with advanced settings, such as namespace stripping and validation of existing bindings. Since changes to this don't require changing the language, it's OK if the scope & complexity increase to some extent.

ajnin
4 replies
1d4h

remove libclang

So `zig cc` will have to go as well ? I was under the impression Zig as a drop-in C (cross-)compiler was one of its main selling points.

ajnin
1 replies
9h26m

I read that comment as well, but if you need to use the Zig build system it means you won't be able to integrate Zig into an existing project as easily; you'll have to integrate the Zig build system, or even integrate your project into the Zig build system, and that's another level of complexity entirely.

mst
0 replies
6h31m

There are enough people who care about adding Zig to existing projects being easy (including the Zig BDFL) that I would hope this approach won't be stabilised until they're confident that it isn't adding enough extra complexity to spoil the experience.

I mean, we'll have to wait and see, and there may be a transition period during which things kinda suck, but Zig has a pretty good track record here so I'm more optimistic than I would be in most cases.

pta2002
0 replies
9h53m

I think for better or for worse it makes sense that this is delegated to a separate package. For me it always seemed like a weird tangential thing that Zig did that is not really related to Zig The Language, and more to Zig The Build System. It made more sense when they were depending on LLVM either way, but now that they're closer to getting rid of it, it does not make sense to keep the dependency just for something that always seemed more like a 'neat thing' than a core language feature.

whitehexagon
1 replies
22h41m

Do we know if this means the reflection mentioned in the article via @typeInfo will no longer work? Or was it comptime info anyway? My Zig so far has been SoC-level stuff, but using something like raylib or GLFW is somewhere on my todo list, and this example sounds mighty useful. Anyway, it seems a strange thing to remove when Zig has the potential to be the new C. Hopefully also as stable for the next 30 years :)

pbaam
0 replies
22h2m

It will still work. If you look at the type signature of @cImport in the language reference[1], it returns a type, just as @import does. So you can call @typeInfo on it. But instead of writing

  const win32 = @cImport({
    @cInclude("windows.h");
    @cInclude("winuser.h");
  });
You will write:

  const win32 = @import("win32");
Where the module "win32" is declared in build.zig.
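A rough sketch of that build.zig declaration, following the addTranslateC pattern shown earlier in the thread (it assumes a small wrapper header, say src/win32.h, that includes windows.h and winuser.h, plus the usual b/target/optimize/exe variables from the build function):

    const win32_tc = b.addTranslateC(.{
        .root_source_file = b.path("src/win32.h"),
        .target = target,
        .optimize = optimize,
    });
    exe.root_module.addImport("win32", win32_tc.createModule());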

[1] https://ziglang.org/documentation/master/#cImport

deagle50
0 replies
1d3h

It seems the build system has gotten enough traction and Andrew is going for broke to enshrine its place in the C space. I wouldn't bet against him.

voidUpdate
21 replies
1d9h

I really want to like Zig, but I've just had some annoying problems with it, most of which I think are just a result of it not being at 1.0 yet. For example, the recommended way to start a project, with `zig init`, has a load of code that I really don't need when I just want a bare project to get started. I only recently found out that you can just `zig build-exe filename.zig` and skip the whole init part. Also I've had a lot of issues getting editor integration to work correctly. I've installed the VSCode extension but I don't seem to be getting autocomplete etc. It is quite possibly just an ID-10T problem though, so I'll probably take another look at it some weekend.

flohofwoe
7 replies
1d9h

Tbf, when coming from the C/C++ ecosystem, these types of problems are 'just another Tuesday' (especially for cross-platform projects and the official VSCode C/C++ extension).

jeroenhd
4 replies
1d7h

This is part of the reason why I don't use VSCode for C(++). JetBrains CLion seems to be the best IDE for those languages, with Visual Studio (the full-fat one, costing a grand, as the free version lacks a bunch of features) as a close second, depending on what platform you're developing for.

VSCode is great when it works, but in cases like these it quickly becomes obvious that it's a hodge-podge of tools glued together via extensions and command lines rather than a purpose-built IDE.

markus_zhang
2 replies
1d5h

I have never used CLion. Can you please share why you believe it is the cream of the crop? I'm writing a C++/SDL2 game engine in Visual Studio, but I think the IDE is really slow and error-prone even for a small project. Everything just runs so slowly. Maybe I need a better machine though.

winrid
1 replies
13h44m

CLion has a free trial you can try. I find it works really well. I use it on a 6yo laptop quite often for C++.

markus_zhang
0 replies
58m

Thanks! Just to confirm, are you using Windows?

Quothling
0 replies
1d4h

I'm not one to defend VSC as such, but I do think it's fair to mention that a lot of the Zig issues have been directly with the Zig LSP (zls) and not necessarily with any specific editor extension. Though I suppose some of it may be worse with VSC as it's often been rather important to keep both your Zig and zls versions up-to-date and synchronized. I'm not sure how that happens with the VSC extension, but if it auto-updates your zls version (which it probably does) then it may race ahead of your Zig version which might cause problems.

voidUpdate
0 replies
1d8h

Yeah, that's one of the reasons I dislike the C++ ecosystem so much and want to like Zig haha

Cloudef
4 replies
1d8h

zls should generally work out of the box, but I don't use vscode, so your mileage may vary. Make sure your zls and Zig versions match. `zig build-exe` and `zig run` can be fine for small things, but when you want to start importing modules or do something more fancy, build.zig is nice to have.

voidUpdate
3 replies
1d8h

According to the extension, ZLS is optional, but according to some Zig docs I found, it's part of the extension. I may have misread; I'll try this stuff again sometime.

brabel
2 replies
1d7h

As with all VSCode language extensions, they will try to download and manage ZLS for you. That's ok if it's done well, but they probably have some ways to go still.

I just prefer using Zig on emacs with the built-in eglot LSP client. You just tell it where you installed ZLS (and it won't do magic to get it for you) and off you go. It's the same level of support as with VS Code. If you're not new to emacs, maybe consider that (otherwise emacs may be too much of a rabbit hole :)).

voidUpdate
1 replies
1d7h

My terminal editor of choice is nano :P

MobiusHorizons
0 replies
1d3h

It sounds like you have two editors at least, VSCode and nano. Based on this description, I would assume you use nano for commit messages, config file updates, and not much more. If that’s the case, you may want to at least check out what Emacs or Vim can do for you with some basic LSP integration. These would be replacements for VSCode, not nano, although once you get used to them, you might switch EDITOR over. Compared to the complexity of systems programming, Vim or Emacs should be well within your grasp.

dgb23
2 replies
1d5h

For example, the recommended way to start a project, with `zig init`, has a load of code that I really don't need when I just want a bare project to get started.

I recently started a new project. `zig init` provides you with a working build.zig file and two very minimal project files under /src, one of which is a hello world binary and the other a hello world library.

Also I've had a lot of issues getting editor integration to work correctly.

ZLS is not at the level of what you would expect from a more mature language. For example, by default it will not help you with anything related to comptime.

I highly recommend reading this fairly recent blog post by Loris Cro:

https://kristoff.it/blog/improving-your-zls-experience/

It explains how you set up your build.zig and ZLS (a few lines of configuration) in order to get much more help from ZLS. It will then perform the equivalent of what `cargo check` (Rust) does.
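The core of that setup, as I understand it from the post, is a "check" step whose artifact is analyzed but never installed, so the compiler's errors come back quickly; a sketch under that assumption (names like exe_check and "myapp" are arbitrary):

    // In build.zig: a second compile of the same root file that is never installed.
    // ZLS, configured for build-on-save with the "check" step, can run it and
    // surface the compiler's own errors as editor diagnostics.
    const exe_check = b.addExecutable(.{
        .name = "myapp",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });
    const check_step = b.step("check", "Check if the code compiles");
    check_step.dependOn(&exe_check.step);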

voidUpdate
1 replies
1d4h

Really? I get a main.zig file with a hello world as well as some maths, and a build.zig that has stuff that I can't work out how to correctly get rid of. I'm not currently at that workstation so I can't give specifics though. I feel it would be easier if it was literally just a hello world print and the build literally just built it, rather than doing tests on code I already know works, because it shipped with the language

dgb23
0 replies
21h57m

Yes, that's what I meant. But I think I talked past you in a way.

I think it's fair criticism that, as a beginner, you don't actually want to deal with build.zig or even understand it. You just want to write some code and run it, and look at build.zig after you've gained some familiarity with the language itself.

jeroenhd
1 replies
1d7h

I have the same issue. It's hard to say if the tooling is working but incomplete, or if the tooling doesn't work for some reason. I can get syntax highlighting to work, but basic variable autocomplete doesn't, so I'm guessing the language server ran into some kind of issue.

I really want to get started with Zig but I don't want to go back to the age of nano/edit.com for learning a new language. Zig is complex enough already.

samatman
0 replies
1d2h

For what it's worth, outside of some outstanding issues with comptime (which are genuinely difficult, if ultimately solvable), I've found autocomplete, go-to-definition, and the other LSP semantic tools to work fine in VS Code.

You could hop on the Discord and get some help with your configuration if you'd like. All things Zig are somewhat underdocumented currently, that could use some polish for sure.

nindalf
0 replies
1d7h

Honestly same. I think it has the potential to be a great language and I'll definitely have a look at 1.0. Right now it's more for the people who are fine living on the bleeding edge.

Nothing wrong with that, imo. It's hard to evolve a language without having the ability to make breaking changes. They're doing the right things, generally making the right calls, and taking their time building instead of rushing.

laeri
0 replies
9h19m

In this case you probably haven't looked at it properly. There is just the starting entry main.zig, and a root.zig file which you can remove if you don't need it. Nothing complicated or annoying at this stage.

jay-barronville
0 replies
1d8h

[…] I've had a lot of issues getting editor integration to work correctly. I've installed the VSCode extension but I don't seem to be getting autocomplete etc. […]

If you use ZLS [0], make sure you’re always using the right version for the Zig version you have installed on your machine. In my experience, that fixes 90% of editor issues I’ve encountered using Zig (I don’t use Visual Studio Code though, so it’s possible your editor issues are related to the editor itself).

[0]: https://github.com/zigtools/zls

Joker_vD
9 replies
1d9h

Clang's preprocessor is actually not implemented as a separate compilation pre-pass; it's essentially part of the lexer, and I would be willing to bet that GCC uses a similar scheme.

So there is nothing technically impossible about having access to macro names as a compiler-specific extension; it's just that there is not much demand for it.

JonChesterfield
8 replies
1d9h

Clang prints out things about macro instantiations on the semantic error reporting paths. That probably means it has all the macro information available already.

It's probably not reflected into the language because C++ fears reflection in general and hates macros in particular.

bluGill
5 replies
1d5h

Reflection is on track for C++26 so it is completely wrong to say C++ fears reflection.

Joker_vD
4 replies
1d3h

Reflection proposals have been around at least since 2014, so I'd say it's exactly right to say that C++ fears it — otherwise it'd have arrived much sooner.

bluGill
3 replies
1d3h

C++ fears getting reflection wrong, which is why it takes a long time to get it in.

3836293648
2 replies
1d

C++ fears getting lots of stuff wrong, which is why they got rid of good concepts, waited 20 years and then added bad concepts

pjmlp
0 replies
12h0m

Such is the fate of design-by-committee languages and APIs, where it's not always the best that comes out, but rather what the people who decided to show up voted on, carried by those determined enough to keep battling for their papers throughout all the voting sessions until final victory.

bluGill
0 replies
5h25m

It was only 10 years between getting rid of concepts and getting the replacement in.

They got rid of "good concepts" because when attempts were made to implement them, the build for a simple "hello world" took over 15 hours, and the implementers didn't think they could optimize that enough to have acceptable build times. As such, I wouldn't call what they got rid of good concepts - to be good, it needs to be nice not just in an academic paper, but also in the real world.

g15jv2dp
0 replies
1d8h

C++ fears reflection in general and hates macros in particular.

Macros aren't a particular case of reflection... And at least in the way they're done in C++, they're a big source of bugs / spaghetti.

formerly_proven
0 replies
1d7h

C++ fears reflection and hates macros

But only wicked gods would banish us from paradise

Why do they fear our power?

Because evil is what they are.

WalterBright
5 replies
1d1h

Example from the article:

    const win32 = @cImport({
        @cInclude("windows.h");
        @cInclude("winuser.h");
    });

    pub fn main() !void {
        _ = win32.MessageBoxA(null, "world!", "Hello", 0);
    }
Equivalent D:

    import windows, winuser;
    void main() {
        MessageBoxA(null, "world!", "Hello", 0);
    }
In essence, it's pared down to the essentials. The compiler figures out the rest.

Sometimes people ask for a special syntax for importing C files, but I like this simplicity so much better.

throwawaymaths
4 replies
1d1h

Some of us like explicit invocations, not being unsure whether something comes from a .h file or some other importing mechanism (what if a .h file collides with another import mechanism?).

Naked imports are also annoying. Which import did that MessageBoxA come from? windows? or winuser? Is it in the language kernel?

Explicit is better than implicit. The utter pain for the code writer of four or five keystrokes here and there is not worth confounding the code reader.

WalterBright
3 replies
1d1h

Imports are found along the search path (just like .h files are searched for by the C preprocessor). The first one found is the one selected (just like the C preprocessor does).

Which import did that MessageBoxA come from?

If it exists in two or more imports, the compiler will give an ambiguity error. To resolve the ambiguity error, qualify the call with the name of the import:

    windows.MessageBoxA(null, "world!", "Hello", 0);
or, one can do this:

    import windows : MessageBoxA;
or this:

    import windows, winuser;
    alias MessageBoxA = windows.MessageBoxA;
You can be as explicit as you like, and don't need to worry about ambiguity because the compiler will issue an error for that. It works the same way for D imports.

throwawaymaths
2 replies
1d

The compiler giving an error still favors the writer, not the reader, of the code. It seems like in its design D generally favors the writer of code, with all its complicated bells and whistles that one must keep in mind as a reader. I think decades of experience have shown us that it is way better to favor the reader.

WalterBright
1 replies
23h16m

You can do it either way in D. It's nice to have a choice! I use both methods, depending on the context.

mst
0 replies
6h21m

Seems like "if it's obvious where it came from, use the unadorned name, if it'll be non-obvious to a future reader, use the more explicit form" is a solid heuristic and letting the programmer decide is a feature.

(it might be worth having some sort of lint option that will flag using the non-explicit form because I can see large multi-developer projects finding it preferable to add 'use the explicit form' to their local coding style ... but I don't have a dog in this fight so 'free thought, worth exactly what you paid' applies ;)

eska
4 replies
1d9h

Wouldn’t this add at least UINT16_MAX*sizeof(intptr_t) bytes into the executable per enum?

Cloudef
2 replies
1d9h

It adds 65536 pointers to the binary. An alternative would be to use a hash map. I think if they made a function that used an inline for instead it would optimize to a switch. No need for a LUT.

   fn stringFromMsg(umsg: c_int) [:0]const u8 {
       @setEvalBranchQuota(1000000);
       inline for (@typeInfo(win32).Struct.decls) |field| {
           if (field.name.len >= 3 and std.mem.eql(u8, field.name[0..3], "WM_")) {
               if (umsg == @field(win32, field.name)) {
                   return field.name;
               }
           }
       }
       unreachable; // umsg is not valid, programming mistake
   }
godbolt: https://zig.godbolt.org/z/7b73aoosf

In zig-budoux, I also do comptime reflection on the cImport struct to assert at compile time that we won't produce broken runtime code.

https://github.com/Cloudef/zig-budoux/blob/master/src/c.zig#...

flohofwoe
1 replies
1d8h

...instead it would optimize to a switch. No need for a LUT.

IME there is no difference in (optimized) code generation between an if-else chain, a switch, or (like in your example) an unrolled for-loop with an if inside. All those high-level constructs will be optimized into a single lookup table, or a combination of multiple lookup tables and binary search to select between those. Only if there are absolutely no consecutive ranges in the switch-set will a binary search without jump tables be used.

Cloudef
0 replies
1d8h

Note that you can't use a normal for here, as you are accessing comptime-known values. An inline for will unroll the loop so that comptime constants get resolved for testing against runtime variables, and then the optimizer picks out the best code (often jump tables / a switch) to generate. You are right though.

flohofwoe
0 replies
1d9h

I think the executable will essentially contain a lookup table with 2^16 slots and the string data for each macro name matching WM_* in Windows.h. But it's hard to come up with a better solution since the actually required names are unpredictable at compile time. The size of the lookup table could be reduced though if the WM_* values occupy a much smaller range.

classified
4 replies
13h25m

Thank Apple for Reader View in Safari. If you are as incompetent in visual design as the author of that page, you should really stay away from dark mode. Your readers will thank you.

flohofwoe
3 replies
11h24m

What's wrong with green-on-black? I'll take that over amber-on-black any day even if amber is apparently better for CRT burn-in.

rk06
0 replies
9h37m

It is jarring when you use light mode. There is also an issue with contrast. The red `const` gave me enough pain that I switched to Reader View.

mst
0 replies
6h8m

I tend to use green on black for daytime hacking and amber on black in the evenings to reduce blue light issues.

But I also habitually run with jacked up contrast and nerfed brightness, so I make no claims that what works for me would be a good idea for anybody else.

classified
0 replies
6h51m

If you have to ask that, please don't make dark-mode pages.

montyanderson
0 replies
1d5h

I like your site! Seems like Zig is really taking off.

Uptrenda
0 replies
1d5h

Those function definitions really look amazingly readable. I've seen this done before in other languages and it's usually quite horrible. Maybe Zig is worth learning? This is a killer feature.