A 30k-line PR that fixes years of accumulated type resolution quirks — that's not just engineering, that's courage. Pre-1.0 is exactly the right time to take these swings. Hat tip to mlugg for seeing it through.
I would really like to hear from people using Zig in production/semi-serious applications where software stability is important.
How's your experience with the constantly changing language? What are your update/rewrite cycles looking like? Are there cases where packages you use fall behind the language?
I know Bun's using Zig with some degree of success; I was wondering how the rest are doing.
I maintain a ~250K LoC Zig compiler code base [0]. We've been through several breaking Zig releases (although the code base was much smaller for most of that time; Writergate is the main one we've had to deal with since the code base crossed the 100K LoC mark).
The language and stdlib changing hasn't been a major pain point in at least a year or two. There was some upgrade a couple of years ago that took us a while to land (I think it might have been 0.12 -> 0.13 but I could be misremembering the exact version) but it's been smooth sailing for a long time now.
These days I'd put breaking releases in the "minor nuisance" category, and when people ask what I've liked and disliked about using Zig I rarely even remember to bring it up.
I've worked on two "production" Zig codebases: tigerbeetle [0] and sig [1].
These larger Zig projects will stick to a tagged release (which doesn't change), and upgrade to newly tagged releases, usually a few days to a few months after they come out. The upgrade itself takes like a week, depending on the amount of changes to be done. These projects also tend not to use other Zig dependencies.
Two different philosophical approaches with Zig and Rust.
- Zig: Let's have a simple language with as few footguns as possible and make good code easy to write. However, we value explicitness and allow the developer to do anything they need to do. C interoperability is a primary feature that is always available. We have runtime checks for as many areas of undefined behaviour as we can.
- Rust: let's make the compiler the guardian of what is safe to do. Unless the developer hits the escape hatch, we will disallow behaviour to keep the developer safe. To allow the compiler to reason about safety, we will have an intricate type system with concepts like lifetimes and move semantics. This will sometimes get complex, so we will have a macro system to hide that complexity.
Zig is a lot simpler than Rust, but I think it asks more of its developers.
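As a small illustration of the escape hatch mentioned above (my own sketch, not from the comment): Rust lets you create a raw pointer in safe code, but dereferencing it is disallowed until you explicitly opt out of the compiler's guarantees with `unsafe`.

```rust
fn main() {
    let x: i32 = 42;
    // Creating a raw pointer is allowed in safe code...
    let p = &x as *const i32;
    // ...but dereferencing it requires the `unsafe` escape hatch,
    // shifting responsibility from the compiler to the developer.
    let y = unsafe { *p };
    assert_eq!(y, 42);
}
```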
That's disingenuous. Rust tries to minimize errors, first at compile time and then at runtime, even if it comes at some discomfort to the programmer.
Zig goes for simplicity while removing a few footguns. It's more oriented towards programmer enjoyment. Keep in mind that programmers often don't distinguish the ease of writing code from the ease of writing unforeseen errors.
As someone who never liked writing anything C++ since 2000+ (did like it before) I cannot agree with this. C++ and Rust are not comparable in this sense at all.
One can argue Rust is what C++ wanted to be maybe. But C++ as it is now is anything but clean and clear.
It’s like people do it just because Zig is very comparable to C. So the more complex Rust must be like something else that is also complex, right? And C++ is complex, so…
But that is a bit nonsensical. Rust isn’t very close to C++ at all.
I wrote lots of C++ before learning Rust, and I enjoyed it. Since learning Rust, I write no more C++. I found no place in which C++ is a better fit than Rust, and so it's my "new C++".
Memory safety by default in kernel sounds like a good idea :). However I don't think that C is being _replaced_ by Rust code, it's rather that more independent parts that don't need to deeply integrate with the existing C constructs can be written in a memory safe language, and IMO that's a fine tradeoff
Definitely not. Rust gives you a tangible benefit in terms of correctness. It's such a valuable benefit that it outweighs the burden of incorporating a new language in the kernel, with all that comes with it.
Zig offers no such thing. It would be a like-for-like replacement of an unsafe old language with an unsafe new one. May even be a better language, but that's not enough reason to overcome the burden.
My take, unfortunately, is that Zig might be a more modern C but that gives us little we don’t already have.
Rust gives us memory safety by default and some awesome ML-ish type system features among other things, which are things we didn’t already have. Memory safety and almost totally automatic memory management with no runtime are big things too.
Go, meanwhile, is like a cleaner more modern Java with less baggage. You might also compare it to Python, but compiled.
Tbh Go is also really nice for various local tools where you don’t want something as complex as C++ but also don’t want to depend on the full C# runtime (or large bundles when self-contained), or the same with Java.
With Wails it’s also a low friction way to build desktop software (using the heretical web tech that people often reach for, even for this use case), though there are a few GUI frameworks as well.
Either way, self contained executables that are easy to make and during development give you a rich standard library and not too hard of a language to use go a long way!
Go has a garbage collector though. This makes it unsuitable for many use cases where you could have used C or C++ in the past. Rust and Zig don't have a GC, so they are able to fill this role.
GC is a showstopper for my day job (hard realtime industrial machine control/robotics), but would also be unwanted for other use cases where worst case latency is important, such as realtime audio/video processing, games (where you don't want stutter, remember Minecraft in Java?), servers where tail latency matters a lot, etc.
> GC is a showstopper for my day job (hard realtime industrial machine control/robotics)
Which is a very niche use case to begin with, isn't it? It doesn't really contradict what the parent comment stated about Go feeling like modern C (with a Boehm GC included, if you will). We're using it this way and it feels just fine. I'd be happy to see parts of our C codebase rewritten in Go, but since that code is security sensitive and has already been through a number of security reviews, there's little motivation to do so.
> Which is a very niche use case to begin with, isn't it?
My specific use case is yes, but there are a ton of microcontrollers running realtime tasks all around us: brakes in cars, washing machine controllers, PID loops to regulate fans in your computer, ...
Embedded systems in general are far more common than "normal" computers, and many of them have varying levels of realtime requirements. Don't believe me? Every classical computer or phone will contain multiple microcontrollers, such as an SSD controller, a fan controller, wifi module, cellular baseband processor, ethernet NIC, etc. Depending on the exact specs of your device of course. Each SOC, CPU or GPU will contain multiple hidden helper cores that effectively run as embedded systems (Intel ME, AMD PSP, thermal management, and more). Add to that all the appliances, cars, toys, IOT things, smartcards, etc all around us.
No, I don't think it is niche. Fewer people may work on these, but they run in far more places.
It's been a non-issue for us at tvScientific. Once or twice a year you settle in for a mass refactor, and when that's done you move on with your day.
Packages do fall behind. We only use a couple, so it's pretty easy to point to an internal fork while we wait for upstream to update or to accept our updates. That'd probably be a pain point if you were using a lot of them.
Zig 0.15 is pretty stable. The biggest issue I face daily is silent compiler crashes (SIGBUS) for trivial things, e.g. a typo in an import path. I've yet to find exactly why this [only sometimes] causes such a crash, but they're a real pain to figure out over a large changeset. `zig ast-check` sometimes catches the error; otherwise Claude's pretty good at spotting where I accidentally re-used a variable name (again, 90% of the time I do that, it's an easy error, but the other 10%, I get a message-less compiler crash). It sounds like the changes in the OP might be specifically addressing these types of issues.
Also, my .zig-cache is currently at 173GB, which causes some issues on the small Linux ARM VPS I test with.
As for upgrades. I upgraded lightpanda to 0.14 then 0.15 and it was fine. I think for lightpanda, the 0.16 changes might not be too bad, with the only potential issue coming from our use of libcurl and our small websocket server (for CDP connections). Those layers are relatively isolated / abstracted, so I'm hopeful.
As a library developer, I've given up following / tracking 0.16. For one, the changes don't resonate with me, and for another, it's changing far too fast. I don't think anyone expects 0.16 support in a library right now. I've gotten PRs for my "dev" branches from a few brave souls and everyone seems happy with that arrangement.
> The biggest issue I face daily is silent compiler crashes (SIGBUS) for trivial things, e.g. a typo in an import path.
I don't use zig. My experience has been that caches themselves are sources of bugs (not talking about zig only, but in general). Clearing all relevant caches occasionally is useful when you're experiencing weird bugs.
I don't know why I was downvoted here. One day, I was experiencing weird compilation errors. Clearing the `ccache` C/C++ compiler cache helped get past the problem. Yes, I could have investigated in detail what was the issue and if ccache had a bug but sometimes you don't have the luxury of investigating everything your toolchain throws at you.
> zig's caching system is designed explicitly so that garbage collection could happen in one process simultaneously while the cache is being used by another process.
> I just ran WizTree to find out why my disk was full, and the zig cache for one project alone was like 140 GB.
> not only the .zig-cache directory in my projects, but the global zig cache directory which is caching various dependencies: I'm finding each week I have to clear both caches to prevent run-away disk space
Like what's going on? This doesn't seem normal at all. I also read somewhere that zig stores every version of your binary as well? Can you shed some light on why it works like this in zigland?
AFAIK garbage collection is basically not implemented yet. I myself do `ZIG_LOCAL_CACHE_DIR=~/.cache/zig` so I only have to nuke single directory whenever I feel like it.
20 seconds each time. Last time I tried to enable incremental build, it wasn't working for us. It was a while ago, but I think it had to do with something in our v8 bridge.
The forever backwards compatible promise of C++ was a tremendous design mistake that has resulted in mindshare death as of 2026. It might suck to have to modify your code to continue to get it to work, but it’s the right long term approach.
Mindshare death is a very large overstatement given the massive amount of legacy C++ out there that will be maintained by poor souls for year to come. But you are right, there used to be a great language hiding within C++ if the committee ever dared to break backwards compat. But even if they did it now it would be too late and they'd just end up with a worse Rust or Zig.
The biggest problem with C++ is that while everyone agrees there is a great language hiding in it, everyone also has a remarkably different idea of what that great language actually is.
I don't agree there's a great language hiding in C++. My high level objections would be that the type system is garbage and the syntax is terrible, so you'd need a different type system and syntax and that's nothing close to C++ after the changes.
After many years of insisting that "dialects" of C++ are a terrible idea, despite the reality that most C++ users have a specific dialect they use - Bjarne Stroustrup has endorsed essentially the same thing but as "profiles" to address safety issues. So for people who think there is a "great language" in there perhaps in C++ 29 or C++ 32 you will be able to find out for yourselves that you're wrong.
The C++ standards committee’s antiquated reliance on compiler “vendors” holds it back. They should adopt maintenance of clang and bless it as the reference compiler.
- Cranelift applies fewer optimizations in exchange for faster compilation times, because it was developed to compile WASM (wasmtime), but it turns out that is good enough for Rust debug builds.
- Cranelift does not support the wide range of platforms (AFAIK just X86_64 and some ARM targets)
Looking at LLVM build times, I seriously believe that C would have been the better choice :/ (it wouldn't be a problem if LLVM weren't the base for so many other projects)
Same for the Metal shading language. C++ adds exactly nothing useful to a shading language over a C dialect that's extended with vector and matrix math types (at least they didn't pick ObjC or Swift though).
Hilariously, they broke this compatibility. std::auto_ptr was an abomination, but removing it from the language was needless and undermined the long term stability that differentiates C++ from upstarts.
The language itself does not change much, but the std does. It depends on individuals, but some people rely less on the std, some copy the old code that they still need.
> Are there cases where packages you may use fall behind the language?
Using third party packages is quite problematic yes. I don't recommend using them too much personally, unless you want to make more work for yourself.
Pinning your build chain and tooling to specific commits helps with stability but also traps you with old bugs or missing features. Chasing upstream fixes is a chore if you miss a release window and suddenly some package you depend on won't even compile anymore.
I recently tried to learn it and found it frustrating. A lot of docs are for 0.15 but the latest is (or was) 0.16 which changed a lot of std so none of the existing write ups were valid anymore. I plan to revisit once it gets more stable because I do like it when I get it to work.
I stopped updating the compiler at 0.14 for three projects. Getting the correct toolchain is part of my (incremental) build process. I don't use any external Zig packages.
I think one of the more PITA changes necessary to get these projects to 0.15 is removing `usingnamespace`, which I've used to implement a kind of mixin. The projects are all a few thousand LOC and it shouldn't be that much trouble, but enough trouble that none of what I gain from upgrading currently justifies doing it. I think that's fine.
Mitchell Hashimoto (developer of Ghostty) talks about Zig a lot. Ghostty is written in it, and he seems to love it. The churn doesn't seem to bother him at all.
Bun uses JSC (JavaScriptCore) instead of V8. From what I understand, whereas Node/V8 has a higher tier 4 "top speed", JSC is more optimized for memory and is faster to tier up early/less overhead. Good for serverless. Great for agents -> Anthropic purchase.
> Good for serverless. Great for agents -> Anthropic purchase.
Surely nobody would use javascript for either yea? The weaknesses of the language are amplified in constrained environments: low certainty, high memory pressure, high startup costs.
I think Bun helps with the memory pressure, granted this is relative to V8. I'd push back on the certainty with the reality that TS provides a significant drop in entropy while benefiting from what is a sweet spot between massive corpus size and low barrier for typical problem/use-case complexity. You'll never have the fastest product with JS, but you will always have good speed to market and be able to move quickly.
It's plenty usable. Most of the problems with the claude TUI stem from it being a TUI (no way to query the terminal emulator's displayed character grid), so you have to maintain your own state and try to keep them in sync, which more than a few TUIs will fail at at least sometimes, hence the popularity of conventions like Ctrl+L to redraw.
The actual JS code is in the same ballpark as nodejs. They get fast by specializing to each platform's fastest APIs instead of using generic ones, reimplementing JS libraries in Zig (for example npm or jest) and using faster libraries (for example they use the oniguruma regex engine). Also you don't need an extra transpiling step when using TypeScript.
I am impressed by the achievements of the Zig team. I use the ghostty terminal emulator regularly -- it is built in Zig and it is super stable. It is a fantastic piece of software.
This makes me feel that the underlying technology behind Zig is solid.
But I prefer Rust over Zig. The main difference is Rust chooses a "closed world" model while Zig chooses an "open world" model: in Rust, you must explicitly implement a trait while in Zig as long as the shape fits, or the `.` on a structure member exists (for whichever type you pass in), it will work (I don't use Zig so pardon hand wavy description).
This gives Zig very powerful meta programming abilities but is a pain because you don't know what kind of type "shapes" will be used in a particular piece of code. Zig is similar to C++ templates in some respects.
This has a ripple effect everywhere. Rust generated documentation is very rich and explicit about what functions a structure supports (as each trait is explicitly enrolled and implemented). In Zig the dynamic nature of the code becomes a problem with autocomplete, documentation, LSP support, ...
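A minimal Rust sketch of the "closed world" side of this contrast (the `Area` trait and `Square` type are made up for illustration): a generic function can only accept types that have explicitly opted in to a trait, which is exactly what makes the generated documentation able to list every supported type.

```rust
// A trait must be explicitly implemented; a type with a matching
// `area` method but no `impl Area for ...` block would be rejected.
trait Area {
    fn area(&self) -> f64;
}

struct Square {
    side: f64,
}

impl Area for Square {
    fn area(&self) -> f64 {
        self.side * self.side
    }
}

// Only types that explicitly opted in to `Area` can be passed here.
fn total_area<T: Area>(shapes: &[T]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let squares = [Square { side: 2.0 }, Square { side: 3.0 }];
    assert_eq!(total_area(&squares), 13.0);
}
```

Zig's `anytype`, by contrast, accepts any value whose "shape" lets the function body compile, with no up-front declaration of the requirement.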
> But I prefer Rust over Zig. The main difference is Rust chooses a "closed world" model while Zig chooses an "open world" model: in Rust, you must explicitly implement a trait while in Zig as long as the shape fits, or the `.` on a structure member exists (for whichever type you pass in), it will work (I don't use Zig so pardon hand wavy description).
Do you happen to have a more specific example by any chance? I’d be interested in what this looks like in practice, because what you described sounds a bit like Go interfaces and from my understanding of Zig, there’s no direct equivalent to it, other than variations of fieldParentPtr.
They are referring to `anytype`, which is a comptime construct telling the compiler that the parameter can be of any type and as long as the code compiles with the given value, it's good.
It's an extremely useful thing, but unconstrained, it's essentially duck typing at compile time. People have been wanting some kind of trait/interface support to constrain it, but it's unlikely to happen.
Conceptually it's quite similar to how C++ templates work (not including C++20 concepts, which is the kind of constraining you're talking about).
I quite like it when writing C++ code. It makes it dead easy to write code like `min` that works for any type in a generic way. It is, however, arguably the main culprit behind C++'s terrible compiler errors, because you'll have standard library functions with a stack of fifteen generic calls, and it fails really deeply on some obscure inner thing which has some kind of type requirement, and it's really hard to trace back what you actually did wrong.
In my (quite limited) experience, Zig largely avoids this by having a MUCH simpler type system than C++, and the standard library written by a sane person. Zig seems "best of both worlds" in this regard.
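For comparison, here is what the constrained version of that generic `min` looks like in Rust (my own sketch; `min2` is a hypothetical name, not a standard function): the explicit `Ord` bound states the requirement up front, so a call with a non-orderable type fails at the call site rather than fifteen frames deep inside generic code.

```rust
// The explicit `Ord` bound documents the requirement up front;
// calling `min2` with a type that isn't totally ordered is rejected
// at the call site instead of deep inside the function body.
fn min2<T: Ord>(a: T, b: T) -> T {
    if a <= b { a } else { b }
}

fn main() {
    assert_eq!(min2(3, 5), 3);
    assert_eq!(min2("pear", "apple"), "apple");
}
```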
A 30K-line PR for type resolution sounds scary but this is exactly what pre-1.0 means. Rust went through multiple rounds of similar foundational rework before stabilization, remember the old sigil-based pointer syntax. Getting the type system formalized before committing to backwards compat is worth temporary churn. The real test is whether this new design makes future type system changes smaller and more incremental rather than requiring another 30K-line rewrite.
It’s good to see that this is finally addressed. It’s been a well known broken part of the language semantics for years.
There are similar hidden quirks in the language that will need to be addressed at some point, such as integer promotion semantics.
To address the question about stability: the Zig community is already used to Zig breaking between 0.x versions. Unlike competitors such as Odin or my own C3, there is no expectation that Zig is trying to minimize upgrading problems.
This is a cultural thing, it would be no real problem to be clear about deprecations, but in the Zig community it’s simply not valued. In fact it’s a source of pride to be able to adapt as fast as possible to the new changes.
I like to talk about expectation management, and this is a great example of it.
In discussions, it is often falsely argued that "Zig is not 1.0 so breaks are expected" in order to motivate the frequent breaks. However, there are degrees to how you handle breaks, and Zig is clearly in the "we don't care to reduce the work" camp.
If someone is trying to get a more objective look at the Zig upgrade path, then it’s worth keeping in mind that the tradition in Zig is to offload all the work on the user.
The argument, which is frequently voiced, is that "breaking things will make the language get better and so it's good that there are language breaks".
It is certainly true that breaking changes are needed, but most people outside of the Zig community would expect them to be done with more care (deprecation paths, etc.).
Secondly, it should perhaps be a concern for Zig, now at 10 years old, to still produce solidly breaking code every half year.
10 years is the common point where languages go 1.0. However, the outlook for a Zig 1.0 is bleak from what I gather from Zig social forums: the most optimistic estimate I’ve heard is 2029 for 1.0.
This means that in the future, projects using Zig can still expect any libraries and applications to bitrot quickly if they are not constantly maintained.
Putting this in contrast with Odin (9 years old) which is essentially 1.0 already and has been stable for several years.
Maybe this also explains the difference in actual output. For example, the number of games I know of written in Odin is somewhere between 5 and 10 times as many as Zig games. Weighing in that Zig has maybe 5 or 10 times as many users, it means Odin users are somewhere between 20 and 100 times as likely to have written a playable game.
There are several explanations as to why this is: we could discuss whether the availability of SDL, Raylib etc. is easier on Odin (then why is Zig less friendly?), maybe Odin has better programmers (then why do better programmers choose Odin over Zig?), or maybe it's just easier to write resource-intensive applications with Odin than Zig (then what do we make of Zig's claim of optimality?).
If we look past the excuses made for Zig ("it's easy to fix breaks", "it's not 1.0") and the hype ("Zig is much safer than C", "Zig makes me so productive") and compare with Odin in actual productivity, stability and compilation speed (neither C3 nor Odin requires 100s of GB of cache to compile in less than a second using LLVM), then Zig is not looking particularly good.
Even things like build.zig, often touted as a great thing, is making it really hard for a Zig beginner ("to build your first Hello World, first understand this build script in non-trivial Zig"). Then for IDEs, suddenly something like just reading the configuration of what is going to be used for building is hidden behind an opaque Zig script. These trade-offs are rarely talked about, as both criticism and hype is usually based on surface rather than depth.
Well, that’s long enough of a comment.
To round it off I'd like to end on a positive note: I find the Zig community nice and welcoming. So if you want to try Zig out, do so; don't let others' opinions (including mine) prevent you from trying things out.
If you want to evaluate Zig against competitors, I’d recommend comparing it to D, Odin, Jai and C3.
> Secondly, it should perhaps be a concern for Zig, now at 10 years old, to still produce solidly breaking code every half year.
Not at all, if the team needs 30 more years they should take it.
> However, the outlook for a Zig 1.0 is bleak from what I gather from Zig social forums: the most optimistic estimate I’ve heard is 2029 for 1.0.
Funny you see it as bleak when most of the community sees it as the most exciting thing in systems programming happening right now.
I think your comment is in bad faith; all the big Zig projects say that the upgrade path is never a main concern. Just read the HN comments here or on other Zig threads: people ask about this a lot and maintainers always answer.
Congratulations to the dev, a 30,000 line PR for a language compiler (and a very much non-trivial compiler) is a feat to be proud of. But a change of this magnitude is a serious bit of development and gave me pause.
I understand both of the following:
1. Language development is a tricky subject, in general, but especially for those languages looking for wide adoption or hoping for ‘generational’ (program life span being measured in multiple decades) usage in infrastructure, etc.
2. Zig is a young-ish language, not at 1.0, and explicitly evolving as of the posting of TFA
With those points as caveats, I find the casualness of the following (from the Codeberg post linked on the devlog) surprising:
> This branch changes the semantics of "uninstantiable" types (things like noreturn, that is, types which contain no values). I wasn't originally planning to do this here, but matching the semantics of master was pretty difficult because the existing semantics don't make much sense.
I don’t know Zig’s particular strategy and terminology for language and compiler development, but I would assume the usage of ‘branch’ here implies this is not a change fully/formally adopted by the language but more a fully implemented proposal. Even if it is just a proposal for change, the large scale of the rewrite and the clear implication that the author expects it to be well received strikes me as uncommon confidence. Changing the semantics of a language with any production use is nearly definitionally MAJOR; to just blithely state your PR changes semantics and proceed with no deep discussion (which could have previously happened, IDK) or serious justification or statements concerning the limited effect of those changes is not something I have experienced watching the evolution (or de-evolution) of other less ‘serious’ languages.
Is this a “this dev” thing, a Zig thing, or am just out of touch with modern language (or even larger scale development) projects?
Also, not particularly important or really significant to the overall thrust of TFA, but the author uses the phrase “modern Zig”, which given Zig’s age and seeming rate of change currently struck me as a very funny turn of phrase.
When someone's communication is casual and informal, without any context, you really can't distinguish between:
* The author is being flippant and not taking the situation seriously enough.
* The author is presuming a high-trust audience that knows that they have done all the due diligence and don't have to restate all of that.
In this case, it's a devlog (i.e. not a "marketing post") for a language that isn't at 1.0 yet. A certain amount of "if you're here, you probably have some background" is probably reasonable.
The post does link directly to the PR and the PR has a lot more context that clearly conveys the author knows what they are doing.
It is weird reading about (minor) breaking language changes sort of mentioned in passing. We're used to languages being extremely stable. But Zig isn't 1.0 yet. Andrew and friends certainly take user stability seriously, but you signed up for a certain amount of breakage if you pick the language today.
As someone who maintains a post-1.0 language, there really is a lot of value in breaking changes like this. It's good to fix things while your userbase is small. It's maddening to have to live with obvious warts in the language simply because the userbase got too big for you to feasibly fix it, even when all the users wish you could fix it too. (Witness: The broken precedence of bitwise operators in C.)
It's better for all future users to get the language as clean and solid as you can while it's still malleable.
mlugg is one of the core contributors of Zig, and is a member of the Zig foundation iirc. They've been wanting to work on dependency resolution for a while now, so I'm really glad they're cleaning this up (I've been bitten before by unclear circular dependency errors). There's not a formal language spec yet, since it's moving pretty fast, but tbh I don't see the need for a standard, since that's not one of their goals currently.
> A (fairly uncontroversial) subset of this behavior was implemented in [the changeset we are discussing]. I'll close this for now, though I'll probably end up revisiting these semantics more precisely at some point, in which case I'll open a new issue on Codeberg.
I don't know how evident this is to the casual HN reader, but to me this changeset very obviously moves Zig the language a large degree from experimental territory towards being formally specified, because it makes type resolution a Directed Acyclic Graph. Just look at how many bugs it resolved to get a feel for it. This changeset alone will make the next release of the compiler significantly more robust.
Now, I like talking about its design and development, but all that being said, Zig project does not aim for full transparency. It says right there in the README:
> Zig is Free and Open Source Software. We welcome bug reports and patches from everyone. However, keep in mind that Zig governance is BDFN (Benevolent Dictator For Now) which means that Andrew Kelley has final say on the design and implementation of everything.
It's up to you to decide whether the language and project are in trustworthy hands. I can tell you this much: we (the dev team) have a strong vision and we care deeply about the project, both to fulfill our own dreams as well as those of our esteemed users whom we serve[1]. Furthermore, as a 501(c)(3) non-profit we have no motive to enshittify.
Where does the name "zero type" come from? In type theory this is called an "empty" type because the set of values of this type is empty and I couldn't find (though I have no idea where to start) mention of it as a "zero" type.
This stuff is foundational and so it's certainly a priority to get it right (which C++ didn't and will be paying for until it finally collapses under its own weight) but it's easier to follow as an outsider when people use conventional terminology.
The ZSTs are unit types. They have one value, which we usually write as just the type name, so e.g.

    struct Goose;
    let x = Goose; // The variable x has type Goose, but also value Goose, the only value of that type

The choice to underscore that Rust's unit types have size zero is to contrast with languages like C or C++ where these types, which don't need representing, must nevertheless take up a whole byte of storage and it's just wasted.

But what we're talking about here are empty types. In Rust we'd write this:

    enum Donkey {}
    // We can't make any variables with the Donkey type, because there are no values of this type and a variable needs a value
No, that’s a different thing. “noreturn” is like Rust’s “never” type (spelled as an exclamation mark, !). Also known as an “uninhabited type” in programming language theory.
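A runnable sketch of the distinction being drawn in this subthread (the names `Goose` and `Never` are illustrative): a unit type has exactly one zero-sized value, while an empty type has no values at all, so a `match` on it needs no arms.

```rust
#[derive(Debug, PartialEq)]
struct Goose; // unit type: one value, zero size

enum Never {} // empty type: no values, analogous to Rust's `!` / Zig's noreturn

// Because `Never` has no values, the Err arm can never be reached,
// and an empty match on it is enough to satisfy the compiler.
fn unwrap_infallible<T>(r: Result<T, Never>) -> T {
    match r {
        Ok(v) => v,
        Err(e) => match e {},
    }
}

fn main() {
    assert_eq!(std::mem::size_of::<Goose>(), 0);
    let x = Goose; // the variable's value *is* the type's only value
    assert_eq!(x, Goose);
    let r: Result<i32, Never> = Ok(42);
    assert_eq!(unwrap_infallible(r), 42);
}
```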
I don't use Zig (it's not a language for me), but I like to read/watch about it, when you or others talk about the language design and the reasons behind the changes.
Just thinking out loud here, perhaps behavior like this has been more normalized because of the total shitshow that C is. Which followed all these supposedly correct rules.
They literally ended the comment by asking. I don't get why you're criticizing them for providing context that they're not an expert. You're literally distracting from the comment you're telling them has useful information on it, but in a weirdly aggressive way.
A 30k-line PR that fixes years of accumulated type resolution quirks — that's not just engineering, that's courage. Pre-1.0 is exactly the right time to take these swings. Hat tip to mlugg for seeing it through.
I would really like to hear from people using Zig in production/semi-serious applications; where software stability is important.
How's your experience with the constantly changing language? What do your update/rewrite cycles look like? Are there cases where packages you use fall behind the language?
I know Bun's using zig to a degree of success, was wondering how the rest were doing.
I maintain a ~250K LoC Zig compiler code base [0]. We've been through several breaking Zig releases (although the code base was much smaller for most of that time; Writergate is the main one we've had to deal with since the code base crossed the 100K LoC mark).
The language and stdlib changing hasn't been a major pain point in at least a year or two. There was some upgrade a couple of years ago that took us a while to land (I think it might have been 0.12 -> 0.13 but I could be misremembering the exact version), but it's been smooth sailing for a long time now.
These days I'd put breaking releases in the "minor nuisance" category, and when people ask what I've liked and disliked about using Zig I rarely even remember to bring it up.
[0]: https://github.com/roc-lang/roc
At this point do you believe porting the codebase to Zig was the right decision? Do you have any regrets?
Also, I'm excited about trying out your language even moreso than Zig. :)
I've worked on two "production" zig codebases: tigerbeetle [0] and sig [1].
These larger zig projects will stick to a tagged release (which doesn't change), and upgrade to newly tagged releases, usually a few days or months after they come out. The upgrade itself takes like a week, depending on the amount of changes to be done. These projects also tend to not use other zig dependencies.
[0]: https://github.com/tigerbeetle/tigerbeetle/pulls?q=is%3Apr+a...
[1]: https://github.com/Syndica/sig/pulls?q=is%3Apr+author%3Akpro...
I really wanted to deep dive into Zig, but I'm into Rust now. I started kind of late, only around 2024.
Have you tried Rust? How does it compare to Zig?
* just asking
Two different philosophical approaches with Zig and Rust.
- Zig: Let's have a simple language with as few footguns as possible and make good code easy to write. However, we value explicitness and allow the developer to do anything they need to do. C interoperability is a primary feature that is always available. We have runtime checks for as many areas of undefined behaviour as we can.
- Rust: let's make the compiler the guardian of what is safe to do. Unless the developer hits the escape hatch, we will disallow behaviour to keep the developer safe. To allow the compiler to reason about safety we will have an intricate type system which will contain concepts like lifetimes and data mobility. This will get complex sometimes so we will have a macro system to hide that complexity.
Zig is a lot simpler than Rust, but I think it asks more of its developers.
> However we value explicitness and allow the developer to do anything they need to do*
* except for having unused variables. Those are so dangerous the compiler will refuse the code every time.
That's disingenuous. Rust tries to minimize errors, first at compile time and then at runtime, even at some discomfort to the programmer.
Zig goes for simplicity while removing a few footguns. It's more oriented towards programmer enjoyment. Keep in mind that programmers don't distinguish ease of writing code from ease of writing unforeseen errors.
Rust is a Bugatti Veyron, Zig is a McLaren F1.
Zig is a modern C,
Rust is a modern C++/OCaml
So if you enjoy C++, Rust is for you. If you enjoy C and wish it was more verbose and more modern, try Zig.
As someone who never liked writing anything C++ since 2000+ (did like it before) I cannot agree with this. C++ and Rust are not comparable in this sense at all.
One can argue Rust is what C++ wanted to be maybe. But C++ as it is now is anything but clean and clear.
I found swift way more enjoyable than rust as a C++ alternative. It even has first class-ish interop now.
Comparing Rust to C++ feels strange to me.
It’s like people do it just because Zig is very comparable to C. So the more complex Rust must be like something else that is also complex, right? And C++ is complex, so…
But that is a bit nonsensical. Rust isn’t very close to C++ at all.
I wrote lots of C++ before learning Rust, and I enjoyed it. Since learning Rust, I write no more C++. I found no place in which C++ is a better fit than Rust, and so it's my "new C++".
For example, high performance servers (voltlane.net), programming languages (https://github.com/HF-Foundation, https://github.com/lionkor/mcl-rs, and one private one), webservers (beampaint.com) and lots of other domains.
Rust is close to C++ in that it is a systems language that allows a reasonable level of zero-cost abstractions.
It is kind of interesting that the Linux kernel is slowly adopting Rust, whereas Zig seems like it would be a more natural fit?
I know, timelines not matching up, etc.
Memory safety by default in kernel sounds like a good idea :). However I don't think that C is being _replaced_ by Rust code, it's rather that more independent parts that don't need to deeply integrate with the existing C constructs can be written in a memory safe language, and IMO that's a fine tradeoff
Definitely not. Rust gives you a tangible benefit in terms of correctness. It's such a valuable benefit that it outweighs the burden of incorporating a new language in the kernel, with all that comes with it.
Zig offers no such thing. It would be a like-for-like replacement of an unsafe old language with an unsafe new one. May even be a better language, but that's not enough reason to overcome the burden.
And “if you enjoy C++/if you enjoy C” are gross oversimplifications.
My take, unfortunately, is that Zig might be a more modern C but that gives us little we don’t already have.
Rust gives us memory safety by default and some awesome ML-ish type system features among other things, which are things we didn’t already have. Memory safety and almost totally automatic memory management with no runtime are big things too.
Go, meanwhile, is like a cleaner more modern Java with less baggage. You might also compare it to Python, but compiled.
Seriously asking: where does Go sit in this categorization?
Nowhere, or wherever C# would sit. Go is a high level managed language.
Go is modern Java, at least based on the main area of usage: server infrastructure and backend services.
Tbh Go is also really nice for various local tools where you don’t want something as complex as C++ but also don’t want to depend on the full C# runtime (or large bundles when self-contained), or the same with Java.
With Wails it’s also a low friction way to build desktop software (using the heretical web tech that people often reach for, even for this use case), though there are a few GUI frameworks as well.
Either way, self contained executables that are easy to make and during development give you a rich standard library and not too hard of a language to use go a long way!
I wonder what makes Go more modern than Java, in terms of features.
The tooling and dependency management probably
It's also a modern C.
If you enjoy C and wish it was less verbose and more modern, try Go.
Go has a garbage collector though. This makes it unsuitable for many use cases where you could have used C or C++ in the past. Rust and Zig don't have a GC, so they are able to fill this role.
GC is a showstopper for my day job (hard realtime industrial machine control/robotics), but would also be unwanted for other use cases where worst case latency is important, such as realtime audio/video processing, games (where you don't want stutter, remember Minecraft in Java?), servers where tail latency matters a lot, etc.
> GC is a showstopper for my day job (hard realtime industrial machine control/robotics)
Which is a very niche use case to begin with, isn't it? It doesn't really contradict what the parent comment stated about Go feeling like modern C (with a boehm gc included if you will). We're using it this way and it feels just fine. I'd be happy to see parts of our C codebase rewritten in Go, but since that code is security sensitive and has already been through a number of security reviews there's little motivation to do so.
> Which is a very niche use case to begin with, isn't it?
My specific use case is yes, but there are a ton of microcontrollers running realtime tasks all around us: brakes in cars, washing machine controllers, PID loops to regulate fans in your computer, ...
Embedded systems in general are far more common than "normal" computers, and many of them have varying levels of realtime requirements. Don't believe me? Every classical computer or phone will contain multiple microcontrollers, such as an SSD controller, a fan controller, wifi module, cellular baseband processor, ethernet NIC, etc. Depending on the exact specs of your device of course. Each SOC, CPU or GPU will contain multiple hidden helper cores that effectively run as embedded systems (Intel ME, AMD PSP, thermal management, and more). Add to that all the appliances, cars, toys, IOT things, smartcards, etc all around us.
No, I don't think it is niche. Fewer people may work on these, but they run in far more places.
Thanks. I write some Go, and feel the same about it. I really enjoy it actually.
Maybe I'll jump to Zig as a side-gig (ha, it rhymes), but I still can't motivate myself to play with Rust. I'm happy with C++ in that regard.
Maybe gccrs will change that, IDK, yet.
C++ added OOP to C.
Rust is not object-oriented.
That makes your statement wrong.
Time to start zig++
It's been a non-issue for us at tvScientific. Once or twice a year you settle in for a mass refactor, and when that's done you move on with your day.
Packages do fall behind. We only use a couple, so it's pretty easy to point to an internal fork while we wait for upstream to update or to accept our updates. That'd probably be a pain point if you were using a lot of them.
Zig 0.15 is pretty stable. The biggest issue I face daily is message-less compiler crashes (SIGBUS) for trivial things, e.g. a typo in an import path. I've yet to find exactly why this [only sometimes] causes such a crash, but they're a real pain to figure out over a large changeset. `zig ast-check` sometimes catches the error; otherwise Claude's pretty good at spotting where I accidentally re-used a variable name (again, 90% of the time I do that, it's an easy error, but the other 10%, I get a message-less compiler crash). It sounds like the changes in the OP might be specifically addressing these types of issues.
Also, my .zig-cache is currently at 173GB, which causes some issues on the small Linux ARM VPS I test with.
As for upgrades. I upgraded lightpanda to 0.14 then 0.15 and it was fine. I think for lightpanda, the 0.16 changes might not be too bad, with the only potential issue coming from our use of libcurl and our small websocket server (for CDP connections). Those layers are relatively isolated / abstracted, so I'm hopeful.
As a library developer, I've given up following / tracking 0.16. For one, the changes don't resonate with me, and for another, it's changing far too fast. I don't think anyone expects 0.16 support in a library right now. I've gotten PRs for my "dev" branches from a few brave souls and everyone seems happy with that arrangement.
> The biggest issue I face daily are silent compiler errors (SIGBUS) for trivial things, e.g. a typo in an import path.
I don't use zig. My experience has been that caches themselves are sources of bugs (not talking about zig only, but in general). Clearing all relevant caches occasionally is useful when you're experiencing weird bugs.
I don't know why I was downvoted here. One day, I was experiencing weird compilation errors. Clearing the `ccache` C/C++ compiler cache helped get past the problem. Yes, I could have investigated in detail what was the issue and if ccache had a bug but sometimes you don't have the luxury of investigating everything your toolchain throws at you.
That .zig-cache seems massive to me. I keep mine on a tmpfs and remove it every time the tmpfs is full.
Do you see any major problems when you remove your .zig-cache and start over?
Just a slower build. From ~20 seconds to ~65 seconds the first time after I nuke it.
But why is it so big in the first place?
I was searching around for causes and came across the following issues: https://github.com/ziglang/zig/issues/15358 which was moved to https://codeberg.org/ziglang/zig/issues/30193
The following quotes stand out
> zig's caching system is designed explicitly so that garbage collection could happen in one process simultaneously while the cache is being used by another process.
> I just ran WizTree to find out why my disk was full, and the zig cache for one project alone was like 140 GB.
> not only the .zig-cache directory in my projects, but the global zig cache directory which is caching various dependencies: I'm finding each week I have to clear both caches to prevent run-away disk space
Like what's going on? This doesn't seem normal at all. I also read somewhere that zig stores every version of your binary as well? Can you shed some light on why it works like this in zigland?
AFAIK garbage collection is basically not implemented yet. I myself do `ZIG_LOCAL_CACHE_DIR=~/.cache/zig` so I only have to nuke single directory whenever I feel like it.
What exactly do people call 'garbage collection' in Zig? Build cache cleanup?
Indeed what was referred to here is the zig build system cache.
Does Zig have incremental builds yet? Or is it 20 secs each time for your build.
20 seconds each time. Last time I tried to enable incremental build, it wasn't working for us. It was a while ago, but I think it had to do with something in our v8 bridge.
I have heard that from other Zig devs too. Must get a bit annoying as the project grows. But I guess it will be supported sooner or later.
The forever backwards compatible promise of C++ was a tremendous design mistake that has resulted in mindshare death as of 2026. It might suck to have to modify your code to continue to get it to work, but it’s the right long term approach.
The forever backwards compatible promise of C++ is pretty much the main reason anyone is using C++.
> that has resulted in mindshare death as of 2026
I could make a bet that as of 2026 still more C++ projects are being started than Rust + Zig combined.
The world is much more vast than Show HN and GitHub would indicate.
Being started? I would take that bet.
Rust has managed just fine to remain mostly backwards compatible since 1.0 , while still allowing for evolution of the language through editions.
This puts much more work on the compiler development side, but it's a great boon for the ecosystem.
To be fair, zig is pre 1.0, but Zig is also already 8 years old. Rust turned 1.0 at ~ 5 years, I think.
Rust started in 2006 and reached v1 in 2015, that's 9 years.
Mindshare death is a very large overstatement given the massive amount of legacy C++ out there that will be maintained by poor souls for years to come. But you are right, there used to be a great language hiding within C++ if the committee ever dared to break backwards compat. But even if they did it now it would be too late and they'd just end up with a worse Rust or Zig.
The biggest problem with C++ is that while everyone agrees there is a great language hiding in it, everyone also has a remarkably different idea of what that great language actually is.
I don't agree there's a great language hiding in C++. My high level objections would be that the type system is garbage and the syntax is terrible, so you'd need a different type system and syntax and that's nothing close to C++ after the changes.
After many years of insisting that "dialects" of C++ are a terrible idea, despite the reality that most C++ users have a specific dialect they use - Bjarne Stroustrup has endorsed essentially the same thing but as "profiles" to address safety issues. So for people who think there is a "great language" in there perhaps in C++ 29 or C++ 32 you will be able to find out for yourselves that you're wrong.
There are multiple great languages hiding within it
As proven a few times, it doesn't matter if committee decides to break something if compiler vendors aren't on board with what is being broken.
There is still this disconnection on how languages under ISO process work in the industry.
The C++ standards committee’s antiquated reliance on compiler “vendors” holds it back. They should adopt maintenance of clang and bless it as the reference compiler.
There is a reason GCC, LLVM, CUDA, Metal, HPC,.. rely on C++ and will never rewrite to something else, including Zig.
Yes, inertia. If those projects started today, they would likely choose rust.
Why isn't rustc using Cranelift then?
I can think a few reasons:
- Cranelift applies fewer optimizations in exchange for faster compilation times, because it was developed to compile WASM (wasmtime), but it turns out that's good enough for Rust debug builds.
- Cranelift does not support the wide range of platforms (AFAIK just X86_64 and some ARM targets)
So it isn't just a matter of "they would use Rust instead".
There is a whole ecosystem of contributions across the globe and the lingua franca used by those contributors.
Looking at LLVM build times, I seriously believe that C would have been the better choice :/ (it wouldn't be a problem if LLVM weren't the base for so many other projects)
Same for the Metal shading language. C++ adds exactly nothing useful to a shading language over a C dialect that's extended with vector and matrix math types (at least they didn't pick ObjC or Swift though).
CUDA, SYCL, the HLSL evolution roadmap, and Khronos's future for Vulkan beg to differ with your opinion.
Hilariously, they broke this compatibility. std::auto_ptr was an abomination, but removing it from the language was needless and undermined the long term stability that differentiates C++ from upstarts.
those that used it were rightly punished by the removal
The language itself does not change much, but the std does. It depends on individuals, but some people rely less on the std, some copy the old code that they still need.
> Are there cases where packages you may use fall behind the language?
Using third party packages is quite problematic yes. I don't recommend using them too much personally, unless you want to make more work for yourself.
Using third party packages has gotten a lot easier with the changes described in this devlog https://ziglang.org/devlog/2026/#2026-02-06
Pinning your build chain and tooling to specific commits helps with stability, but it also traps you with old bugs and missing features. Chasing upstream fixes is a chore if you miss a release window and suddenly some package you depend on won't even compile anymore.
I recently tried to learn it and found it frustrating. A lot of docs are for 0.15 but the latest is (or was) 0.16 which changed a lot of std so none of the existing write ups were valid anymore. I plan to revisit once it gets more stable because I do like it when I get it to work.
0.16 is the development version. 0.15.2 is latest release.
I stopped updating the compiler at 0.14 for three projects. Getting the correct toolchain is part of my (incremental) build process. I don't use any external Zig packages.
I think one of the more PITA changes necessary to get these projects to 0.15 is removing `usingnamespace`, which I've used to implement a kind of mixin. The projects are all a few thousand LOC and it shouldn't be that much trouble, but enough trouble that none of what I gain from upgrading currently justify doing it. I think that's fine.
If you have to fix your code 10 times, you fix it 10 times.
> I know Bun's using zig to a degree of success, was wondering how the rest were doing.
Just a degree of success?
Mitchell Hashimoto (developer of Ghostty) talks about Zig a lot. Ghostty is written in it, and he seems to love it. The churn doesn't seem to bother him at all.
I asked him about in a thread a while back: https://news.ycombinator.com/item?id=47206009#47209313
The makers of TigerBeetle also rave about how good Zig is.
For those like me who have never heard of this software: Bun, some sort of package management service for javascript. https://en.wikipedia.org/wiki/Bun_(software)
Bun is a full fledged JavaScript runtime! Think node.js but fast
> Think node.js but fast
Color me extremely sceptical. Surely if you could make javascript fast google would have tried a decade ago....
Bun uses JSC (JavaScriptCore) instead of V8. From what I understand, whereas Node/V8 has a higher tier 4 "top speed", JSC is more optimized for memory and is faster to tier up early/less overhead. Good for serverless. Great for agents -> Anthropic purchase.
> Good for serverless. Great for agents -> Anthropic purchase.
Surely nobody would use javascript for either yea? The weaknesses of the language are amplified in constrained environments: low certainty, high memory pressure, high startup costs.
I think Bun helps with the memory pressure, granted this is relative to V8. I'd push back on the certainty point: TS provides a significant drop in entropy while benefiting from a sweet spot between massive corpus size and a low barrier for typical problem/use-case complexity. You'll never have the fastest product with JS, but you will always have good speed to market and be able to move quickly.
Claude CLI is based on bun. The dependency is so complete that Bun have now joined Anthropic.
Claude CLI is not exactly a reference of usable software
It's plenty usable. Most of the problems with the claude TUI stem from it being a TUI (no way to query the terminal emulator's displayed character grid), so you have to maintain your own state and try to keep them in sync, which more than a few TUIs will fail at at least sometimes, hence the popularity of conventions like Ctrl+L to redraw.
> Surely nobody would use javascript for either yea?
It's probably the most popular language for serverless.
I can't vouch for this behavior but obviously you can have a better serverless experience than writing lisp with the shittiest syntax invented by man
Only because the likes of Vercel and Netlify barely offer anything else on their free tiers.
When people go AWS, Azure, GCP,... other languages take the reins.
Do you have any stats on the latter?
The actual JS code is in the same ballpark as nodejs. They get fast by specializing to each platform's fastest APIs instead of using generic ones, reimplementing JS libraries in Zig (for example npm or jest) and using faster libraries (for example they use the oniguruma regex engine). Also you don't need an extra transpiling step when using TypeScript.
They have; V8 is a pretty fast engine and an engineering marvel. Bun is faster at the cost of having a worse JIT and less correctness.
Instead of implementing a workaround for types as namespaces, wouldn't it be better to explicitly add namespaces to the language?
I am impressed by the achievements of the Zig team. I use the ghostty terminal emulator regularly -- it is built in Zig and it is super stable. It is a fantastic piece of software.
This makes me feel that the underlying technology behind Zig is solid.
But I prefer Rust over Zig. The main difference is Rust chooses a "closed world" model while Zig chooses an "open world" model: in Rust, you must explicitly implement a trait while in Zig as long as the shape fits, or the `.` on a structure member exists (for whichever type you pass in), it will work (I don't use Zig so pardon hand wavy description).
This gives Zig very powerful meta programming abilities but is a pain because you don't know what kind of type "shapes" will be used in a particular piece of code. Zig is similar to C++ templates in some respects.
This has a ripple effect everywhere. Rust generated documentation is very rich and explicit about what functions a structure supports (as each trait is explicitly enrolled and implemented). In Zig the dynamic nature of the code becomes a problem with autocomplete, documentation, LSP support, ...
> But I prefer Rust over Zig. The main difference is Rust chooses a "closed world" model while Zig chooses an "open world" model: in Rust, you must explicitly implement a trait while in Zig as long as the shape fits, or the `.` on a structure member exists (for whichever type you pass in), it will work (I don't use Zig so pardon hand wavy description).
Do you happen to have a more specific example by any chance? I’d be interested in what this looks like in practice, because what you described sounds a bit like Go interfaces and from my understanding of Zig, there’s no direct equivalent to it, other than variations of fieldParentPtr.
They are referring to `anytype`, which is a comptime construct telling the compiler that the parameter can be of any type and as long as the code compiles with the given value, it's good.
It's an extremely useful thing, but unconstrained, it's essentially duck typing at compile time. People have been wanting some kind of trait/interface support to constrain it, but it's unlikely to happen.
Conceptually it's quite similar to how C++ templates work (not including C++20 concepts, which is the kind of constraining you're talking about).
I quite like it when writing C++ code. Makes it dead easy to write code like `min` that works for any type in a generic way. It is, however, arguably the main culprit behind C++s terrible compiler-errors, because you'll have standard library functions which have like a stack of fifteen generic calls, and it fails really deeply on some obscure inner thing which has some kind of type requirement, and it's really hard to trace back what you actually did wrong.
In my (quite limited) experience, Zig largely avoids this by having a MUCH simpler type system than C++, and the standard library written by a sane person. Zig seems "best of both worlds" in this regard.
A 30K-line PR for type resolution sounds scary but this is exactly what pre-1.0 means. Rust went through multiple rounds of similar foundational rework before stabilization, remember the old sigil-based pointer syntax. Getting the type system formalized before committing to backwards compat is worth temporary churn. The real test is whether this new design makes future type system changes smaller and more incremental rather than requiring another 30K-line rewrite.
It’s good to see that this is finally addressed. It’s been a well known broken part of the language semantics for years.
There are similar hidden quirks in the language that will need to be addressed at some point, such as integer promotion semantics.
To address the question about stability: the Zig community are already used to Zig breaking between 0.x versions. Unlike competitors such as Odin or my own C3, there is no expectation that Zig is trying to minimize upgrading problems.
This is a cultural thing, it would be no real problem to be clear about deprecations, but in the Zig community it’s simply not valued. In fact it’s a source of pride to be able to adapt as fast as possible to the new changes.
I like to talk about expectation management, and this is a great example of it.
In discussions, it is often falsely argued that ”Zig is not 1.0 so breaks are expected” in order to motivate the frequent breaks. However, there are degrees to how you handle breaks, and Zig is clearly in the ”we don’t care to reduce the work”-camp.
If someone is trying to get a more objective look at the Zig upgrade path, then it’s worth keeping in mind that the tradition in Zig is to offload all the work on the user.
The argument, which is frequently voiced, is that ”breaking things will make the language get better and so it’s good that there are language breaks”
It is certainly true that breaking changes are needed, but most people outside of the Zig community would expect it to be done with more care (deprecation paths etc)
Secondly, it should perhaps be a concern for Zig, now at 10 years old, to still produce solidly breaking code every half year.
10 years is the common point where languages go 1.0. However, the outlook for a Zig 1.0 is bleak from what I gather from Zig social forums: the most optimistic estimate I’ve heard is 2029 for 1.0.
This means that in the future, projects using Zig can still expect any libraries and applications to bitrot quickly if they are not constantly maintained.
Putting this in contrast with Odin (9 years old) which is essentially 1.0 already and has been stable for several years.
Maybe this also explains the difference in actual output. For example the number of games I know of written in Odin is somewhere between 5 to 10 times as many as Zig games. Now weighing in that Zig has maybe 5 or 10 times as many users, it means Odin users are somewhere between 20-100 times as likely to have written a playable game.
There are several explanations as to why this is: we could discuss whether the availability of SDL, Raylib etc. is easier on Odin (then why is Zig less friendly?), maybe Odin has better programmers (then why do better programmers choose Odin over Zig?), maybe it's just easier to write resource-intensive applications with Odin than Zig (then what do we make of Zig's claim of optimality?)
If we look past the excuses made for Zig (”it’s easy to fix breaks” ”it’s not 1.0”) and the hype (”Zig is much safer than C” ”Zig makes me so productive”) and compare with Odin in actual productivity, stability and compilation speed (neither C3 nor Odin requires 100s of GB of cache to compile in less than a second using LLVM) then Zig is not looking particularly good.
Even things like build.zig, often touted as a great thing, is making it really hard for a Zig beginner (”to build your first Hello World, first understand this build script in non-trivial Zig”). Then for IDEs, suddenly something like just reading the configuration of what is going to be used for building is hidden behind an opaque Zig script. These trade-offs are rarely talked about, as both criticism and hype is usually based on surface rather than depth.
Well, that’s long enough of a comment.
To round it off I’d like to end on a positive note: I find the Zig community nice and welcoming. So if you’re trying Zig out (and better do that, don’t let others’ opinions - including mine - prevent you from trying things out) do so.
If you want to evaluate Zig against competitors, I’d recommend comparing it to D, Odin, Jai and C3.
> Secondly, it should perhaps be a concern for Zig, now at 10 years old, to still produce solidly breaking code every half year.
Not at all, if the team needs 30 more years they should take it.
> However, the outlook for a Zig 1.0 is bleak from what I gather from Zig social forums: the most optimistic estimate I’ve heard is 2029 for 1.0.
Funny you see it as bleak when most of the community sees it as the most exciting thing in systems programming happening right now.
I think your comment is in bad faith; all the big Zig projects say that the upgrade path is never a main concern. Just read the HN comments here or on other Zig threads: people ask about this a lot and maintainers always answer.
Congratulations to the dev, a 30,000 line PR for a language compiler (and a very much non-trivial compiler) is a feat to be proud of. But a change of this magnitude is a serious bit of development and gave me pause.
I understand both of the following:
1. Language development is a tricky subject, in general, but especially for those languages looking for wide adoption or hoping for ‘generational’ (program life span being measured in multiple decades) usage in infrastructure, etc.
2. Zig is a young-ish language, not at 1.0, and explicitly evolving as of the posting of TFA
With those points as caveats, I find the casualness of the following (from the Codeberg post linked on the devlog) surprising:
> This branch changes the semantics of "uninstantiable" types (things like noreturn, that is, types which contain no values). I wasn't originally planning to do this here, but matching the semantics of master was pretty difficult because the existing semantics don't make much sense.
I don’t know Zig’s particular strategy and terminology for language and compiler development, but I would assume the usage of ‘branch’ here implies this is not a change fully/formally adopted by the language but more a fully implemented proposal. Even if it is just a proposal for a change, the large scale of the rewrite and the clear implication that the author expects it to be well received strike me as uncommon confidence. Changing the semantics of a language with any production use is nearly definitionally MAJOR; to just blithely state that your PR changes semantics and proceed without deep discussion (which could have previously happened, IDK), serious justification, or statements concerning the limited effect of those changes is not something I have experienced watching the evolution (or de-evolution) of other less ‘serious’ languages.
Is this a “this dev” thing, a Zig thing, or am I just out of touch with modern language (or even larger scale development) projects?
Also, not particularly important or really significant to the overall thrust of TFA, but the author uses the phrase “modern Zig”, which, given Zig’s age and current rate of change, struck me as a very funny turn of phrase.
When someone's communication is casual and informal, without any context, you really can't distinguish between:
* The author is being flippant and not taking the situation seriously enough.
* The author is presuming a high-trust audience that knows that they have done all the due diligence and don't have to restate all of that.
In this case, it's a devlog (i.e. not a "marketing post") for a language that isn't at 1.0 yet. A certain amount of "if you're here, you probably have some background" is probably reasonable.
The post does link directly to the PR and the PR has a lot more context that clearly conveys the author knows what they are doing.
It is weird reading about (minor) breaking language changes sort of mentioned in passing. We're used to languages being extremely stable. But Zig isn't 1.0 yet. Andrew and friends certainly take user stability seriously, but you signed up for a certain amount of breakage if you pick the language today.
As someone who maintains a post-1.0 language, there really is a lot of value in breaking changes like this. It's good to fix things while your userbase is small. It's maddening to have to live with obvious warts in the language simply because the userbase got too big for you to feasibly fix it, even when all the users wish you could fix it too. (Witness: The broken precedence of bitwise operators in C.)
It's better for all future users to get the language as clean and solid as you can while it's still malleable.
mlugg is one of the core contributors of Zig, and is a member of the Zig foundation iirc. They've been wanting to work on dependency resolution for a while now, so I'm really glad they're cleaning this up (I've been bitten before by unclear circular dependency errors). There's not a formal language spec yet, since it's moving pretty fast, but tbh I don't see the need for a standard, since that's not one of their goals currently.
Originally, Zig's type system was less disciplined in terms of the "zero" type (also known as "noreturn").
This was proposed, discussed, and accepted here: https://github.com/ziglang/zig/issues/3257
Later, Matthew Lugg made a follow-up proposal, which was discussed both publicly and in ZSF core team meetings. https://github.com/ziglang/zig/issues/15909
He writes:
> A (fairly uncontroversial) subset of this behavior was implemented in [the changeset we are discussing]. I'll close this for now, though I'll probably end up revisiting these semantics more precisely at some point, in which case I'll open a new issue on Codeberg.
I don't know how evident this is to the casual HN reader, but to me this changeset very obviously moves Zig the language from experimental territory a large degree towards being formally specified, because it makes type resolution a Directed Acyclic Graph. Just look at how many bugs it resolved to get a feel for it. This changeset alone will make the next release of the compiler significantly more robust.
Now, I like talking about its design and development, but all that being said, the Zig project does not aim for full transparency. It says right there in the README:
> Zig is Free and Open Source Software. We welcome bug reports and patches from everyone. However, keep in mind that Zig governance is BDFN (Benevolent Dictator For Now) which means that Andrew Kelley has final say on the design and implementation of everything.
It's up to you to decide whether the language and project are in trustworthy hands. I can tell you this much: we (the dev team) have a strong vision and we care deeply about the project, both to fulfill our own dreams as well as those of our esteemed users whom we serve[1]. Furthermore, as a 501(c)(3) non-profit we have no motive to enshittify.
[1]: https://ziglang.org/documentation/master/#Zen
It's been incredible working with Matthew. I hope I can have the pleasure to continue to call him my colleague for many years to come.
Where does the name "zero type" come from? In type theory this is called an "empty" type because the set of values of this type is empty and I couldn't find (though I have no idea where to start) mention of it as a "zero" type.
This stuff is foundational and so it's certainly a priority to get it right (which C++ didn't and will be paying for until it finally collapses under its own weight) but it's easier to follow as an outsider when people use conventional terminology.
I think rust calls them "zero sized types".
The ZSTs are unit types. They have one value, which we usually write as just the type name.
The choice to underscore that Rust's unit types have size zero is to contrast with languages like C or C++, where these types, which don't need representing, must nevertheless take up a whole byte of storage that is just wasted. But what we're talking about here are empty types. In Rust we'd write this:
No, that’s a different thing. “noreturn” is like Rust’s “never” type (spelled as an exclamation mark, !). Also known as an “uninhabited type” in programming language theory.
I don't use Zig (it's not a language for me), but I like to read/watch about it, when you or others talk about the language design and the reasons behind the changes.
Thanks for your and the Zig team's work.
> Zig the language from experimental territory a large degree towards being formally specified
Great to hear; I look forward to reading the language spec one day.
Just thinking out loud here: perhaps behavior like this has become more normalized because of the total shitshow that C is, a language which followed all these supposedly correct rules.
> I don’t know Zig’s particular strategy and terminology for language and compiler development
Indeed you don't ... perhaps you should have asked.
> I would assume
Generally a bad idea.
> which could have previously happened, IDK
Indeed you don't. Perhaps you should have asked.
Check the response from the language BDFN, Andrew Kelley.
They literally ended the comment by asking. I don't get why you're criticizing them for providing context that they're not an expert. You're literally distracting from the comment you're telling them has useful information on it, but in a weirdly aggressive way.
This seems unnecessarily hostile. They are asking. Here.