This reminds me of the Servo project's journey. Always impressed to see another implementation of the WHATWG specs.
It's interesting to see Zig being chosen here over Rust for a browser engine component. Rust has kind of become the default answer for "safe browser components" (e.g., Servo, Firefox's oxidation), primarily because the borrow checker maps so well to the ownership model of a DOM tree in theory. But in practice, DOM nodes often need shared mutable state (parent pointers, child pointers, event listeners), which forces you into Rc<RefCell<T>> hell in Rust.
Zig's manual memory management might actually be more ergonomic for a DOM implementation specifically because you can model the graph relationships more directly without fighting the compiler, provided you have a robust strategy for the arena allocation. Excited to learn from Lightpanda's implementation when it's out.
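To make that concrete, here's the kind of shape I have in mind (a hypothetical sketch, not Lightpanda's actual code): back-pointers are plain pointers, because every node lives in, and dies with, a single per-document arena.

```zig
const std = @import("std");

// Hypothetical minimal DOM node: parent/child/sibling links are ordinary
// pointers, with no Rc/RefCell bookkeeping, because all nodes share one
// per-document arena and are freed together.
const Node = struct {
    tag: []const u8,
    parent: ?*Node = null,
    first_child: ?*Node = null,
    next_sibling: ?*Node = null,
};

fn appendChild(parent: *Node, child: *Node) void {
    child.parent = parent;
    // Prepends for brevity; a real DOM would also track last_child.
    child.next_sibling = parent.first_child;
    parent.first_child = child;
}

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit(); // frees the entire document graph in one step

    const alloc = arena.allocator();
    const html = try alloc.create(Node);
    html.* = .{ .tag = "html" };
    const body = try alloc.create(Node);
    body.* = .{ .tag = "body" };
    appendChild(html, body);
    std.debug.print("{s} -> {s}\n", .{ html.tag, html.first_child.?.tag });
}
```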
Our goal is to build a headless browser, rather than a general purpose browser like Servo or Chrome. It's already available if you would like to try it: https://lightpanda.io/docs/open-source/installation
I see you're using html5ever for HTML parsing, and like its trait/callback-based API (me too). It looks like style/layout is not in scope at the moment, but if you're ever looking at adding style/layout capabilities to lightpanda, then you may find it useful to know that Stylo [0] (CSS / style system) and Taffy [1] (box-level layout) are both available with a similar style of API (also Parley [2], which has a slightly different API style but can be combined with Taffy to implement inline/text layout).
Off-topic note: I read the website and a few pages of the docs, and it's unclear to me what I can use Lightpanda for safely. Say I wanted to swap it in as my engine on Playwright: what are the tradeoffs? What things are implemented, and what isn't?
Thanks for the feedback; we will try to make this clearer on the website. Lightpanda works with Playwright, and we have some docs[1] and examples[2] available.
Web APIs and CDP specifications are huge, so this is still a work in progress. Many websites and scripts already work, while others do not; it really depends on the case. For example, on the CDP side, we are currently working on adding an Accessibility tree implementation.
Maybe you should recommend a recipe for configuring Playwright with both Chromium and Lightpanda backends so a given project can compare and evaluate whether Lightpanda could work given their existing test cases.
I think it's really more of an alternative to JSDom than it is an alternative to Chromium. It's not going to fool any websites that care about bots into thinking it's a real browser in other words.
Would be helpful to compare Lightpanda to WebKit; Playwright has a driver for it, for example, and it's far faster and less resource-hungry than Chrome.
When I read your site copy, it struck me as either naive to that or a somewhat misleading comparison; my feedback would be to just address it directly alongside Chrome.
Respectfully, for browser-based work, simplicity is absolutely not a good enough reason to use a memory-unsafe language. Your claim that Zig is in some way safer than Rust for something like this is flat out untrue.
What is your attack model here? Each request lives in its own arena allocator, so there is no way for any potentially malicious JavaScript to escape and read memory owned by any other request, even if there is a coding error. Otherwise, VM safety is delegated to the V8 core.
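For readers unfamiliar with the pattern, here's a minimal sketch of the per-request arena idea (the serve function and request type are hypothetical, not our actual code):

```zig
const std = @import("std");

// Each request gets a fresh arena; nothing allocated while serving one
// request can be handed out to, or read by, the next one. Destroying the
// arena releases all of that request's memory at once.
fn serve(backing: std.mem.Allocator, requests: []const []const u8) !void {
    for (requests) |req| {
        var arena = std.heap.ArenaAllocator.init(backing);
        defer arena.deinit(); // wipes all per-request allocations

        const alloc = arena.allocator();
        const scratch = try alloc.dupe(u8, req); // per-request state
        std.debug.print("handled: {s}\n", .{scratch});
    }
}

pub fn main() !void {
    try serve(std.heap.page_allocator, &.{ "GET /a", "GET /b" });
}
```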
Choosing something like Zig over C++ on simplicity grounds is going to be a false economy. C++ features exist for a reason. The complexity is in the domain. You can't make a project simpler by using a simplistic language: the complexity asserts itself somehow, somewhere, and if a language can't express the concept you want, you'll end up with circumlocution "patterns" instead.
Build-system complexity disappears once you set it up, too. Meson and such can be as terse as your curl example.
I mean, it's your project, so whatever. Do what you want. But choosing Zig for the stated reasons is like choosing a car for the shape of the cupholders.
Your Swiss Army Knife with a myriad of 97 oddly-shaped tools may be able to do any job anyone could ask of it, but my Swiss Army Knife of 10 well-designed tools that are optimal for my set of tasks will get my job done with much less frustration.
But sometimes not good ones. Lots of domains make tradeoffs about what features of C++ to actually make use of. It's an old language with a lot of cruft being used across a wide set of problems that don't necessarily share engineering tradeoffs.
That’s not fully true though. There are different types of complexity:
- project requirements
- requirements forced upon you due to how the business is structured
- libraries available for a particular language ecosystem
- paradigms / abstractions that a language is optimised for
- team experiences
Your argument is more akin to saying “all general purpose languages are equal” which I’m sure you’d agree is false. And likewise, complexity can and will manifest itself differently depending on language, problems being solved, and developer preferences for different styles of software development.
So yes, C++ complexity exists for a reason (though I’d personally argue that “reason” was due to “design by committee”). But that doesn’t mean that reason is directly applicable to the problems the LightPanda team are concerned about solving.
C++ features for complexity management are not ergonomic though, with multiple conflicting ideas from different eras competing with each other. Sometimes demolition and rebuild from foundations is paradoxically simpler.
A lot of them only still exist for backwards compatibility's sake though. And a decent amount because adding something as a language extension rather than building the language around it has consequences.
C++ features exist for a reason but it may not be a reason that is applicable to their use case. For example, C++ has a lot of features/complexity that are there primarily to support low-level I/O intensive code even though almost no one writes I/O intensive code.
I don't see why C++ would be materially better than Zig for this particular use case.
I don't think that a language that was meant to compete with C++ and in 10+ years hasn't captured 10% of C++'s (already diminished) market share could be said to have become "kind of the default" for anything (and certainly not when that requires generalising from n≅1).
> It has for Amazon, Adobe, Microsoft, Google and the Linux kernel.
I don't think so. I don't know about Adobe, but it's not a meaningful statement for the rest. Those companies default to writing safe code in languages other than Rust, and the Linux kernel defaults to unsafe code in C. BTW, languages favoured by those projects/companies do not reliably represent industry-wide preferences, let alone defaults. You could certainly say that of the two languages accepted so far in the Linux kernel, the only safe one is Rust, but there's hardly any "default" there.
> It remains to be seen which big name will make Zig unavoidable.
I have no idea whether or not Zig will ever be successful, but at this point it's pretty clear that Rust's success has been less than modest at best.
It is a clear mandate at those companies that whatever used to be C or C++ should be written in Rust for greenfield development.
Whatever could be done in programming languages with automatic memory management was already being done.
Anyone deploying serverless code into Amazon instances is running on top of Firecracker, my phone has Rust code running on it, whenever Windows 11 draws something on the screen it goes through a Rust rewrite of the GDI regions logic, and all the Azure networking traffic going through Azure Boost cards does so via Rust firmware.
Adobe is the sponsor of the Hylo programming language, and key figures in the C++ community are doing Rust talks nowadays.
"Adobe’s memory safety roadmap: Securing creativity by design"
> It is a clear mandate at those companies that whatever used to be C or C++ should be written in Rust for greenfield development. Whatever could be done in programming languages with automatic memory management was already being done.
I don't know how true either of these statements is or to what extent the mandate is enforced (at my company we also have language mandates, but what they mean is that to use a different language all you need is an explanation and a manager to sign off), but I'll ask acquaintances in those companies (except Adobe; don't know anyone there. Although the link you provided doesn't say Rust; it says "Rust or Swift". It also commits only to "exploring ways to reduce the use of new C and C++ code in safety critical parts of our products to a fraction of current levels").
What I do know is that the rate at which Rust is adopted is significantly lower than the rate at which C++, Java, C#, Python, TS, and even Go were adopted, even in those companies.
Now, there's no doubt that Rust has some real adoption, and much more than just hobby languages. Its rate of adoption is significantly higher than that of Haskell, or Clojure, or Elixir were (but lower than that of Ruby or PHP). That is without a doubt a great accomplishment, but not what you'd expect from a language that wishes to become the successor to C++ (and doesn't suffer from lack of hype despite its advanced age). Languages that offer a significant competitive advantage, or even the perception of one, spread at a faster pace, certainly those that eventually end up in the top 5.
I also think there's little doubt that the Rust "base" is more enthusiastic than that of any language I remember except maybe that of Haskell's resurgence some years back (and maybe Ruby), and that enthusiasm may make up for what they lack in numbers, but at some point you need the numbers. A middle-aged language can only claim to be the insurgent for so long.
I spoke with someone at AWS, and he says that there is an investment in using Rust for low-level code, but there is no company-wide mandate, and projects are free to pick C or C++.
>>It is a clear mandate at those companies that whatever used to be C or C++ should be written in Rust for greenfield development.
>>Any hobby language author would like to have 1% of said modest Rust success; I really don't get the continuous downplay of such an achievement.
This is a political achievement, not a technical one. People are bitter about it as it doesn't feel organic and feels pushed onto them.
> Anyone deploying serverless code into Amazon instances is running on top of Firecracker, my phone has Rust code running on it, whenever Windows 11 draws something on the screen it goes through a Rust rewrite of the GDI regions logic, and all the Azure networking traffic going through Azure Boost cards does so via Rust firmware.
Ignoring it doesn't make those achievements political rather than technical.
I was referring to the mandate to use it at big companies. This is a political achievement.
Teams/contributors making their own choice and then shipping good software counts as a technical one, but that wasn't the main point of the post I replied to.
> I was referring to the mandate to use it at big companies.
I've worked in almost all of big tech, and these companies don't create mandates just because of "trust me bro" or to gain some "political achievement". There are teams who champion new technologies/languages; they create proof of what the new technology will bring to the table that cannot be provided by existing ones.
I left Amazon 7 years ago, so I don't know about recent developments. However, at Meta/Google, teams are encouraged to choose from the mandated languages, and if they can't, they need to request an exemption and justify it.
Rust appeared in 2012, Zig in 2016. I consider them both successful programming languages, but given they are only 4 years apart, it's easy to compare Zig today with Rust as it was 4 years ago and see they are very far apart in terms of maturity, progress, community size, and adoption.
Rust is a very successful language so far, but expecting that in 10y it can overthrow C++ is silly. Codebases add up more than they are replaced.
While certain teams within Google are using Rust by default, I'm not sure Rust is anywhere close to C++ in the scale of new lines of code committed per week.
Sure, and Android is a small part of Google. Everyone in ads, search, and cloud is still predominantly C++ (or something higher level like Java). Rust is gaining momentum, but overall it's still small.
It's unfortunate that "writing safe code" is constantly being phrased in this way.
The borrow checker is a deterministic safety net. Claiming Zig is easier ignores that its lack of safety checks is what makes it feel easier; if Zig had Rust’s guarantees, the complexity would be the same. Comparing them like this is apples vs. oranges.
That's a very narrow way of looking at things. ATS has a much stronger "deterministic safety net" than Rust, yet the reason to use Rust over ATS is that "fighting the compiler" is easier in Rust than in ATS. On the other hand, if any cost were worth whatever level of safety Rust offers for any project, then Rust wouldn't exist, because there are far more popular languages with equal (or better) safety. So Rust's design itself is an admission that it is not the case that 1. more compile-time safety is always better, even if it complicates the language (or everyone who uses Rust should use ATS), and 2. any cost is worth paying for safety (or Rust wouldn't exist in the first place).
Safety has some value that isn't infinite, and a cost that isn't zero. There are also different kinds of safety with different value and different costs. For example, spatial memory safety appears to have more value than temporal safety (https://cwe.mitre.org/top25/archive/2025/2025_cwe_top25.html), and Zig offers spatial safety. The question is always what you're paying and what you're getting in return. There doesn't appear to be a universal right answer. For some projects it may be worth it to pay for more safety, and for others it may be better to pay for something else.
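(To make "spatial safety" concrete: in Zig's Debug/ReleaseSafe build modes, an out-of-bounds access is a checked panic rather than a silent read of adjacent memory. A contrived sketch:)

```zig
const std = @import("std");

pub fn main() void {
    const buf = [_]u8{ 1, 2, 3 };
    var i: usize = 3; // one past the end, known only at runtime
    _ = &i; // keep the compiler from treating `i` as comptime-known
    // In Debug/ReleaseSafe builds this panics with "index out of bounds"
    // instead of reading whatever sits next to `buf`.
    std.debug.print("{d}\n", .{buf[i]});
}
```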
You’re changing the argument. The point wasn’t whether more safety is “worth it”, but that comparing ease while ignoring which invariants are enforced is misleading. Zig can feel simpler because it encodes fewer guarantees. I’m not saying one approach is better, only that this comparison shifts the goalposts.
This is comparing what Rust has and other languages don't without also doing the opposite. For example, Java doesn't enforce data-race freedom, but its data races are safe, which means you can write algorithms with benign races safely (which are very useful in concurrent programming [1]), while in Rust that requires unsafe. Rust's protection against memory leaks that can cause a panic is also weaker, as is Rust's ability to recover from panics in general. Java is now in the process of eliminating the unsafe escape hatch altogether except for FFI. Rust is nowhere near that. I.e. sometimes safe Rust has guarantees that mean that programs need to rely on unsafe code more so than in other languages, which allows saying that safe Rust is "safer" while it also means that fewer programs are actually written purely in safe Rust. The real challenge is increasing safety without also increasing the number of programs that need to circumvent it or increasing the complexity of the language further.
[1]: A benign race is when multiple tasks/threads can concurrently write to the same address, but you know they will all write the same value.
> 1. more compile-time safety is always better, even if it complicates the language (or everyone who uses Rust should use ATS), and 2. any cost is worth paying for safety (or Rust wouldn't exist in the first place).
You keep repeating this. It's not true. If what you said was true, Rust would have adopted HKT, and God knows whatever type astronomy Haskell & Scala cooked up.
There is a balancing act, and Rust decided to plant a flag in memory safety without GC. The fact that Zig didn't expand on this but went backwards is more of an indictment of programmers unwilling to adapt and perfect what came before, preferring to reinvent it in their own, worse way.
How did you derive this from the top 25 of CWEs? Let's say you completely remove the spatial memory issues. You still get temporal memory issues at #6.
Rust does have a GC, but I agree it planted its flag at some intermediate point on the spectrum. Zig didn't "go backwards" but planted its own flag ever so slightly closer to C than to ATS (although both Rust and Zig are almost indistinguishable from C when compared to ATS). I don't know if where Rust planted its flag is universally better than where Zig planted its flag, but 1. no one else does either, 2. both are compromises, and 3. it's uncertain whether a universal sweet spot exists in the first place.
> How did you derive this from the top 25 of CWEs? Let's say you completely remove the spatial memory issues. You still get temporal memory issues at #6.
Sure, but spatial safety is higher. So if Rust's compromise (we'll pay a price for temporal safety and have both temporal and spatial safety) is reasonable, then so is Zig's, which says the price of temporal safety is too high for what you get in return, and spatial safety alone is a better deal. Neither goes as far as ATS in offering, in principle, the ability to avoid all bugs. Nobody knows whether Rust's compromise is universally better than Zig's or vice versa (or perhaps neither is universally better), but I find it really strange to arbitrarily claim that one compromise is reasonable and the other isn't, where both are obviously compromises that recognise there are different benefits and different costs, and that not every benefit is worth any cost.
> And "opt-in non-tracing GC that isn't used largely throughout the standard library" is not a reasonable definition.
Given that refcounting and tracing are the two classic GC algorithms, I don't see what specifying "non tracing" here does, and reference-counting with special-casing of the one reference case is still reference counting. I don't know if the "reasonable definition" of GC matters at all, but if it does, this does count as one.
I agree that the one-reference case is handled in the language and the shared reference case is handled in the standard library, and I think it can be reasonable to call using just the one-reference case "not a GC", but most Rust programs do use the GC for shared references. It is also true that Rust depends less on GC than Java or Go, but that's not the same as not having one.
> When it comes to having more segfaults, we know. Zig "wins" most segfaults per issue Razzie Award.
And Rust wins the Razzie Award for most painful development and lack of similarly powerful arenas. It's like declaring that you win by paying $100 for something while I paid $50 for something else without comparing what we got for the money, or declaring that you win by getting a faster car without looking at how much I paid for mine.
> This is what happens when you ignore one type of memory safety.
When you have less safety for any property, you're guaranteed to have more violations. This is what you buy. Obviously, this doesn't mean that avoiding those extra violations is necessarily worth the cost you pay for that extra safety. When you buy something, looking just at what you pay or just at what you get doesn't make any sense. The question is whether this is the best deal for your case.
Nobody knows if there is a universal best deal here let alone what it is. What is clear is that nothing here is free, and that nothing here has infinite value.
> If you define all non-red colors to be green, it is impossible to talk about color theory.
Except reference counting is one of the two classical GC algorithms (alongside tracing), so I think it's strange to treat it as "not a GC". But it is true that the GC/no-GC distinction is not very meaningful given how different the tradeoffs that different GC algorithms make are. Even within these basic algorithms there are combinations. For example, a mark-and-sweep collector is quite different from a moving collector, and CPython uses refcounting for some things and tracing for others.
> That's a non-quantifiable skill issue. Segfaults per issue is a quantifiable thing.
That it's not as easily quantifiable doesn't make it any less real. If we compare languages only by easily quantifiable measures, there would be few differences between them (and many if not most would argue that we're missing the differences that matter to them most). For example, it would be hard to distinguish between Java and Haskell. It's also not necessarily a "skill issue". I think that even skilled Rust users would admit that writing and maintaining a large program in TypeScript or Java takes less effort than doing the same in Rust.
Also, ATS has many more compile-time safety capabilities than either Rust or Zig (in fact, compared to ATS, Rust and Zig are barely distinguishable in what they can guarantee at runtime), so according to your measure, both Rust and Zig lose when we consider other alternatives.
> Then you'd be advocating for ATS or Ada.SPARK, not Zig.
Quite the opposite. I'm pointing out that, at least as far as this discussion goes, every added value comes with an added cost that needs to be considered. If what you truly believed is that more compile-time safety always wins, then it is you who should be advocating for ATS over Rust. I'm saying that we don't know where the cost-benefit sweet spot is or, indeed, whether there's only one such sweet spot or multiple. I'm certainly not advocating for Zig as a universal choice. I'm advocating for selecting the right tradeoffs for every project, and I'm rejecting the claim that whatever benefits Rust or Zig have compared to the other are free. Both (indeed, all languages) require you to pay in some way to get what they're offering. In other words, I'm arguing that each can be more or less appropriate than the other depending on the situation, and against the position that Rust is always superior, which is based on only looking at its advantages and ignoring its disadvantages (which, I think, are quite significant).
> Except reference counting is one of the two classical GC algorithms (alongside tracing), so I think it's strange to treat it as "not a GC". But it is true that the GC/no-GC distinction is not very meaningful given how different the tradeoffs that different GC algorithms make are.
That's not the issue; the issue is calling anything with opt-in reference counting a GC language. You're just fudging definitions to get to the desired talking point. I mean, C is, by that definition, a GC language. It can be equipped with Boehm GC, after all.
> That it's not as easily quantifiable doesn't make it any less real.
It makes it more subjective and easy to bias. Rust has a clear purpose: to put a stop to memory-safety errors. What does "it's painful to use" even mean? Painful like Lisp compared to Haskell, or like C compared to Lisp?
> For example, it would be hard to distinguish between Java and Haskell.
It would be possible to objectively distinguish between Java and Haskell, as long as they aren't feature-by-feature compatible.
If you can make a program that halts on that feature, you can prove you're in a language with that feature.
> If what you truly believed is that more compile-time safety always wins, then it is you who should be advocating for ATS over Rust.
Yeah, because you're fighting a strawman. Having a safe language is a precondition but not enough. I want it to be as performant as C as well.
Second, even if you have the goal of moving to ATS, developing an ATS-like language isn't going to help. You need a mass of people to move there.
> Calling anything with opt-in reference counting a GC language
Except I never called it "a GC language" (whatever that means). I said, and I quote, "Rust does have a GC". And it does. Saying that it's "opt in" when most Rust programs use it (albeit to a lesser extent than Java or Go programs, provided we don't consider Rust's special case of a single reference to be GC) is misleading.
> Rust has a clear purpose: to put a stop to memory-safety errors.
Yes, but 1. other languages do it, too, so clearly "stopping memory errors" isn't enough, 2. Rust does it in a way that requires much more use of unsafe escape hatches than other languages, so it clearly recognises the need for some compromise, and 3. Rust's safety very much comes at a cost.
So its purpose may be clear, but it is also very clear that it makes tradeoffs and compromises, which implies that other tradeoffs and compromises may be reasonable, too.
But anyway, having a very precise goal makes some things quantifiable, but I don't think anyone thinks that's what makes a language better than another. C and JS also have very clear purposes, but does that make them better than, say, Python?
> Having a safe language is a precondition but not enough. I want it to be as performant as C as well... You need a mass of people to move there.
So clearly you have a few prerequisites, not just memory safety, and you recognise the need for some pragmatic compromises. Can you accept that your prerequisites and compromises might not be universal and there may be others that are equally reasonable, all things considered?
I am a proponent of software correctness and formal methods (you can check out my old blog: https://pron.github.io), and I've learnt a lot over my decades in industry about the complexities of software correctness. When I choose a low-level language to switch to away from C++, my prerequisites are: a simple language with no implicitness (I want to see every operation on the page), as I think it makes code reviews more effective (the effectiveness of code reviews has been shown empirically, although not its relationship to language design), and fast compilation, to allow me to write more tests and run them more often.
I'm not saying that my requirements are universally superior to yours, and my interests also lie in a high emphasis on correctness (which extends far beyond mere memory safety); it's just that my conclusions, and perhaps personal preferences, lead me to prefer a different path to your preferred one. I don't think anyone has any objective data to support the claim that my preferred path to correctness is superior to yours or vice versa.
I can say, however, that in the 1970s, proponents of deductive proofs warned of an impending "software crisis" and believed that proofs are the only way to avoid it (as proofs are "quantifiably" exhaustive). Twenty years later, one of them, Tony Hoare, famously admitted he was wrong, and that less easily quantifiable approaches turned out to be more effective than expected (and more effective than deductive proofs, at least of complicated properties). So the idea that an approach is superior just because it's absolute/"precise" is not generally true.
Of course, we must be careful not to extrapolate and generalise in either direction, but my point is that software correctness is a very complicated subject, and nobody knows what the "best" path is, or even if there is one such best path.
So I certainly expect a Rust program to have fewer memory-safety bugs than a Zig program (though probably more than a Java program), but that's not what we care about. We want the program to have the fewest dangerous bugs overall. After all, I don't care if my user's credit-card data is stolen due to a UAF or due to SQL injection. Do I expect a Rust program to have fewer serious bugs than a Zig program? No, and maybe the opposite (and maybe the same), due to my preferred prerequisites listed above. The problem with saying that we should all prefer the more "absolute" approach, even though it could possibly harm less easily quantifiable aspects, because it's at least absolute in whatever it does guarantee, is that this belief has already been shown to not be generally true.
(As a side note, I'll add that a tracing GC doesn't necessarily have a negative impact on speed, and may even have a positive one. The main tradeoff is RAM footprint. In fact, the cornerstone of tracing algorithms is that they can reduce the cost of memory management to be arbitrarily low given a large-enough heap. In practice, of course, different algorithms make much more complicated pragmatic tradeoffs. Basic refcounting collectors primarily optimise for footprint.)
> Except I never called it "a GC language" (whatever that means). I said, and I quote, "Rust does have a GC".
Ok, semantics aside, my point still stands. C also has a GC. See Boehm GC. And before you complain that Rc is part of std, I will point out that std is optional and is on track to become a freestanding library.
> Can you accept that your prerequisites and compromises might not be universal
Not the way hardware is moving, which is to say more emphasis on more cores and no more free lunch from hardware. Regardless of whether it is on-prem or in the cloud, mandatory GC is not a cost you can justify easily anymore.
> As a side note, I'll add that a tracing GC doesn't necessarily have a negative impact on speed, and may even have a positive one
Yeah, but it has a negative impact on memory. As witnessed in the latest RAM crisis, there is no guarantee you can just rely on more memory providing benefits.
> After all, I don't care if my user's credit-card data is stolen due to a UAF or due to SQL injection.
Sure, but those that see fewer UAF errors have more time to deal with logic errors. Of course there are confounding variables such as believing you are king of the world, or that Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.
> C also has a GC. See Boehm GC. And before you complain that Rc is part of std, I will point out that std is optional and is on track to become a freestanding library.
Come on. The majority of Rust programs use the GC. I don't understand why it's important to you to debate this obvious point. Rust has a GC and most Rust programs use it (albeit to a much lesser extent than Java/Python/Go etc.). I don't understand why it's a big deal.
You want to add the caveat that some Rust programs don't use the GC and it's even possible to not use the standard library at all? Fine.
> Not the way hardware is moving, which is to say more emphasis on more cores and no more free lunch from hardware. Regardless of whether it is on-prem or in the cloud, mandatory GC is not a cost you can justify easily anymore.
This is simply not true. There are and have always been types of software that, for whatever reason, need low-level control over memory usage, but the overall number of such cases has been steadily decreasing over the past decades and is continuing to do so.
> As witnessed in the latest RAM crisis, there is no guarantee you can just rely on more memory providing benefits.
What you say about RAM prices is true, but it still doesn't change the economics of RAM/CPU sufficiently. There is a direct correspondence between how much extra RAM a tracing collector needs and the amount of available CPU (through the allocation rate). Regardless of how memory management is done (even manually), reducing footprint requires using more CPU, so the question isn't "is RAM expensive?" but "what is the relative cost of RAM and CPU when I can exchange one for the other?" The RAM/CPU ratios available in virtually all on-prem or cloud offerings are favourable to tracing algorithms.
If you're interested in the subject, here's an interesting keynote from the last International Symposium on Memory Management (ISMM): https://youtu.be/mLNFVNXbw7I
> Sure, but those that see fewer UAF errors have more time to deal with logic errors.
I think that's a valid argument, but so is mine. If we knew the best path to software correctness, we'd all be doing it.
> Of course there are confounding variables such as believing you are king of the world, or that Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.
I understand that's something you believe, but it's not supported empirically, and as someone who's been deep in the software correctness and formal verification world for many, many years, I can tell you that it's clear we don't know what the "right" approach is (or even that there is one right approach) and that very little is obvious. Things that we thought were obvious turned out to be wrong.
It's certainly reasonable to believe that the Rust approach leads to more correctness than the Zig approach, and some believe that; it's equally reasonable to believe that the Zig approach leads to more correctness than the Rust approach, and some people believe that. It's also reasonable to believe that different approaches are better for correctness in different circumstances. We just don't know, and there are reasonable justifications in both directions. So until we know, different people will make different choices, based on their own good reasons, and maybe at some point in the future we'll have some empirical data that gives us something more grounded in fact.
> Come on. The majority of Rust programs use the GC.
This part is false. You make a ridiculous statement and expect everyone to just nod along.
I could see this being true iff you say all Rust UI programs use "RC".
> This is simply not true. There are and have always been types of software that, for whatever reason, need low-level control over memory usage, but the overall number of such cases has been steadily decreasing over the past decades
Without ever-increasing memory/CPU, you're going to have to squeeze more performance out of the stone (more or less unchanging memory/CPUs).
GC will be a mostly unacceptable overhead in numerous instances. I'm not saying it will be fully gone, but I don't think the current crop of C-likes is accidental either.
> I understand that's something you believe, but it's not supported empirically
> Stable and high-quality changes differentiate Rust. DORA uses rollback rate for evaluating change stability. Rust's rollback rate is very low and continues to decrease, even as its adoption in Android surpasses C++.
So for similar patches, you see fewer errors in new code. And the overall error rate still favors Rust.
> Without ever-increasing memory/CPU, you're going to have to squeeze more performance out of the stone (more or less unchanging memory/CPUs).
The memory overhead of a moving collector is related only to the allocation rate. If the memory/CPU ratio is sufficient to cover that overhead, which in turn helps save more costly CPU, it doesn't matter whether the relative cost is reduced (also, it isn't even reduced; you're simply speculating that one day it could be).
> I'm not saying it will be fully gone
That's a strange expression given that the percentage of programs written in languages that rely primarily on a GC for memory management has been rising steadily for about 30 years with no reversal in trend. This is like saying that more people will find the cost of typing a text message unacceptable so we'll see a rise in voicemail messages, but of course text messaging will not be fully gone.
Even embedded software is increasingly written in languages that rely heavily on GC. Now, I don't know the future market forces, and maybe we won't be using any programming languages at all but LLMs will be outputting machine code directly, but I find it strange to predict with such certainty that the trend we've been seeing for so long will reverse in such full force. But ok, who knows. I can't prove that the future you're predicting is not possible.
> It's supported by Google's usage of Rust.
There's nothing related here. We were talking about how Zig's design could assist in code reviews and testing, and therefore in the total reduction of bugs, and you said that maybe a complex language like Rust, with lots of implicitness but also temporal memory safety could perhaps have a positive effect on other bugs, too, in comparison. What you linked to is something about Rust vs C and C++. Zig is at least as different from either one as it is from Rust.
> And the overall error rate still favors Rust.
Compared to C++. What does it have to do with anything we were talking about?
> That's a strange expression given that the percentage of programs written in languages that rely primarily on a GC for memory management has been rising steadily for about 30 years
I wish I knew what you mean by programs relying primarily on GC. Does that include Rust?
Regardless, extrapolating current PL trends that far is a fool's errand. I'm not looking at current social/market trends but limits of physics and hardware.
> There's nothing related here. We were talking about how Zig's design could assist in code reviews and testing
No, let me remind you:
> > [snip] Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.
> I understand that's something you believe, but it's not supported empirically
we were talking about how not having to worry about UB allows for easier defect catching.
> Compared to C++.
Overall, I think using C++ with all of its modern features should be in the same ballpark of safety/speed as Zig, with Zig having better compile times. Even if it isn't a 1-to-1 comparison with Zig, we have other examples like Bun vs Deno, where Bun incurs more segfaults (per issue).
Also, I don't see how much of Zig's design could really assist code reviews and testing.
No. Most memory management in Rust is not through its GC, even though most Rust programs do use the GC to some extent.
> I'm not looking at current social/market trends but limits of physics and hardware.
The laws of physics absolutely do not predict that the relative cost of CPU to RAM will decrease substantially. Unforeseen economic events may always happen, but they are unforeseen. It's always possible that current trends would reverse, but that's a different matter from assuming they are likely to reverse.
> Overall, I think using C++ with all of its modern features should be in the same ballpark of safety/speed as Zig, with Zig having better compile times.
I don't know how reasonable it is to think that. If Rust's value comes from eliminating spatial and temporal memory safety issues, surely there's value in eliminating the more dangerous of the two, which Zig does as well as Rust (but C++ doesn't).
But even if you think that's reasonable for some reason, I think it's at least as reasonable to think the opposite, given that in almost 30 years of programming in C++, by far my biggest issue with the language has been its complexity and implicitness, and Zig fixes both. Given how radically different Zig is from C++, my preference for Zig stems precisely from it solving what is, to me, the biggest issue with C++.
> Also, I don't see how much of Zig's design could really assist code reviews and testing.
Because it's both explicit and simple. There are no hidden operations performed by a routine that do not appear in that routine's code. In C++ (or Rust), to know whether there's some hidden call to a destructor/trait, you have to examine all the types involved (to make matters worse, some of them may be inferred).
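For example (a contrived sketch): the allocation, the error-path cleanup, and the ownership transfer are all spelled out in the routine's body, so a reviewer doesn't have to chase type definitions to find them.

```zig
const std = @import("std");

// Everything this routine does to memory is visible in its body:
// the allocation, the error-path cleanup, and the ownership handoff.
fn render(allocator: std.mem.Allocator, name: []const u8) ![]u8 {
    const out = try std.fmt.allocPrint(allocator, "<p>{s}</p>", .{name});
    errdefer allocator.free(out); // explicit cleanup if we bail below
    if (out.len > 1024) return error.TooLong;
    return out; // the caller visibly owns `out` and must free it
}
```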
The fact that Zig doesn't have Rust's guarantees doesn't mean Zig does not have safety checks. The safety checks that Zig does have are different, and are different in a way that's uniquely useful for this particular project.
Zig's checks absolutely don't go to the extent that Rust's do, which is kind of the point here. If you do need to go beyond safe code in Rust, Zig is safer than unsafe code in Rust.
Saying Zig lacks safety checks is unfortunate, although I wouldn't presume you meant it literally and just wanted to highlight the difference.
Thing is, those safety checks are also available in C and C++, provided that one uses the right tools like PVS and PurifyPlus (just to quote two examples), and now ongoing AI-based tooling efforts for verification; thus the question is why a language like Zig in the 21st century, other than "I don't like either C++ or Rust".
You did. Or, alternatively, if you don't equate "checks" with "features", then I never said you said that so what are you complaining about?
> If it would have Rusts guarantees (as in: The same) it would be more complex.
Which is true (if tautological), and is basically what the GP said:
> Zig's manual memory management might actually be more ergonomic for a DOM implementation specifically because you can model the graph relationships more directly without fighting the compiler, provided you have a robust strategy for the arena allocation
Both you and the GP agree that Rust is more complex.
You objected to this with:
> It's unfortunate that "writing safe code" is constantly being phrased in this way.
Upon which I commented that Zig does have safety features, even if they're not covering you as well as Rust's. Which is, again, in line with "provided you have a robust strategy for the arena allocation."
Now, if you think I'm going overboard with this, I agree with you -- and this is the exact feeling I have when I look at Rust :)
But arenas have substantial benefits. They may be one of the few remaining reasons to use a low-level (or "systems programming") language in the first place. Most things are tradeoffs, and the question isn't what you're giving up, but whether you're getting the most for what you're paying.
First, Zig is more modern than any of the languages you mention. Second, I'm not aware that any of those languages offer arenas similar in their power and utility to Zig's while offering UAF-freedom at the same time. Note that "type-safe" arenas are neither as powerful as general purpose arenas nor fully offer UAF-freedom. I could be wrong (and if I am, I'd really love to see an arena that's both general and safe), but I believe that in all these languages you must compromise on either safety or the power of the arena (or both).
"modern: relating to the present or recent times as opposed to the remote past". I agree it's not a useful concept here but I didn't bring it up. Specifically, I don't think there's any consideration that had gone into the design of D, C#, or Rust that escaped Zig's designer. He just consciously made different choices based on the data available and his own judgment.
Not really modern, it is Object Pascal/Modula-2 repackaged in C like syntax.
The only thing relatively modern would be compile time execution, if we forget about how long some languages have had reader macros, or similar capabilities like D's compile time metaprogramming.
Also it is the wrong direction when the whole industry is moving into integrity by default on cyber security legislation.
There are several examples around of doing arenas in said languages.
> Not really modern, it is Object Pascal/Modula-2 repackaged in C like syntax.
That's your opinion, but I couldn't disagree more. It places partial evaluation as its biggest focus more so than any other language in history, and is also extraordinarily focused on tooling. There isn't any piece of information nor any technique that was known to the designers of those older languages and wasn't known to Zig's designer. In some situations, he intentionally chose different tradeoffs on which there is no consensus. It's strange to insist that there is some consensus when many disagree.
I have been doing low-level programming (in C, C++, and Ada in the 90s) for almost 30 years, and over that time I have not seen a low-level language that's as revolutionary in its approach to low-level programming as Zig. I don't know if it's good, but I find its design revolutionary. You certainly don't have to agree with my assessment, but you do need to acknowledge that some people very much see it that way, and don't think it's merely a "repackaged" Pascal-family language in any way.
I guess you could say that you personally don't care about Zig's primary design points and when you ignore them you're left with something that you find similar to other languages, but that's like saying that if you don't care about Rust's borrow- and lifetime checking, it's basically just a mix of C++ and ML. It's perfectly valid to not care about what matters most to some language's designer, and it's perfectly valid to claim that what matters to them most is misguided, but it is not valid to ignore a language's design core when describing it just because you don't care about it.
> Also it is the wrong direction when the whole industry is moving into integrity by default on cyber security legislation.
Again, that is an opinion, but not one I agree with. For one, Rust isn't as safe as other safe languages given its relatively common reliance on unsafe. If spatial and temporal memory safety were the dominating concerns, there wouldn't be a need for Rust, either (and it wouldn't have exposed unsafe). Clearly, everyone recognises that there are other concerns that sometimes dominate, and it's pretty clear that some people, who are no less knowledgeable about the software industry and its direction, prefer Zig. There is no consensus here either way, and I'm not sure there can be one. They are different languages that suit different people/projects' preferences.
Now, I agree that there's definitely more motion toward more correctness - which is great! - and I probably wouldn't write a banking or healthcare system in Zig, but I wouldn't write it in Rust, either. People reach for low level languages precisely when there may be a need to compromise on safety in some way, and Rust and Zig make different compromises, both of which - as far as I can tell - can be reasonable.
> There are several examples around of doing arenas in said languages.
From what I can tell, all of them either don't provide freedom from UAF, or they're not nearly as general as a proper arena.
I know of one safe and general arena design in RTSJ, which immediately prevents a reference to a non-enclosing arena from being written into an object, but it comes with a runtime cost (which makes sense for hard realtime, where you want to sacrifice performance for worst-case predictability).
> You certainly don't have to agree with my assessment, but you do need to acknowledge that some people very much see it that way, and don't think it's merely a "repackaged" Pascal-family language in any way.
My opinion is that 99% of those people never knew anything beyond C and C++ for systems programming, and even believe the urban myth that before C there were no systems programming languages.
Similar to those that only discover compiled languages and type systems exist, after spending several years with Python and JavaScript, and then even Go seems out of this world.
I don't know about the numbers. Some of Zig's famous proponents are Rust experts. I don't know the specific percentages, but you could level a similar accusation at Rust's proponents, too, i.e. that they have insufficient exposure to alternative techniques. And BTW, Zig's approach is completely different from that of C, C++, Rust, or the Pascal family languages. So if we were to go by percentages, we could dismiss all criticisms against Zig on the same basis (i.e. most people may think it's like C++, or C, or Modula, but since it isn't, then their criticisms are irrelevant). In fact, because Rust is a fairly old language and Zig isn't, it's more likely that more Zig developers are familiar with Rust than vice-versa.
But also, I don't see why that even matters. If even some people with a lot of experience in other approaches to systems programming, and even with experience in deeper aspects of software correctness, accept this assessment, then you can't wave it away. It's okay to think we're wrong (after all, no one has sufficient empirical evidence to support their claim either way), but you cannot ignore the fact that some of those with extensive experience disagree with you, just as I'm happy to accept that some of them disagree with me.
Wouldn't C# and Swift make it tough to integrate with other languages? Whereas something written in Zig (or Rust) can integrate with anything that can use the C ABI?
It's harder than you'd expect. Depending on what kind of bucketing an arena does (by size or by type), a stale reference may end up pointing to another piece of memory of the correct type, which is still wrong, but more subtly than a crash.
I'm not familiar enough with Zig to want to dive into architecture, the point I wanted to make is general to arenas in any language that can have a stale reference.
I once had a stale stack reference bug in C that lived for a year, because the exact same object was created at the exact same offset every time it was used, which is a similar situation.
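In code, the failure mode looks something like this contrived Zig sketch (the same applies to arena designs in any language without generation tags):

```zig
const std = @import("std");

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();

    const a = try arena.allocator().create(u32);
    a.* = 1;

    // Recycle the arena's memory: `a` is now stale, but still points at
    // plausible memory of the right size and alignment.
    _ = arena.reset(.retain_capacity);

    const b = try arena.allocator().create(u32);
    b.* = 2;

    // Use-after-free: in practice `a` likely aliases `b`, so this prints
    // 2 rather than crashing - a quiet wrong answer instead of a fault.
    std.debug.print("{d}\n", .{a.*});
}
```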
Too late now, but is the requirement for shared mutable state inherent in the problem space? Or is it just because we still thought OOP was cool when we started on the DOM design?
Yes. It is required for W3C's DOM APIs, which give access to parent nodes and allow all kinds of mutations whenever you want.
Event handlers + closures also create potentially complex situations you can't control, and you'll need a cycle-breaking GC to avoid leaking like IE6 did.
You can make a more restricted tree if you design your own APIs with immutability/ownership/locking, but that won't work for existing JS codebases.
I don't think it's really that bad in Rust. If you're happy with an arena in Zig you can do exactly the same thing in Rust. There are a ton of options listed here: https://donsz.nl/blog/arenas/
Some of them even prevent use after free (the "ABA mitigation" column).
I'm not super experienced with Zig, but I always think that in the same way that Rust forces you to think about ownership (by having the borrow checker - note: I think of this as a good thing personally), Zig makes you think upfront about your allocation (by making everything that can allocate take an allocator argument).
It makes everything very explicit, and you can always _see_ where your allocations are happening in a way that you can't (as easily, or as obviously - imo) in Rust.
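For instance, a sketch (joinPath is a made-up function, but the shape is idiomatic): the signature alone tells the reader that this function allocates, and the call site shows who pays for it.

```zig
const std = @import("std");

// The `allocator` parameter makes the allocation visible in the signature.
fn joinPath(allocator: std.mem.Allocator, dir: []const u8, file: []const u8) ![]u8 {
    return std.fmt.allocPrint(allocator, "{s}/{s}", .{ dir, file });
}

test "caller sees (and owns) the allocation" {
    const path = try joinPath(std.testing.allocator, "/tmp", "out.txt");
    defer std.testing.allocator.free(path); // the test allocator checks for leaks
    try std.testing.expectEqualStrings("/tmp/out.txt", path);
}
```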
It seems like something I quite like. I'm looking forward to Rust getting an effects system/allocator API to help a little more with that side of things.
Yep, Rust forces you to think about lifetimes. Zig only suggests it (because you're forced to think about allocation, which usually makes you naturally think about the lifetime), but does not help you with it/ensure correctness.
It's still nice sometimes to ensure that you have to think about allocation everywhere, and can change the allocation strategy for something that works for your use case (hence why I'm looking forward to the allocator API in Rust, to get the best of both worlds).
That's true and I liked the idea of it until I started writing some Zig where I needed to work with strings. Very painful. I'm sure you typically get a bit faster string manipulation code than what you'd get with Rust but I don't think it's worth the cost (Rust is pretty fast already).
Can't agree more. I hope someone puts some work into a less painful way to manage strings in std. I would, but I don't manipulate strings quite enough to support use cases beyond basically concatenation...
As of 0.15.X, you can build strings using a std.Io.Writer. You can either:
- use std.Io.Writer.fixed to use a slice for the memory, and use .buffered() when you're done to get the subslice of the buffer that contains your string
or
- Create an instance of std.Io.Writer.Allocating with an allocator, and use .toOwnedSlice() when you're done to get your allocated string.
In both cases you just use regular print functions to build your string.
Depending on your needs, it may also be good to use a fixed writer with a dynamically allocated slice, where the size of the allocation is computed using std.fmt.count(). This can be better than using std.Io.Writer.Allocating because you can avoid doing multiple allocations.
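Putting both variants together, a sketch following the 0.15 API described above (untested against minor point-release drift):

```zig
const std = @import("std");

pub fn main() !void {
    // Variant 1: fixed buffer; the result is a sub-slice of `buf`.
    var buf: [64]u8 = undefined;
    var fw = std.Io.Writer.fixed(&buf);
    try fw.print("{s}, {s}!", .{ "Hello", "world" });
    std.debug.print("{s}\n", .{fw.buffered()});

    // Variant 2: allocating; the result is an owned heap slice.
    var aw = std.Io.Writer.Allocating.init(std.heap.page_allocator);
    defer aw.deinit();
    try aw.writer.print("{d} items", .{42});
    const owned = try aw.toOwnedSlice();
    defer std.heap.page_allocator.free(owned);
    std.debug.print("{s}\n", .{owned});
}
```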
No, you can't do the same thing in Rust, because Rust crates and the standard library generally use the global allocator and not any arena you want to use in your code.
I mean you can store the nodes in an arena so you don't have to deal with the borrow checker getting upset with your non-tree ownership structure. That's the context. We weren't talking about arena use for speed/efficiency purposes. In that case you are right; it's much more awkward to use custom allocators in Rust.
A language which is not 1.0, and has repeatedly changed its IO implementation in a non-backwards-compatible way is certainly a courageous choice for production code.
So, I'm noodling around with writing a borrow checker for Zig, and you don't get to appreciate this working with Zig on a day-to-day level, but the internals of how the Zig compiler works are AMAZING. Also, the io refactor will (I think) let me implement aliasing checking (alias xor mutable).
In my experience, migrating small-scale projects takes from minutes to single digit hours.
The standard library is changing. The core language semantics - not so much. You can update from std.ArrayListUnmanaged to std.array_list.Aligned with two greps.
- Fetch and execute JavaScript that manipulates the DOM
But not the following:
- Fetch and parse CSS to apply styling rules
- Calculate layout
- Fetch images and fonts for display
- Paint pixels to render the visual result
- Composite layers for smooth scrolling and animations
So it's effectively a net+DOM+script-only browser with no style/layout/paint.
---
Definitely fun for me to watch as someone who is making a lightweight browser engine with a different set of trade-offs (net+DOM+style/layout/paint-only with no script)
When I was working on something that used headless browser agents before, the ability to take a screenshot (or even a recording) was really great for debugging... so I am not sure about the "no paint". But hey, everything in life is a trade-off.
Really depends on what you want to do with the agents. Just yesterday I was looking for something like this for our web access MCP server[0]. The only thing that it needs to do is visit a website and get the content (with JS support, as it's expected that most pages today use JS), and then convert that to e.g. Markdown.
I'm not too happy with the fact that Chrome is one of the most memory-hungry parts of all the MCP servers we have in use. The only thing that exceeds it in our whole stack is the ClickHouse shard, which comes with Langfuse. Especially if you are looking to build a "deep research" feature that may access a few hundred webpages in a short timeframe, having a lightweight alternative like Lightpanda can make quite the difference.
Well, it was "normal" crawlers that needed to work perfectly and deterministically (as best as possible), not probabilistically (AI); speed was no issue. And I wanted to debug when something went wrong. So yeah for me it was crucial to be able to record/screenshot.
So yeah, everything is a trade-off, and we needed a different trade-off; we actually decided not to use headless Chromium, because there are slight differences, so we ended up using full Chrome (not even Chromium, again - slight differences) with xvfb. It was very, very memory hungry; but again, that was not an issue.
(I used "agent" as in "browser agent", not "AI agent", I should be more precise I guess.)
yeah I feel the same, I think even having a screenshot of part of a rendered page or the full page can be useful even for machines, considering how heavy that HTML can be to parse and how expensive for LLM context. Sometimes a (sub)screenshot is just a better kind of compression.
I've spent some time feeding LLMs scraped web pages, and I've found that retaining some style information (text size, visibility, decoration, image content) is non-trivial.
> So it's effectively a net+DOM+script-only browser with no style/layout/paint.
> ---
> Definitely fun for me to watch as someone who is making a lightweight browser engine with a different set of trade-offs (net+DOM+style/layout/paint-only with no script)
Both projects (Lightpanda, DioxusLabs/blitz) sound very interesting to me. What do you think about rendering patterns that require both script+layout for rendering, e.g. virtual scrolling of large tables?
What would be a good pattern to make virtual scrolling work with Lightpanda or Blitz?
So Blitz does technically have scripting, it's just Rust scripting rather than JavaScript scripting. So the plan for virtual scrolling would likely be to implement it in Rust.
If your aim is to render a UI (ala Electron/Flutter) then we have a React-style framework (Dioxus) that runs on top of Blitz, and allows you access to the low-level Rust API of the DOM for advanced use cases (although it's still a WIP and this API is a bit rough atm). I'm also hoping to eventually have a built-in `RecyclerView`-like widget for this (that can bypass the style/layout systems for much more efficient virtual scrolling).
Thanks! But I meant JS based virtual scrolling in web pages. E.g. dynamic data tables that only render the part of the table that fits in the viewport.
For scrolling, when using Intersection Observer, we currently assume all elements are visible. So, if you register an observer, we will dispatch an entry indicating an intersection with a ratio of 1.0.
It's so tiring that every time there's a post about something being implemented in Zig or C or C++, the Rust brigade shows up trying to pick a fight.
It’s a site where programming nerds congregate to waste time arguing with each other. Where do you think you are?
This same pattern used to play out with Ruby, Lisp, and other languages in different eras of this site. It will probably never stop and calling it out seems to just fan the flames more than anything else.
As part of the "all software should be liable" brigade: it is a matter of misplaced goals, now that the cybersecurity agencies have started looking into the matter.
Innovation shouldn't happen for the sake of innovation itself. Innovation should serve a purpose. And the purpose of having programming languages is to overcome the limitations of the human mind: of our attention span, of our ability to manipulate concepts expressed in abstractions and syntax. We don't know how long we'll need this.
I really like Zig; I wish it had appeared several years earlier. But rewriting everything in Zig might soon no longer make practical sense.
I agree that programming languages will no longer need to be as accessible to humans.
However there is still a strong argument to be made for protections/safety that languages can provide.
e.g. would you expect a model (assuming it had the same expertise in each language) to make more mistakes in ASM, C, Zig, or Rust?
I imagine most would agree that ASM/C would be likely to have the most mistakes simply because fewer constraints are enforced as you go closer to the metal.
So, while we might not care about how easy it is for a human to read/write, there will still be a purpose for innovation in programming languages. But those innovations, IMO, will be more focused on how to make languages easier for AI.
> would you expect a model (assuming it had the same expertise in each language) to make more mistakes in ASM, C, Zig, or Rust?
"assuming it had the same expertise in each language" is the most important part here, because the expertise of AI with these languages is very different. And, honestly, I bet on C here because its code base is the largest, the language itself is the easiest to reason about and we have a lot of excellent tooling that helps mitigate where it falls short.
> I imagine most would agree that ASM/C would be likely to have the most mistakes simply because fewer constraints are enforced as you go closer to the metal.
We need these constraints because we can't reliably track all the necessary details. But AI might be much more capable (read: scalable) at that, so all the complexity we need to accumulate in a programming language might be something it simply knows, by the way it's built.
I’m going to assume you’re open to an honest discussion here.
> "assuming it had the same expertise in each language" is the most important part here, because the expertise of AI with these languages is very different.
You are correct, but I am trying to illustrate that assuming some ideal system with equal expertise, the languages with more safety would win out in productivity/bugs over those with less safety.
As in to say that it could be worth investing further in safer programming languages because AI would benefit.
> We need these constraints because we can't reliably track all the necessary details.
AI cannot reliably track the details either (yet, though I am sure it can be done). Even if it could, it would be a complete waste of resources (tokens).
Why have an AI determine the type of a variable when it could be done in a deterministic manner with a compiler or linter?
To me these arguments closely mirror/follow arguments of static/dynamically typed languages for human programmers. Static type systems eliminate certain kinds of errors and can produce higher quality programs. AI systems will benefit in the same way if not more by getting instant feedback on the validity of their program.
Yes, I get your point and I think your arguments are valid, it's just not the whole story.
The thing about programming languages is that, for both their creators and their advocates, a significant part of the driving motivation is emotional, not rational necessity alone. Learning a new programming language along with its ecosystem is an investment of time and effort; it is something that our brains mark as important and therefore protected (I'm looking at Rust). Now, when AI is going to write all the code, that emotional part might eventually dissolve and move to something else, leaving the choice of programming language much less relevant. Like the list of choices Claude Code shows you in planning mode: "do you wish to use SQLite, PostgreSQL or MySQL as a database for your project?" (*picks the "Recommended" option)
That said, I hope that Zig will make it to version 1.0 before AI turns all the tables and sweeps many things away. It might be that I'm biased and overestimating the irrational part; if so, I'll be glad to admit my mistake.
This reminds me of the Servo project's journey. Always impressed to see another implementation of the WHATWG specs.
It's interesting to see Zig being chosen here over Rust for a browser engine component. Rust has kind of become the default answer for "safe browser components" (e.g., Servo, Firefox's oxidation), primarily because the borrow checker maps so well to the ownership model of a DOM tree in theory. But in practice, DOM nodes often need shared mutable state (parent pointers, child pointers, event listeners), which forces you into Rc<RefCell<T>> hell in Rust.
Zig's manual memory management might actually be more ergonomic for a DOM implementation specifically because you can model the graph relationships more directly without fighting the compiler, provided you have a robust strategy for the arena allocation. Excited to learn from Lightpanda's implementation when it's out.
Hi, I am Francis, founder of Lightpanda. We wrote a full article explaining why we chose Zig over Rust or C++, if you are interested: https://lightpanda.io/blog/posts/why-we-built-lightpanda-in-...
Our goal is to build a headless browser, rather than a general purpose browser like Servo or Chrome. It's already available if you would like to try it: https://lightpanda.io/docs/open-source/installation
I see you're using html5ever for HTML parsing, and like its trait/callback-based API (me too). It looks like style/layout is not in scope at the moment, but if you're ever looking at adding style/layout capabilities to lightpanda, then you may find it useful to know that Stylo [0] (CSS / style system) and Taffy [1] (box-level layout) are both available with a similar style of API (also Parley [2], which has a slightly different API style but can be combined with Taffy to implement inline/text layout).
[0]: https://github.com/servo/stylo
[1]: https://github.com/DioxusLabs/taffy
[2]: https://github.com/linebender/parley
---
Also, if you're interested in contributing C bindings for html5ever upstream then let me know / maybe open a github issue.
Off topic note: I read the website and a few pages of the docs and it's unclear to me what I can safely use Lightpanda for. Like, say I wanted to swap it in as my engine on Playwright: what are the tradeoffs? What is implemented, and what isn't?
Thanks for the feedback, we will try to make this clearer on the website. Lightpanda works with Playwright, and we have some docs[1] and examples[2] available.
Web APIs and CDP specifications are huge, so this is still a work in progress. Many websites and scripts already work, while others do not; it really depends on the case. For example, on the CDP side, we are currently working on adding an Accessibility tree implementation.
[1] https://lightpanda.io/docs/quickstart/build-your-first-extra...
[2] https://github.com/lightpanda-io/demo/tree/main/playwright
Maybe you should recommend a recipe for configuring playwright with both chromium and lightpanda backends so a given project can compare and evaluate whether lightpanda could work given their existing test cases.
I was actually interested in using lightpanda for E2Es, to be honest, because halving the feedback cycle would be very valuable to me.
I think it's really more of an alternative to JSDom than it is an alternative to Chromium. It's not going to fool any websites that care about bots into thinking it's a real browser in other words.
Would be helpful to compare Lightpanda to WebKit; Playwright has a driver for it, for example, and it's far faster and less resource-hungry than Chrome.
When I read your site copy, it struck me as either naive to that or a somewhat misleading comparison. My feedback would be to just address it directly alongside Chrome.
Thanks Francis, appreciate the nice & honest write-up with the thought process (while keeping it brief).
Would be great if it could be used as a wasm library... Just saying... Is it? I would actually need and use this.
Respectfully, for browser-based work, simplicity is absolutely not a good enough reason to use a memory-unsafe language. Your claim that Zig is in some way safer than Rust for something like this is flat out untrue.
What is your attack model here? Each request lives in its own arena allocator, so there is no way for any potentially malicious JavaScript to escape and read memory owned by any other request, even if there is a coding mistake. Otherwise, VM safety is delegated to the V8 core.
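To sketch the pattern (Lightpanda itself is Zig; this illustration uses Rust with the `bumpalo` crate standing in for an arena allocator, so treat the details as assumptions, not their actual code):

```rust
// Per-request arena pattern: every allocation a request makes is tied to an
// arena created when the request starts and freed wholesale when it ends,
// so nothing can be shared, or leaked, across requests.
use bumpalo::Bump;

struct Request<'a> {
    dom_nodes: Vec<&'a str>, // arena-backed data, cannot outlive the arena
}

fn handle_request(html: &str) {
    let arena = Bump::new(); // one arena per request
    let node: &str = arena.alloc_str(html); // arena-owned copy of the input
    let req = Request { dom_nodes: vec![node] };
    println!("parsed {} node(s)", req.dom_nodes.len());
} // `arena` drops here: the whole request's memory goes away at once

fn main() {
    handle_request("<p>hello</p>");
}
```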
In that blog post, the author said safer than C, not Rust.
Choosing something like Zig over C++ on simplicity grounds is going to be a false economy. C++ features exist for a reason. The complexity is in the domain. You can't make a project simpler by using a simplistic language: the complexity asserts itself somehow, somewhere, and if a language can't express the concept you want, you'll end up with circumlocution "patterns" instead.
Build system complexity disappears when you set it up too. Meson and such can be as terse as your Curl example.
I mean, it's your project, so whatever. Do what you want. But choosing Zig for the stated reasons is like choosing a car for the shape of the cupholders.
Your Swiss Army knife with 97 oddly-shaped tools may be able to do any job anyone could ask of it, but my Swiss Army knife with 10 well-designed tools that are optimal for my set of tasks will get my job done with much less frustration.
> C++ features exist for a reason.
But sometimes not for good ones. Lots of domains make tradeoffs about which features of C++ to actually make use of. It's an old language with a lot of cruft, used across a wide set of problems that don't necessarily share engineering trade-offs.
That’s not fully true, though. There are different types of complexity:
- project requirements
- requirements forced upon you due to how the business is structured
- libraries available for a particular language ecosystem
- paradigms / abstractions that a language is optimised for
- team experiences
Your argument is more akin to saying “all general purpose languages are equal” which I’m sure you’d agree is false. And likewise, complexity can and will manifest itself differently depending on language, problems being solved, and developer preferences for different styles of software development.
So yes, C++ complexity exists for a reason (though I’d personally argue that “reason” was due to “design by committee”). But that doesn’t mean that reason is directly applicable to the problems the LightPanda team are concerned about solving.
C++ features for complexity management are not ergonomic though, with multiple conflicting ideas from different eras competing with each other. Sometimes demolition and rebuild from foundations is paradoxically simpler.
A lot of them only still exist for backwards compatibility's sake, though. And a decent number exist because adding something as a language extension, rather than building the language around it, has consequences.
C++ features exist for a reason but it may not be a reason that is applicable to their use case. For example, C++ has a lot of features/complexity that are there primarily to support low-level I/O intensive code even though almost no one writes I/O intensive code.
I don't see why C++ would be materially better than Zig for this particular use case.
I don't think that a language that was meant to compete with C++ and in 10+ years hasn't captured 10% of C++'s (already diminished) market share could be said to have become "kind of the default" for anything (and certainly not when that requires generalising from n≅1).
It has for Amazon, Adobe, Microsoft, Google and the Linux kernel.
It remains to be seen which big name will make Zig unavoidable.
> It has for Amazon, Adobe, Microsoft, Google and the Linux kernel.
I don't think so. I don't know about Adobe, but it's not a meaningful statement for the rest. Those companies default to writing safe code in languages other than Rust, and the Linux kernel defaults to unsafe code in C. BTW, languages favoured by those projects/companies do not reliably represent industry-wide preferences, let alone defaults. You could certainly say that of the two languages accepted so far in the Linux kernel, the only safe one is Rust, but there's hardly any "default" there.
> It remains to be seen which big name will make Zig unavoidable.
I have no idea whether or not Zig will ever be successful, but at this point it's pretty clear that Rust's success has been less than modest at best.
It is a clear mandate at those companies that whatever used to be C or C++ should be written in Rust for greenfield development.
Whatever could be done in programming languages with automatic memory management was already being done.
Anyone deploying serverless code onto Amazon instances is running on top of Firecracker; my phone has Rust code running on it; whenever Windows 11 draws something on the screen, it goes through a Rust rewrite of the GDI regions logic; and all the Azure networking traffic going through Azure Boost cards does so via Rust firmware.
Adobe is the sponsor of the Hylo programming language, and key figures in the C++ community are doing Rust talks nowadays.
"Adobe’s memory safety roadmap: Securing creativity by design"
https://blog.adobe.com/security/adobes-memory-safety-roadmap...
Any hobby language author would love to have 1% of Rust's supposedly modest success; I really don't get the continuous downplaying of such an achievement.
> It is a clear mandate at those companies that whatever used to be C or C++ should be written in Rust for greenfield development. Whatever could be done in programming languages with automatic memory management was already being done.
I don't know how true either of these statements is or to what extent the mandate is enforced (at my company we also have language mandates, but what they mean is that to use a different language all you need is an explanation and a manager to sign off), but I'll ask acquaintances in those companies (Except Adobe; don't know anyone there. Although the link you provided doesn't say Rust; it says "Rust or Swift". It also commits only to "exploring ways to reduce the use of new C and C++ code in safety critical parts of our products to a fraction of current levels").
What I do know is that the rate at which Rust is adopted, is significantly lower than the rate at which C++, Java, C#, Python, TS, and even Go were adopted, even in those companies.
Now, there's no doubt that Rust has some real adoption, and much more than just hobby languages. Its rate of adoption is significantly higher than Haskell's, Clojure's, or Elixir's was (but lower than Ruby's or PHP's). That is without a doubt a great accomplishment, but not what you'd expect from a language that wishes to become the successor to C++ (and doesn't suffer from a lack of hype despite its advanced age). Languages that offer a significant competitive advantage, or even the perception of one, spread at a faster pace, certainly those that eventually end up in the top 5.
I also think there's little doubt that the Rust "base" is more enthusiastic than that of any language I remember except maybe that of Haskell's resurgence some years back (and maybe Ruby), and that enthusiasm may make up for what they lack in numbers, but at some point you need the numbers. A middle-aged language can only claim to be the insurgent for so long.
P.S.
I spoke with someone at AWS, and he says that there is an investment in using Rust for low-level code, but there is no company-wide mandate, and projects are free to pick C or C++.
> It is a clear mandate at those companies that whatever used to be C or C++ should be written in Rust for greenfield development.
> Any hobby language author would love to have 1% of Rust's supposedly modest success; I really don't get the continuous downplaying of such an achievement.
This is a political achievement, not a technical one. People are bitter about it as it doesn't feel organic and feels pushed onto them.
There is technical achievement in:
> Anyone deploying serverless code into Amazon instances is running of top of Firecracker, my phone has Rust code running on it, and whatever Windows 11 draws something into the screen, it goes through Rust rewrite of the GDI regions logic, all the Azure networking traffic going through Azure Boost cards does so via Rust firmware.
Ignoring it doesn't make those achievements political rather than technical.
I was referring to the mandate to use it at big companies. This is a political achievement. Teams/contributors making their own choice and then shipping good software counts as a technical one, but that wasn't the main point of the post I replied to.
> I was referring to the mandate to use it at big companies.
I've worked in almost all of big tech, and these companies don't create mandates just because of "trust me bro" or to gain some "political achievement". There are teams who champion new technologies/languages; they create proof of what the new technology will bring to the table that cannot be filled by existing ones. I left Amazon 7 years ago, so I don't know about recent developments. However, at Meta/Google, teams are encouraged to choose from the mandated languages, and if they can't, they need to request an exemption and justify the exception.
I wonder what you consider a successful language.
Rust appeared in 2012. Zig in 2016. I consider them both successful programming languages, but given they are only 4 years apart, it's easy to compare Zig today with Rust 4 years back and see they are very far apart in terms of maturity, progress, community size and adoption.
Rust is a very successful language so far, but expecting that in 10y it can overthrow C++ is silly. Codebases add up more than they are replaced.
While certain teams within Google are using Rust by default, I'm not sure Rust is anywhere close to C++ in the scale of new lines of code committed per week.
For Android specifically, by Q3 of last year more new lines of Rust were being added per week than new lines of C++: https://security.googleblog.com/2025/11/rust-in-android-move...
Sure, and Android is a small part of Google. Everyone in ads, search, and cloud is still predominantly C++ (or something higher-level like Java). Rust is gaining momentum, but overall it's still small.
The problem is that the number of browser engines is n=2.
Interestingly, Ladybird, which aims at being the n = 3, is also written in C++.
Ladybird is in the process of switching over to Swift, and has been for a little over a year now.
Not linking to the pedophilic nazi-site, and as Nitter is dead-ish, here is the full-text announcement archived on tildes: https://tildes.net/~comp/1j7m/ladybird_chooses_swift_as_its_...
> without fighting the compiler
It's unfortunate that "writing safe code" is constantly being phrased in this way.
The borrow checker is a deterministic safety net. Claiming Zig is easier ignores that its lack of safety checks is what makes it feel easier; if Zig had Rust’s guarantees, the complexity would be the same. Comparing them like this is apples vs. oranges.
That's a very narrow way of looking at things. ATS has a much stronger "deterministic safety net" than Rust, yet the reason to use Rust over ATS is that "fighting the compiler" is easier in Rust than in ATS. On the other hand, if any cost were worth whatever level of safety Rust offers for any project, then Rust wouldn't exist, because there are far more popular languages with equal (or better) safety. So Rust's design itself is a rejection of the claims that 1. more compile-time safety is always better, even if it complicates the language (or everyone who uses Rust should use ATS), and 2. any cost is worth paying for safety (or Rust wouldn't exist in the first place).
Safety has some value that isn't infinite, and a cost that isn't zero. There are also different kinds of safety with different value and different costs. For example, spatial memory safety appears to have more value than temporal safety (https://cwe.mitre.org/top25/archive/2025/2025_cwe_top25.html) and Zig offers spatial safety. The question is always what you're paying and what you're getting in return. There doesn't appear to be a universal right answer. For some projects it may be worth it to pay for more safety, and for other it may be better to pay for something else.
You’re changing the argument. The point wasn’t whether more safety is “worth it”, but that comparing ease while ignoring which invariants are enforced is misleading. Zig can feel simpler because it encodes fewer guarantees. I’m not saying one approach is better, only that this comparison shifts the goalposts.
Then we're in agreement. Both languages give you something that may be important, but it has a price.
You're changing the argument again. I'm not in agreement with your statement.
Imo "safety" in safe Rust is higher than it is in more popular languages.
Data races, type state pattern, lack of nulls, ...
This is comparing what Rust has and other languages don't without also doing the opposite. For example, Java doesn't enforce data-race freedom, but its data races are safe, which means you can write algorithms with benign races safely (which are very useful in concurrent programming [1]), while in Rust that requires unsafe. Rust's protection against memory leaks that can cause a panic is also weaker, as is Rust's ability to recover from panics in general. Java is now in the process of eliminating the unsafe escape hatch altogether except for FFI. Rust is nowhere near that. I.e. sometimes safe Rust has guarantees that mean that programs need to rely on unsafe code more so than in other languages, which allows saying that safe Rust is "safer" while it also means that fewer programs are actually written purely in safe Rust. The real challenge is increasing safety without also increasing the number of programs that need to circumvent it or increasing the complexity of the language further.
[1]: A benign race is when multiple tasks/threads can concurrently write to the same address, but you know they will all write the same value.
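A minimal Rust sketch of such a benign race (racy memoization of a deterministic value); note that it has to go through relaxed atomics in Rust, since doing the same with plain aliased writes would require unsafe, which is the contrast with Java drawn above:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;

static CACHED: AtomicU64 = AtomicU64::new(0); // 0 means "not computed yet"

fn expensive() -> u64 {
    42 // stand-in for a deterministic computation
}

fn get() -> u64 {
    let v = CACHED.load(Ordering::Relaxed);
    if v != 0 {
        return v;
    }
    let v = expensive();
    // Benign race: several threads may store concurrently, but they all
    // store the same value, so whichever write lands is correct.
    CACHED.store(v, Ordering::Relaxed);
    v
}

fn main() {
    let handles: Vec<_> = (0..4).map(|_| thread::spawn(get)).collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), 42);
    }
}
```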
> 1. more compile-time safety is always better, even if it complicates the language (or everyone who uses Rust should use ATS), and 2. any cost is worth paying for safety (or Rust wouldn't exist in the first place).
You keep repeating this. It's not true. If what you said was true, Rust would have adopted HKT, and God knows whatever type astronomy Haskell & Scala cooked up.
There is a balancing act, and Rust decided to plant a flag in memory safety without GC. The fact that Zig didn't expand on this but went backwards is more of an indictment of programmers unwilling to adapt and perfect what came before, preferring to reinvent it in their own, worse way.
> There are also different kinds of safety with different value and different costs. For example, spatial memory safety appears to have more value than temporal safety (https://cwe.mitre.org/top25/archive/2025/2025_cwe_top25.html)
How did you derive this from the top 25 of CWEs? Let's say you completely remove the spatial memory issues. You still get temporal memory issues at #6.
Rust does have a GC, but I agree it planted its flag at some intermediate point on the spectrum. Zig didn't "go backwards" but planted its own flag ever so slightly closer to C than to ATS (although both Rust and Zig are almost indistinguishable from C when compared to ATS). I don't know if where Rust planted its flag is universally better than where Zig planted its flag, but 1. no one else does either, 2. both are compromises, and 3. it's uncertain whether a universal sweet spot exists in the first place.
> How did you derive this from the top 25 of CWEs? Let's say you completely remove the spatial memory issues. You still get temporal memory issues at #6.
Sure, but the spatial issues rank higher. So if Rust's compromise (pay a price in language complexity, get both temporal and spatial safety) is reasonable, then so is Zig's (the price of temporal safety is too high for what you get in return, so spatial safety alone is a better deal). Neither goes as far as ATS in offering, in principle, the ability to avoid all bugs. Nobody knows whether Rust's compromise is universally better than Zig's or vice versa (or perhaps neither is universally better), but I find it really strange to arbitrarily claim that one compromise is reasonable and the other isn't, where both are obviously compromises that recognise there are different benefits and different costs, and that not every benefit is worth any cost.
> Rust does have a GC
It doesn't. Not by any reasonable definition of having a GC.
And "opt-in non-tracing GC that isn't used largely throughout the standard library" is not a reasonable definition.
> Nobody knows whether Rust's compromise is universally better than Zig's
When it comes to having more segfaults, we know. Zig "wins" most segfaults per issue Razzie Award.
This is what happens when you ignore one type of memory safety. You have to have both. Just ask Go.
> And "opt-in non-tracing GC that isn't used largely throughout the standard library" is not a reasonable definition.
Given that refcounting and tracing are the two classic GC algorithms, I don't see what specifying "non tracing" here does, and reference-counting with special-casing of the one reference case is still reference counting. I don't know if the "reasonable definition" of GC matters at all, but if it does, this does count as one.
I agree that the one-reference case is handled in the language and the shared reference case is handled in the standard library, and I think it can be reasonable to call using just the one-reference case "not a GC", but most Rust programs do use the GC for shared references. It is also true that Rust depends less on GC than Java or Go, but that's not the same as not having one.
> When it comes to having more segfaults, we know. Zig "wins" most segfaults per issue Razzie Award.
And Rust wins the Razzie Award for most painful development and lack of similarly powerful arenas. It's like declaring that you win by paying $100 for something while I paid $50 for something else without comparing what we got for the money, or declaring that you win by getting a faster car without looking at how much I paid for mine.
> This is what happens when you ignore one type of memory safety.
When you have less safety for any property, you're guaranteed to have more violations. This is what you buy. Obviously, this doesn't mean that avoiding those extra violations is necessarily worth the cost you pay for that extra safety. When you buy something, looking just at what you pay or just at what you get doesn't make any sense. The question is whether this is the best deal for your case.
Nobody knows if there is a universal best deal here let alone what it is. What is clear is that nothing here is free, and that nothing here has infinite value.
> I don't know if the "reasonable definition" of GC matters at all
If you define all non-red colors to be green, it is impossible to talk about color theory.
> And Rust wins the Razzie Award for most painful development and lack of similarly powerful arenas.
That's a non-quantifiable skill issue. Segfaults per issue is a quantifiable thing.
> When you have less safety for any property, you're guaranteed to have more violations.
If that's what you truly believed outside some debate point. Then you'd be advocating for ATS or Ada.SPARK, not Zig.
> If you define all non-red colors to be green, it is impossible to talk about color theory.
Except reference counting is one of the two classical GC algorithms (alongside tracing), so I think it's strange to treat it as "not a GC". But it is true that GC/no-GC distinction is not very meaningful given how different the tradeoffs that different GC algorithms make are. Even within these basic algorithms there are combinations. For example, a mark-and-sweep collector is quite different from a moving collector, and CPython uses refcounting for some things and tracing for others.
> That's a non-quantifiable skill issue. Segfaults per issue is a quantifiable thing.
That it's not as easily quantifiable doesn't make it any less real. If we compare languages only by easily quantifiable measures, there would be few differences between them (and many if not most would argue that we're missing the differences that matter to them most). For example, it would be hard to distinguish between Java and Haskell. It's also not necessarily a "skill issue". I think that even skilled Rust users would admit that writing and maintaining a large program in TypeScript or Java takes less effort than doing the same in Rust.
Also, ATS has many more compile-time safety capabilities than either Rust or Zig (in fact, compared to ATS, Rust and Zig are barely distinguishable in what they can guarantee at runtime), so according to your measure, both Rust and Zig lose when we consider other alternatives.
> Then you'd be advocating for ATS or Ada.SPARK, not Zig.
Quite the opposite. I'm pointing out that, at least as far as this discussion goes, every added value comes with an added cost that needs to be considered. If what you truly believed is that more compile-time safety always wins, then it is you who should be advocating for ATS over Rust. I'm saying that we don't know where the cost-benefit sweet spot is or, indeed, whether there's only one such sweet spot or multiple. I'm certainly not advocating for Zig as a universal choice. I'm advocating for selecting the right tradeoffs for every project, and I'm rejecting the claim that whatever benefits Rust or Zig have compared to the other are free. Both (indeed, all languages) require you to pay in some way to get what they're offering. In other words, I'm arguing that each can be more or less appropriate than the other depending on the situation, and against the position that Rust is always superior, which is based on only looking at its advantages and ignoring its disadvantages (which, I think, are quite significant).
> Except reference counting is one of the two classical GC algorithms (alongside tracing), so I think it's strange to treat it as "not a GC". But it is true that GC/no-GC distinction is not very meaningful given how different the tradeoffs that different GC algorithms make are.
That's not the issue. The issue is calling anything with opt-in reference counting a GC language. You're just fudging definitions to get to the desired talking point. I mean, C is, by that definition, a GC language. It can be equipped with one (see Boehm GC).
> That it's not as easily quantifiable doesn't make it any less real.
It makes it more subjective and easy to bias. Rust has a clear purpose. To put a stop to memory safety errors. What does "it's painful to use" mean? Is it like Lisp compared to Haskell, or C compared to Lisp?
> For example, it would be hard to distinguish between Java and Haskell.
It would be possible to objectively distinguish between Java and Haskell, as long as they aren't feature-by-feature compatible.
If you can make a program that halts on that feature, you can prove you're in a language with that feature.
> If what you truly believed is that more compile-time safety always wins, then it is you who should be advocating for ATS over Rust.
Yeah, because you fight a strawman. Having a safe language is a precondition but not enough. I want it to be as performant as C as well.
Second, even if you have the goal of moving to ATS, developing an ATS-like language isn't going to help. You need a mass of people to move there.
> Calling anything with opt-in reference counting a GC language
Except I never called it "a GC language" (whatever that means). I said, and I quote, "Rust does have a GC". And it does. Saying that it's "opt in" when most Rust programs use it (albeit to a lesser extent than Java or Go programs, provided we don't consider Rust's special case of a single reference to be GC) is misleading.
> Rust has a clear purpose. To put a stop to memory safety errors.
Yes, but 1. other languages do it, too, so clearly "stopping memory errors" isn't enough, 2. Rust does it in a way that requires much more use of unsafe escape hatches than other languages, so it clearly recognises the need for some compromise, and 3. Rust's safety very much comes at a cost.
So its purpose may be clear, but it is also very clear that it makes tradeoffs and compromises, which implies that other tradeoffs and compromises may be reasonable, too.
But anyway, having a very precise goal makes some things quantifiable, but I don't think anyone thinks that's what makes a language better than another. C and JS also have very clear purposes, but does that make them better than, say, Python?
> Having a safe language is a precondition but not enough. I want it to be as performant as C as well... You need a mass of people to move there.
So clearly you have a few prerequisites, not just memory safety, and you recognise the need for some pragmatic compromises. Can you accept that your prerequisites and compromises might not be universal and there may be others that are equally reasonable, all things considered?
I am a proponent of software correctness and formal methods (you can check out my old blog: https://pron.github.io), and I've learnt a lot over my decades in industry about the complexities of software correctness. When I choose a low-level language to switch to away from C++, my prerequisites are: a simple language with no implicitness (I want to see every operation on the page), as I think it makes code reviews more effective (the effectiveness of code reviews has been shown empirically, although not its relationship to language design), and fast compilation, to allow me to write more tests and run them more often.
I'm not saying that my requirements are universally superior to yours, and my interests also lie in a high emphasis on correctness (which extends far beyond mere memory safety), it's just that my conclusions and perhaps personal preferences lead me to prefer a different path to your preferred one. I don't think anyone has any objective data to support the claim that my preferred path to correctness is superior to yours or vice-versa.
I can say, however, that in the 1970s, proponents of deductive proofs warned of an impending "software crisis" and believed that proofs are the only way to avoid it (as proofs are "quantifiably" exhaustive). Twenty years later, one of them, Tony Hoare, famously admitted he was wrong, and that less easily quantifiable approaches turned out to be more effective than expected (and more effective than deductive proofs, at least of complicated properties). So the idea that an approach is superior just because it's absolute/"precise" is not generally true.
Of course, we must be careful not to extrapolate and generalise in either direction, but my point is that software correctness is a very complicated subject, and nobody knows what the "best" path is, or even if there is one such best path.
So I certainly expect a Rust program to have fewer memory-safety bugs than a Zig program (though probably more than a Java program), but that's not what we care about. We want the program to have the fewest dangerous bugs overall. After all, I don't care if my user's credit-card data is stolen due to a UAF or due to SQL injection. Do I expect a Rust program to have fewer serious bugs than a Zig program? No, and maybe the opposite (and maybe the same), due to my preferred prerequisites listed above. The problem with saying that we should all prefer the more "absolute" approach (even though it could possibly harm less easily quantifiable aspects) because it's at least absolute in whatever it does guarantee, is that this belief has already been shown to not be generally true.
(As a side note, I'll add that a tracing GC doesn't necessarily have a negative impact on speed, and may even have a positive one. The main tradeoff is RAM footprint. In fact, the cornerstone of tracing algorithms is that they can reduce the cost of memory management to be arbitrarily low given a large-enough heap. In practice, of course, different algorithms make much more complicated pragmatic tradeoffs. Basic refcounting collectors primarily optimise for footprint.)
> Except I never called it "a GC language" (whatever that means). I said, and I quote, "Rust does have a GC".
Ok, semantics aside, my point still stands. C also has a GC. See Boehm GC. And before you complain that RC is part of std, I will point out that std is optional and is on track to become a freestanding library.
> Can you accept that your prerequisites and compromises might not be universal
Not the way hardware is moving, which is to say more emphasis on more cores and with no more free lunch from hardware. Regardless of whether it is on-prem or in the cloud, mandatory GC is not a cost you can justify easily anymore.
> As a side note, I'll add that a tracing GC doesn't necessarily have a negative impact on speed, and may even have a positive one
Yeah, but it has a negative impact on memory. As witnessed in the latest RAM crisis, there is no guarantee you can just rely on more memory providing benefits.
> After all, I don't care if my user's credit-card data is stolen due to a UAF or due to SQL injection.
Sure, but those that see fewer UAF errors have more time to deal with logic errors. Of course there are confounding variables such as believing you are king of the world, or that Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.
> C also has a GC. See Boehm GC. And before you complain that RC is part of std, I will point out that std is optional and is on track to become a freestanding library.
Come on. The majority of Rust programs use the GC. I don't understand why it's important to you to debate this obvious point. Rust has a GC and most Rust programs use it (albeit to a much lesser extent than Java/Python/Go etc.). I don't understand why it's a big deal.
You want to add the caveat that some Rust programs don't use the GC and it's even possible to not use the standard library at all? Fine.
> Not the way hardware is moving, which is to say more emphasis on more cores and with no more free lunch from hardware. Regardless of whether it is on-prem or in the cloud, mandatory GC is not a cost you can justify easily anymore.
This is simply not true. There are and have always been types of software that, for whatever reason, need low-level control over memory usage, but the overall number of such cases has been steadily decreasing over the past decades and is continuing to do so.
> As witnessed in the latest RAM crisis, there is no guarantee you can just rely on more memory providing benefits.
What you say about RAM prices is true, but it still doesn't change the economics of RAM/CPU sufficiently. There is a direct correspondence between how much extra RAM a tracing collector needs and the amount of available CPU (through the allocation rate). Regardless of how memory management is done (even manually), reducing footprint requires using more CPU, so the question isn't "is RAM expensive?" but "what is the relative cost of RAM and CPU when I can exchange one for the other?" The RAM/CPU ratios available in virtually all on-prem or cloud offerings are favourable to tracing algorithms.
If you're interested in the subject, here's an interesting keynote from the last International Symposium on Memory Management (ISMM): https://youtu.be/mLNFVNXbw7I
> Sure, but those that see fewer UAF errors have more time to deal with logic errors.
I think that's a valid argument, but so is mine. If we knew the best path to software correctness, we'd all be doing it.
> Of course there are confounding variables such as believing you are king of the world, or that Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.
I understand that's something you believe, but it's not supported empirically, and as someone who's been deep in the software correctness and formal verification world for many, many years, I can tell you that it's clear we don't know what the "right" approach is (or even that there is one right approach) and that very little is obvious. Things that we thought were obvious turned out to be wrong.
It's certainly reasonable to believe that the Rust approach leads to more correctness than the Zig approach, and some believe that, and it's equally reasonable to believe that the Zig approach leads to more correctness than the Rust approach, and some people believe that. It's also reasonable to believe that different approaches are better for correctness in different circumstances. We just don't know, and there are reasonable justifications in both directions. So until we know, different people will make different choices, based on their own good reasons, and maybe at some point in the future we'll have some empirical data that gives us something more grounded in fact.
> Come on. The majority of Rust programs use the GC.
This part is false. You make a ridiculous statement and expect everyone to just nod along.
I could see this being true iff you say all Rust UI programs use "RC".
> This is simply not true. There are and have always been types of software that, for whatever reason, need low-level control over memory usage, but the overall number of such cases has been steadily decreasing over the past decades
Without ever increasing memory/CPU, you're going to have to squeeze more performance out of the stone (more or less unchanging memory/CPUs).
GC will be a mostly unacceptable overhead in numerous instances. I'm not saying it will be fully gone, but I don't think the current crop of C-likes is accidental either.
> I understand that's something you believe, but it's not supported empirically
It's supported by Google's usage of Rust.
https://security.googleblog.com/2025/11/rust-in-android-move...
> Stable and high-quality changes differentiate Rust. DORA uses rollback rate for evaluating change stability. Rust's rollback rate is very low and continues to decrease, even as its adoption in Android surpasses C++.
So for similar patches, you see fewer errors in new code. And the overall error rate still favors Rust.
> Without ever increasing memory/CPU, you're going to have to squeeze more performance out of the stone (more or less unchanging memory/CPUs).
The memory overhead of a moving collector is related only to the allocation rate. If the memory/CPU ratio is sufficient to cover that overhead, which in turn helps save more costly CPU, it doesn't matter if the relative cost is reduced (also, it's not even reduced; you're simply speculating that one day it could be).
> I'm not saying it will be fully gone
That's a strange expression given that the percentage of programs written in languages that rely primarily on a GC for memory management has been rising steadily for about 30 years with no reversal in trend. This is like saying that more people will find the cost of typing a text message unacceptable so we'll see a rise in voicemail messages, but of course text messaging will not be fully gone.
Even embedded software is increasingly written in languages that rely heavily on GC. Now, I don't know the future market forces, and maybe we won't be using any programming languages at all but LLMs will be outputting machine code directly, but I find it strange to predict with such certainty that the trend we've been seeing for so long will reverse in such full force. But ok, who knows. I can't prove that the future you're predicting is not possible.
> It's supported by Google's usage of Rust.
There's nothing related here. We were talking about how Zig's design could assist in code reviews and testing, and therefore in the total reduction of bugs, and you said that maybe a complex language like Rust, with lots of implicitness but also temporal memory safety could perhaps have a positive effect on other bugs, too, in comparison. What you linked to is something about Rust vs C and C++. Zig is at least as different from either one as it is from Rust.
> And the overall error rate still favors Rust.
Compared to C++. What does it have to do with anything we were talking about?
> That's a strange expression given that the percentage of programs written in languages that rely primarily on a GC for memory management has been rising steadily for about 30 years
I wish I knew what you mean by programs relying primarily on GC. Does that include Rust?
Regardless, extrapolating current PL trends that far is a fool's errand. I'm not looking at current social/market trends but limits of physics and hardware.
> There's nothing related here. We were talking about how Zig's design could assist in code reviews and testing
No, let me remind you:
> > [snip] Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.
> I understand that's something you believe, but it's not supported empirically

We were talking about how not having to worry about UB allows for easier defect catching.
> Compared to C++.
Overall, I think using C++ with all of its modern features should be in the same ballpark of safety/speed as Zig, with Zig having better compile times. Even if it isn't a 1-to-1 comparison with Zig, we have other examples like Bun vs Deno, where Bun incurs more segfaults (per issue).
Also, I don't see how much of Zig's design could really assist code reviews and testing.
> Does that include Rust?
No. Most memory management in Rust is not through its GC, even though most Rust programs do use the GC to some extent.
> I'm not looking at current social/market trends but limits of physics and hardware.
The laws of physics absolutely do not predict that the relative cost of CPU to RAM will decrease substantially. Unforeseen economic events may always happen, but they are unforeseen. It's always possible that current trends would reverse, but that's a different matter from assuming they are likely to reverse.
> Overall, I think using C++ with all of its modern features should be in the same ballpark of safety/speed as Zig, with Zig having better compile times.
I don't know how reasonable it is to think that. If Rust's value comes from eliminating spatial and temporal memory safety issues, surely there's value in eliminating the more dangerous of the two, which Zig does as well as Rust (but C++ doesn't).
But even if you think that's reasonable for some reason, I think it's at least as reasonable to think the opposite, given that in almost 30 years of programming in C++, by far my biggest issue with the language has been its complexity and implicitness, and Zig fixes both. Given how radically different Zig is from C++, my preference for Zig stems precisely from it solving what is, to me, the biggest issue with C++.
> Also, I don't see how much of Zig's design could really assist code reviews and testing.
Because it's both explicit and simple. There are no hidden operations performed by a routine that do not appear in that routine's code. In C++ (or Rust), to know whether there's some hidden call to a destructor/trait, you have to examine all the types involved (to make matters worse, some of them may be inferred).
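A toy example of what I mean, in Rust since that's the comparison here; nothing on the marked lines looks like a call, yet code runs there:

```rust
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("implicit call: drop({})", self.0);
    }
}

fn main() {
    let _a = Noisy("a");
    {
        let _b = Noisy("b");
        println!("end of inner scope");
    } // <- drop(_b) runs here, with no call visible in the source
    println!("end of main");
} // <- drop(_a) runs here
```

In Zig the equivalent cleanup has to be written out, typically as an explicit `defer`, so it's on the page for a reviewer to see.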
The fact that Zig doesn't have Rust's guarantees doesn't mean Zig does not have safety checks. The safety checks that Zig does have are different, and are different in a way that's uniquely useful for this particular project.
Zig's checks absolutely don't go to the extent that Rust's do, which is kind of the point here. If you do need to go beyond safe code in Rust, Zig is safer than unsafe code in Rust.
Saying Zig lacks safety checks is unfortunate, although I wouldn't presume you meant it literally and just wanted to highlight the difference.
Thing is, those safety checks are also available in C and C++, provided that one uses the right tools, like PVS and PurifyPlus (just to quote two examples), and now the ongoing AI-based tooling efforts for verification. Thus the question is why use a language like Zig in the 21st century, other than "I don't like either C++ or Rust".
I never said Zig has no safety features. What I said is true, though: if it had Rust's guarantees (as in: the same ones), it would be more complex.
I mean if we're going to nitpick:
>>> its lack of safety checks
>> Saying Zig lacks safety checks is unfortunate,
> I never said Zig has no safety features.
You did. Or, alternatively, if you don't equate "checks" with "features", then I never said you said that so what are you complaining about?
> If it had Rust's guarantees (as in: the same ones), it would be more complex.
Which is true (if tautological), and is basically what the GP said:
> Zig's manual memory management might actually be more ergonomic for a DOM implementation specifically because you can model the graph relationships more directly without fighting the compiler, provided you have a robust strategy for the arena allocation
Both you and the GP agree that Rust is more complex.
You objected to this with:
> It's unfortunate that "writing safe code" is constantly being phrased in this way.
Upon which I commented that Zig does have safety features, even if they're not covering you as well as Rust's ones. Which is, again, inline with "provided you have a robust strategy for the arena allocation."
Now, if you think I'm going overboard with this, I agree with you -- and this is the exact feeling I have when I look at Rust :)
And use-after-free, when that arena's memory goes away.
But arenas have substantial benefits. They may be one of the few remaining reasons to use a low-level (or "systems programming") language in the first place. Most things are tradeoffs, and the question isn't what you're giving up, but whether you're getting the most for what you're paying.
Arenas are also available in languages with automatic memory management, e.g. D, C# and Swift, to use only modern languages as examples.
Thus I don't consider that a reason good enough for using Zig, while throwing away the safety from modern languages.
First, Zig is more modern than any of the languages you mention. Second, I'm not aware that any of those languages offers arenas similar in power and utility to Zig's while offering UAF-freedom at the same time. Note that "type-safe" arenas are neither as powerful as general-purpose arenas, nor do they fully offer UAF-freedom. I could be wrong (and if I am, I'd really love to see an arena that's both general and safe), but I believe that in all these languages you must compromise on either safety or the power of the arena (or both).
> First, Zig is more modern than any of the languages you mention
How so? This feels like an empty statement at best.
"modern: relating to the present or recent times as opposed to the remote past". I agree it's not a useful concept here but I didn't bring it up. Specifically, I don't think there's any consideration that had gone into the design of D, C#, or Rust that escaped Zig's designer. He just consciously made different choices based on the data available and his own judgment.
Not really modern, it is Object Pascal/Modula-2 repackaged in C like syntax.
The only thing relatively modern would be compile-time execution, if we forget about how long some languages have had reader macros, or similar capabilities like D's compile-time metaprogramming.
Also, it is the wrong direction when the whole industry is moving toward integrity by default, pushed by cybersecurity legislation.
There are several examples around of doing arenas in said languages.
https://dlang.org/phobos/std_experimental_allocator.html
You can write your own approach with the low level primitives from Swift, or ping back into the trusty NSAutoreleasePool.
One example for C#, https://github.com/Enichan/Arenas
> Not really modern, it is Object Pascal/Modula-2 repackaged in C like syntax.
That's your opinion, but I couldn't disagree more. It places partial evaluation as its biggest focus more so than any other language in history, and is also extraordinarily focused on tooling. There isn't any piece of information nor any technique that was known to the designers of those older languages and wasn't known to Zig's designer. In some situations, he intentionally chose different tradeoffs on which there is no consensus. It's strange to insist that there is some consensus when many disagree.
I have been doing low-level programming (in C, C++, and Ada in the 90s) for almost 30 years, and over that time I have not seen a low-level language that's as revolutionary in its approach to low-level programming as Zig. I don't know if it's good, but I find its design revolutionary. You certainly don't have to agree with my assessment, but you do need to acknowledge that some people very much see it that way, and don't think it's merely a "repackaged" Pascal-family language in any way.
I guess you could say that you personally don't care about Zig's primary design points and when you ignore them you're left with something that you find similar to other languages, but that's like saying that if you don't care about Rust's borrow- and lifetime checking, it's basically just a mix of C++ and ML. It's perfectly valid to not care about what matters most to some language's designer, and it's perfectly valid to claim that what matters to them most is misguided, but it is not valid to ignore a language's design core when describing it just because you don't care about it.
> Also, it is the wrong direction when the whole industry is moving toward integrity by default, pushed by cybersecurity legislation.
Again, that is an opinion, but not one I agree with. For one, Rust isn't as safe as other safe languages given its relatively common reliance on unsafe. If spatial and temporal memory safety were the dominating concerns, there wouldn't be a need for Rust, either (and it wouldn't have exposed unsafe). Clearly, everyone recognises that there are other concerns that sometimes dominate, and it's pretty clear that some people, who are no less knowledgeable about the software industry and its direction, prefer Zig. There is no consensus here either way, and I'm not sure there can be one. They are different languages that suit different people/projects' preferences.
Now, I agree that there's definitely more motion toward more correctness - which is great! - and I probably wouldn't write a banking or healthcare system in Zig, but I wouldn't write it in Rust, either. People reach for low level languages precisely when there may be a need to compromise on safety in some way, and Rust and Zig make different compromises, both of which - as far as I can tell - can be reasonable.
> There are several examples around of doing arenas in said languages.
From what I can tell, all of them either don't provide freedom from UAF, or they're not nearly as general as a proper arena.
I know of one safe and general arena design, in the RTSJ, which immediately prevents a reference to a non-enclosing arena from being written into an object, but it comes with a runtime cost (which makes sense for hard realtime, where you want to sacrifice performance for worst-case predictability).
> You certainly don't have to agree with my assessment, but you do need to acknowledge that some people very much see it that way, and don't think it's merely a "repackaged" Pascal-family language in any way.
My opinion is that 99% of those people never knew anything beyond C and C++ for systems programming, and even believe the urban myth that before C there were no systems programming languages.
Similar to those that only discover compiled languages and type systems exist, after spending several years with Python and JavaScript, and then even Go seems out of this world.
I don't know about the numbers. Some of Zig's famous proponents are Rust experts. I don't know the specific percentages, but you could level a similar accusation at Rust's proponents, too, i.e. that they have insufficient exposure to alternative techniques. And BTW, Zig's approach is completely different from that of C, C++, Rust, or the Pascal family languages. So if we were to go by percentages, we could dismiss all criticisms against Zig on the same basis (i.e. most people may think it's like C++, or C, or Modula, but since it isn't, then their criticisms are irrelevant). In fact, because Rust is a fairly old language and Zig isn't, it's more likely that more Zig developers are familiar with Rust than vice-versa.
But also, I don't see why that even matters. If even some people with a lot of experience in other approaches to systems programming, and even with experience in deeper aspects of software correctness, accept this assessment, then you can't wave it away. It's okay to think we're wrong - after all, no one has sufficient empirical evidence to support their claim either way - but you cannot ignore the fact that some of those with extensive experience disagree with you, just as I'm happy to accept that some of them disagree with me.
Wouldn't C# and Swift make it tough to integrate with other languages? Whereas something written in Zig (or Rust) can integrate with anything that can use the C ABI?
Both C# and Swift have first-party C ABI integration.
Yeah that's certainly possible but leaking a pointer like this seems like it would be really easy to spot?
It's harder than you'd expect. Depending on what kind of bucketing an arena does (by size or by type), a stale reference may end up pointing to another piece of memory of the correct type, which is still wrong, but more subtly than a crash.
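To make the failure mode concrete, here's a minimal sketch using std.heap.ArenaAllocator (hypothetical, not Lightpanda's actual code): after a reset, the stale pointer silently aliases whatever lands in the recycled slot.

```zig
const std = @import("std");

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();

    const old = try arena.allocator().create(u32);
    old.* = 42;

    // Recycle the arena's memory; `old` is now dangling.
    _ = arena.reset(.retain_capacity);

    const new = try arena.allocator().create(u32);
    new.* = 7;

    // `old` likely points at the slot now occupied by `new`: the read
    // "works" and even has the right type, but the value is wrong.
    // No crash, just silent corruption.
    std.debug.print("old.* = {}\n", .{old.*}); // probably prints 7, not 42
}
```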
Look at the architecture of lightpanda and come back with a response.
I'm not familiar enough with Zig to want to dive into architecture, the point I wanted to make is general to arenas in any language that can have a stale reference.
I once had a stale stack reference bug in C that lived for a year, because the exact same object was created at the exact same offset every time it was used, which is a similar situation.
Too late now, but is the requirement for shared mutable state inherent in the problem space? Or is it just because we still thought OOP was cool when we started on the DOM design?
Yes. It is required for W3C's DOM APIs, which give access to parent nodes and allow all kinds of mutations whenever you want.
Event handlers + closures also create potentially complex situations you can't control, and you'll need a cycle-breaking GC to avoid leaking like IE6 did.
You can make a more restricted tree if you design your own APIs with immutability/ownership/locking, but that won't work for existing JS codebases.
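For a sense of the shape involved, here's a toy sketch of a DOM-ish node (hypothetical, not any real engine's layout): every node points up at its parent and sideways at siblings, so mutation from any handle can reach the whole tree.

```zig
const std = @import("std");

// Toy DOM-ish node: parent/child/sibling pointers freely alias each
// other, which is exactly the graph shape single-ownership models resist.
const Node = struct {
    parent: ?*Node = null,
    first_child: ?*Node = null,
    next_sibling: ?*Node = null,

    // Inserts at the head of the child list, for brevity.
    fn insertChild(self: *Node, child: *Node) void {
        child.parent = self;
        child.next_sibling = self.first_child;
        self.first_child = child;
    }
};

pub fn main() void {
    var root = Node{};
    var child = Node{};
    root.insertChild(&child);
    // Any handle can now reach and mutate the rest of the tree.
    std.debug.print("child has parent: {}\n", .{child.parent != null});
}
```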
I don't think it's really that bad in Rust. If you're happy with an arena in Zig you can do exactly the same thing in Rust. There are a ton of options listed here: https://donsz.nl/blog/arenas/
Some of them even prevent use after free (the "ABA mitigation" column).
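The "ABA mitigation" there boils down to generational indices. A toy sketch of the idea (in Zig, since that's the language at hand; hypothetical, not any particular crate's API):

```zig
const std = @import("std");

// Toy generational-handle pool: each slot carries a generation counter
// that bumps on reuse, so a handle that outlives its slot is detected
// instead of silently aliasing the slot's new occupant.
fn Pool(comptime T: type, comptime capacity: usize) type {
    return struct {
        const Self = @This();
        pub const Handle = struct { index: usize, generation: u32 };

        values: [capacity]T = undefined,
        generations: [capacity]u32 = [_]u32{0} ** capacity,
        live: [capacity]bool = [_]bool{false} ** capacity,

        pub fn insert(self: *Self, value: T) ?Handle {
            for (&self.live, 0..) |*is_live, i| {
                if (!is_live.*) {
                    is_live.* = true;
                    self.values[i] = value;
                    return .{ .index = i, .generation = self.generations[i] };
                }
            }
            return null; // pool is full
        }

        pub fn remove(self: *Self, h: Handle) void {
            if (self.get(h) != null) {
                self.live[h.index] = false;
                self.generations[h.index] += 1; // invalidate old handles
            }
        }

        pub fn get(self: *Self, h: Handle) ?*T {
            if (!self.live[h.index]) return null;
            if (self.generations[h.index] != h.generation) return null; // stale
            return &self.values[h.index];
        }
    };
}

pub fn main() void {
    var pool = Pool(u32, 8){};
    const h = pool.insert(42).?;
    pool.remove(h);
    _ = pool.insert(7); // reuses slot 0 under a new generation
    std.debug.print("stale get: {any}\n", .{pool.get(h)}); // null, not a pointer to 7
}
```

The trade-off versus raw pointers is an extra comparison on every access, but a stale handle becomes an observable null rather than silent aliasing.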
I'm not super experienced with Zig, but I always think that in the same way that Rust forces you to think about ownership (via the borrow checker - note: I think of this as a good thing personally), Zig makes you think upfront about your allocations (by making everything that can allocate take an allocator argument).
It makes everything very explicit, and you can always _see_ where your allocations are happening in a way that you can't (as easily, or as obviously - imo) in Rust.
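A sketch of the convention (joinPath is a made-up helper; the explicit allocator parameter is the whole point):

```zig
const std = @import("std");

// Anything that allocates takes the allocator explicitly, so the call
// site shows both that an allocation happens and which strategy backs it.
fn joinPath(allocator: std.mem.Allocator, dir: []const u8, name: []const u8) ![]u8 {
    return std.fmt.allocPrint(allocator, "{s}/{s}", .{ dir, name });
}

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();

    // Swap in an arena, a general-purpose allocator, or a testing
    // allocator without touching joinPath itself.
    const path = try joinPath(arena.allocator(), "/tmp", "out.txt");
    std.debug.print("{s}\n", .{path});
}
```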
It seems like something I quite like. I'm looking forward to Rust getting an effect system/allocator API to help a little more with that side of things.
The problem is deallocation... unless you tie the allocated object to an arena allocator with a lifetime somehow (Rust can model that).
Yep, Rust forces you to think about lifetimes. Zig only suggests it (because you're forced to think about allocation, which usually makes you think about the lifetime naturally) but does not help you with it or ensure correctness.
It's still nice to be made to think about allocation everywhere, and to be able to change the allocation strategy to something that works for your use case (hence why I'm looking forward to the allocator API in Rust, to get the best of both worlds).
That's true, and I liked the idea of it until I started writing some Zig where I needed to work with strings. Very painful. I'm sure you typically get somewhat faster string-manipulation code than you would with Rust, but I don't think it's worth the cost (Rust is pretty fast already).
Can't agree more. I hope someone puts some work into a less painful way to manage strings in std. I would, but I don't manipulate strings enough to speak for use cases beyond basic concatenation...
As of 0.15.X, you can build strings using a std.Io.Writer. You can either:
- use std.Io.Writer.fixed to use a slice for the memory, and use .buffered() when you're done to get the subslice of the buffer that contains your string
or
- Create an instance of std.Io.Writer.Allocating with an allocator, and use .toOwnedSlice() when you're done to get your allocated string.
In both cases you just use regular print functions to build your string (see the sketch below).
Depending on your needs, it may also be good to use a fixed writer with a dynamically allocated slice, where the size of the allocation is computed using std.fmt.count(). This can be better than using std.Io.Writer.Allocating because you can avoid doing multiple allocations.
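A minimal sketch of the two variants (assuming the 0.15 std.Io.Writer API as described above; details may still shift before 1.0):

```zig
const std = @import("std");

pub fn main() !void {
    // Variant 1: fixed buffer, no allocation.
    var buf: [64]u8 = undefined;
    var fixed = std.Io.Writer.fixed(&buf);
    try fixed.print("{s}, {s}!", .{ "Hello", "world" });
    const s1 = fixed.buffered(); // subslice of buf holding the string
    std.debug.print("{s}\n", .{s1});

    // Variant 2: growable, allocator-backed; the Writer interface
    // lives on the .writer field.
    var allocating = std.Io.Writer.Allocating.init(std.heap.page_allocator);
    defer allocating.deinit();
    try allocating.writer.print("{d} + {d} = {d}", .{ 2, 2, 4 });
    const s2 = try allocating.toOwnedSlice(); // caller owns the slice
    defer std.heap.page_allocator.free(s2);
    std.debug.print("{s}\n", .{s2});
}
```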
No, you can't do the same thing in Rust, because Rust crates and the standard library generally use the global allocator and not any arena you want to use in your code.
I mean you can store the nodes in an arena so you don't have to deal with the borrow checker getting upset with your non-tree ownership structure. That's the context. We weren't talking about arena use for speed/efficiency purposes. In that case you are right; it's much more awkward to use custom allocators in Rust.
Which is hardly any different from me using PurifyPlus back in 2000.
It's very different.
I've been using it for months now, ever since I saw their presentation at GitHub.
This is a common flow for me
I even have it as a shell alias, wv(). It's way better than the crusty old lynx and links on sites that need JS. It's solid. Definitely worth a check.
Oh, huh, being able to convert arbitrary websites that may use JS for rendering to Markdown could be very handy indeed. Thanks for the tip!
Thanks for the tip, that's very cool. I did not know about `markitdown` and `streamdown`.
A language that is not yet 1.0, and that has repeatedly changed its IO implementation in a non-backwards-compatible way, is certainly a courageous choice for production code.
So, I'm noodling around with writing a borrow checker for Zig, and while you don't get to appreciate this working with Zig on a day-to-day level, the internals of how the Zig compiler works are AMAZING. Also, the IO refactor will (I think) let me implement aliasing checking (alias xor mutable).
In my experience, migrating small-scale projects takes from minutes to single digit hours.
The standard library is changing. The core language semantics - not so much. You can update from std.ArrayListUnmanaged to std.array_list.Aligned with two greps.
Right? People must really like the design choices in Zig to do that instead of choosing another language. It's very interesting just because of that.
It's certainly not a choice I would have made, but there's sufficient precedent for it now (TigerBeetle, Ghostty, etc) that I can understand it.
also Bun
also Roc
This one is far from prod-ready, however.
the upside is absolutely worth it
This table is informative as to exactly what lightpanda is: https://lightpanda.io/blog/posts/what-is-a-true-headless-bro...
TL;DR: It does the following:
- Fetch HTML over the network
- Parse HTML into a DOM tree
- Fetch and execute JavaScript that manipulates the DOM
But not the following:
- Fetch and parse CSS to apply styling rules
- Calculate layout
- Fetch images and fonts for display
- Paint pixels to render the visual result
- Composite layers for smooth scrolling and animations
So it's effectively a net+DOM+script-only browser with no style/layout/paint.
---
Definitely fun for me to watch as someone who is making a lightweight browser engine with a different set of trade-offs (net+DOM+style/layout/paint-only with no script)
When I previously worked on something that used headless browser agents, the ability to take a screenshot (or even a recording) was really great for debugging... so I am not sure about the "no paint". But hey, everything in life is a trade-off.
Really depends on what you want to do with the agents. Just yesterday I was looking for something like this for our web access MCP server[0]. The only thing that it needs to do is visit a website and get the content (with JS support, as it's expected that most pages today use JS), and then convert that to e.g. Markdown.
I'm not too happy with the fact that Chrome is one of the most memory-hungry parts of all the MCP servers we have in use. The only thing that exceeds it in our whole stack is the ClickHouse shard that comes with Langfuse. Especially if you are looking to build a "deep research" feature that may access a few hundred webpages in a short timeframe, having a lightweight alternative like Lightpanda can make quite the difference.
[0]: https://github.com/EratoLab/web-access-mcp
Well, these were "normal" crawlers that needed to work as perfectly and deterministically as possible, not probabilistically (AI); speed was no issue. And I wanted to be able to debug when something went wrong. So yeah, for me it was crucial to be able to record/screenshot.
So yeah, everything is a trade-off, and we needed a different trade-off. We actually decided not to use headless Chromium, because there are slight differences, so we ended up using full Chrome (not even Chromium, again - slight differences) with xvfb. It was very, very memory hungry, but again, that was not an issue.
(I used "agent" as in "browser agent", not "AI agent", I should be more precise I guess.)
Yeah, I feel the same. I think even a screenshot of part of a rendered page, or the full page, can be useful even for machines, considering how heavy that HTML can be to parse and how expensive it is for LLM context. Sometimes a (sub)screenshot is just a better kind of compression.
Yes, HTML is too heavy and too expensive for LLMs. We are working on a text-based format more suitable for AI.
What do you think of the DeepSeek OCR approach where they say that vision tokens might better compress a document than its pure text representation?
https://news.ycombinator.com/item?id=45640594
I've spent some time feeding LLMs scraped web pages, and I've found that retaining some style information (text size, visibility, decorative image content) is non-trivial.
Keeping some kind of style information is definitely important to understand the semantics of the webpage.
> So it's effectively a net+DOM+script-only browser with no style/layout/paint.
> ---
> Definitely fun for me to watch as someone who is making a lightweight browser engine with a different set of trade-offs (net+DOM+style/layout/paint-only with no script)
Both projects (Lightpanda, DioxusLabs/blitz) sound very interesting to me. What do you think about rendering patterns that require both script+layout for rendering, e.g. virtual scrolling of large tables?
What would be a good pattern to make virtual scrolling work with Lightpanda or Blitz?
So Blitz does technically have scripting, it's just Rust scripting rather than JavaScript scripting. So the plan for virtual scrolling would likely be to implement it in Rust.
If your aim is to render a UI (à la Electron/Flutter), then we have a React-style framework (Dioxus) that runs on top of Blitz and gives you access to the low-level Rust API of the DOM for advanced use cases (although it's still a WIP and this API is a bit rough atm). I'm also hoping to eventually have a built-in `RecyclerView`-like widget for this (which can bypass the style/layout systems for much more efficient virtual scrolling).
Thanks! But I meant JS based virtual scrolling in web pages. E.g. dynamic data tables that only render the part of the table that fits in the viewport.
For scrolling: when using IntersectionObserver, we currently assume all elements are visible. So if you register an observer, we will dispatch an entry indicating an intersection ratio of 1.0.
It's so tiring that every time there's a post about something being implemented in Zig or C or C++, the Rust brigade shows up trying to pick a fight.
It’s a site where programming nerds congregate to waste time arguing with each other. Where do you think you are?
This same pattern used to play out with Ruby, Lisp, and other languages in different eras of this site. It will probably never stop and calling it out seems to just fan the flames more than anything else.
Maybe just a reflex by people that had to hear a decade of "why not C++" whenever it was mentioned that Rust is being used?
I don't know, man. At this point I'm liable to ask "Why are you using C++?" if you start a new project. Let them defend their language!
Speaking as part of the "all software should be liable" brigade: it's a matter of misplaced goals, now that the cybersecurity agencies have started looking into the matter.
Wow. Lightpanda is absolutely bonkers of a project. I'd pay dearly for such an option a few years back.
Because We're Not Smart Enough for C++ or Rust
Very refreshing. Most engineers would rather saw their leg off.
This looks incredible, congratulations!
Thanks Steeve!
Love to see Zig winning!
zigdom all-diy
Finally, the rewrite-in-Zig movement is coming.
Are there any older projects similar to this? A headless browser with JS support, I mean; I want to check out various implementations of this idea.
I hate to say it, but time is quickly running out for Zig(( AI might never pick it up properly, and without that it will never break out of its niche.
Claude Opus 4.5 is completely fluent in Zig.
I use it constantly, and it never occurred to me that someone might think there was a problem to be solved there.
Are you implying that programming languages are now going to be “frozen” because of AI?
I can understand the source of concern but I wouldn’t expect innovation to stop. The world isn’t going to pause because of a knowledge cutoff date.
Innovation doesn't happen for the sake of innovation itself. Innovation should serve a purpose. And the purpose of programming languages is to overcome the limitations of the human mind: of our attention span, of our ability to manipulate concepts expressed in abstractions and syntax. We don't know how long we'll still need that.
I really like Zig; I wish it had appeared several years earlier. But rewriting everything in Zig might soon just not make practical sense.
I agree that programming languages will no longer need to be as accessible to humans.
However there is still a strong argument to be made for protections/safety that languages can provide.
e.g. would you expect a model (assuming it had the same expertise in each language) to make more mistakes in ASM, C, Zig, or Rust?
I imagine most would agree that ASM/C would be likely to have the most mistakes simply because fewer constraints are enforced as you go closer to the metal.
So, while we might not care about how easy it is for a human to read/write, there will still be a purpose for innovation in programming languages. But those innovations, IMO, will be more focused on how to make languages easier for AI.
> would you expect a model (assuming it had the same expertise in each language) to make more mistakes in ASM, C, Zig, or Rust?
"assuming it had the same expertise in each language" is the most important part here, because the expertise of AI with these languages is very different. And, honestly, I bet on C here because its code base is the largest, the language itself is the easiest to reason about and we have a lot of excellent tooling that helps mitigate where it falls short.
> I imagine most would agree that ASM/C would be likely to have the most mistakes simply because fewer constraints are enforced as you go closer to the metal.
We need these constraints because we can't reliably track all the necessary details. But AI might be much more capable (read: scalable) at that, so all the complexity that we would otherwise need to accumulate in a programming language, it might just handle by virtue of how it's built.
I’m going to assume you’re open to an honest discussion here.
> "assuming it had the same expertise in each language" is the most important part here, because the expertise of AI with these languages is very different.
You are correct, but I am trying to illustrate that assuming some ideal system with equal expertise, the languages with more safety would win out in productivity/bugs over those with less safety.
Which is to say that it could be worth investing further in safer programming languages, because AI would benefit.
> We need these constraints because we can't reliably track all the necessary details.
AI cannot reliably track the details either (yet, though I am sure it can be done). Even if it could, it would be a complete waste of resources (tokens).
Why have an AI determine the type of a variable when it could be done in a deterministic manner with a compiler or linter?
To me these arguments closely mirror/follow arguments of static/dynamically typed languages for human programmers. Static type systems eliminate certain kinds of errors and can produce higher quality programs. AI systems will benefit in the same way if not more by getting instant feedback on the validity of their program.
Yes, I get your point and I think your arguments are valid, it's just not the whole story.
The thing about programming languages is that, both for their creators and their advocates, a significant part of the driving motivation is emotional, not rational necessity alone. Learning a new programming language along with its ecosystem is an investment of time and effort; it is something that our brains mark as important and therefore protect (I'm looking at Rust). Now that AI is going to write all the code, that emotional part might eventually dissolve and move to something else, leaving the choice of programming language much less relevant. Like the list of choices Claude Code shows you in planning mode: "do you wish to use SQLite, PostgreSQL or MySQL as a database for your project?" (*picks the "Recommended" option)
That said, I hope that Zig will make it to version 1.0 before AI turns the tables and sweeps many things away. It might be my bias; if I'm wrong and overestimating the irrational part, I'll be glad to admit my mistake.
In my experience LLMs are really good at following code examples and constraints (tests).
So even if they don't get to train much on some technology, all you need is some guidance docs in AGENTS.md
There's a plus to being fresh, too: LLMs aren't going to be heavily trained on outdated tutorials and docs, like they are for React, for example.