The benefit of Zig seems to be that it allows you to keep thinking like a C programmer. That may be great, but to a certain extent it’s also just a question of habit.
Seasoned Rust coders don’t spend time fighting the borrow checker - their code is already written in a way that just works. Once you’ve been using Rust for a while, you don’t have to “restructure” your code to please the borrow checker, because you’ve already thought about “oh, these two variables need to be mutated concurrently, so I’ll store them separately”.
The “object soup” is a particular approach that won’t work well in Rust, but it’s not a fundamentally easier approach than the alternatives, outside of familiarity.
As long as the audience accepts the framing that ergonomics doesn't matter because it can't be quantified, the hand-waving exemplified above will confound.
"This chair is guaranteed not to collapse out from under you. It might be a little less comfortable and a little heavier, but most athletic people get used to that and don't even notice!"
Let's quote the article:
> I’d say as it currently stands Rust has poor developer ergonomics but produces memory safe software, whereas Zig has good developer ergonomics and allows me to produce memory safe software with a bit of discipline.
The Rust community should be upfront about this tradeoff - it's a universal tradeoff, that is: Safety is less ergonomic. It's true when you ride a skateboard with a helmet on, it's true when you program, it's true for sex.
Instead you see a lot of arguments with anecdotal or indeterminate language. "Most people [that I talk to] don't seem to have much trouble unless they're less experienced."
It's an amazing piece of rhetoric. In one sentence the ergonomic argument has been dismissed by denying subjectivity exists or matters and then implying that those who disagree are stupid.
> produce memory safe software with a bit of discipline
"a bit of discipline" is doing a lot of work here.
"Just don't write (memory) bugs!" hasn't produced (memory) safe C, and they've been trying for 50yrs. The best practices have been to bolt on analyzers and strict "best practice" standards to enforce what should be part of the language.
You're either writing in Rust, or you're writing in something else + using extra tools to try and achieve the same result as Rust.
Like Rust, Zig has type-safe enums/sum types. That alone eliminates a lot of problems with C. Plus, sane error handling with good defaults (arguably better than Rust's) also contributes to code with fewer bugs.
Sure, there is no borrow checker, but a lot of memory-safety issues in C and C++ come from the lack of good containers with sane interfaces (std::* in C++ is just bad from a memory-safety point of view).
If C++ had gained proper sum types, error handling, and Zig-style templates 15 years ago instead of the insanity that is modern C++, Rust might not exist or might be much more niche at this point.
I actively dislike Zig's memory safety story, but this isn't a real argument until you can start showing real vulnerabilities --- not models --- that exploit the gap in rigor between the two languages. Both Zig and Rust are a step function in safety past C; it is not a given that Rust is that from Zig, or that that next step matters in practice the way the one from C does.
> The Rust community should be upfront about this tradeoff - it's a universal tradeoff, that is: Safety is less ergonomic.
I'd agree with that if the comparison is JavaScript or Python. If the comparison is Zig (or C or C++) then I don't agree that it's universal. I personally find Rust more ergonomic than those languages (in addition to be being safer).
I would argue that good C or C++ code is actually just Rust code with extra steps. So in this sense, Rust gets you to the "desired result" much easier compared to using C or C++ because no one is there to enforce anything and make you do it.
You can argue that using C or C++ can get you 80% of the way, but most people don't actively think "okay, how do I REALLY mess up this program?" and fix all the various invariants that they forgot to handle. Even worse, this issue is endemic in higher-level dynamic languages like Python too. Most people, most of the time, only think about the happy path.
"It's true when you ride a skateboard with a helmet on."
Rust is not the helmet. It is not a safety net that only gives you a benefit in rare catastrophic events.
Rust is your lane assist. It relieves you from the burden of constant vigilance.
A C or C++ programmer that doesn't feel relief when writing Rust has never acquired the mindset that is required to produce safe, secure and reliable code.
> Rust is your lane assist. It relieves you from the burden of constant vigilance.
Interesting analogy. I love lane assist. When I love it. And hate it when it gets in the way. It can actively jerk the car in weird and surprising ways when presented with things it doesn't cope well with. So I manage when it's active very proactively. Rust of course has unsafe... but, to keep the analogy, that would be like driving in a peer group where everyone was always asking me if I had my lane assist on; where, when I arrived at a destination, I was badgered with "did you do the whole drive with lane assist?"; and, if I didn't, I'd have explained to me the routes and techniques I could have used to arrive at my destination using lane assist the whole way.
Disclaimer, I have only dabbled a little with rust. It is the religion behind and around it that I struggle with, not the borrow checker.
I have also mostly only dabbled with Rust, and I've come to the conclusion that it is a fantastic language for a lot of things but it is very unforgiving.
The optimal way to write Python is to have your code properly structured, but you can just puke a bunch of syntax into a .py file and it'll still run. You can experiment with a file that consists entirely of "print('Hello World')" and go from there. Import a json file with `json.load(open(filename))` and boom.
Rust, meanwhile, will not let you do this. It requires you to write a lot of best-practice stuff from the start. Loading a JSON file in a function? That function owns that new data structure, you can't just keep it around. You want to keep it around? Okay, you need to do all this work. What's that? Now you need to specify a lifetime for the variable? What does that mean? How do I do that? What do I decide?
This makes Rust feel much less approachable, and I think it gives people a worse impression of it at the start, when they begin being told that they're doing it wrong. Even though, from an objective memory-safety perspective, they are, it's still frustrating when you feel as though you have to learn everything to do anything - especially in the context of the small programs you write when you're learning a language. I don't care about the 'lifetime' of this data structure if the program I'm writing is only going to run for 350ms.
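For what it's worth, the pattern you eventually learn is to return owned data, which sidesteps the lifetime questions entirely. A minimal sketch, assuming the serde and serde_json crates and a made-up Config shape:

    use serde::Deserialize; // assumes serde with the "derive" feature, plus serde_json
    use std::fs;

    #[derive(Deserialize)]
    struct Config {
        name: String, // hypothetical field; the shape of the JSON is up to you
    }

    // Returning an owned value means the caller owns the data outright:
    // no lifetimes, keep it around as long as you like.
    fn load_config(path: &str) -> Result<Config, Box<dyn std::error::Error>> {
        let text = fs::read_to_string(path)?;
        Ok(serde_json::from_str(&text)?)
    }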
As I've toiled a bit more with Rust on small projects (mine or others') I feel the negative impacts of the language's restrictions far more than I feel the positive impacts, but it is nice to know that my small "download a URL from the internet" tool isn't going to suffer from a memory safety bug and rootkit my laptop because of a maliciously crafted URL. I'm sure it has lots of other bugs waiting to be found, but at least it's not those ones.
Rust is very forgiving if the goal is not the absolute best performance. One can rewrite Python code into Rust mostly automatically and the end result is not bad. Recent LLMs can do it without complex prompting.
The only problem is that the code will be littered with Rc<RefCell<Foo>>. If Rust had a compact notation for that, a lot of the pain of fighting the borrow checker just to avoid the above would be eliminated.
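For anyone who hasn't seen it, a minimal sketch of the pattern (with a made-up Foo):

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Foo {
        count: u32,
    }

    fn main() {
        // Shared, mutable, single-threaded: every handle is an Rc clone and
        // every access goes through borrow()/borrow_mut(). This is the
        // syntactic noise a compact notation would hide.
        let foo = Rc::new(RefCell::new(Foo { count: 0 }));
        let alias = Rc::clone(&foo);
        alias.borrow_mut().count += 1;
        assert_eq!(foo.borrow().count, 1);
    }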
It is a helmet. But at least it's a helmet in situations where you get into brain cracking accidents multiple times a day. In the end the helmet allows you to get back up and continue your journey compared to when you had no helmet.
Maybe yours is a more apt analogy, but as a very competent driver I can't tell you how often lane assist has driven me crazy.
If I could simply rely on it in all situations, then it would be fine. It's the death of a thousand cuts each and every time it behaves less than ideally that gets to me, and I've had to turn it off in every single car I've driven that has it.
Arguing that one language is more ergonomic but can produce the same safety if you use it unergonomically is... not very useful in a context where safety is highly valued.
> it's a universal tradeoff, that is: Safety is less ergonomic.
I'm not sure that that tradeoff is quite so universal. GC'd languages (or even GC'd implementations like Fil-C) are equally or even more memory-safe than Rust but aren't necessarily any less ergonomic. If anything, it's not an uncommon position that GC'd languages are more ergonomic since they don't forbid some useful patterns that are difficult or impossible to express in safe Rust.
> The tradeoff is between performance, safety and ergonomics. With GC languages you lose the first one.
That's a myth that just won't die. How is it that people simultaneously believe
1) GC makes a language slow, and
2) Go is fast?
Go also isn't the only safe GC'd language. There are plenty of good options out there. You are unlikely to encounter a performance issue using one of these languages that you could resolve only with manual memory management.
It also drives me insane when I dump the problems I have with Rust about this exact issue - that I usually have to restructure my code to satisfy the compiler's needs - and they come at me with the "Skill Issue" club...
I honestly don't even know how to respond to that, but it's kind of weird to me to honestly think that you'd need essentially a "PhD" in order to use a tool...
It's an amazing piece of marketing to corner anyone who dislikes a certain hassle as being mentally deficient - that's what "skill issue" means in this context.
> As long as the audience accepts the framing that ergonomics doesn't matter because it can't be quantified, the hand-waving exemplified above will confound.
I interpreted the parent to be saying that ergonomics IS (at least partly) subjective. The subjective aspect is "what you are used to". And once you get used to Rust its ergonomics are fine, something I agree with having used Rust for a few years now.
> The Rust community should be upfront about this tradeoff
I think they are. But more to the point, I think that safety is not really something you can reasonably "trade-off", at least not for non-toy software. And I think that because I don't really see C/C++/Zig people saying "we're trading off safety for developer productivity/performance/etc". I see them saying "we can write safe code in an unsafe language by being really careful and having a good process". Maybe they're right, but I'm skeptical based on the never-ending proliferation of memory safety issues in C/C++ code.
> Seasoned Rust coders don’t spend time fighting the borrow checker
I like the fact that "fighting the borrow checker" is an idea from the period when the borrow checker only understood purely lexical lifetimes. So you had to fight to explain why the thing you wrote, which was obviously correct, was in fact correct.
That was already ancient history by the time I learned Rust in 2021. But this idea that Rust means "fighting the borrow checker" took off anyway, even though the actual thing it was about had been solved.
Now, for many people it really is a significant adjustment to learn Rust if your background is exclusively, say, Python, or C, or Javascript. For me it came very naturally, and most people will not have that experience. But even if you're a C programmer who has never had most of this [gestures expansively] before, you likely are not often "fighting the borrow checker". That diagnostic saying you can't make a pointer via a spurious mutable reference? Not the borrow checker. The warning about failing to use the result of a function? Not the borrow checker.
Now, "In Rust I had to read all the diagnostics to make my software compile" does sound less heroic than "battling with the borrow checker" but if that's really the situation maybe we need to come up with a braver way to express this.
I think the phrase _emotionally_ resonates with people who write code that would work in other languages, but the compiler rejects.
When I was learning rust (coming from python/java) it certainly felt like a battle because I "knew" the code was logically sound (at least in other languages) but it felt like I had to do all sorts of magic tricks to get it to compile. Since then I've adapted and understand better _why_ the compiler has those rules, but in the beginning it definitely felt like a fight and that the code _should_ work.
> Seasoned Rust coders don’t spend time fighting the borrow checker
My experience is that what makes your statement true is that _seasoned_ Rust developers just sprinkle `Arc` all over the place, thus effectively switching to automatic garbage collection. Because 1) statically checked memory management is too restrictive for most kinds of non-trivial data structures, and 2) the hoops you have to jump through with lifetimes to please the static checker whenever you start doing anything non-trivial are just above human comprehension level.
`Arc`s show up all over the place specifically in async code that targets Tokio runtime running in multithreaded mode. Mostly this is because `tokio::spawn` requires `Future`s to be `Send + 'static`, and this function is a building block of most libraries and frameworks built on top of Tokio.
If you use Rust for web server backend code then yes, you see `Arc`s everywhere. Otherwise their use is pretty rare, even in large projects. Rust is somewhat unique in that regard, because most Rust code that is written is not really web backend code.
> `Arc`s show up all over the place specifically in async code that targets Tokio runtime running in multithreaded mode. Mostly this is because `tokio::spawn` requires `Future`s to be `Send + 'static`, and this function is a building block of most libraries and frameworks built on top of Tokio.
To some extent this is unavoidable. Non-'static lifetimes correspond (roughly) to a location on the program stack. Since a Future that suspends can't reasonably stay on the stack it can't have a lifetime other than 'static. Once it has to be 'static, it can't borrow anything (that's not itself 'static), so you either have to Copy your data or Rc/Arc it. This, btw, is why even tokio's spawn_local has a 'static bound on the Future.
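A minimal sketch of why that pushes you toward Arc (assuming tokio with the rt and macros features, and a made-up Config type):

    use std::sync::Arc;

    struct Config {
        name: String,
    }

    #[tokio::main]
    async fn main() {
        let config = Arc::new(Config { name: "demo".into() });
        let mut handles = Vec::new();
        for i in 0..4 {
            // The future handed to tokio::spawn must be Send + 'static, so it
            // can't borrow `config` from main's stack - it has to own a handle.
            let config = Arc::clone(&config);
            handles.push(tokio::spawn(async move {
                println!("task {i} sees {}", config.name);
            }));
        }
        for h in handles {
            h.await.unwrap();
        }
    }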
It would be nice if it were ergonomic for library authors to push the decision about whether to use Rc<RefCell<T>> or Arc<Mutex<T>> (which are non-threadsafe and threadsafe variants of the same underlying concept) to the library consumer.
Also importantly, an Arc<T> can be passed to anything expecting a &T, so you’re not necessarily bumping refcounts all over the place when using an Arc. If you only store it in one place, it’s basically equivalent to any other boxed pointer.
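For instance, a tiny sketch of that deref coercion:

    use std::sync::Arc;

    fn greet(name: &str) {
        println!("hello {name}");
    }

    fn main() {
        let name: Arc<String> = Arc::new("world".to_string());
        // Arc<String> coerces through &String down to &str: no refcount
        // traffic, just a borrow of the shared value.
        greet(&name);
    }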
That's fair. It's not really a good pattern though. You get all the runtime overhead of object-soup allocation patterns, syntactic noise making it harder to read than even a primitive GC language (including one using ARC by default and implementing deterministic dropping, a pattern most languages grow out of), and the ability to easily leak [0] memory because it's not a fully garbage-collected solution.
As a rough approximation, if you're very heavy-handed with ARC then you probably shouldn't be using rust for that project.
[0] The term "leak" can be a bit hard to pin down, but here I mean something like space which is allocated and which an ordinary developer would prefer to not have allocated.
I agree that using an Arc where it's unnecessary is not good form.
However, I disagree with generalizations that you can judge the quality of code based on whether or not it uses a lot of Arc. You need to understand the architecture and what's being accomplished.
Reference counting has always been a way to garbage collect. Those who like garbage collection have always looked down on it because it cannot handle circular references and is typically slower than the mark and sweep garbage collectors they prefer.
If you need a reference counted garbage collector for more than a tiny minority of your code, then Rust was probably the wrong choice of language - use something that has a better (mark-and-sweep) garbage collector. Rust is good for places where you can almost always find a single owner, and you can use reference counting for the rare exception.
Reference counting can be used as an input to the garbage collector.
However, the difference between Arc and a Garbage Collector is that the Arc does the cleanup at a deterministic point (when the last Arc is dropped) whereas a Garbage Collector is a separate thing that comes along and collects garbage later.
> If you need a reference counted garbage collector for more than a tiny minority of your code
The purpose of Arc isn't to have a garbage collector. It's to provide shared ownership.
There is no reason to avoid Rust if you have an architecture that requires shared ownership of something. These reductionist generalizations are not accurate.
I think a lot of new Rust developers are taught that Arc shouldn't be abused, but they internalize it as "Arc is bad and must be avoided", which isn't true.
The other forms of GC are tracing followed by either sweeping or copying.
> If you drop an Arc and it's the last reference to the underlying object, it gets dropped deterministically.
Unless you have cycles, in which case the objects are not dropped. And then scanning for cyclic objects almost certainly takes place at a non-deterministic time, or never at all (and the memory is just leaked).
> Garbage collection generally refers to more complex systems that periodically identify and free unused objects in a less deterministic manner.
No. That's like saying "a car is a car; a vehicle is anything other than a car". No, GC encompasses reference counting, and GC can be deterministic or non-deterministic (asynchronous).
This still raises the question of why Arc is purportedly used so heavily. I've written 100s of kLoC of modern systems C++ and never needed std::shared_ptr.
Not sure how seasoned I am, but I reject any comparison to a cooking utensil!
I do find myself running into lifetime and borrow-checker issues much less these days when writing larger programs in rust. And while your comment is a bit cheeky, I think it gets at something real.
One of the implicit design mentalities that develops once you write Rust for a while is a good understanding of where to apply the `UnsafeCell`-related types, which include `Arc` but also `Rc` and `RefCell` and `Cell`. These all relate to interior mutability, and there are many situations where plopping in the right one of these effectively resolves some design requirement.
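As a minimal sketch of what "plopping in the right one" can look like (a made-up Stats type using Cell):

    use std::cell::Cell;

    struct Stats {
        hits: Cell<u64>,
    }

    // Mutation through a shared reference: Cell resolves the design
    // requirement without a &mut and without fighting the borrow checker.
    fn record(stats: &Stats) {
        stats.hits.set(stats.hits.get() + 1);
    }

    fn main() {
        let stats = Stats { hits: Cell::new(0) };
        record(&stats);
        record(&stats);
        assert_eq!(stats.hits.get(), 2);
    }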
The other idiomatic thing that happens is that you implicitly begin structuring your abstract data layouts in terms of chunks of raw structured data and connections between them. This usually involves an indirection - i.e. you index into an array of things instead of holding a pointer to the thing.
Lastly, where lifetimes do get involved, you tend to have a prior idea of what thing they annotate. The example in the article is a good case study of that. The author is parsing a `.notes` file and building some index of it. The text of the `.notes` file is the obvious lifetime anchor here.
You would write your indexing logic with one lifetime 'src: `fn build_index<'src>(src: &'src str)`
Internally to the indexing code, references to 'src-annotated things can generally be passed around freely, as their lifetimes all converge on 'src.
Externally to the indexing code, you'd build a string of the notes text and pass a reference to it into the `build_index` function.
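Concretely, a minimal sketch of that shape (the '#tag' line convention here is made up for illustration):

    use std::collections::HashMap;

    // Everything the index holds borrows from the single 'src anchor.
    fn build_index<'src>(src: &'src str) -> HashMap<&'src str, Vec<&'src str>> {
        let mut index: HashMap<&'src str, Vec<&'src str>> = HashMap::new();
        for line in src.lines() {
            if let Some(tag) = line.strip_prefix('#') {
                index.entry(tag.trim()).or_default().push(line);
            }
        }
        index
    }

    fn main() {
        let notes = std::fs::read_to_string(".notes").unwrap_or_default();
        let index = build_index(&notes); // `index` cannot outlive `notes`
        println!("{} tags", index.len());
    }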
For simple CLI programs, you tend not to really need anything more than this.
It gets more hairy if you're looking at constructing complex object graphs with complex intermediate state, partial construction of sub-states, etc. Keeping track of state that's valid at some level, while temporarily broken at another level, is where it gets really annoying with multiple nested lifetimes and careful annotation required.
But it was definitely a bit of a hair-pulling journey to get to my state of quasi-peace with Rust's borrow checker.
This is exactly the opposite of what he's saying. Using Arc everywhere is hacking around the borrow checker; a seasoned Rust developer will structure their code in a way that works with the borrow checker. Arc has a very specific use case, and a seasoned Rust developer will rarely use it.
Arc<T> is all over the place if you're writing async code unfortunately. IMO Tokio using a work-stealing threaded scheduler by default and peppering literally everything with Send + Sync constraints was a huge misstep.
I wind up using Arc a lot while using async streams. This tends to occur when emulating a Unix-pipeline-like architecture that also supports concurrency. Basically, "pipelines where we can process up to N items in parallel."
But in this case, the data hiding behind the Arc is almost never mutable. It's typically some shared, read-only information that needs to live until all the concurrent workers are done using it. So this is very easy to reason about: Stick a single chunk of read-only data behind the reference count, and let it get reclaimed when the final worker disappears.
Arc + a work-stealing scheduler is common. But work-stealing schedulers are common in general (e.g. libdispatch popularized them). I believe the only alternative is thread-per-core, but that isn't very common/popular. For what it's worth, Zig would look very similar, except its novel injectable I/O syntax isn't compatible with work stealing.
Even then, while I'd agree that Arc is used in lots of places in work-stealing runtimes, I disagree that it's used everywhere - and you can't really do anything else if you want to leverage all your cores with minimum effort without building your application specifically to deal with that.
Being possible with minimal effort doesn't mean it has to be the default. The issue I have is that huge portions of Tokio's (and other async libs') APIs have a Send + Sync constraint that destroys the benefit of LocalSet / spawn_local. You can't build an application with the specialized thread-per-core or single-threaded runtime even if you wanted to, because of pervasive incidental complexity.
I don't care that they have a good work-stealing event loop; I care that it's the default, and that their APIs all expect the work-stealing implementation and unnecessarily constrain cases where you don't use that implementation. It's frustrating, and I go out of my way to avoid Tokio because of it.
Edit: the issues are in Axum, not the core Tokio API. Other libs have this problem too due to the aforementioned defaults.
>You can't build an application with the specialized thread-per-core or single-threaded runtime even if you wanted to, because of pervasive incidental complexity. [...] It's frustrating, and I go out of my way to avoid Tokio because of it.
At $dayjob we have built a large codebase (a high-throughput message broker) using the thread-per-core model with tokio (i.e. one worker thread per CPU, pinned to that CPU, driving a single-threaded tokio Runtime) and have not had any problems. Much of our async code is !Send or !Sync (Rc, RefCell, etc.) precisely because we want it to benefit from not needing to run under the default tokio multi-threaded runtime.
We don't use many external libs for async though, which is what seems to be the source of your problems. Mostly just tokio and futures-* crates.
I might be misremembering and the overbearing constraints might be in Axum (which is still a Tokio project). External libs are a huge problem in this area in general, yeah.
Single-threaded runtime doesn't require Send+Sync for spawned futures. AFAIK Tokio doesn't have a thread-per-core backend and as a sibling intimated you could build it yourself (or use something more suited for thread-per-core like Monoio or Glommio).
These extreme generalizations are not accurate, in my experience.
There are some cases where someone new to Rust will try to use Arc as a solution to every problem, but I haven't seen much code like this outside of reviewing very junior Rust developers' code.
In some application architectures Arc is a common feature and it's fine. Saying that seasoned Rust developers rarely use Arc isn't true, because some types of code require shared references with Arc. There is nothing wrong with Arc when used properly.
I think this is less confusing to people who came from modern C++ and understand how modern C++ features like shared_ptr work and when to use them. For people coming from garbage collected languages it's more tempting to reach for the Arc types to try to write code as if it was garbage collected.
I've not explored every program domain, but in general I see two kinds of program memory access patterns.
The first is a fairly generic input -> transform -> output. This is your generic request handler for instance. You receive a payload, run some transform on that (and maybe a DB request) and then produce a response.
In this model, Arc is very fitting for some shared (im)mutable state. Like DB connections, configuration and so on.
The second pattern is something like: state + input -> transform -> new state. E.g. you're mutating your app state based on some input. This fits stuff like games, but also retained UIs, programming language interpreters, and so on.
Using ARCs here muddles the ownership. The gamedev ecosystem has found a way to manage this by employing ECS, and while it can be overkill, the base DOD principles can still be very helpful.
Treat your data as what it is; data. Use indices/keys instead of pointers to represent relations. Keep it simple.
This is something I have noticed too. While I'm by no means seasoned enough to consider myself even mid-level, some of my colleagues are, and what they tend to do is plan ahead much better - or "pedantically", as they put it. The worst thing you can end up doing is trying to change an architectural decision later on.
I don't think there are any Arcs in my codebase (apart from a couple of regrettable ones needed to interface with Javascript callbacks in WASM - this is more a WASM problem than a rust problem).
Definitely not. Arc is for immutable (or sync, e.g. atomics, mutexes) data, while borrow checker protects against concurrent mutations. I think you meant Arc<Mutex<T>> everywhere, but that code smells immediately and seasoned Rust devs don't do that.
A lot of criticism of Rust (not denying that there are also a lot of useful criticisms of Rust out there) boils down to "it requires me to think/train in a different way than I used to, therefore it's hard", and goes on about how the other way is easier. That's not the case; it's just familiar to them, hence they think it's easier and simpler. More people should watch the talk "Simple Made Easy" https://www.youtube.com/watch?v=SxdOUGdseq4
I'm starting to think No True HNer goes without misidentifying a No True Scotsman fallacy.
They are clearly just saying as you become more proficient with X, Y is less of a problem. Not that if the borrow checker is blocking you that you aren't a real Rust programmer.
Let's say you're trying to get into running. You express that you can't breathe well during the exercise and it's a miserable experience. One of your friends tells you that as an experienced runner they don't encounter that in the same way anymore, and running is thus more enjoyable. Do you start screeching No True Scotsman!! at them? I think not.
> > Seasoned Rust coders don’t spend time fighting the borrow checker
> No true scotsman would ever be confused by the borrow checker.
I'd take that No true scotsman over the "Real C programmers write code without CVE" for $5000.
Also, you are strawmanning the argument. GP said "as a seasoned veteran of Rust you learn to think like the borrow checker", not "real Rust programmers were born with knowledge of the borrow checker".
I can't remember the last time I had any problem with the borrow checker. The junior solution is .clone(), a better one is & (a reference), and if you really need to you can start to use <'a>. There is a mild annoyance around which function consumes what, and the LLM era has really helped with this.
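As a sketch of that progression (the helpers are made up):

    // Junior: clone your way out of the borrow.
    fn shouted(s: &str) -> String {
        let mut owned = s.to_string();
        owned.push('!');
        owned
    }

    // Better: borrow when you don't need ownership.
    fn first_word(s: &str) -> &str {
        s.split_whitespace().next().unwrap_or("")
    }

    // When you really need it: an explicit lifetime tying output to input.
    fn longer<'a>(a: &'a str, b: &'a str) -> &'a str {
        if a.len() >= b.len() { a } else { b }
    }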
My beef is sometimes with the way traits are implemented, or how AWS implemented Errors for their library, which is just pure madness.
I believe the benefit of Zig is that it gives you the familiarity of writing code like in C, but has other elements in the language and tooling to make things safer. For example, Zig has optionals, which can eliminate nil dereference. Another example is how you can pass debug or custom allocators during testing that have all sorts of runtime checks to detect bad memory access and resource leaks.
I have some issues with Zig's design, especially around the lack of explicit interface/trait, but I agree with the post that it is a more practical language, just because of how much simpler its adoption is.
100%. I came to Rust from a primarily JavaScript/TypeScript background, and most of the idioms and approaches I was used to translated directly into Rust.
I don't think like a C programmer, my problem is that I think like a Java/Python/Go programmer, and I'm spoiled by getting used to having a garbage collector always running behind me cleaning up my memory poops.
Even though Rust can end up with some ugly/crazy code, I love it overall because I can feel pretty safe that I'm not going to create hard-to-find memory errors.
Sure, I can (and do) write code that causes my (rust) app to crash, but so far they've all been super trivial errors to debug and fix.
I haven't tried Zig yet though. Does it give me all the same compile time memory usage guarantees?
At first, the 12 year old inside me giggled at the thought of 'memory poops', but then I realized that a garbage collector is much more analogous to a waste water treatment plant than a garbage truck and a landfill..
> Seasoned Rust coders don’t spend time fighting the borrow checker - their code is already written in a way that just works.
That hasn't been my experience at all. At best, the first version of code pops out quickly and cleanly because the author knows the appropriate idiom to choose. Refactoring rust code to handle changes in that allocation idiom is extremely expensive, even for the most seasoned developers.
Case in point:
> Once you’ve been using Rust for a while, you don’t have to “restructure” your code to please the borrow checker, because you’ve already thought about “oh, these two variables need to be mutated concurrently, so I’ll store them separately”.
Which fails to handle "these two variables didn't need to be mutated concurrently, but now they do".
I mostly don't agree with this take. A couple of my quibbles:
"Cognitive overhead: You’re constantly thinking about lifetimes, ownership, and borrow scopes, even for simple tasks. A small CLI like my notes tool suddenly feels like juggling hot potatoes."
None of this goes away if you are using C or Zig, you just get less help from the compiler.
"Developers are not idiots"
Even intelligent people will make mistakes because they are tired or distracted. Not being an idiot is recognising your own fallibility and trying to guard against it.
What I will say, that the post fails to touch on, is: the Rust compiler's ability to reason about the subset of programs that are safe is currently not good enough; it too often rejects perfectly good programs. A good example of this is the inability to express that the following is actually fine:
    struct Foo {
        bar: String,
        baz: String,
    }

    impl Foo {
        // Only touches self.bar, but the signature says "all of self, mutably".
        fn barify(&mut self) -> &mut String {
            self.bar.push_str("!");
            &mut self.bar
        }
        // Only reads self.baz, but the signature says "all of self, shared".
        fn bazify(&self) -> &str {
            &self.baz
        }
    }

    fn main() {
        let mut foo = Foo {
            bar: "hello".to_owned(),
            baz: "world".to_owned(),
        };
        let s = foo.barify();
        let a = foo.bazify(); // error[E0502]: cannot borrow `foo` as immutable
                              // because it is also borrowed as mutable
        s.push_str("!!");
    }
which leads to awkward constructs like
    // Borrowing the field directly, rather than through &mut self, lets the
    // compiler see that bar and baz are disjoint.
    fn barify(bar: &mut String) -> &mut String {
        bar.push_str("!");
        bar
    }

    // in main
    let s = barify(&mut foo.bar);
He has a point. Backlinks in Rust are too hard. You can do them safely with Rc, Weak, RefCell, and .borrow(), but it's not trivial.
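A minimal sketch of the safe-but-not-trivial version (with a made-up Note type):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Note {
        text: String,
        // Weak backlinks avoid Rc cycles, which would otherwise leak.
        backlinks: RefCell<Vec<Weak<Note>>>,
    }

    fn new_note(text: &str) -> Rc<Note> {
        Rc::new(Note { text: text.into(), backlinks: RefCell::new(Vec::new()) })
    }

    fn main() {
        let a = new_note("first");
        let b = new_note("links to first");

        // b references a, so record a backlink on a pointing at b.
        a.backlinks.borrow_mut().push(Rc::downgrade(&b));

        // Following a backlink means upgrading the Weak and borrowing.
        for link in a.backlinks.borrow().iter() {
            if let Some(note) = link.upgrade() {
                println!("backlink: {}", note.text);
            }
        }
    }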
If your program runs for a short time and then exits, arena allocation is an option. That seems to be what the author means by "CLI tools". It's the lifetime, not the input format.
"Rust is amazing, if you’re building something massive, multithreaded, or long-lived, where compile-time guarantees actually save your life. The borrow checker, lifetimes, and ownership rules are a boon in large systems."
Yes. That's really what Rust is for. I've written a large metaverse client in Rust, and one of the regression tests I run is to put an avatar in a tour vehicle and let it ride around for 24 hours. About 20 threads. No memory leaks. No crashes. That would take a whole QA team and lots of external tools such as Valgrind in C++, and it would be way too slow in any of the interpreted languages.
This is a really bad take, on par with the "we don't need types" post from last week.
The thing I wish we would remember, as developers, is that not all programs need to be so "safe". They really, truly don't. We all grew up loving lots of unsafe software. Star Fox 64, MS Paint, FruityLoops... the sad truth is that developers are so job-pilled and have pager-trauma, so they don't even remember why they got in the game.
I remember reading somewhere that Andrew Kelley wrote Zig because he didn't have a good language to write a DAW in, and I think it's so well suited to stuff like that! Make cool creative software you like in Zig, and people that get hella mad about memory bugs can stay mad.
Meanwhile, everyone knows that memory bugs made super mario world better, not worse.
> The thing I wish we would remember, as developers, is that not all programs need to be so "safe".
"Safety" is just a shorthand for "my program means what I say". Unsafety is semantic gibberish.
There's lots of reasons to write artistically gibberish code, just as there is with natural language (e.g. Lewis Carroll). Most programs aren't going for code as art though. They're trying to accomplish something definite through a computer and gibberish is directly counterproductive. If you don't mean what you write or care what you get, software seems like the wrong way to accomplish your goals. I'd still question whether you want software even in a probabilistic argument along these lines.
Even for those cases where gibberish is meaningful at a higher level (like IOCCC and poetry), it should be intentional and very carefully crafted. You can use escape hatches to accomplish this in Rust, though I make no comment on the artistic merits of doing so.
The argument you're making is that uncontrolled, unintentional gibberish is a positive attribute. I find that a difficult argument to accept. If we could wave a magic wand and make all code safe with no downsides, who among us wouldn't?
It doesn't change anything about Super Mario World speedruns because you can accomplish the same thing as arbitrary code execution inputs with binary patching. We just have this semi-irrational belief that one is cheating and one is not.
I think it's a bad take because "Developers are not Idiots" and "be disciplined" are not good arguments. It's just choosing to ignore the problem Rust solves.
I am fine with ignoring the problems that rust solves, but not because I'm smart and disciplined. It just fits my use-case of making fast _non-critical_ software. I don't think we should rewrite security and networking stacks in it.
And Rust doesn't market itself as a small and simple scripting language, does it?
Choose the tool that fits your use case. You would never bring in WASM Unity to render a static HTML file. But if you're making a browser game, you might.
I was going to say that it's greatly understating the value of the borrow checker. It guarantees no invalid memory accesses. But then it added:
> This means that basically the borrow checker can only catch issues at comptime but it will not fix the underlying issue that is developers misunderstanding memory lifetimes or overcomplicated ownership. The compiler can only enforce the rules you’re trying to follow; it can’t teach you good patterns, and it won’t save you from bad design choices.
In the short time that I wrote Rust, it never occurred to me that my lifetime annotations were incorrect. They felt like a bit of a chore, but I thought they said what I meant. I'm sure there's a lot of getting used to it - like static types - and it becomes second nature at some point. Regardless, code that doesn't use unsafe can't have two threads concurrently writing the same memory.
The full title is "Why Zig Feels More Practical Than Rust for Real-World CLI Tools". I don't see why CLI tools are special in any respect. The article does make some good points, but it doesn't invalidate the strength of Rust in preventing CVEs IMO. Rust or Zig may feel certain ways to use for certain people, time and data will tell.
Personally, there isn't much I do that needs the full speed of C/C++, Zig, Rust so there's plenty of GC languages. And when I do contribute to other projects, I don't get to choose the language and would be happy to use Rust, Zig, or C/C++.
> I don't see why CLI tools are special in any respect.
Because they don't grow large or need a multi-person team. CLI tools tend to be one & done. In other words, it's saying "Zig, like C, doesn't scale well. Use something else for larger, longer lived codebases."
This really comes across in the article's push that Zig treats you like an adult while Rust is a babysitter. This is not unlike the sentiment for Java back in the day. But the reality is that most codebases don't need to be clever and they do need a babysitter.
It isn't even really that -- most CLI tools are single-threaded and have a short lifespan, so your memory allocation strategy can be as simple as allocating what you need as you go along and then letting program termination clean it up.
I think this focus cuts both ways though - most "one & done" CLI tools will not be bottlenecked by a GC. Many are so performance insensitive that Python is totally fine, and for most of the rest the performance envelope of Go is more than enough. Why would I reach for Rust or Zig for these? "I like C, Zig is like C" is a totally acceptable reason, but then this whole article is preaching to the choir.
Do you know any other languages that tend to be safer than C and suitable for CLI tools but without the borrow checker? Over many years I’ve seen a lot in C++, Go, Perl, Python, Ruby, Pascal, various shells, assembly, Java, and some in Haxe, Ada, Lisp, Scheme, Julia, forms of Basic, and recently JavaScript or Typescript.
Most of those are more memory safe than C. None of them have the borrow checker. This leaves me wondering why - other than proselytizing Zig - this article would make such a direct and narrow comparison between only Zig and Rust.
With how easily Go creates statically compiled binaries and cross-compiles, I think it might be the best language for CLI tools. Unfortunate, because it's annoyingly unexpressive.
> Regardless, code that doesn't use unsafe can't have two threads concurrently writing the same memory.
It's a bit messier than that. Basically the only concurrency-related bug I ever actually want help with from the compiler is memory ordering issues. Rust chose to make those particular racey memory writes safe instead of unsafe.
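A minimal sketch of what that looks like: this compiles without `unsafe`, yet the Relaxed orderings make it a potential logic bug on weakly ordered hardware.

    use std::sync::atomic::{AtomicU64, Ordering};
    use std::thread;

    static DATA: AtomicU64 = AtomicU64::new(0);
    static READY: AtomicU64 = AtomicU64::new(0);

    fn main() {
        let producer = thread::spawn(|| {
            DATA.store(42, Ordering::Relaxed);
            // Relaxed gives no happens-before edge: a reader that sees
            // READY == 1 may still read DATA == 0. Safe Rust, wrong ordering.
            READY.store(1, Ordering::Relaxed);
        });
        if READY.load(Ordering::Relaxed) == 1 {
            println!("data = {}", DATA.load(Ordering::Relaxed));
        }
        producer.join().unwrap();
    }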
I want to like Zig, but D still exists and feels like everything I want from a C-like alternative to C++; I just wish the rest of the industry had adopted it long ago. Zig has a strange syntax, and Rust is basically eating chunks of the industry, especially in programmer tooling across various languages, as is Go (it powers most cloud providers and is the 2nd top choice for AI right after Python).
I remember before Rust when Go vs. D was the topic of the day. I even bought a D book and was working through it when Go was announced, and it won me over. The difference maker for me was the standard library; working with Go was just easier, full stop. That and using names like 'int64' instead of 'double', because that's what my brain likes apparently.
100% agree. I really love D the language. If I could go back in time, I would find Walter and challenge him that he couldn't write {insert half of the Go std lib packages} into the std lib for D because it's too hard and impossible, in the hopes he takes the bait. I would love to see something like a "Framework" for D that is maintained by the maintainers but isn't necessarily the standard library, because people get really touchy when you mess with the std lib - maybe a way to test the waters before actually adding new packages to it. Having an HTTP server OOTB with D would be amazing.
> Last weekend I’ve made a simple CLI tool for myself to help me manage my notes: it parses ~/.notes into a list of notes, then builds a tag index mapping strings to references into that list. Straightforward, right? Not in Rust. The borrow checker blocks you the moment you try to add a new note while also holding references to the existing ones. Mutability and borrowing collide, lifetimes show up, and suddenly you’re restructuring your code around the compiler instead of the actual problem.
I'd love to see the actual code here! When I imagine the Rust code for this, I don't really foresee complicated borrow-checker or reference issues. I imagine something like
    use std::collections::HashMap;

    struct Note {
        filename: String,
        // maybe: contents: String
    }

    // newtype for indices into `notes`
    struct NoteIdx(usize);

    struct Notes {
        notes: Vec<Note>,
        tag_refs: HashMap<String, Vec<NoteIdx>>,
    }
You store indices instead of pointers. This is very unlikely to be slower: both a usize index and a pointer are most likely 64 bits on your hardware; there's arguably one extra memory deref but because `notes` will probably be in cache I'd argue it's very unlikely you'll see a real-life performance difference.
It's not magic: you can still mess up the indices as you add and remove notes.
But it's safer: if you mess up the indices, you'll get an out-of-bounds error instead of writing to an unintended location in your process's memory.
Anyway, even if you don't care about safety, it's clear and easy to think about and reason about, and arguably easier to do printf debugging with: "this tag is mentioned in notes 3, 10 and 190, oh, let's print out what those ones are". That's better than reading raw pointers.
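For concreteness, a hedged sketch of how insertion and lookup might look on top of that (the method names are made up):

    impl Notes {
        fn add_note(&mut self, note: Note, tags: &[&str]) -> NoteIdx {
            let idx = NoteIdx(self.notes.len());
            self.notes.push(note);
            for tag in tags {
                // Store the index, not a reference: no borrow of `notes` is held.
                self.tag_refs.entry(tag.to_string()).or_default().push(NoteIdx(idx.0));
            }
            idx
        }

        fn notes_for_tag(&self, tag: &str) -> Vec<&Note> {
            self.tag_refs
                .get(tag)
                .into_iter()
                .flatten()
                .map(|i| &self.notes[i.0])
                .collect()
        }
    }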
Maybe I'm missing something? This sort of task comes up all day, every day, while writing Rust code. It's just a pretty normal pattern in the language. You don't store raw references for ordinary logic like this. You do need it when writing allocators, async runtimes, etc. Famously, async needs self-referential structs to store stack-local state between calls to `.await`, and that's why the whole business with `Pin` exists.
> memory safety is one puzzle piece of overall software safety
So this. We recently spent about a month carefully instrumenting and coming to understand a subtle bug in our distributed radio network. This all runs on bare-metal C (SAMD21 chips). Because timing, hundreds of little processors, and radios were all involved, it was a pita to surface what the issue was. It was algorithmic. Not a memory problem. Writing this in Rust or Zig (instead of straight C) would not have fixed this problem.
I'd like to consider doing the next generation of this product in Zig or Rust. I'm not opposed. I like having extra tools to make the product better. But they're a small part of the picture in writing good software. The borrow checker may improve your code, but it doesn't guarantee successful software.
I don't know why anyone would write CLI tools in rust or zig. I/O is going to be your bottleneck way more often than GC, in fact I don't really get the GC hate outside of game dev, databases and other memory intensive applications. Why not use Go, Python, etc? People try to make a false dichotomy between memory safety vs. non-memory safety when really it's GC vs. no GC --- memory safety without it is going to be hard either way. More time should be spent on justifying to yourself why you shouldn't be using a GC, and less on which kind of GC-less language you use.
(If you go no GC "because it's fun" then there's no need for the post in the first place --- just use what's fun!)
Instant startup times are really nice. You definitely notice the difference. It also means that you can be a bit lazier when creating wrappers around those tools (running 1000's of times isn't a problem when the startup is 1ms, but would be a problem with 40ms of startup time).
Distribution can also be a lot easier if you don't need to care about the user having a specific version of Python or specific packages available.
I will always reach for a language that has sum types, pattern matching and async support. Catching errors at compile time is a boon too. It doesn't have to be Rust, but after those requirements, why not?
Go is a great option for CLI tools (even though I'm not a fan of the language itself). Python CLI apps can be a big pain to distribute if you have a bunch of dependencies. I think this is also why Rust and Zig are also attractive.. like with Go it's easy to create a statically compiled binary you can just cp into /usr/local/bin.
I also loved Zig when manually typing code, but I increasingly use AI to write my code even in personal projects. In that context, I'd rather use Rust more, since the AI takes care of complex syntax anyway. Also, the rust ecosystem is bigger, so I'd rather stick to this community.
> Developers are not Idiots
I'm often distracted and AIs are idiots, so a stricter language can keep both me and AIs from doing extra dumb stuff.
What are your thoughts on Nim, Odin, V-lang, and D-lang?
I feel like I am most interested in Nim, given how easy it was to pick up and how interoperable it is with C. It has a garbage collector that you can swap out, which seems great for someone like me who doesn't want to worry about manual memory management right now; if it becomes a bottleneck later, I can at least fix it without worrying too much.
I have not given any of those 3 a fair enough shot just yet to make a balanced and objective decision.
Out of all of them, from what little I know and my very superficial knowledge, Odin seems the most appealing to me. Its primary use case, from what I know, is game development, and I feel like that could easily pivot into native desktop application development; I was tempted to make a couple of those in Odin in the past but never found the time.
Nim - I like the concept and the idea of it, but the Python-like syntax just irks me. Haha, I can't seem to get into languages where indentation replaces brackets.
But the GC part of it is pretty neat. Have you checked out Go yet?
I am a big fan of golang lol.
Golang might be the language that I love the most: portable, easy to write, a stdlib that's goated, fast to compile, and a great ecosystem!
But I like Nim in the sense that in Golang I sometimes feel I can't change its GC - although I do know that for most things that wouldn't be a dealbreaker.
Still, I sometimes feel like I should have some freedom to add manual memory management later without restarting from scratch, y'know?
Golang is absolutely goated. This was why I also recommended V-lang; V-lang is really similar to Golang except it can have manual memory management...
They themselves say on the website that, IIRC, if you know Golang, you know 70% of V-lang.
I genuinely prefer Golang over everything, but I still like Nim and V-lang as fun languages, even though I feel like their ecosystems aren't that good; I know they can interop with C, but still...
My understanding of Odin was that it's good for data-oriented design.
I haven't really looked into Odin except joining their Discord and asking them some questions.
It seems that, aside from some similar syntax, it is sort of different from Golang under the hood, as compared to V-lang, which is massively inspired by Golang.
After reading the HN post on SQLite that recommended using SQLite as an application file format (or some alternative), which I agreed with, I thought of creating an app in Flutter similar to LocalSend - except Flutter only supports C-esque interop, and it would've been weird to take Golang, pass it through C, and then through Flutter or something, so I gave up...
I thought that Odin could compile to C and I could use that, but it turns out that Odin doesn't really compile to C, as compared to Nim and V-lang, which do.
I think Nim and V-lang are the best ways to write an app like that with Flutter, and I am now somewhat curious what you guys think would be the best way of writing highly portable apps with a dev-ex similar to Golang.
I have actually thought about using something like Godot for this project too, and seeing if Godot supports something like Golang or TypeScript or anything really. Idk, I was just messing around and having a bit of fun lol.
“Rust lifetimes can be chore, so use a C-like language that requires you to manage them in your head”
Weird that they don’t consider other options, in particular languages with reference counting or garbage collection. Those will not solve all ownership issues, but for immutable objects, they typically do. For short-running CLI tools, garbage collecting languages may even be faster than ones with manual memory management because they may be able to postpone all memory freeing until the program exits.
Nope, rust-analyzer is incredible, whereas the Zig LSP felt worse than Go's.
I agree the borrow checker can be a pain though. I wish there were something like Rust with a great GC - Go has loads of other bad design decisions (err != nil, etc.), and Cargo is fantastic.
What about every Java/JS/Python/Rust/Go programmer who ever created a CVE? Out-of-bounds access is, indeed, a very common cause of dangerous vulnerabilities, but Zig eliminates it to the same extent as Rust. UAF is much lower on the list, to the point that non-memory-safety-related causes easily dominate it.[1]
The question is, then, what price in language complexity are you willing to pay to completely avoid the 8th most dangerous cause of vulnerabilities as opposed to reducing them but not eliminating them? Zig makes it easier to find UAF than in C, and not only that, but the danger of UAF exploitability can be reduced even further in the general case rather easily (https://www.cl.cam.ac.uk/~tmj32/papers/docs/ainsworth20-sp.p...). So it is certainly true that memory unsafety is a cause of dangerous vulnerabilities, but it is the spatial unsafety that's the dominant factor here, and Zig eliminates that. So if you believe (rightly, IMO) that a language should make sure to reduce common causes of dangerous vulnerabilities (as long as the price is right), then Zig does exactly that!
I don't think it's unreasonable to find the cost of Rust justified to eliminate the 8th most dangerous cause of vulnerabilities, but I think it's also not unreasonable to prefer not to pay it.
I don't think the rank on a list that includes stuff like SQL injection and path traversal tells you much about what language features are worthwhile in the C/C++ replacement space. No developer that works on something like Linux or Chromium would introduce a SQL injection vulnerability unless they experienced severe head trauma. They do still introduce use after free vulnerabilities with some regularity.
First, UAF can be made largely non-dangerous without eliminating it (as in the link above and others). It's harder to exploit to begin with, and can be made much harder still virtually for free. So the number of UAFs and the number of exploitable vulnerabilities due to UAF are not the same, and have to be treated as separate things (because they can be handled separately).
Second, I don't care if my bank card details leak because of CSRF or because of a bug in Chromium. Now, to be fair, the list of dangerous vulnerabilities weighs things by number of incidents and not by number of users affected, and it is certainly true that more people use Chrome than those who use a particular website vulnerable to CSRF. But things aren't so simple, there, too. For example, I work on the JVM, which is largely written in C++, and I can guarantee that many more people are affected by non-memory-safety vulnerabilities in Java programs than by memory-safety vulnerabilities in the JVM.
Anyway, the point is that the overall danger and incidence of vulnerabilities - and therefore the justified cost in addressing all the different factors involved - is much more complicated than "memory unsafety bad". Yes, it's bad, but different kinds of memory unsafety are bad to different degrees, and the harm can be controlled separately from the cause.
Now, I think it's obvious that even Rust fans understand there's a complex cost/benefit game here, because most software today is already written in memory-safe languages, and the very reason someone would want to use a language like Rust in the first place is because they recognise that sometimes the cost of other memory-safe languages isn't worth it, despite the importance of memory safety. If both spatial and temporal safety were always justified at any reasonable cost (that is happily paid by most software already), then there would be no reason for Rust to exist. Once you recognise that, you have to also recognise that what Rust offers must be subject to the same cost/benefit analysis that is used to justify it in the first place. And it shouldn't be surprising that the outcome would be similar: sometimes the cost may be justified, sometimes it may not be.
> I don't care if my bank card details leak because of CSRF or because of a bug in Chromium
Sure, but just by virtue of what these languages are used for, almost all CSRF vulnerabilities are not in code written in C, C++, Rust, or Zig. So if I’m targeting that space, why would I care that some Django app or whatever has a CSRF when analyzing what vulnerabilities are important to prevent for my potential Zig project?
You’re right that overall danger and incidence of vulnerabilities matter - but they matter for the actual use-case you want to use the language for. The Linux kernel for example has exploitable TOCTOU vulnerabilities at a much higher rate than most software - why would they care that TOCTOU vulnerabilities are rare in software overall when deciding what complexity to accept to reduce them?
> Unfortunately there are too many non-believers for systems programming languages with automatic resource management to take off as they should.
Or those languages had other (possibly unrelated) problems that made them less attractive.
I think that in a high-economic-value, competitive activity such as software, it is tenuous to claim that something delivers a significant positive gain and at the same time that that gain is discarded for irrational reasons. I think at least one of these is likely to be false, i.e. either the gain wasn't so substantial or there were other, rational reasons to reject it.
As proven in several cases, it is mostly caused by management not willing to keep the required investment to make it happen.
Projects like Midori, Swift, Android, Maxine VM, and GraalVM only happen when someone high enough is willing to keep them going until they take off.
When they fail, it is usually because management backing fell through, not because there wasn't a way to sort out whatever was the cause.
Even Java had enough backing from Sun, IBM, Oracle and BEA during its early uncertainty days outside being a language for applets, until it actually took off on server and mobile phones.
If Valhalla never makes it, will it be because Oracle gave up funding the team after all these years, or because it was impossible and a waste of money?
All jokes aside, it doesn’t actually take much discipline to write a small utility that stays memory safe. If you keep allocations simple, check your returns, and clean up properly, you can avoid most pitfalls. The real challenge shows up when the code grows, when inputs are hostile, or when the software has to run for years under every possible edge case. That’s where “just be careful” stops working, and why tools, fuzzing, and safer languages exist.
And a segfault would be worse than a panic, but data corruption and out-of-bounds memory access are the real problems. In reality, though, most C programs I use daily have never crashed in decades.
Yeah, I often wonder if people who have this attitude have ever tried to run a non-trivial C program they wrote with the clang sanitizers on. A humbling experience every time.
I've had at least one instance of Ghostty running on both my work and personal machine continuously since I first got access to the beta last November, and I haven't seen a single segfault in that entire time. When have you seen them?
I've seen the amount of effort Mitchell &co put into ensuring memory safety of Ghostty in the 1.2 release notes, but after upgrading I am still afraid to open a new pane while there's streaming output in the current one because in 1.1.3 that meant a crash more often than not.
So Ghostty was first publicly released on, I think, December 27th last year; then 1.0.1, 1.1.0, 1.1.1, and 1.1.2 were released within the next month and a half to fix bugs found by the large influx of users, and there hasn't been a segfault reported since. Users who are finding a large number of segfaults should probably report them to the maintainers.
It makes me sad, because they demonstrated JavaScriptCore is shockingly better than V8 for node-likes. The Typescript compiler (which like basically any non-trivial typechecker is CPU bound) is consistently at least 2x faster with Bun on large projects I've worked on.
For that example sure, and admittedly the entire JavaScript/TypeScript processing ecosystem is moving in that direction. But the TypeScript compiler is not the only CPU-bound JavaScript out there.
I think the problem the practical programmer has with a statement like this is the implication that only certain languages require some basic understanding and a bit of discipline to avoid CVEs.
Rust has a strict model that effectively prevents certain kinds of logic errors/bugs. So that's good (if you don't mind the price). But it doesn't address all kinds of other logic errors/bugs. It's like closing one door to the barn, but there are six more still wide open.
I see rust as an incremental improvement over C, which comes at quite a hefty price. Something like zig is also an incremental improvement over C, which also comes at a price, but it looks like a significantly smaller one.
(Anyway, I'm not sure zig is even the right comp for rust. There are various languages that provide memory safety, if that's your priority, which also generally allow dropping into "unsafe" -- typically C -- where performance is needed.)
> But it doesn't address all kinds of other logic errors/bugs. It's like closing one door to the barn, but there are six more still wide open.
Could you point at some language features that exist in other languages that Rust doesn't have that help with logic errors? Sum types + exhaustive pattern matching is one of the features that Rust does have that helps a lot to address logic errors. Immutability by default, syntactic salt on using globals, trait bounds, and explicit cloning of `Arc`s are things that also help address or highlight logic bugs. There are some high-level bugs that the language doesn't protect you from, but I know of no language that would. Things like path traversal bugs, where passing in `../../secret` lets an attacker access file contents that weren't intended by the developer.
The only feature that immediately comes to mind that Rust doesn't have that could help with correctness is constraining existing types, like specifying that a u8 value is only valid between 1 and 100. People are working on that feature under the name "pattern types".
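As a small illustration of the sum-types point above, a hedged sketch (the enum is hypothetical) of how exhaustive matching turns a missed case into a compile error rather than a logic bug:

    enum PaymentState {
        Pending,
        Settled,
        Refunded, // newly added variant
    }

    fn describe(state: PaymentState) -> &'static str {
        match state {
            PaymentState::Pending => "waiting",
            PaymentState::Settled => "done",
            // Delete this arm and the compiler rejects the match as
            // non-exhaustive, so every call site must handle Refunded.
            PaymentState::Refunded => "returned",
        }
    }

    fn main() {
        println!("{}", describe(PaymentState::Pending));
    }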
> The words of every C programmer who created a CVE.
Much of Zig's user base seems to be people new to systems programming. Coming from a managed code background, writing native code feels like being a powerful wizard casting fireball everywhere. After you write a few unsafe programs without anything going obviously wrong, you feel invincible. You start to think the people crowing about memory safety are doing it because they're stupid, or cowards, or both. You find it easy to allocate and deallocate when needed: "just" use defer, right? Therefore, if someone screws up, that's a personal fault. You're just better, right?
You know who used to think that way?
Doctors.
Ignaz Semmelweis famously discovered that hand-washing before childbirth decreased mortality by an order of magnitude. He died poor and locked in an asylum because the doctors of the day were too proud to acknowledge the need to adopt safety measures. If a mandatory pre-surgical hand-washing step prevented complications, that implied the surgeon had a deficiency in cleanliness and diligence, right?
So they demonized Semmelweis and patients continued for decades to die needlessly. I'm sure that if those doctors had been on the internet today, they would say, as the Zig people do say, "skill issue".
It takes a lot of maturity to accept that even the most skilled practitioners of an art need safety measures.
that is the precise point at which the article lost me. ironically it's often good programmers who don't "get" the benefit of building memory management and discipline into the language, rather than leaving it to be a cognitive burden on every programmer.
Absolutely. A big factor is undefined behavior, which makes it look like everything works. Until it doesn't. I quit C long ago because I don't want to deal with manual memory management in any language. I was overwhelmed by Zig's approach as well. Rust is pretty much the only language making it bearable to me.
are you saying that such understanding isn't enough or that every C programmer who said that didn't understand those things?
C and Zig aren't the same. I would wager that syntax differences between languages can help you see things in one language that are much harder to see in another. I'm not saying that Zig or C are good or bad for this, or that one is better than the other in terms of the ease of seeing memory problems with your eyes, I'm just saying that I would bet that there's some syntax that could be employed which make memory usage much more clear to the developer, instead of requiring that the developer keep track of these things in their mind.
Even something where you must manually annotate each function, so that some metaprogram running at compile time can check that nothing is out of place, could help detect memory leaks, I would think. Or something; that's just an idea. There's a whole metaprogramming world of possibilities here that Zig allows and C simply doesn't. I think there's a lot of room for tooling like this to detect problems without forcing you to contort yourself into strange shapes simply to make the compiler happy.
> are you saying that such understanding isn't enough or that every C programmer who said that didn't understand those things?
Probably both. They're words of hubris.
C and Zig give the appearance of practicality because they allow you to take shortcuts under the assumption that you know what you're doing, whereas Rust does not; it forces you to confront the edge cases in terms of ownership and provenance and lifetime and even some aspects of concurrency right away, and won't compile until you've handled them all.
And it's VERY frustrating when you're first starting because it can feel so needlessly bureaucratic.
But then after a while it clicks: ownership is HARD. Lifetimes are HARD. And suddenly, when going back to C and friends, you find yourself thinking about these things at the design phase rather than at the debugging phase - and write better, safer code because of it.
And then when you go back to Rust again, you breathe a sigh of relief because you know that these insidious things are impossible to screw up.
This article seems pretty confused about what the borrow checker does or does not do - I've never heard compile time enforcement listed as a negative of the borrow checker before. It might do the author good to try writing some (non-trivial) memory management in both Zig and Rust some time.
This feels like it was written by chatgpt & there are lots of ads. The worst rust articles are those explaining why rust is "better" than cpp for xyz. This is the worst kind of zig article
zig & rust have a somewhat thin middle area in the venn diagram.
I can guarantee you that none of it was written by ChatGPT or any other LLM.
As for the Ads, even though it's my site, I'd urge you to turn on adblocker, pi-hole or anything like that, I won't mind.
I have ads on there yes, but since I primarily write tech articles for a target audience of tech people you can imagine that most readers have some sort of adblocker either browser, network or otherwise.
So my grand total monthly income from ads basically covers hosting costs and so on.
It's definitely LLM-generated, or at least LLM-assisted, given that it has serious errors like saying std.heap.page_allocator is a general-purpose allocator. The structure and many of the sentences are stereotypically LLM, as are the weird metaphors (stack dishes, heap clothes).
Edit: The author seems to be in the community and I'm mistaken.
Yeah this whole blog is sus. Author claims to have been around for 17 years, doesn't have a single project of note and makes naive claims. Github history has thousands of commits per year every single day to private repos, yet very little public code or record to offer credibility. Not clear where they work or what they work on. Makes inflammatory claims about hot-topic languages. Throws up ads on the blog. Personally posts blog to HN on a relatively new account, rather than it organically finding its way here. Sorry I'm not buying it.
He seems to know what he's doing, from the author's Twitter:
Post something slightly mentioning rust in r/cpp, Rust evangelists show up, post something slightly mentioning rust in r/zig, Rust evangelists show up. How is this not a cult?
> Author claims to have been around for 17 years, doesn't have a single project of note and makes naive claims.
Plenty of such people out there.
This guy appears to just personally dislike Rust for reasons undisclosed and tries to rationalize it via posts like this one.
It's like with this former coworker of my former coworker who was really argumentative, seemingly for the sake of it. I did some digging and found that his ex left him and is now happily married.
Turns out that when he was criticizing the use of if-else in Angular templates what he was really thinking about was "if someone else".
The fact that your app crashes when you run out of stack is a compiler bug, not a feature. Memory is memory. The fact that languages in the 40s split it into stack and heap doesn't make it a foundational mathematical law.
Yes, safety isn't correctness but if you can't even get safety then how are you supposed to get correctness?
For small apps Zig probably is more practical than Rust. Just like hiring an architect and structural engineers for a fence in your back yard is less practical than winging it.
Aren't these two points contradictory? Forgive me if I'm misunderstanding.
> Rust’s borrow checker is a pretty powerful tool that helps ensure memory safety during compile time. It enforces a set of rules that govern how references to data can be used, preventing common memory safety errors such as null pointer dereferencing, dangling pointers and so on. However you may have noticed the words compile time in the previous sentence. Now if you have any experience at systems programming you will know that compile time and runtime are two very different things. Basically compile time is when your code is being translated into machine code that the computer can understand, while runtime is when the program is actually running and executing its instructions. The borrow checker operates during compile time, which means that it can only catch memory safety issues that can be determined statically, before the program is actually run.
>
> This means that basically the borrow checker can only catch issues at comptime but it will not fix the underlying issue that is developers misunderstanding memory lifetimes or overcomplicated ownership. The compiler can only enforce the rules you’re trying to follow; it can’t teach you good patterns, and it won’t save you from bad design choices.
This appears to be claiming that Rust's borrow checker is only useful for preventing a subset of memory safety errors, those which can be statically analysed. Implying the existence of a non-trivial quantity of memory safety errors that slip through the net.
> The borrow checker blocks you the moment you try to add a new note while also holding references to the existing ones. Mutability and borrowing collide, lifetimes show up, and suddenly you’re restructuring your code around the compiler instead of the actual problem.
Whereas this is only A Thing because Rust enforces rules so that memory safety errors can be statically analysed and therefore the first problem isn't really a problem. (Of course you can still have memory safety problems if you try hard enough, especially if you start using `unsafe`, but it does go out of its way to "save you from bad design choices" within that context.)
If you don't want that feature, then it's not a benefit. But if you do, it is. The downside is that there will be a proportion of all possible solutions that are almost certainly safe, but will be rejected by the compiler because it can't be 100% sure that it is safe.
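To make the quoted "adding a note while holding references" scenario concrete, a minimal sketch (the notes example is hypothetical) of why the compiler rejects it: pushing to a Vec can reallocate its buffer and invalidate outstanding references.

    fn main() {
        let mut notes = vec![String::from("first note")];
        let first = &notes[0]; // shared borrow into the Vec's buffer
        // notes.push(String::from("second note")); // rejected: push may
        // reallocate the buffer and leave `first` dangling
        println!("{first}");
        notes.push(String::from("second note")); // fine: the borrow has ended
        println!("{}", notes.len());
    }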
IMO, as a C++ developer, Swift makes the most sense to me if I were looking for a safer alternative.
I think people prefer what's familiar to them, and Swift definitely looks closer to existing C++ to me, and I believe has multiple people from the C++ WG working on it now as well, supposedly after getting fed up with the lack of language progress on C++.
The most recent versions gained a lot in the way of cross-platform availability, but the lack of a native UI framework and its association with Apple seem to put off a lot of people from even trying it.
I wish it was a lot more popular outside of the Apple ecosystem.
I'm still having a hard time understanding who is supposed to use Zig.
If I don't need absolute best performance, I can use GC-ed systems like Node, Python, Go, OCaml, or even Java (which starts fast now thanks to Graal AOT) and enjoy both the safety and expressive power of using a high-level language. When I use a GCed language, I don't have to worry about allocation, lifetimes, and so on, and the user gets a plenty good experience.
If I need the performance only manual memory management can provide (and this situation arises a lot less often than people think it does), I can justify spending the extra time expressing my thoughts in Rust, which will get me both performance and safety.
Why would I go to the trouble of using Zig instead of Rust? Zig, like Rust, incurs a complexity and ecosystem cost. It doesn't give me safety in exchange. I put in about as much effort as I would into a Rust program but don't get anything extra back. (Same goes if you substitute "C++" for "Rust".)
> All it took was some basic understanding of memory management and a bit of discipline.
Is the idea behind Zig just that it's perfectly safe if you know what you're doing --- therefore using Zig is some kind of costly signal of competence? That's like someone saying free-solo-ing a cliff face is perfectly safe if you know what you're doing. Someone falls to his death? Skill issue, right?
We have decades of experience showing that nobody, no matter how much "understanding" and "discipline" he has, can consistently write memory-safe code with manual memory management in a language that doesn't enforce memory safety rules.
So what's the value proposition for Zig?
Are you supposed to use it instead of something like Go or Python or AOT-Kotlin and spend more of your time dealing with memory than you would in one of these languages? Why?
Or are you supposed to use it instead of Rust and get, what, slightly faster compile times, maybe? And no memory safety?
If I had a penny for every time I heard that devs are not idiots, I'd be billionaire.
It's true, but devs are not infallible and that's the point of Rust. Not idiots, not infallible either.
IMO admitting that one can make mistakes even if they don't think they have is a sign of an experienced and trustworthy developer.
It's not that Rust compiler engineers think that devs are idiots. In fact you CAN have footguns in Rust, but a footgun should never be easy to reach for, because that's how you get security vulnerabilities.
Actually, developers are idiots. Everyone is. Some just don't know it or won't admit it.
I once joined a company with a large C/C++ codebase. There I worked with some genuinely expert developers - people who were undeniably smart and deeply experienced. I'm not exaggerating and mean it.
But when I enabled the compiler warnings (which annoyed them) they had disabled and ran a static analyzer over the codebase for the first time, hundreds of classic C bugs popped up: memory leaks, potential heap corruptions, out-of-bounds array accesses, you name it.
And yet, these same people pushed back when I introduced things like libfmt to replace printf, or suggested unique_ptr and vector instead of new and malloc.
I kept hearing:
"People just need to be disciplined allocations. std::unique_ptr has bad performance"
"My implementation is more optimized than some std algorithm."
"This printf is more readable than that libfmt stuff."
etc.
The fact is, developers, probably especially the smart ones, need to be prevented from making avoidable mistakes. You're building software that processes medical data. Or steers a car. Your promise to "pay attention" and "be careful" cannot be the safeguard against catastrophe.
To be honest, the generated machine code / assembly is often more readable than the actual c++ code in c++ stdlibs. So I can sympathize with the "This printf is more readable than that libfmt stuff." comment :)
I am sorry, but how do point 1 and point 2 for safety work together? You don't want the program to corrupt your data, but you also don't want it to immediately crash the moment some invariant is not being held?
> Compile-time only: The borrow checker cannot fix logic bugs, prevent silent corruption, or make your CLI behave predictably. It only ensures memory rules are followed.
Also not really true from my experience. There have been plenty of times where the borrow checker is a MASSIVE help in multithreaded context.
This really misses a major point. If you write something in Zig, you can have some confidence in the stability of the program, if you trust yourself as a developer. If someone else writes something else in Zig, you have to live with the possibility that they have not been as responsible as you would have preferred.
Indeed. The other day I was messing around with making various associative data structures in Zig.
I stole someone else's benchmark to use, and at one point I ran into seriously buggy behavior on strings (but not integers) that wasn't caught early, at the point where it happened, even with -Odebug.
Turns out the benchmark was freeing the strings before it finished performing all of the operations on the data structure. That's the sort of thing that Rust makes nearly impossible, but Zig didn't catch at all.
This is true for every language. Logic bugs exist. I'll take good OS process isolation over 'written-in-Rust' though I wouldn't mind both.
That being said, you've missed the point if you can't understand that safety comes at a real cost, not an abstract or 'by any means necessary' cost, but a cost as real as the safety issues.
This blog is atrocious from an ad standpoint and the recent flood of posts feels promotional and intentionally controversial. The articles are also devoid of any interesting perspectives. Are people actually reading this?
No shade here, just a genuine question: why run ads on a blog like this? A personal technical blog probably doesn't get a ton of traffic. So what's the point? I'm honestly curious.
I like Zig, although the Bun Github tracker is full of segfaults in Zig that are presumably quite exploitable. Unclear what to draw from this, though.
[1]: https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3...
I would argue that good C or C++ code is actually just Rust code with extra steps. So in this sense, Rust gets you to the "desired result" much easier compared to using C or C++ because no one is there to enforce anything and make you do it.
You can argue that using C or C++ can get you 80% of the way, but most people don't actively think "okay, how do I REALLY mess up this program?" and fix all the various invariants that they forgot to handle. Even worse, this issue is endemic in higher-level dynamic languages like Python too. Most people, most of the time, only think about the happy path.
What does that have to do with Zig though?
"It's true when you ride a skateboard with a helmet on."
Rust is not the helmet. It is not a safety net that only gives you a benefit in rare catastrophic events.
Rust is your lane assist. It relieves you from the burden of constant vigilance.
A C or C++ programmer that doesn't feel relief when writing Rust has never acquired the mindset that is required to produce safe, secure and reliable code.
Public Safety Announcement: Lane assist does not relieve you from the burden of constant vigilance.
> Rust is your lane assist. It relieves you from the burden of constant vigilance.
Interesting analogy. I love lane assist. When I love it. And hate it when it gets in the way. It can actively jerk the car in weird and surprising ways when presented with things it doesn’t cope well with. So I manage when it’s active very proactively. Rust of course has unsafe… but… to keep the analogy, that would be like driving in a peer group where everyone was always asking me if I had my lane assist on, where when I arrived at a destination, I was badgered with “did you do the whole drive with lane assist?” And if I didn’t, I’d have explained to me the routes and techniques I could have used to arrive at my destination using lane assist the whole way.
Disclaimer, I have only dabbled a little with rust. It is the religion behind and around it that I struggle with, not the borrow checker.
I have also mostly only dabbled with Rust, and I've come to the conclusion that it is a fantastic language for a lot of things but it is very unforgiving.
The optimal way to write Python is to have your code properly structured, but you can just puke a bunch of syntax into a .py file and it'll still run. You can experiment with a file that consists entirely of "print('Hello World')" and go from there. Import a json file with `json.load(open(filename))` and boom.
Rust, meanwhile, will not let you do this. It requires you to write a lot of best-practice stuff from the start. Loading a JSON file in a function? That function owns that new data structure, you can't just keep it around. You want to keep it around? Okay, you need to do all this work. What's that? Now you need to specify a lifetime for the variable? What does that mean? How do I do that? What do I decide?
This makes Rust feel much less approachable and I think gives people a worse impression of it at the start when they start being told that they're doing it wrong - even though, from an objective memory-safety perspective, they are, it's still frustrating when you feel as though you have to learn everything to do anything. Especially in the context of the small programs you write when you're learning a language. I don't care about the 'lifetime' of this data structure if the program I'm writing is only going to run for 350ms.
As I've toiled a bit more with Rust on small projects (mine or others') I feel the negative impacts of the language's restrictions far more than I feel the positive impacts, but it is nice to know that my small "download a URL from the internet" tool isn't going to suffer from a memory safety bug and rootkit my laptop because of a maliciously crafted URL. I'm sure it has lots of other bugs waiting to be found, but at least it's not those ones.
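For what it's worth, a minimal sketch of how a small tool like that can dodge the lifetime questions entirely by returning owned data (the `load_notes` helper and file name are hypothetical):

    use std::fs;

    // Returning an owned String means the caller never names a lifetime.
    fn load_notes(path: &str) -> std::io::Result<String> {
        fs::read_to_string(path)
    }

    fn main() -> std::io::Result<()> {
        let notes = load_notes("notes.json")?; // file name is made up
        println!("{} bytes", notes.len());
        Ok(())
    }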
Rust is very forgiving if the goal is not the absolute best performance. One can rewrite Python code into Rust mostly automatically and the end result is not bad. Recent LLMs can do it without complex prompting.
The only problem is that the code will be littered with Rc<RefCell<Foo>>. If Rust had a compact notation for that, a lot of the pain of fighting the borrow checker just to avoid the above would be eliminated.
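For readers who haven't hit this, a minimal sketch of the pattern being complained about, assuming a hypothetical `Foo`:

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Foo {
        count: u32,
    }

    fn main() {
        // Shared, mutable, single-threaded ownership: the spelling with
        // no shorthand that the comment above is complaining about.
        let a: Rc<RefCell<Foo>> = Rc::new(RefCell::new(Foo { count: 0 }));
        let b = Rc::clone(&a); // a second handle to the same Foo
        b.borrow_mut().count += 1; // runtime-checked mutable borrow
        println!("{}", a.borrow().count); // prints 1
    }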
>> If Rust would have a compact notation for "Rc<RefCell<Foo>>"
That sounds like Rhai or one of the other Rust-alike scripting languages.
No. It is not an invisible safeguard - it yaps and significantly increases compile time and (a matter of great debate) development effort.
It is a helmet, just accept it. Helmets are useful.
The borrow checker is never a significant portion of compile times.
It is a helmet. But at least it's a helmet in situations where you get into brain cracking accidents multiple times a day. In the end the helmet allows you to get back up and continue your journey compared to when you had no helmet.
We're talking about Zig not C. Same argument will apply to Odin.
These modern approaches are not languages that result in constant memory-safety issues like you imply.
I don't think this is any better of an argument.
Maybe yours is a more apt analogy, but as a very competent driver I can't tell you how often lane assist has driven me crazy.
If I could simply rely on it in all situations, then it would be fine. It's the death of a thousand cuts each and every time it behaves less than ideally that gets to me and I've had to turn it off in every single car a I've driven that has it.
As I mention elsewhere, the C crowd didn't call languages like Pascal and Modula-2 "programming with a straitjacket" for no reason.
Turns out not wearing that helmet, and continuously falling down for 40 years at the skate park, has its price.
Arguing that one language is more ergonomic but can produce the same safety if you use it unergonomically is... not very useful in a context where safety is highly valued.
> it's a universal tradeoff, that is: Safety is less ergonomic.
I'm not sure that that tradeoff is quite so universal. GC'd languages (or even GC'd implementations like Fil-C) are equally or even more memory-safe than Rust but aren't necessarily any less ergonomic. If anything, it's not an uncommon position that GC'd languages are more ergonomic since they don't forbid some useful patterns that are difficult or impossible to express in safe Rust.
The tradeoff is between performance, safety and ergonomics. With GC languages you lose the first one.
> The tradeoff is between performance, safety and ergonomics. With GC languages you lose the first one.
That's a myth that just won't die. How is it that people simultaneously believe
1) GC makes a language slow, and
2) Go is fast?
Go's GC also isn't the only safe one. There are plenty of good options out there. You are unlikely to encounter a performance issue using one of these languages that you could resolve only with manual memory management.
It also drives me insane when I dump the problems I have with Rust about this exact issue - that I usually have to restructure my code to satisfy the compiler's needs - and they come at me with the "skill issue" club...
I honestly don't even know what to respond to that, but it's kind of weird to me to honestly think that you'd need essentially a "PhD" in order to use a tool...
It's an amazing piece of marketing to corner anyone who dislikes a certain hassle as being mentally deficient - that's what "skill issue" means in this context.
Not saying that Rust is necessarily easy to pick up, but hundreds of thousands of people use Rust without a PhD in any subject.
Try and appreciate the humor in what you're replying to without fully discounting the point of it.
> As long as the audience accepts the framing that ergonomics doesn't matter because it can't be quantified, the hand-waving exemplified above will confound.
I interpreted the parent to be saying that ergonomics IS (at least partly) subjective. The subjective aspect is "what you are used to". And once you get used to Rust its ergonomics are fine, something I agree with having used Rust for a few years now.
> The Rust community should be upfront about this tradeoff
I think they are. But more to the point, I think that safety is not really something you can reasonably "trade-off", at least not for non-toy software. And I think that because I don't really see C/C++/Zig people saying "we're trading off safety for developer productivity/performance/etc". I see them saying "we can write safe code in an unsafe language by being really careful and having a good process". Maybe they're right, but I'm skeptical based on the never-ending proliferation of memory safety issues in C/C++ code.
> Seasoned Rust coders don’t spend time fighting the borrow checker
I like the fact that "fighting the borrow checker" is an idea from the period when the borrowck only understood purely lexical lifetimes. Back then you had to fight to explain why the thing you wrote, which was obviously correct, was in fact correct.
That was already ancient history by the time I learned Rust in 2021. But this idea that Rust means "fighting the borrow checker" took off anyway, even though the actual thing it's about was solved.
Now for many people it really is a significant adjustment to learn Rust if your background is exclusively say, Python, or C, or Javascript. For me it came very naturally and most people will not have that experience. But even if you're a C programmer who has never had most of this [gestures expansively] before you likely are not often "fighting the borrow checker". That diagnostic saying you can't make a pointer via a spurious mutable reference? Not the borrow checker. The warning about failing to use the result of a function? Not the borrow checker.
Now, "In Rust I had to read all the diagnostics to make my software compile" does sound less heroic than "battling with the borrow checker" but if that's really the situation maybe we need to come up with a braver way to express this.
I think the phrase _emotionally_ resonates with people who write code that would work in other languages, but the compiler rejects.
When I was learning rust (coming from python/java) it certainly felt like a battle because I "knew" the code was logically sound (at least in other languages) but it felt like I had to do all sorts of magic tricks to get it to compile. Since then I've adapted and understand better _why_ the compiler has those rules, but in the beginning it definitely felt like a fight and that the code _should_ work.
> Seasoned Rust coders don’t spend time fighting the borrow checker
My experience is that what makes your statement true is that _seasoned_ Rust developers just sprinkle `Arc` all over the place, thus effectively switching to automatic garbage collection. Because 1) statically checked memory management is too restrictive for most kinds of non-trivial data structures, and 2) the hoops of lifetimes you have to jump through to please the static checker whenever you start doing anything non-trivial are just above human comprehension level.
I did some quick search, not sure if this supports or denies your point:
- 151 instances of "Arc<" in Servo: https://github.com/search?q=repo%3Aservo%2Fservo+Arc%3C&type...
- 5 instances of "Arc<" in AWS SDK for Rust https://github.com/search?q=repo%3Arusoto%2Frusoto%20Arc%3C&...
- 0 instances for "Arc<" in LOC https://github.com/search?q=repo%3Acgag%2Floc%20Arc%3C&type=...
- 454 instances of "Rc<" in Servo: https://github.com/search?q=repo%3Aservo%2Fservo+Rc%3C&type=...
- 6 instances of "Rc<" in AWS SDK for Rust: https://github.com/search?q=repo%3Arusoto%2Frusoto+Rc%3C&typ...
- 0 instance for "Rc<" in LOC: https://github.com/search?q=repo%3Acgag%2Floc+Rc%3C&type=cod...
(Disclaimer: I don't know what these repos are except Servo).
Why would you expect the AWS SDK to have complicated memory management?
I don't? Those were just from a quick search and I didn't want to cherrypick either way.
Appreciate the real numbers. Would be interesting to see what percentage of data structures contain Arc, but that's a lot more work.
Servo has to interact a lot with a JS runtime, and it needs to be done in a thread safe and concurrent manner.
Plus the html processing needs to be Arc as well, so that tracks.
`Arc`s show up all over the place specifically in async code that targets Tokio runtime running in multithreaded mode. Mostly this is because `tokio::spawn` requires `Future`s to be `Send + 'static`, and this function is a building block of most libraries and frameworks built on top of Tokio.
If you use Rust for web server backend code then yes, you see `Arc`s everywhere. Otherwise their use is pretty rare, even in large projects. Rust is somewhat unique in that regard, because most Rust code that is written is not really a web backend code.
> `Arc`s show up all over the place specifically in async code that targets Tokio runtime running in multithreaded mode. Mostly this is because `tokio::spawn` requires `Future`s to be `Send + 'static`, and this function is a building block of most libraries and frameworks built on top of Tokio.
To some extent this is unavoidable. Non-'static lifetimes correspond (roughly) to a location on the program stack. Since a Future that suspends can't reasonably stay on the stack it can't have a lifetime other than 'static. Once it has to be 'static, it can't borrow anything (that's not itself 'static), so you either have to Copy your data or Rc/Arc it. This, btw, is why even tokio's spawn_local has a 'static bound on the Future.
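A minimal sketch of that constraint, assuming the `tokio` crate with its usual `spawn` signature:

    // Requires the tokio crate (e.g. with the "rt" and "macros" features).
    #[tokio::main]
    async fn main() {
        let owned = String::from("hello");
        // OK: `async move` takes ownership, so the future is 'static.
        let handle = tokio::spawn(async move { println!("{owned}") });
        handle.await.unwrap();

        // Would NOT compile: without `move`, the future borrows `borrowed`
        // from this stack frame, so it is not 'static.
        // let borrowed = String::from("hello");
        // tokio::spawn(async { println!("{borrowed}") });
    }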
It would be nice if it were ergonomic for library authors to push the decision about whether to use Rc<RefCell<T>> or Arc<Mutex<T>> (which are non-threadsafe and threadsafe variants of the same underlying concept) to the library consumer.
> thus effectively switching to automatic garbage collection
Arc isn't really garbage collection. It's like a reference counted smart pointer like C++ has shared_ptr.
If you drop an Arc and it's the last reference to the underlying object, it gets dropped deterministically.
Garbage collection generally refers to more complex systems that periodically identify and free unused objects in a less deterministic manner.
Also importantly, an Arc<T> can be passed to anything expecting a &T, so you’re not necessarily bumping refcounts all over the place when using an Arc. If you only store it in one place, it’s basically equivalent to any other boxed pointer.
That's fair. It's not really a good pattern though. You get all the runtime overhead of object-soup allocation patterns, syntactic noise making it harder to read than even a primitive GC language (including one using ARC by default and implementing deterministic dropping, a pattern most languages grow out of), and the ability to easily leak [0] memory because it's not a fully garbage-collected solution.
As a rough approximation, if you're very heavy-handed with ARC then you probably shouldn't be using rust for that project.
[0] The term "leak" can be a bit hard to pin down, but here I mean something like space which is allocated and which an ordinary developer would prefer to not have allocated.
I agree that using an Arc where it's unnecessary is not good form.
However, I disagree with generalizations that you can judge the quality of code based on whether or not it uses a lot of Arc. You need to understand the architecture and what's being accomplished.
>Garbage collection generally refers to more complex systems that periodically identify and free unused objects in a less deterministic manner.
No, this is a subset of garbage collection called tracing garbage collection. "Garbage collection" absolutely includes refcounting.
Reference counting has always been a way to garbage collect. Those who like garbage collection have always looked down on it because it cannot handle circular references and is typically slower than the mark and sweep garbage collectors they prefer.
If you need a reference counted garbage collector for more than a tiny minority of your code, then Rust was probably the wrong choice of language - use something that has a better (mark-and-sweep) garbage collector. Rust is good for places where you can almost always find a single owner, and you can use reference counting for the rare exception.
Reference counting can be used as an input to the garbage collector.
However, the difference between Arc and a Garbage Collector is that the Arc does the cleanup at a deterministic point (when the last Arc is dropped) whereas a Garbage Collector is a separate thing that comes along and collects garbage later.
> If you need a reference counted garbage collector for more than a tiny minority of your code
The purpose of Arc isn't to have a garbage collector. It's to provide shared ownership.
There is no reason to avoid Rust if you have an architecture that requires shared ownership of something. These reductionist generalizations are not accurate.
I think a lot of new Rust developers are taught that Arc shouldn't be abused, but they internalize it as "Arc is bad and must be avoided", which isn't true.
> Arc isn't really garbage collection. It's like a reference counted smart pointer
Reference counting is a valid form of garbage collection. It is arguably the simplest form. https://en.wikipedia.org/wiki/Garbage_collection_(computer_s...
The other forms of GC are tracing followed by either sweeping or copying.
> If you drop an Arc and it's the last reference to the underlying object, it gets dropped deterministically.
Unless you have cycles, in which case the objects are not dropped. And then scanning for cyclic objects almost certainly takes place at a non-deterministic time, or never at all (and the memory is just leaked).
> Garbage collection generally refers to more complex systems that periodically identify and free unused objects in a less deterministic manner.
No. That's like saying "a car is a car; a vehicle is anything other than a car". No, GC encompasses reference counting, and GC can be deterministic or non-deterministic (asynchronous).
This still raises the question of why Arc is purportedly used so heavily. I've written 100s of kLoC of modern systems C++ and never needed std::shared_ptr.
For the same reason Unreal uses one.
Large scale teams always get pointer ownership wrong.
Project Zero has enough examples.
> Arc isn't really garbage collection. It's like a reference counted smart pointer like C++ has shared_ptr.
In C++ land this is very often called garbage collection too.
Not sure how seasoned I am, but I reject any comparison to a cooking utensil!
I do find myself running into lifetime and borrow-checker issues much less these days when writing larger programs in rust. And while your comment is a bit cheeky, I think it gets at something real.
One of the implicit design mentalities that develops once you write Rust for a while is a good understanding of where to apply the `UnsafeCell`-related types, which include `Arc` but also `Rc` and `RefCell` and `Cell`. These all relate to interior mutability, and there are many situations where plopping in the right one of these effectively resolves some design requirement.
The other idiomatic thing that happens is that you implicitly begin structuring your abstract data layouts in terms of chunks of raw structured data and connections between them. This usually involves an indirection - i.e. you index into an array of things instead of holding a pointer to the thing.
Lastly, where lifetimes do get involved, you tend to have a prior idea of what thing they annotate. The example in the article is a good case study of that. The author is parsing a `.notes` file and building some index of it. The text of the `.notes` file is the obvious lifetime anchor here.
You would write your indexing logic with one lifetime 'src: `fn build_index<'src>(src: &'src str)`
Internally to the indexing code, references to 'src-annotated things can generally be passed around freely, as their lifetimes converge on 'src.
Externally to the indexing code, you'd build a string of the notes text and pass a reference to it to the `build_index` function.
For simple CLI programs, you tend not to really need anything more than this.
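A minimal sketch of that shape (the `.notes` contents here are made up):

    // Everything in the index borrows from a single source string.
    struct Index<'src> {
        titles: Vec<&'src str>, // slices into the notes text, no copies
    }

    fn build_index<'src>(src: &'src str) -> Index<'src> {
        Index {
            titles: src.lines().filter(|l| l.starts_with('#')).collect(),
        }
    }

    fn main() {
        // Externally, the notes text is created first, so it outlives
        // the index built from it.
        let notes = String::from("# groceries\nmilk\n# chores\nmow lawn\n");
        let index = build_index(&notes);
        println!("{} titles", index.titles.len()); // 2
    }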
It gets more hairy if you're looking at constructing complex object graphs with complex intermediate state, partial construction of sub-states, etc. Keeping track of state that's valid at some level, while temporarily broken at another level, is where it gets really annoying with multiple nested lifetimes and careful annotation required.
But it was definitely a bit of a hair-pulling journey to get to my state of quasi-peace with Rust's borrow checker.
This is exactly the opposite of what he's saying: using Arc everywhere is hacking around the borrow checker. A seasoned Rust developer will structure their code in a way that works with the borrow checker; Arc has a very specific use case, and a seasoned Rust developer will rarely use it.
Arc<T> is all over the place if you're writing async code unfortunately. IMO Tokio using a work-stealing threaded scheduler by default and peppering literally everything with Send + Sync constraints was a huge misstep.
I mostly wind up using Arc a lot while using async streams. This tends to occur when emulating a Unix-pipeline-like architecture that also supports concurrency. Basically, "pipelines where we can process up to N items in parallel."
But in this case, the data hiding behind the Arc is almost never mutable. It's typically some shared, read-only information that needs to live until all the concurrent workers are done using it. So this is very easy to reason about: Stick a single chunk of read-only data behind the reference count, and let it get reclaimed when the final worker disappears.
Arc + work stealing scheduler is common. But work stealing schedulers are common (eg libdispatch popularized it). I believe the only alternative is thread-per core but they’re not very common/popular. For what it’s worth zig would look very similar except their novel injectable I/O syntax isn’t compatible with work stealing.
Even then, while I’d agree that Arc is used in lots of places in work-stealing runtimes, I disagree that it’s used everywhere, or that you can really do anything else if you want to leverage all your cores with minimum effort and without having to build your application specially to deal with that.
Making that possible with minimal effort doesn't require it to be the default. The issue I have is that huge portions of Tokio's (and other async libs') API have a Send + Sync constraint that destroys the benefit of LocalSet / spawn_local. You can't build an application with the specialized thread-per-core or single-threaded runtime approach even if you wanted to, because of pervasive incidental complexity.
I don't care that they have a good work-stealing event loop, I care that it's the default and their APIs all expect the work-stealing implementation and unnecessarily constrain cases where you don't use that implementation. It's frustrating and I go out of my way to avoid Tokio because of it.
Edit: the issues are in Axum, not the core Tokio API. Other libs have this problem too due to aforementioned defaults.
>You can't build and application with the specialized thread-per core or single-threaded runtime thing if you wanted to because of pervasive incidental complexity. [...] It's frustrating and I go out of my way to avoid Tokio because of it.
At $dayjob we have built a large codebase (high-throughput message broker) using the thread-per-core model with tokio (ie one worker thread per CPU, pinned to that CPU, driving a single-threaded tokio Runtime) and have not had any problems. Much of our async code is !Send or !Sync (Rc, RefCell, etc) precisely because we want it to benefit from not needing to run under the default tokio multi-threaded runtime.
We don't use many external libs for async though, which is what seems to be the source of your problems. Mostly just tokio and futures-* crates.
I might be misremembering and the overbearing constraints might be in Axum (which is still a Tokio project). External libs are a huge problem in this area in general, yeah.
Single-threaded runtime doesn't require Send+Sync for spawned futures. AFAIK Tokio doesn't have a thread-per-core backend and as a sibling intimated you could build it yourself (or use something more suited for thread-per-core like Monoio or Glommio).
These extreme generalizations are not accurate, in my experience.
There are some cases where someone new to Rust will try to use Arc as a solution to every problem, but I haven't seen much code like this outside of reviewing very junior Rust developers' code.
In some application architectures Arc is a common feature and it's fine. Saying that seasoned Rust developers rarely use Arc isn't true, because some types of code require shared references with Arc. There is nothing wrong with Arc when used properly.
I think this is less confusing to people who came from modern C++ and understand how modern C++ features like shared_ptr work and when to use them. For people coming from garbage collected languages it's more tempting to reach for the Arc types to try to write code as if it was garbage collected.
This is awkward. I've written a fair amount of rust. I reach for Arc frequently. I see the memory layout implications now.
Do you tend to use a lot of Arenas?
I've not explored every program domain, but in general I see two kinds of program memory access patterns.
The first is a fairly generic input -> transform -> output. This is your generic request handler for instance. You receive a payload, run some transform on that (and maybe a DB request) and then produce a response.
In this model, Arc is very fitting for some shared (im)mutable state. Like DB connections, configuration and so on.
The second pattern is something like: state + input -> transform -> new state. E.g. you're mutating your app state based on some input. This fits stuff like games, but also retained UIs, programming language interpreters and so on.
Using ARCs here muddles the ownership. The gamedev ecosystem has found a way to manage this by employing ECS, and while it can be overkill, the base DOD principles can still be very helpful.
Treat your data as what it is; data. Use indices/keys instead of pointers to represent relations. Keep it simple.
Arenas can definitely be a part of that solution.
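A minimal sketch of the indices-instead-of-pointers idea from above (the `Node` type is hypothetical):

    // Entities refer to each other by index into one Vec, so there are
    // no Rc/Arc webs and ownership stays with a single container.
    struct Node {
        value: u32,
        next: Option<usize>, // an index, not a pointer
    }

    fn main() {
        let mut nodes = vec![
            Node { value: 1, next: Some(1) },
            Node { value: 2, next: None },
        ];
        // Mutation is easy: the borrow checker sees one owner, the Vec.
        if let Some(i) = nodes[0].next {
            nodes[i].value += 10;
        }
        println!("{}", nodes[1].value); // 12
    }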
This is something I have noticed too. While I'm by no means seasoned enough to consider myself even mid-level, some of my colleagues are, and what they tend to do is plan ahead much better - or more pedantically, as they put it - because the worst thing you can end up doing is trying to change an architectural decision later on.
I don't think there are any Arcs in my codebase (apart from a couple of regrettable ones needed to interface with Javascript callbacks in WASM - this is more a WASM problem than a rust problem).
haha, I was about to leave the exact same comment. how are you finding wasm? I’ve been feeling like rust+react is my new favorite tech stack
> _seasoned_ Rust developers just sprinkle `Arc` all over the place
No, this couldn't be further from the truth.
If they aren't sprinkling `Arc` all over, what are they seasoning with instead?
'a
Can't be. Lifetime annotations are present in unseasoned Rust too. The question was about what seasoning is being added, if not `Arc`.
How often are you writing non-trivial data structures?
Definitely not. Arc is for immutable (or sync, e.g. atomics, mutexes) data, while the borrow checker protects against concurrent mutations. I think you meant Arc<Mutex<T>> everywhere, but that code smells immediately and seasoned Rust devs don't do that.
The only time I use Arc is wrapping contexts for web handlers.
That doesn’t mean there aren’t other legitimate use cases, but “all the time” is not representative of the code I read or write, personally.
I am not sure this is true. Maybe with shared async access it is. I rarely use Arc.
A lot of criticism of Rust (not denying that there are also plenty of useful criticisms of Rust out there) boils down to "it requires me to think/train in a different way than I'm used to, therefore it's hard", and goes on about how the other way is easier - which is not the case; it's just familiar to them, hence they think it's easier and simpler. More people should watch the talk "Simple Made Easy": https://www.youtube.com/watch?v=SxdOUGdseq4
> Seasoned Rust coders don’t spend time fighting the borrow checker
No true scotsman would ever be confused by the borrow checker.
I've seen plenty of Rust projects, open source and otherwise, that utilise Arc heavily or use clone and/or copy all over the place.
I'm starting to think No True HNer goes without misidentifying a No True Scotsman fallacy.
They are clearly just saying as you become more proficient with X, Y is less of a problem. Not that if the borrow checker is blocking you that you aren't a real Rust programmer.
Let's say you're trying to get into running. You express that you can't breathe well during the exercise and it's a miserable experience. One of your friends tells you that as an experienced runner they don't encounter that in the same way anymore, and running is thus more enjoyable. Do you start screeching No True Scotsman!! at them? I think not.
> > Seasoned Rust coders don’t spend time fighting the borrow checker
> No true scotsman would ever be confused by the borrow checker.
I'd take that No true scotsman over the "Real C programmers write code without CVE" for $5000.
Also you are strawmanning the argument. GP said, "As a seasoned veteran of Rust you learn to think like the borrow checkers." vs "Real Rust programmers were born with knowledge of borrow checker".
I can't remember the last time I had any problem with the borrow checker. The junior solution is .clone(), a better one is & (a reference), and if you really need to you can start using <'a>. There is a mild annoyance around which function consumes what, and the LLM era has really helped with this.
My beef is sometimes with the way traits are implemented, or with how AWS implemented errors for their library, which is just pure madness.
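A small sketch of that progression (all the functions are hypothetical):

    fn consume(s: String) -> usize { s.len() } // takes ownership
    fn peek(s: &str) -> usize { s.len() }      // borrows instead
    fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
        if a.len() >= b.len() { a } else { b } // explicit lifetime
    }

    fn main() {
        let s = String::from("hello");
        let n1 = consume(s.clone()); // the "junior solution": clone to keep s
        let n2 = peek(&s);           // the better one: pass a reference
        println!("{n1} {n2} {}", longest(&s, "hi"));
    }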
> The junior solution is .clone()
I really hope it’s an Rc/Arc that you’re cloning. Just deep cloning the value to get ownership is dangerous when you’re doing it blindly.
How did AWS mess up errors?
Maybe I am holding it wrong.
Here is one piece of the problem:
Out of curiosity, why are you borrowing that many times? The following should work:
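(The original snippet isn't preserved here; below is a hedged sketch of the crate-local `S3Error` + `From` shape the next reply suggests. `SdkGetObjectError` is a hypothetical stand-in, not the real SDK type.)

    use std::fmt;

    // Stand-in for the SDK's deeply nested error type (hypothetical).
    #[derive(Debug)]
    struct SdkGetObjectError(String);

    // A crate-local error enum, as suggested below.
    #[derive(Debug)]
    enum S3Error {
        GetObject(String),
    }

    impl fmt::Display for S3Error {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            match self {
                S3Error::GetObject(msg) => write!(f, "get object failed: {msg}"),
            }
        }
    }

    // The From impl is what lets call sites collapse to a single `?`.
    impl From<SdkGetObjectError> for S3Error {
        fn from(e: SdkGetObjectError) -> Self {
            S3Error::GetObject(e.0)
        }
    }

    fn get_object() -> Result<Vec<u8>, S3Error> {
        // `?` applies the From conversion automatically.
        Err(SdkGetObjectError("no such key".into()))?
    }

    fn main() {
        println!("{}", get_object().unwrap_err());
    }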
I would have written it this way, although if your crate defines `S3Error`, then I would prefer to write it by implementing `From`, along the lines of the sketch above.
I believe the benefit of Zig is that it allows you the familiarity of writing code like in C, but has other elements in the language and tooling to make things safer. For example, Zig has optionals, which can eliminate nil dereference. Another example is how you can pass debug or custom allocators during testing that have all sorts of runtime checks to detect bad memory access and resource leaks.
I have some issues with Zig's design, especially around the lack of explicit interface/trait, but I agree with the post that it is a more practical language, just because of how much simpler its adoption is.
100%. I came to Rust from a primarily JavaScript/TypeScript background, and most of the idioms and approaches I was used to using translated directly into Rust.
Including pulling in hundreds of dependencies, unfortunately.
I don't think like a C programmer, my problem is that I think like a Java/Python/Go programmer, and I'm spoiled by getting used to having a garbage collector always running behind me cleaning up my memory poops.
Even though Rust can end up with some ugly/crazy code, I love it overall because I can feel pretty safe that I'm not going to create hard-to-find memory errors.
Sure, I can (and do) write code that causes my (rust) app to crash, but so far they've all been super trivial errors to debug and fix.
I haven't tried Zig yet though. Does it give me all the same compile time memory usage guarantees?
At first, the 12 year old inside me giggled at the thought of 'memory poops', but then I realized that a garbage collector is much more analogous to a waste water treatment plant than a garbage truck and a landfill..
> Seasoned Rust coders don’t spend time fighting the borrow checker
Yes, they know when to give up.
Nah. Just learn to think like it.
> Seasoned Rust coders don’t spend time fighting the borrow checker - their code is already written in a way that just works.
That hasn't been my experience at all. At best, the first version of code pops out quickly and cleanly because the author knows the appropriate idiom to choose. Refactoring rust code to handle changes in that allocation idiom is extremely expensive, even for the most seasoned developers.
Case in point:
> Once you’ve been using Rust for a while, you don’t have to “restructure” your code to please the borrow checker, because you’ve already thought about “oh, these two variables need to be mutated concurrently, so I’ll store them separately”.
Which fails to handle "these two variables didn't need to be mutated concurrently, but now they do".
I mostly don't agree with this take. A couple of my quibbles:
"Cognitive overhead: You’re constantly thinking about lifetimes, ownership, and borrow scopes, even for simple tasks. A small CLI like my notes tool suddenly feels like juggling hot potatoes."
None of this goes away if you are using C or Zig, you just get less help from the compiler.
"Developers are not idiots"
Even intelligent people will make mistakes because they are tired or distracted. Not being an idiot is recognising your own fallibility and trying to guard against it.
What I will say, that the post fails to touch on, is: the Rust compiler's ability to reason about the subset of programs that are safe is currently not good enough; it too often rejects perfectly good programs. A good example of this is the inability to express that the following is actually fine:
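A classic stand-in for the kind of program meant here: disjoint mutable borrows that are provably fine but rejected anyway.

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Rejected: two simultaneous mutable borrows of `v`, even though
    // indices 0 and 1 are provably disjoint.
    // let (a, b) = (&mut v[0], &mut v[1]);

    // The accepted workaround: split the slice so each half carries
    // its own mutable borrow.
    let (left, right) = v.split_at_mut(1);
    std::mem::swap(&mut left[0], &mut right[0]);

    println!("{:?}", v); // [2, 1, 3]
}
```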
which leads to awkward constructs like the `split_at_mut` dance in the sketch above.

Looking at your code, I have more confidence that the quoted statement is false.
Which statement and why? The code is obviously stupid and convoluted because I threw it together in a minute to illustrate a point.
He has a point. Backlinks in Rust are too hard. You can do them safely with Rc, Weak, and RefCell, and .borrow(), but it's not trivial.
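For reference, a minimal sketch of that Rc/Weak/RefCell backlink pattern:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Children are owned via Rc; the backlink to the parent is a Weak
// reference so the cycle doesn't leak.
struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>,
    children: RefCell<Vec<Rc<Node>>>,
}

fn main() {
    let parent = Rc::new(Node {
        value: 1,
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let child = Rc::new(Node {
        value: 2,
        parent: RefCell::new(Rc::downgrade(&parent)),
        children: RefCell::new(vec![]),
    });
    parent.children.borrow_mut().push(Rc::clone(&child));

    // Following the backlink means upgrading the Weak and handling None.
    if let Some(p) = child.parent.borrow().upgrade() {
        println!("child {} -> parent {}", child.value, p.value);
    }
}
```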
If your program runs for a short time and then exits, arena allocation is an option. That seems to be what the author means by "CLI tools". It's the lifetime, not the input format.
"Rust is amazing, if you’re building something massive, multithreaded, or long-lived, where compile-time guarantees actually save your life. The borrow checker, lifetimes, and ownership rules are a boon in large systems."
Yes. That's really what Rust is for. I've written a large metaverse client in Rust, and one of the regression tests I run is to put an avatar in a tour vehicle and let it ride around for 24 hours. About 20 threads. No memory leaks. No crashes. That would take a whole QA team and lots of external tools such as Valgrind in C++, and it would be way too slow in any of the interpreted languages.
This is a really bad take, on par with the "we don't need types" post from last week.
The thing I wish we would remember, as developers, is that not all programs need to be so "safe". They really, truly don't. We all grew up loving lots of unsafe software: Star Fox 64, MS Paint, FruityLoops... The sad truth is that developers are so job-pilled and have such pager-trauma that they don't even remember why they got in the game.
I remember reading somewhere that Andrew Kelley wrote Zig because he didn't have a good language to write a DAW in, and I think it's so well suited to stuff like that! Make cool creative software you like in Zig, and people that get hella mad about memory bugs can stay mad.
Meanwhile, everyone knows that memory bugs made super mario world better, not worse.
There's lots of reasons to write artistically gibberish code, just as there is with natural language (e.g. Lewis Carroll). Most programs aren't going for code as art though. They're trying to accomplish something definite through a computer and gibberish is directly counterproductive. If you don't mean what you write or care what you get, software seems like the wrong way to accomplish your goals. I'd still question whether you want software even in a probabilistic argument along these lines.
Even for those cases where gibberish is meaningful at a higher level (like IOCCC and poetry), it should be intentional and very carefully crafted. You can use escape hatches to accomplish this in Rust, though I make no comment on the artistic merits of doing so.
The argument you're making is that uncontrolled, unintentional gibberish is a positive attribute. I find that a difficult argument to accept. If we could wave a magic wand and make all code safe with no downsides, who among us wouldn't?
It doesn't change anything about Super Mario World speedruns because you can accomplish the same thing as arbitrary code execution inputs with binary patching. We just have this semi-irrational belief that one is cheating and one is not.
Uh, I'm confused. So you think my take is bad because memory safety should not matter?
I think it's a bad take because "Developers are not Idiots" and "be disciplined" are not good arguments. It's just choosing to ignore the problem Rust solves.
I am fine with ignoring the problems that rust solves, but not because I'm smart and disciplined. It just fits my use-case of making fast _non-critical_ software. I don't think we should rewrite security and networking stacks in it.
Then we're sort of in agreement.
I don't think you need the ritual and complexity that rust brings for small and simple scripts and CLI utilities...
And Rust doesn't market itself as a small and simple scripting language, does it?
Choose the tool that fits your use case. You would never bring WASM Unity to render a static HTML file. But if you make a browser game, you might want to.
I was going to say that it's greatly understating the value of the borrow checker. It guarantees no invalid memory accesses. But then it added:
> This means that basically the borrow checker can only catch issues at comptime but it will not fix the underlying issue that is developers misunderstanding memory lifetimes or overcomplicated ownership. The compiler can only enforce the rules you’re trying to follow; it can’t teach you good patterns, and it won’t save you from bad design choices.
In the short time that I wrote Rust, it never occurred to me that my lifetime annotations were incorrect. They felt like a bit of a chore, but I thought they said what I meant. I'm sure there's a lot of getting used to it--like static types--and it becomes second nature at some point. Regardless, code that doesn't use unsafe can't have two threads concurrently writing the same memory.
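A minimal sketch of what that guarantee looks like in practice (the second spawn is commented out because it would not compile):

```rust
use std::thread;

fn main() {
    let mut data = vec![1, 2, 3];

    thread::scope(|s| {
        s.spawn(|| data.push(4)); // one thread may borrow `data` mutably

        // Rejected at compile time: a second closure can't also borrow
        // `data` mutably, so two threads can never write it concurrently.
        // s.spawn(|| data.push(5));
    });

    println!("{:?}", data); // [1, 2, 3, 4]
}
```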
The full title is "Why Zig Feels More Practical Than Rust for Real-World CLI Tools". I don't see why CLI tools are special in any respect. The article does make some good points, but it doesn't invalidate the strength of Rust in preventing CVEs IMO. Rust or Zig may feel certain ways to use for certain people, time and data will tell.
Personally, there isn't much I do that needs the full speed of C/C++, Zig, Rust so there's plenty of GC languages. And when I do contribute to other projects, I don't get to choose the language and would be happy to use Rust, Zig, or C/C++.
> I don't see why CLI tools are special in any respect.
Because they don't grow large or need a multi-person team. CLI tools tend to be one & done. In other words, it's saying "Zig, like C, doesn't scale well. Use something else for larger, longer lived codebases."
This really comes across in the article's push that Zig treats you like an adult while Rust is a babysitter. This is not unlike the sentiment for Java back in the day. But the reality is that most codebases don't need to be clever and they do need a babysitter.
It isn't even really that -- most CLI tools are single-threaded and have a short lifespan, so your memory allocation strategy can be as simple as allocating what you need as you go along and then letting program termination clean it up.
I think this focus cuts both ways though - most "one & done" CLI tools will not be bottlenecked by a GC. Many are so performance insensitive that Python is totally fine, and for most of the rest the performance envelope of Go is more than enough. Why would I reach for Rust or Zig for these? "I like C, Zig is like C" is a totally acceptable reason, but then this whole article is preaching to the choir.
Do you know any other languages that tend to be safer than C and suitable for CLI tools but without the borrow checker? Over many years I’ve seen a lot in C++, Go, Perl, Python, Ruby, Pascal, various shells, assembly, Java, and some in Haxe, Ada, Lisp, Scheme, Julia, forms of Basic, and recently JavaScript or Typescript.
Most of those are more memory safe than C. None of them have the borrow checker. This leaves me wondering why - other than proselytizing Zig - this article would make such a direct and narrow comparison between only Zig and Rust.
OCaml, for example:
Unix system programming in OCaml, from 1991
https://ocaml.github.io/ocamlunix/
With how easily Go creates statically compiled binaries and cross-compiles, I think it might be the best language for CLI tools. Unfortunate, because it's annoyingly unexpressive.
Indeed, but it is considered a strength by Go enthusiasts. Not being able to build clever abstractions is a feature.
So is the error handling boilerplate.
> Regardless, code that doesn't use unsafe can't have two threads concurrently writing the same memory.
It's a bit messier than that. Basically, the only concurrency-related bug I ever actually want help with from the compiler is memory-ordering issues. Rust chose to make those particular racy memory writes safe instead of unsafe.
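A minimal sketch of that gap: this is 100% safe Rust, yet the orderings are wrong in a way the compiler won't flag (Relaxed where Release/Acquire is needed):

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};

static DATA: AtomicU32 = AtomicU32::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn writer() {
    DATA.store(42, Ordering::Relaxed);
    READY.store(true, Ordering::Relaxed); // should be Ordering::Release
}

fn reader() -> Option<u32> {
    // Should be Ordering::Acquire; with Relaxed, a thread may observe
    // READY == true before it observes the write to DATA.
    if READY.load(Ordering::Relaxed) {
        Some(DATA.load(Ordering::Relaxed))
    } else {
        None
    }
}

fn main() {
    // Single-threaded here just so it runs; the bug only bites across threads.
    writer();
    println!("{:?}", reader());
}
```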
I want to like Zig, but D still exists and feels like everything I want from a C-like alternative to C++. I just wish the rest of the industry had adopted it long ago. Zig has a strange syntax, and Rust is basically eating chunks of the industry, especially in programmer tooling across various languages, as is Go (it powers most cloud providers and is the 2nd top choice for AI right after Python).
I remember before Rust, when Go vs. D was the topic of the day. I even bought a D book and was working through it when Go was announced, and it won me over. The difference maker for me was the standard library; working with Go was just easier, full stop. That and using names like 'float64' instead of 'double', because that's what my brain likes, apparently.
100% agree. I really love D the language. If I could go back in time, I would find Walter and challenge him that he couldn't write {insert half of the Go std lib packages} for the D standard library because it's too hard and impossible, in the hopes he takes the bait. I would love to see something like a 'Framework' for D that is maintained by the maintainers but isn't necessarily the standard library, since people get really touchy when you mess with the std lib; it could be a way to test the waters before actually adding new packages to it. Having an HTTP server OOTB with D would be amazing.
D would be great; unfortunately, they never got the killer application for mass adoption.
> Last weekend I’ve made a simple CLI tool for myself to help me manage my notes. It parses ~/.notes into a list of notes, then builds a tag index mapping strings to references into that list. Straightforward, right? Not in Rust. The borrow checker blocks you the moment you try to add a new note while also holding references to the existing ones. Mutability and borrowing collide, lifetimes show up, and suddenly you’re restructuring your code around the compiler instead of the actual problem.
I'd love to see the actual code here! When I imagine the Rust code for this, I don't really foresee complicated borrow-checker or reference issues. I imagine something like
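(a rough sketch; the `Note` and `Notes` types are invented for illustration)

```rust
use std::collections::HashMap;

struct Note {
    text: String,
    tags: Vec<String>,
}

struct Notes {
    notes: Vec<Note>,
    // The tag index stores indices into `notes`, not references.
    tag_index: HashMap<String, Vec<usize>>,
}

impl Notes {
    fn add(&mut self, note: Note) {
        let idx = self.notes.len();
        for tag in &note.tags {
            self.tag_index.entry(tag.clone()).or_default().push(idx);
        }
        self.notes.push(note); // no outstanding borrows, so no fight
    }
}

fn main() {
    let mut db = Notes { notes: Vec::new(), tag_index: HashMap::new() };
    db.add(Note { text: "buy milk".into(), tags: vec!["todo".into()] });
    for &i in &db.tag_index["todo"] {
        println!("note {}: {}", i, db.notes[i].text);
    }
}
```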
You store indices instead of pointers. This is very unlikely to be slower: both a usize index and a pointer are most likely 64 bits on your hardware; there's arguably one extra memory deref, but because `notes` will probably be in cache, I'd argue it's very unlikely you'll see a real-life performance difference.

It's not magic: you can still mess up the indices as you add and remove notes.
But it's safer: if you mess up the indices, you'll get an out-of-bounds error instead of writing to an unintended location in your process's memory.
Anyway, even if you don't care about safety, it's clear and easy to think about and reason about, and arguably easier to do printf debugging with: "this tag is mentioned in notes 3, 10 and 190, oh, let's print out what those ones are". That's better than reading raw pointers.
Maybe I'm missing something? This sort of task comes up all the time while writing Rust code. It's just a pretty normal pattern in the language. You don't store raw references for ordinary logic like this. You do need to when writing allocators, async runtimes, etc. Famously, async needs self-referential structs to store stack-local state between calls to `.await`, and that's why the whole business with `Pin` exists.
> memory safety is one puzzle piece of overall software safety
So this. We recently spent about a month carefully instrumenting and coming to understand a subtle bug in our distributed radio network. This all runs on bare-metal C (SAMD21 chips). Because timing, and hundreds of little processors, and radios were all involved, it was a PITA to surface what the issue was. It was algorithmic. Not a memory problem. Writing this in Rust or Zig (instead of straight C) would not have fixed this problem.
I'd like to consider doing the next generation of this product in Zig or Rust. I'm not opposed. I like the extra tools to make the product better. But they're a small part of the picture in writing good software. The borrow checker may improve your code, but it doesn't guarantee successful software.
I don't know why anyone would write CLI tools in rust or zig. I/O is going to be your bottleneck way more often than GC, in fact I don't really get the GC hate outside of game dev, databases and other memory intensive applications. Why not use Go, Python, etc? People try to make a false dichotomy between memory safety vs. non-memory safety when really it's GC vs. no GC --- memory safety without it is going to be hard either way. More time should be spent on justifying to yourself why you shouldn't be using a GC, and less on which kind of GC-less language you use.
(If you go no GC "because it's fun" then there's no need for the post in the first place --- just use what's fun!)
Instant startup times are really nice. You definitely notice the difference. It also means that you can be a bit lazier when creating wrappers around those tools (running 1000's of times isn't a problem when the startup is 1ms, but would be a problem with 40ms of startup time).
Distribution can also be a lot easier if you don't need to care about the user having a specific version of Python or specific packages available.
I will always reach for a language that has sum types, pattern matching, and async support. Catching errors at compile time is a boon too. It doesn't have to be Rust, but after those requirements, why not?
> in fact I don't really get the GC hate outside of game dev
Most mobile games are implemented in a system with GC (Unity with il2cpp), and it's not even a /good/ GC, it's Boehm.
Go is a great option for CLI tools (even though I'm not a fan of the language itself). Python CLI apps can be a big pain to distribute if you have a bunch of dependencies. I think this is also why Rust and Zig are also attractive.. like with Go it's easy to create a statically compiled binary you can just cp into /usr/local/bin.
Not Python because getting Python to run on different machines is an absolute pain.
Not Go because of its anaemic type system.
I also loved Zig when manually typing code, but I increasingly use AI to write my code, even in personal projects. In that context, I'd rather use Rust, since the AI takes care of the complex syntax anyway. Also, the Rust ecosystem is bigger, so I'd rather stick with that community.
> Developers are not Idiots
I'm often distracted and AIs are idiots, so a stricter language can keep both me and AIs from doing extra dumb stuff.
What are your thoughts on Nim, Odin, V-lang, and D-lang?
I feel like I am most interested in Nim, given how easy it was to pick up and how interoperable it is with C. It has a garbage collector, and you can swap it out, which seems great for someone like me who doesn't want to worry about manual memory management right now; if it becomes a bottleneck later, I can at least fix it without worrying too much.
I have not given any of those 3 a fair enough shot just yet to make a balanced and objective decision.
Out of all of them, from what little I know and my very superficial knowledge, Odin seems the most appealing to me. Its primary use case, from what I know, is game development; I feel like that could easily pivot into native desktop application development. I was tempted to make a couple of those in Odin in the past but never found the time.
Nim, I like the concept and the idea of it, but the Python-like syntax just irks me, haha. I can't seem to get into languages where indentation replaces brackets.
But the GC part of it is pretty neat, have you checked Go yet?
I am a big fan of Golang, lol. Golang might be the language that I love the most: portable, easy to write, a stdlib that's goated, fast to compile, and a great ecosystem!
But I like Nim in the sense that in Golang I sometimes feel constrained by not being able to change its GC, although I do know that for most things it wouldn't be a deal-breaker.
But still, I sometimes feel like I should have some freedom to add manual memory management later without restarting from scratch or something, y'know?
Golang is absolutely goated. This was why I also recommended V-lang; V-lang is really similar to Golang, except it can have manual memory management...
They themselves say on the website that, IIRC, if you know Golang, you know 70% of V-lang.
I genuinely prefer Golang over everything, but I still like Nim and V-lang as fun languages, even though I feel like their ecosystems aren't that good; I know that, yes, they can interop with C, but still...
Odin has no primary use case; it just happens that a lot of the members in the community have made, or are interested in making, games.
My understanding of Odin was that it's good for data-oriented design.
I haven't really looked into Odin except joining their Discord and asking them some questions.
It seems that, aside from some familiar syntax, it is sort of different from Golang under the hood, as compared to V-lang, which is massively inspired by Golang.
After reading the HN post about SQLite, which recommended using SQLite as an application file format (a sort of .odt alternative), which I agreed with, I thought of creating an app in Flutter similar to LocalSend, except Flutter only supports C-esque interop, and it would've been weird to take Golang, pass it through C, and then through Flutter or something, so I gave up...
I thought that Odin could compile to C and I could use that, but it turns out that Odin doesn't really compile to C, as compared to Nim and V-lang, which do.
I think that Nim and V-lang are the best ways to write an app like that with Flutter, and I am now somewhat curious what you guys think would be the best way of writing highly portable apps with a dev-ex similar to Golang.
I have actually thought about using something like Godot for this project too, and seeing whether Godot supports something like Golang or TypeScript or anything, really. Idk, I was just messing around and having a bit of fun, lol.
From those, only Nim and D are interesting.
We don't need yet another language with manual memory management in the 21st century, and V doesn't look like it would ever be that relevant.
Because it doesn't actually enforce anything and lets you blow your foot off just like C?
“Rust lifetimes can be chore, so use a C-like language that requires you to manage them in your head”
Weird that they don’t consider other options, in particular languages with reference counting or garbage collection. Those will not solve all ownership issues, but for immutable objects, they typically do. For short-running CLI tools, garbage collecting languages may even be faster than ones with manual memory management because they may be able to postpone all memory freeing until the program exits.
Nope, rust-analyzer is incredible whereas the Zig LSP felt worse than Go.
I agree the borrow checker can be a pain, though. I wish there were something like Rust with a great GC. Go has loads of other bad design decisions (err != nil, etc.), and Cargo is fantastic.
Syntactically speaking, Gleam fits the bill. It's very new/immature though and isn't in the same performance bracket since it runs on the BEAM.
"All it took was some basic understanding of memory management and a bit of discipline."
The words of every C programmer who created a CVE.
What about every Java/JS/Python/Rust/Go programmer who ever created a CVE? Out-of-bounds access is, indeed, a very common cause of dangerous vulnerabilities, but Zig eliminates it to the same extent as Rust. UAF is much lower on the list, to the point that non-memory-safety-related causes easily dominate it.[1]
The question is, then, what price in language complexity are you willing to pay to completely avoid the 8th most dangerous cause of vulnerabilities as opposed to reducing them but not eliminating them? Zig makes it easier to find UAF than in C, and not only that, but the danger of UAF exploitability can be reduced even further in the general case rather easily (https://www.cl.cam.ac.uk/~tmj32/papers/docs/ainsworth20-sp.p...). So it is certainly true that memory unsafety is a cause of dangerous vulnerabilities, but it is the spatial unsafety that's the dominant factor here, and Zig eliminates that. So if you believe (rightly, IMO) that a language should make sure to reduce common causes of dangerous vulnerabilities (as long as the price is right), then Zig does exactly that!
I don't think it's unreasonable to find the cost of Rust justified to eliminate the 8th most dangerous cause of vulnerabilities, but I think it's also not unreasonable to prefer not to pay it.
[1]: https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html
I don't think the rank on a list that includes stuff like SQL injection and path traversal tells you much about what language features are worthwhile in the C/C++ replacement space. No developer that works on something like Linux or Chromium would introduce a SQL injection vulnerability unless they experienced severe head trauma. They do still introduce use after free vulnerabilities with some regularity.
First, UAF can be made largely non-dangerous without eliminating it (as in the link above and others). It's harder to exploit to begin with, and can be made much harder still virtually for free. So the number of UAFs and the number of exploitable vulnerabilities due to UAF are not the same, and have to be treated as separate things (because they can be handled separately).
Second, I don't care if my bank card details leak because of CSRF or because of a bug in Chromium. Now, to be fair, the list of dangerous vulnerabilities weighs things by number of incidents and not by number of users affected, and it is certainly true that more people use Chrome than those who use a particular website vulnerable to CSRF. But things aren't so simple, there, too. For example, I work on the JVM, which is largely written in C++, and I can guarantee that many more people are affected by non-memory-safety vulnerabilities in Java programs than by memory-safety vulnerabilities in the JVM.
Anyway, the point is that the overall danger and incidence of vulnerabilities - and therefore the justified cost in addressing all the different factors involved - is much more complicated than "memory unsafety bad". Yes, it's bad, but different kinds of memory unsafety are bad to different degrees, and the harm can be controlled separately from the cause.
Now, I think it's obvious that even Rust fans understand there's a complex cost/benefit game here, because most software today is already written in memory-safe languages, and the very reason someone would want to use a language like Rust in the first place is because they recognise that sometimes the cost of other memory-safe languages isn't worth it, despite the importance of memory safety. If both spatial and temporal safety were always justified at any reasonable cost (that is happily paid by most software already), then there would be no reason for Rust to exist. Once you recognise that, you have to also recognise that what Rust offers must be subject to the same cost/benefit analysis that is used to justify it in the first place. And it shouldn't be surprising that the outcome would be similar: sometimes the cost may be justified, sometimes it may not be.
> I don't care if my bank card details leak because of CSRF or because of a bug in Chromium
Sure, but just by virtue of what these languages are used for, almost all CSRF vulnerabilities are not in code written in C, C++, Rust, or Zig. So if I’m targeting that space, why would I care that some Django app or whatever has a CSRF when analyzing what vulnerabilities are important to prevent for my potential Zig project?
You’re right that overall danger and incidence of vulnerabilities matter - but they matter for the actual use-case you want to use the language for. The Linux kernel for example has exploitable TOCTOU vulnerabilities at a much higher rate than most software - why would they care that TOCTOU vulnerabilities are rare in software overall when deciding what complexity to accept to reduce them?
Ideally neither Zig nor Rust would matter.
Languages like Modula-3 or Oberon would have taken over the world of systems programming.
Unfortunately there are too many non-believers for systems programming languages with automatic resource management to take off as they should.
Despite everything, kudos to Apple for pushing Swift no matter what, as it seems to be only way for adoption.
> Unfortunately there are too many non-believers for systems programming languages with automatic resource management to take off as they should.
Or those languages had other (possibly unrelated) problems that made them less attractive.
I think that in a high-economic-value, competitive activity such as software, it is tenuous to claim that something delivers a significant positive gain and at the same time that that gain is discarded for irrational reasons. I think at least one of these is likely to be false, i.e. either the gain wasn't so substantial or there were other, rational reasons to reject it.
As proven in several cases, it is mostly caused by management not being willing to sustain the required investment to make it happen.
Projects like Midori, Swift, Android, Maxine VM, GraalVM only happen when someone high enough is willing to keep them going until they take off.
When they fail, it is usually because management backing fell through, not because there wasn't a way to sort out whatever the cause was.
Even Java had enough backing from Sun, IBM, Oracle, and BEA during its early days of uncertainty, outside of being a language for applets, until it actually took off on servers and mobile phones.
If Valhalla never makes it, is it because Oracle gave up funding the team after all these years, or because it is impossible and it was a waste of money?
Segfaults go brrr.
All jokes aside, it doesn’t actually take much discipline to write a small utility that stays memory safe. If you keep allocations simple, check your returns, and clean up properly, you can avoid most pitfalls. The real challenge shows up when the code grows, when inputs are hostile, or when the software has to run for years under every possible edge case. That’s where “just be careful” stops working, and why tools, fuzzing, and safer languages exist.
And a segfault would be no worse than a panic; data corruption and out-of-bounds memory accesses are the real problems. But in reality, most C programs I use daily have never crashed in decades.
My assumption is a small utility becomes a big utility.
Yeah, I often wonder if people who have this attitude have ever tried to run a non-trivial C program they wrote with the clang sanitizers on. A humbling experience every time.
The amount of segfaults I have seen with Ghostty did not raise my spirits.
I've had at least one instance of Ghostty running on both my work and personal machine continuously since I first got access to the beta last November, and I haven't seen a single segfault in that entire time. When have you seen them?
I've seen the amount of effort Mitchell &co put into ensuring memory safety of Ghostty in the 1.2 release notes, but after upgrading I am still afraid to open a new pane while there's streaming output in the current one because in 1.1.3 that meant a crash more often than not.
Look at the issue tracker and its history too.
Google: "wikipedia Evidence of absence"
Also, https://github.com/ghostty-org/ghostty/issues?q=segfault
So Ghostty was first publicly released on I think December 27th last year, then 1.0.1, 1.1.0, 1.1.1, and 1.1.2 were released within the next month and a half to fix bugs found by the large influx of users, and there hasn't been a segfault reported since. I would recommend that users who are finding a large number of segfaults should probably report it to the maintainers.
Bun is much worse in this regard too.
It makes me sad, because they demonstrated JavaScriptCore is shockingly better than V8 for node-likes. The Typescript compiler (which like basically any non-trivial typechecker is CPU bound) is consistently at least 2x faster with Bun on large projects I've worked on.
When TypeScript finishes their Go rewrite, that will become irrelevant, and I'd rather have the compiler from the same people that design the language.
For that example sure, and admittedly the entire JavaScript/TypeScript processing ecosystem is moving in that direction. But the TypeScript compiler is not the only CPU-bound JavaScript out there.
segfaults raise my belief in spirits
Possibly a good Halloween costume idea to go as a segfault. It would scare some people.
I haven't seen a single one.
I think the problem the practical programmer has with a statement like this is the implication that only certain languages require some basic understanding and a bit of discipline to avoid CVEs.
Rust has a strict model that effectively prevents certain kinds of logic errors/bugs. So that's good (if you don't mind the price). But it doesn't address all kinds of other logic errors/bugs. It's like closing one door to the barn while six more are still wide open.
I see rust as an incremental improvement over C, which comes at quite a hefty price. Something like zig is also an incremental improvement over C, which also comes at a price, but it looks like a significantly smaller one.
(Anyway, I'm not sure zig is even the right comp for rust. There are various languages that provide memory safety, if that's your priority, which also generally allow dropping into "unsafe" -- typically C -- where performance is needed.)
> But it doesn't address all kinds of other logic errors/bugs. It's like closing one door to the barn, but there are six more still wide open.
Could you point at some language features that exist in other languages that Rust doesn't have that help with logic errors? Sum types + exhaustive pattern matching is one of the features that Rust does have that helps a lot to address logic errors. Immutability by default, syntactic salt on using globals, trait bounds, and explicit cloning of `Arc`s are things that also help address or highlight logic bugs. There are some high-level bugs that the language doesn't protect you from, but I know of no language that would. Things like path traversal bugs, where passing in `../../secret` lets an attacker access file contents that weren't intended by the developer.
The only feature that immediately comes to mind that Rust doesn't have that could help with correctness is constraining existing types, like specifying that a u8 value is only valid between 1 and 100. People are working on that feature under the name "pattern types".
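To make the sum types + exhaustive matching point concrete, a small sketch (the domain is invented for illustration):

```rust
enum PaymentState {
    Pending,
    Settled,
    Refunded,
}

fn describe(state: &PaymentState) -> &'static str {
    match state {
        PaymentState::Pending => "waiting for confirmation",
        PaymentState::Settled => "done",
        PaymentState::Refunded => "returned to customer",
        // Add a new variant to the enum and every match like this one
        // stops compiling until the new case is handled.
    }
}

fn main() {
    println!("{}", describe(&PaymentState::Settled));
}
```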
> The words of every C programmer who created a CVE.
Much of Zig's user base seems to be people new to systems programming. Coming from a managed-code background, writing native code feels like being a powerful wizard casting fireball everywhere. After you write a few unsafe programs without anything going obviously wrong, you feel invincible. You start to think the people crowing about memory safety are doing it because they're stupid, or cowards, or both. You find it easy to allocate and deallocate when needed: "just" use defer, right? Therefore, if someone screws up, that's a personal fault. You're just better, right?
You know who used to think that way?
Doctors.
Ignaz Semmelweis famously discovered that hand-washing before childbirth decreased mortality by an order of magnitude. He died poor and locked in an asylum because the doctors of the day were too proud to acknowledge the need to adopt safety measures. If a mandatory pre-surgical hand-washing step prevented complications, that implied the surgeon had a deficiency in cleanliness and diligence, right?
So they demonized Semmelweis and patients continued for decades to die needlessly. I'm sure that if those doctors had been on the internet today, they would say, as the Zig people do say, "skill issue".
It takes a lot of maturity to accept that even the most skilled practitioners of an art need safety measures.
That is the precise point at which the article lost me. Ironically, it's often good programmers who don't "get" the benefit of building memory management and discipline into the language, rather than leaving it to be a cognitive burden on every programmer.
"Actually memory management is easy, you just have to...."
- Every C programmer I've talked to
No, it's not. If it were that easy, C wouldn't have this many memory-related issues...
It may be easy to do memory management, but it's not so easy to detect whether you've made a fatal mistake when such mistakes don't cause apparent defects.
Avoiding all memory management mistakes is not easy, and the bigger the codebase becomes, the faster the chance of disaster grows.
Absolutely. A big factor is undefined behavior, which makes it look like everything works. Until it doesn't. I quit C long ago because I don't want to deal with manual memory management in any language. I was overwhelmed by Zig's approach as well. Rust is pretty much the only language making it bearable for me.
Came here to add the same comment. Had it on my clipboard already to post. You said it better
are you saying that such understanding isn't enough or that every C programmer who said that didn't understand those things?
C and Zig aren't the same. I would wager that syntax differences between languages can help you see things in one language that are much harder to see in another. I'm not saying that Zig or C are good or bad for this, or that one is better than the other in terms of the ease of seeing memory problems with your eyes, I'm just saying that I would bet that there's some syntax that could be employed which make memory usage much more clear to the developer, instead of requiring that the developer keep track of these things in their mind.
Even something as simple as having to manually annotate each function, so that some metaprogram running at compile time can check that nothing is out of place, could help detect memory leaks, I would think. Or something; that's just an idea. There's a whole metaprogramming world of possibilities here that Zig allows and C simply doesn't. I think there's a lot of room for tooling like this to detect problems without forcing you to contort yourself into strange shapes simply to make the compiler happy.
> are you saying that such understanding isn't enough or that every C programmer who said that didn't understand those things?
Probably both. They're words of hubris.
C and Zig give the appearance of practicality because they allow you to take shortcuts under the assumption that you know what you're doing, whereas Rust does not; it forces you to confront the edge cases in terms of ownership and provenance and lifetime and even some aspects of concurrency right away, and won't compile until you've handled them all.
And it's VERY frustrating when you're first starting because it can feel so needlessly bureaucratic.
But then after a while it clicks: ownership is HARD. Lifetimes are HARD. And suddenly, when going back to C and friends, you find yourself thinking about these things at the design phase rather than at the debugging phase - and you write better, safer code because of it.
And then when you go back to Rust again, you breathe a sigh of relief because you know that these insidious things are impossible to screw up.
Just understanding the rules is not enough; you also need to be consistently good, so that you never make a mistake that gets into production.
On both your average days and your bad days.
Over the 40 to 50 years that your career lasts.
I guess those kinds of developers exist, but I know that I'm not one of them.
This article seems pretty confused about what the borrow checker does or does not do - I've never heard compile time enforcement listed as a negative of the borrow checker before. It might do the author good to try writing some (non-trivial) memory management in both Zig and Rust some time.
> So when it comes to memory management there are two [THREE] terms you really need to know, [THE REGISTER BANK,] the stack and the heap.
Edits mine.
I like to keep the spacetime topologies complete.
Constant = time atom of value.
Register = time sequence of values.
Stack = time hierarchy of values.
Heap = time graph of values.
This feels like it was written by ChatGPT, and there are lots of ads. The worst Rust articles are those explaining why Rust is "better" than C++ for xyz. This is the worst kind of Zig article.
Zig & Rust have a somewhat thin middle area in the Venn diagram.
I can guarantee you that none of it was written by ChatGPT or any other LLM.
As for the ads, even though it's my site, I'd urge you to turn on an adblocker, Pi-hole, or anything like that; I won't mind.
I have ads on there, yes, but since I primarily write tech articles for a target audience of tech people, you can imagine that most readers have some sort of adblocker, whether browser, network, or otherwise.
So my grand total monthly income from ads basically covers hosting costs and so on.
It's definitely LLM-generated, or at least LLM-assisted. It has serious errors, like saying std.heap.page_allocator is a general-purpose allocator. The structure and many of the sentences are stereotypically LLM, as are the weird metaphors (stack dishes, heap clothes).
Edit: The author seems to be in the community, and I'm mistaken.
I think people have just ingested so much LLM they think it's normal to use bullet point lists in the middle of an essay.
Why does a personal blog need so many advertising options?
Yeah this whole blog is sus. Author claims to have been around for 17 years, doesn't have a single project of note and makes naive claims. Github history has thousands of commits per year every single day to private repos, yet very little public code or record to offer credibility. Not clear where they work or what they work on. Makes inflammatory claims about hot-topic languages. Throws up ads on the blog. Personally posts blog to HN on a relatively new account, rather than it organically finding its way here. Sorry I'm not buying it.
He seems to know what he's doing, from the author's Twitter:
> Author claims to have been around for 17 years, doesn't have a single project of note and makes naive claims.
Plenty of such people out there.
This guy appears to just personally dislike Rust for reasons undisclosed and tries to rationalize it via posts like this one.
It's like with this former coworker of my former coworker who was really argumentative, seemingly for the sake of it. I did some digging and found that his ex left him and is now happily married.
Turns out that when he was criticizing the use of if-else in Angular templates what he was really thinking about was "if someone else".
What you’ve described there is a 90% match for most people in tech.
I love the irony of seeing the C crowd rediscover Modula-2 and Object Pascal safety through Zig.
Apparently it isn't programming in a straitjacket any longer, like in the Usenet discussions.
The fact that your app crashes when you run out of stack is a compiler bug, not a feature. Memory is memory. The fact that languages in the '40s split it into stack and heap doesn't make it a foundational mathematical law.
Yes, safety isn't correctness but if you can't even get safety then how are you supposed to get correctness?
For small apps Zig probably is more practical than Rust. Just like hiring an architect and structural engineers for a fence in your back yard is less practical than winging it.
Behold, I've brought you a compiler bug.
https://play.rust-lang.org/?version=stable&mode=debug&editio...
Aren't these two points contradictory? Forgive me if I'm misunderstanding.
> Rust’s borrow checker is a pretty powerful tool that helps ensure memory safety during compile time. It enforces a set of rules that govern how references to data can be used, preventing common memory safety errors such as null pointer dereferencing, dangling pointers and so on. However, you may have noticed the words compile time in the previous sentence. Now, if you have any experience with systems programming, you will know that compile time and runtime are two very different things. Basically, compile time is when your code is being translated into machine code that the computer can understand, while runtime is when the program is actually running and executing its instructions. The borrow checker operates during compile time, which means that it can only catch memory safety issues that can be determined statically, before the program is actually run.

> This means that basically the borrow checker can only catch issues at comptime, but it will not fix the underlying issue, which is developers misunderstanding memory lifetimes or overcomplicating ownership. The compiler can only enforce the rules you’re trying to follow; it can’t teach you good patterns, and it won’t save you from bad design choices.
This appears to be claiming that Rust's borrow checker is only useful for preventing a subset of memory safety errors, those which can be statically analysed. Implying the existence of a non-trivial quantity of memory safety errors that slip through the net.
> The borrow checker blocks you the moment you try to add a new note while also holding references to the existing ones. Mutability and borrowing collide, lifetimes show up, and suddenly you’re restructuring your code around the compiler instead of the actual problem.
Whereas this is only A Thing because Rust enforces rules so that memory safety errors can be statically analysed and therefore the first problem isn't really a problem. (Of course you can still have memory safety problems if you try hard enough, especially if you start using `unsafe`, but it does go out of its way to "save you from bad design choices" within that context.)
If you don't want that feature, then it's not a benefit. But if you do, it is. The downside is that there will be a proportion of all possible solutions that are almost certainly safe, but will be rejected by the compiler because it can't be 100% sure that it is safe.
IMO, as a C++ developer, Swift makes the most sense to me if I were looking for a safer alternative.
I think people prefer what's familiar to them, and Swift definitely looks closer to existing C++ to me, and I believe has multiple people from the C++ WG working on it now as well, supposedly after getting fed up with the lack of language progress on C++.
The most recent versions gained a lot in the way of cross-platform availability, but the lack of a native UI framework and its association with Apple seem to put off a lot of people from even trying it.
I wish it was a lot more popular outside of the Apple ecosystem.
Can you share some swift resources so I can look into the features of swift compared to zig and C?
Definitely check out swift.org/ . A few pages that are good jumping points are:
https://docs.swift.org/swift-book/documentation/the-swift-pr...
https://swift.org/documentation/cxx-interop/
https://swift.org/blog/swift-everywhere-windows-interop/
https://www.douggregor.net/posts/swift-for-cxx-practitioners...
That sounds like a query for an LLM.
I'm still having a hard time understanding who is supposed to use Zig.
If I don't need absolute best performance, I can use GC-ed systems like Node, Python, Go, OCaml, or even Java (which starts fast now thanks to Graal AOT) and enjoy both the safety and expressive power of using a high-level language. When I use a GCed language, I don't have to worry about allocation, lifetimes, and so on, and the user gets a plenty good experience.
If I need the performance only manual memory management can provide (and this situation arises a lot less often than people think it does), I can justify spending the extra time expressing my thoughts in Rust, which will get me both performance and safety.
Why would I go to the trouble of using Zig instead of Rust? Zig, like Rust, incurs a complexity and ecosystem cost. It doesn't give me safety in exchange. I put in about as much effort as I would into a Rust program but don't get anything extra back. (Same goes if you substitute "C++" for "Rust".)
> All it took was some basic understanding of memory management and a bit of discipline.
Is the idea behind Zig just that it's perfectly safe if you know what you're doing --- therefore using Zig is some kind of costly signal of competence? That's like someone saying free-solo-ing a cliff face is perfectly safe if you know what you're doing. Someone falls to his death? Skill issue, right?
We have decades of experience showing that nobody, no matter how much "understanding" and "discipline" he has, can consistently write memory-safe code with manual memory management in a language that doesn't enforce memory safety rules.
So what's the value proposition for Zig?
Are you supposed to use it instead of something like Go or Python or AOT-Kotlin and spend more of your time dealing with memory than you would in one of these languages? Why?
Or are you supposed to use it instead of Rust and get, what, slightly faster compile times, maybe? And no memory safety?
If I had a penny for every time I heard that devs are not idiots, I'd be a billionaire.
It's true, but devs are not infallible and that's the point of Rust. Not idiots, not infallible either.
IMO admitting that one can make mistakes even if they don't think they have is a sign of an experienced and trustworthy developer.
It's not that Rust compiler engineers think that devs are idiots; in fact, you CAN have footguns in Rust, but reaching for one should never be easy, because that's how you get security vulnerabilities.
Actually, developers are idiots. Everyone is. Some just don't know it or won't admit it.
I once joined a company with a large C/C++ codebase. There I worked with some genuinely expert developers - people who were undeniably smart and deeply experienced. I'm not exaggerating; I mean it.
But when I enabled the compiler warnings they had disabled (which annoyed them) and ran a static analyzer over the codebase for the first time, hundreds of classic C bugs popped up: memory leaks, potential heap corruptions, out-of-bounds array accesses, you name it.
And yet, these same people pushed back when I introduced things like libfmt to replace printf, or suggested unique_ptr and vector instead of new and malloc.
I kept hearing:
"People just need to be disciplined allocations. std::unique_ptr has bad performance" "My implementation is more optimized than some std algorithm." "This printf is more readable than that libfmt stuff." etc.
The fact is, developers, especially the smart ones probably, need to be prevented from making avoidable mistakes. You're building software that processes medical data. Or steers a car. Your promise to "pay attention" and "be careful" cannot be the safeguard against catastrophe.
To be honest, the generated machine code / assembly is often more readable than the actual c++ code in c++ stdlibs. So I can sympathize with the "This printf is more readable than that libfmt stuff." comment :)
I am sorry, but how do point 1 and point 2 for safety work together? You don't want the program to corrupt your data, but you also don't want it to immediately crash the moment some invariant is not being held?
> Compile-time only: The borrow checker cannot fix logic bugs, prevent silent corruption, or make your CLI behave predictably. It only ensures memory rules are followed.
Also not really true, from my experience. There have been plenty of times where the borrow checker is a MASSIVE help in a multithreaded context.
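For instance, a minimal sketch of the shape the compiler pushes you toward: sharing plain mutable state across threads won't compile, so you end up with Arc<Mutex<_>> (or channels) by construction.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // The only way to touch the shared data is through the lock.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    println!("{}", *counter.lock().unwrap()); // 4
}
```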
This really misses a major point. If you write something in Zig, you can have some confidence in the stability of the program, if you trust yourself as a developer. If someone else writes something else in Zig, you have to live with the possibility that they have not been as responsible as you would have preferred.
Indeed. The other day I was messing around with making various associative data structures in Zig.
I stole someone else's benchmark to use, and at one point I ran into seriously buggy behavior on strings (but not integers) that wasn't caught early, at the point where it happened, even with -ODebug.
Turns out the benchmark was freeing the strings before it finished performing all of the operations on the data structure. That's the sort of thing that Rust makes nearly impossible, but Zig didn't catch at all.
This is true for every language. Logic bugs exist. I'll take good OS process isolation over 'written-in-Rust' though I wouldn't mind both.
That being said, you've missed the point if you can't understand that safety comes at a real cost, not an abstract or 'by any means necessary' cost, but a cost as real as the safety issues.
Ah, sweet flame war fuel.
Maybe we'll even get a tabs vs. spaces article next.
Does it?
As far as my understanding of Zig goes... it can compile into C... so if you really want secure C code, you can compile Zig into C?
Not quite; it can translate C into Zig using the `translate-c` command that it comes with. But it compiles directly into machine code.
There is a C backend, so you can also compile Zig into C if you want.
Yeah, but that's an extreme edge case; or at least I have not yet encountered a need for it personally.
Zig can compile into C, but it's not portable C
While Zig prevents certain kinds of memory safety issues that C does not, it still suffers from memory-safety issues not found in safe Rust.
building houses or power lines without regulations feels more practical as well
This blog is atrocious from an ad standpoint and the recent flood of posts feels promotional and intentionally controversial. The articles are also devoid of any interesting perspectives. Are people actually reading this?
Two words: Skill issue.
The catgirls have no problems producing lots of great software in Rust. It seems more such software comes out every day, nya :3
No shade here, just a genuine question: why run ads on a blog like this? A personal technical blog probably doesn't get a ton of traffic. So what's the point? I'm honestly curious.
When I think of Rust, I think of beauties like this:
`self.last.as_ref().unwrap().borrow().next.as_ref().unwrap().clone()`
I know it can be improved, but that's what I think of.
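One hedged way to tame that chain, assuming it comes from a doubly linked list of Rc<RefCell<Node>> (the types below are invented to match the snippet's shape): split it into named steps so each unwrap states its assumption.

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: Option<Rc<RefCell<Node>>>,
}

struct List {
    last: Option<Rc<RefCell<Node>>>,
}

impl List {
    // Same operation as the one-liner, but each `expect` documents the
    // invariant it relies on.
    fn next_of_last(&self) -> Rc<RefCell<Node>> {
        let last = self.last.as_ref().expect("list is non-empty");
        let next = last.borrow().next.clone();
        next.expect("last node has a successor")
    }
}

fn main() {
    let tail = Rc::new(RefCell::new(Node { next: None }));
    let head = Rc::new(RefCell::new(Node { next: Some(Rc::clone(&tail)) }));
    let list = List { last: Some(head) };
    let _succ = list.next_of_last();
}
```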