I've written low thousands of lines of Haskell. Similar to mikojan, I love Haskell in theory, but ended up not enjoying it as much in practice.
1. The multitude of string-y types. I end up converting between String, Text, Lazy Text, ByteString, Lazy ByteString, and I forget what else. Each library wants me to pass in a specific string type, and each other library returns a different string type. LLMs are good at this, and for a while I had a ton of helper functions to convert between any two string types (see the conversion sketch below). Still, perhaps there's a better way?
2. The error messages. I come from Elm, so I'm spoiled. Yes, I understand a language with HKTs will never have as nice error messages. Yes, LLMs are pretty good at explaining GHC error messages.
3. The stdlib. Haskell gets a lot of credit for safety, but a `head` blows up instead of returning a `Maybe`. I know there are other - safer - preludes, but I don't know how to choose between them. I don't know how using a different prelude would impact my projects.
I feel like my next step is either towards Idris, with its polished standard library (and dependent types baked into the language!), or towards something simpler and more Elm-like (Gleam perhaps, or Roc). But if you can sell me on Haskell, I'm all ears!
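For what it's worth, this is the kind of conversion zoo I mean - a sketch of the helpers I ended up with, using the text and bytestring packages (names as I remember them, double-check against the docs):

import qualified Data.Text as T
import qualified Data.Text.Lazy as TL
import qualified Data.Text.Encoding as TE
import qualified Data.ByteString as BS
import qualified Data.ByteString.Lazy as BL

-- String <-> strict Text
toText :: String -> T.Text
toText = T.pack

fromText :: T.Text -> String
fromText = T.unpack

-- strict Text <-> lazy Text
lazyText :: T.Text -> TL.Text
lazyText = TL.fromStrict

-- strict Text <-> strict ByteString (UTF-8)
toBytes :: T.Text -> BS.ByteString
toBytes = TE.encodeUtf8

fromBytes :: BS.ByteString -> T.Text
fromBytes = TE.decodeUtf8  -- partial: throws on invalid UTF-8

-- strict ByteString <-> lazy ByteString
lazyBytes :: BS.ByteString -> BL.ByteString
lazyBytes = BL.fromStrict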
I'm not going to sell you on anything. All of the things you've mentioned are true. Loosely, the multitude of string types and the state of the standard library come from the same place: the language is 30+ years old! There are many warts to be found.
However, if you decide to start learning, the path is hard, especially if you come from a non-computer-science background like me. I attempted to learn Haskell twice; I bounced off the first time, quite hard, and didn't try again for years.
What worked for me is a combination of two things:
* Having a goal in mind that has nothing to do with the choice of language. For me, it was building a personal website
* The book Haskell Programming from First Principles [0]
> Having a goal in mind that has nothing to do with the choice of language
Yes, yes, that's exactly what my encounters with Haskell looked like. The last one is a ~1k-line backend for a personal project. I feel that's about as much as I could manage at this point.
> The book Haskell Programming from First Principles
That book is getting recommended all the time! I'm concerned it's perhaps a little too basic for me. (I understand monads, monad transformers, have some notion of final tagless and free monad. Yet I get perpetually confused by various relatively simple things.)
I guess what I'm missing is haskell-language-server to help me a little. Here I'm confused about the interplay between `haskell-stack` (which is in Debian repos and which I think I'd like to use), ghcup, cabal, and haskell-language-server.
I’m not the OP, but static types, Hindley-Milner type inference, algebraic data types, and pattern matching can be quite ergonomic. I have also come to appreciate functional programming and how it makes reasoning about and testing code easier.
Yeah, just last week I updated the numeric precision of a value across an entire user journey in a big enterprise application, spanning many functions and data types. Thanks to Haskell, I could do it in a day.
In any other language I've used (barring maybe Ada) that is a refactoring that would take at least days, if not weeks, to manually track down all the places it interacts with the system directly or indirectly, because mixing e.g. int with long long is not a type error.
In Haskell, I change the type and the compiler spits out a list of locations that needs to change. This is repeated for a few iterations until all transitive interactions are worked out. Done!
No one wants to be a Python programmer. It's a practical language to get things done. It isn't a language that makes you feel proud of yourself or of the current state of our industry.
I’ve enjoyed making it go faster by finding quirks, but at this point it’s mostly become “OK, what else can I offload to C?”
I should really learn Rust. Or Zig. I tried Nim (best of both worlds, Python-esque code that compiles to C!), but it wasn’t nearly as fast as my Python + C for my specific use case.
You may no longer be interested in this kind of thing, but if you are there might be some ideas of note over at https://github.com/c-blake/nio/blob/main/db-bench.md (in particular the demo/gbyGen.nim program).
If that is what you want to do, you can do that in any language. It's just that when you do it in e.g. Java, you have to spend a lot longer proving correctness before discovering that it doesn't work.
I love it because I can spend all my time noodling over types and never ship a product that would have been great shipped in a late night wine-fueled session of 1999 PHP.
It's incredible that given how fuzzy and inaccurate human memory is, we treat any LLM that can't perfectly recite volumes of information as somehow beneath us.
What about Haskell concurrency isn't maintainable?
The concurrency stuff in the stdlib plus the mainstays in the ecosystem are pretty stable and noncontroversial... there are parts of Haskell that churn, but this is not one of them.
Isn't concurrency in Rust a notorious pain point? Or am I confusing it with async which is different? [I am stuck in an era before parallelism, so I don't really understand these things]
This is why I haven't fully embraced Rust yet. Whenever I ask about safely mutating shared state (see sibling comment), I'm met with silence, or some comment like: Rust guarantees that you aren't mutating shared state.
I'm surprised you say this. The core type system guarantee is that there is no aliasing while mutating, and no mutation while aliased. You get single-writer multiple-reader for free, without overhead.
If you want multiple writers, you can always use the Arc container and use the built-in lock.
Not sure what you are pointing out, so let me spell out what I said earlier.
1. You get single-writer multiple reader for free from the type system, without any runtime overhead.
2. For the same reason, the type system does not allow multiple writers. If you want multiple writers, then you are forced to use locks. Once you use locks, the runtime guarantees safety for this case.
You're not adding any new information. I already understand you completely.
> 1. You get single-writer multiple reader for free from the type system, without any runtime overhead.
Take the example from the article which is accepted by the Haskell type system:
writeTBCQueue :: TBCQueue a -> a -> STM ()
writeTBCQueue q v = do
stillOpen <- readTVar q.open
when stillOpen $ writeTBQueue q.queue v
Rust would reject this because of multiple writers.
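(For readers without the article open: the quoted snippet assumes a record roughly like the one below - my guess at its shape, not necessarily the article's exact definition; `when` comes from Control.Monad and the field access uses GHC's OverloadedRecordDot.)

{-# LANGUAGE OverloadedRecordDot #-}
import Control.Concurrent.STM

-- Assumed shape of the closeable bounded queue:
data TBCQueue a = TBCQueue
  { queue :: TBQueue a  -- the bounded queue itself
  , open  :: TVar Bool  -- False once the queue has been closed
  }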
Also, thinking about it more, I'm now very skeptical of Rust even providing 'single-writer multiple-reader'. Is it in fact single-reader-writer xor multiple-reader? In other words, how does it handle a goblin constantly moving money between accounts while a gnome is constantly trying to count the total amount?
goblinBankerThread = forever $ do
  seed <- randomSeed
  atomically $ do
    (acctA, acctB) <- chooseTwoRandomAccounts seed
    if amount acctA > amount acctB
      then moveAmount 5 acctA acctB
      else moveAmount 5 acctB acctA

gnomeAccountantThread = forever $
  atomically $ do
    accounts <- readAllAccounts
    assert someConstantAmount accounts
Yes, Rust is 100% safe because it would reject this code, so it would never run. Not running code also has guaranteed no-overhead!
> 2. For the same reason, the type system does not allow multiple writers. If you want multiple writers, then you are forced to use locks.
* Locks are problematic, which is why I chose STM over locks in the first place.
* Locks are in all languages. Does your comment about 100% safety really apply to all of them?
Your first point is not comparing the same thing. STM is wonderful, but as you no doubt know, it is meant for many TVars to be read/modified. This necessarily has overhead (transactional logs), performs poorly under contention and also is subject to livelock, and has no fairness.
In your goblin example, I believe the gnomeAccountantThread would have to constantly retry, because the writer (if successful) would have produced a new version of two accounts, which would trip up the reader, forcing it to start again. In general, Haskell's STM is built for short-lived transactions; for longer running transactions or those that touch a lot of objects, you'd need something like multi-versioned objects seen in databases or epochs to get a consistent snapshot. Neither Rust nor Haskell is suited to this example out of the box.
For your second question, you assume axiomatically that locks are problematic. They aren't in Rust (except, see later about deadlocks). Unlike any other language with in-place mutation, Rust will force you to use a mutex in order to share something for read-write (in a multiple writer scenario), otherwise it won't compile. You have to use lock() in order to get access to the underlying object, and once you have that object, the type system makes sure only the owner can mutate it. In C/C++/Java/Go, you don't get this guarantee at all ... it is possible to mistakenly use the object without using a mutex. So, there is no guarantee of safety in the other languages. There is a 100% guarantee in Rust.
---
That said, the problematic part about locks (whether it is mutexes or MVars in Haskell) is deadlocks, which is solved by having a deterministic lock order. In your Haskell example, if acctA and acctB were MVars, you'd do
let (first, second) = if acctA < acctB then (acctA, acctB) else (acctB, acctA)
withMVar first $ \_ ->
withMVar second $ \_ -> do
...
Correctness in concurrency is actually one of Rust's strong suit. Any pain felt is because Rust is low level and does not prescribe a single concurrency mechanism, leaving each coder to figure out the benefits and constraints of each library.
Relevant: https://github.com/Marthog/rust-stm which has usage instructions. It's memory safe as defined in Safe Rust, but unlike the Haskell implementation it's not "safe" in a correctness sense, because Rust does not afford the same control about mutability and purity. (At least, not yet - future additions to the language may improve this somewhat.)
From the footnotes:
> It gets weirder: in Haskell, exceptions can be thrown to other threads!
What's really interesting is that because of purity, you have to have asynchronous exceptions, otherwise you give up a lot of modularity. At least that's what Simon Marlow and Simon Peyton Jones argue in Asynchronous Exceptions in Haskell (2006): https://www.microsoft.com/en-us/research/wp-content/uploads/...
> While the semi-asynchronous approach avoids breaking synchronization abstractions, it is non-modular in that the target code must be written to use the signalling mechanism. Worse still (for us), the semi-asynchronous approach is simply incompatible with a purely-functional language, such as Concurrent Haskell. The problem is that polling a global flag is not a functional operation, yet in a Concurrent Haskell program, most of the time is spent in purely-functional code. On the other hand, since there is absolutely no problem with abandoning a purely-functional computation at any point, asynchronous exceptions are safe in a functional setting. In short, in a functional setting, fully-asynchronous exceptions are both necessary and safe — whereas in an imperative context fully-asynchronous exceptions are not the only solution and are unsafe.
If you can read PLTese, it's really quite a nice paper.
It's maybe interesting to note that the `async` library in use here is very simple and easy to understand. Nearly every function is one or two lines. Likewise `TQueue` is extremely simple (and easy to prove correct) thanks to STM, and also generally has good performance.
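To give a flavour of how little ceremony there is, here's a small sketch of my own (not from the article) pairing `async` with a `TQueue`:

import Control.Concurrent.Async (concurrently_)
import Control.Concurrent.STM
import Control.Monad (replicateM_)

main :: IO ()
main = do
  q <- newTQueueIO
  let producer = mapM_ (\n -> atomically (writeTQueue q n)) [1 .. 100 :: Int]
      consumer = replicateM_ 100 (atomically (readTQueue q) >>= print)
  -- run both to completion; if either throws, the other is cancelled
  concurrently_ producer consumer

`readTQueue` simply retries until an element is available, so there's no manual signalling anywhere.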
A lot of the complexity here is just hidden in Haskell's runtime, which implements async processing based on green threads, besides other features such as GC. Though to be fair, the software transactional memory (STM) featureset is quite unique to Haskell since it relies on the availability of pure functions to ensure correctness. It's kind of hard to imagine a full equivalent to it in other well-known languages.
Quibble: Both Clojure and Scala have a Software Transactional Memory implementation, and the original Clojure Ant demo showed this.
While the async library is great, everything that came before it (forkIO, MVars, etc.) was already simple enough - it's only the exception handling that was missing.
https://www.oreilly.com/library/view/parallel-and-concurrent... is a great resource for those who want to go deeper into this
The author is thinking of updating the book to a second edition as well. Looking forward to it
noice
I read that book many years ago, but I haven't looked into Haskell for a long time. Is it still relevant today? I imagine many things have changed in 12 years!
The fundamentals are the same, and `async` is as relevant as it was back then. The ecosystem is extremely stable in that regard.
"Fast" in title. But I don't see any benchmarks, or discussion on how to reason about the performance of STM code.
I don't know how async is in other languages but I find Pythons async incredibly difficult to use, and I kinda feel validated about how poor chatGPT is at it as well.
Is it because it is just a very hard thing, or is it because its a synchronous language, with async bolted on? (I'm talking about a purely language point of view, not from a python VM / GIL point of view)
The easiest language I’ve used for async is Clojure—mostly because the language is immutable by default and ~99% of the code is referentially transparent. That doesn’t magically solve async, but it removes an entire class of headaches by nudging you away from shared state and side effects. You don’t need locks if there’s nothing to lock.
Async is hard, no doubt—but some languages are designed to reduce the surface area of what can go wrong. I’ve heard great things about Erlang, Elixir, and BEAM-based languages in general. They treat async not as an add-on, but as a core architectural principle.
It's because ``asyncio`` is a dogwater library that's barely functional and full of footguns. The ecosystem is about the same quality too.
Indeed, as much as I dislike async in general, asyncio is its own special hell.
From the footnotes:
> In bare-metal embedded systems or inside the operating system, it’s not unusual to manually break computation into state machines, driven by interrupts.
Although it's not the topic of TFA, the footnote notes that this is "a whole different ball game." Does anyone have a good source for this aspect of 'low-level'/OS development? I'm more than capable of chasing down sources from a higher-level introduction or overview, so anything would be helpful. This concept seems like it may just be a more pragmatic description of embedded/OS development than what I've read previously.
After reading the title I was expecting a "pick two". From my anecdotal experience, Haskell is usually far from simple, but other configurations are possible too.
> 1. Compose the program into several threads of execution, traditionally scheduled and ran by the operating system
Step 0 is missing:
Compose the program into several lanes of execution, traditionally executed via SIMD.
This is a massive piece of performance left on the table on modern computer architectures, by assuming threading is the first manifestation of concurrency.
SIMD has been somewhat of a massive failure in this regard. Unlike threads, most languages seem to ignore its existence and abdicate its usage to the sufficiently complex compiler.
I wish there were better author-time feedback to the developer on where they could be getting such a perf boost. As far as I'm aware there's no popular linting or blue squiggle to guide you in the right direction.
In games it seems like the popular pattern is to rewrite everything entirely in an entity component system framework.
Agreed completely. Most auto-vectorization approaches are hit-or-miss, and you still cannot trivially have big binaries where the instruction set is decided dynamically.
ISPC comes close, but does come with a learning curve.
I would say that Highway [1] comes close. Can't say anything about ISPC because in gamedev work it never even came into consideration for multiple platforms.
1. https://google.github.io/highway/en/master/
I’m not familiar with Haskell concurrency. The combination of green threads and large memory allocations due to immutable data structures sounds like it would be hard to implement a web server handling 10k+ concurrent requests on commodity hardware?
Btw, too bad the author talks about usage with microsecond guarantees but does not provide a link - that would be interesting reading.
> sounds like it would be hard to implement a web server handling 10k+ concurrent requests on commodity hardware?
In practice, it is not. The canonical Haskell compiler, GHC, is excellent at transforming operations on immutable data, as Haskell programs are written, into efficient mutations, at the runtime level. Also, since web development is quite popular in the Haskell community, lots of people have spent many hours optimizing this precise use-case.
In my experience, the real downside is that compilation times are a bit long -- the compiler is doing a LOT of work after all.
> The canonical Haskell compiler, GHC, is excellent at transforming operations on immutable data, as Haskell programs are written, into efficient mutations, at the runtime level.
Yes, at the level of native machine code and memory cells, there's not that much of a difference between immutability + garbage collection, and higher level source code that mutates. Thanks to GC you are going to overwrite the same memory locations over and over again, too.
Programmers for some reason really don't understand that generational garbage collection provides locality. I am really surprised how often I see C/C++/Rust types not understand this.
I think that only applies to a moving GC. A conservative GC (like the Boehm GC for C) doesn't move any items around, and thus doesn't do anything for locality.
Of course, even a moving GC has limits; it won't turn a hashtable into something that has local accesses.
The interaction of laziness and purity means that the memory costs are not always what you think. Purity means that it's a lot safer to share structure between old and new versions of a data structure where an imperative language would have to do defensive copying, and laziness means that you can incrementally amortise the cost of expensive rebalancing operations (Okasaki is the standard reference for this).
> Warp is a high-performance HTTP server library written in Haskell, a purely functional programming language. Both Yesod, a web application framework, and mighty, an HTTP server, are implemented over Warp. According to our throughput benchmark, mighty provides performance on a par with nginx.
Source: https://aosabook.org/en/posa/warp.html
> [...] large memory allocations due to immutable data structures sounds [...]
Why would there be large memory allocations because of immutable data structures? Btw, you can also use immutable data structure in eg Rust fairly easily. And Haskell also supports mutation and mutable data structures.
However, Haskell can use a lot of memory, but that's more to do with pervasive 'boxing' by default, and perhaps laziness.
No reason. OC probably thinks that immutable data structures are always copied when being operated on.
Yes indeed, unless you use ropes or other specialised structures
Lists aren’t copied on prepend.
Tries (like scala’s Vector) or trie maps (the core map types of Scala, Clojure and probably Haskell?) aren’t copied on updates.
In fact, whether a data structure is an immutable or persistent data structure or merely an unmodifiable data structure (like Kotlin uses) is based on whether it requires full copies on most updates or not. In FP languages, immutable data structures aren’t “specialized” at all.
> whether a data structure is an immutable or persistent data structure or merely an unmodifiable data structure...
This hurt my brain. It seems that in some places (e.g. Java land) unmodifiable refers to something that you can't modify but could just be a wrapper around a structure that can be modified. In that case they use immutable to mean something that is nowhere modifiable.
I may be misrepresenting this idea, but I think the terminology is so poor that it deserves to be misunderstood.
Think about mutability in Java land this way:

// Using mutability.
// `increment` is void, and makes 2 bigger for everyone.
increment(2);

// Typical Java "safety".
// It's still void, but now it throws a RuntimeException
// because the developers are saving you from making everyone's 2 bigger.
increment(2);

// Immutable
// Returns 3
increment(2);
Doesn't it depend on the data structure? Eg prepending to a list is actually cheaper with immutable data structures: you keep the original list and add a new head pointing to its head. Now you have two lists available in your program, but only one stored in memory. Yay!
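A tiny illustration of that sharing:

xs :: [Int]
xs = [2, 3, 4]

ys :: [Int]
ys = 1 : xs  -- O(1): ys is one new cons cell whose tail *is* xs, not a copy

-- xs is still [2,3,4]; ys is [1,2,3,4]; the [2,3,4] cells exist once in memory.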
Well luckily, Haskell is full of said "specialized structures."
containers and unordered-containers handle most of your needs and they only copy their trees' spines (O log n) on update.
Haskell also supports e.g. arrays with O(1) in-place updates just fine.
Not really. You might want to look into "Purely Functional Data Structures" by Chris Okasaki.
It doesn't actually have "large memory allocations" due to immutable data structures. This is a meme that isn't true. Immutable data structures, especially at small scale, do not have huge performance penalties. You don't copy the entire structure over and over...you copy the O(log n) spine.
Haskell's GC is also fast when you are mostly generating garbage, which is inherently true for web server handlers.
Deforestation helps with that
A composition of catamorphic and anamorphic functions can eliminate a lot of the in-between allocations (a hylomorphism)
Basically it looks like you’re building a ton of intermediate structure then consuming it - meaning much of the in-between stuff can be eliminated.
Interesting optimizations and a little mind blowing when you see it.
You obviously haven’t ran anything on the BEAM (Erlang’s VM).
Correct. Erlang also uses green threads?
Yes. And immutable data structures.
When data is immutable, it can be freely shared. Changes to the data essentially uses copy-on-write. And it only writes the delta change, since you don't need a deep copy due to immutability. Add that the garbage collectors of Haskell and Erlang are designed to work with a high allocation rate and have 0 cost for dead data, and this is much faster than what people think.
The way you implement a webserver in either Haskell or Erlang is rather trivial. Whenever there's an incoming request, you make a thread to handle it. So you don't have 1 webserver serving 10k requests. You have 10k webservers serving 1 request each. And since they are started from the same core data, they'll share that due to immutability. See also old-style Apache or PHP and fork().
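Very roughly, that thread-per-request shape looks like this (my own sketch using the network package; real servers like warp are far more careful about timeouts and buffering):

import Control.Concurrent (forkIO)
import Control.Exception (finally)
import Control.Monad (forever, void)
import Network.Socket (Socket, accept, close)

acceptLoop :: Socket -> (Socket -> IO ()) -> IO ()
acceptLoop listener handler = forever $ do
  (conn, _peer) <- accept listener
  -- one lightweight green thread per connection
  void (forkIO (handler conn `finally` close conn))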
Web servers handling lots of small requests are actually pretty easy to garbage collect: you just delete all the data at the end of the request.
Either you have a specialised GC that works like this, or probably a good general generational GC can pick up on this pattern on its own.
Or you do as Erlang's BEAM VM does: each thread has its own memory area which is GC'ed individually. This means that upon request termination, you just terminate the thread and the memory is reclaimed with no need for a GC.
In the abstract, this is very similar to spawning a unix process for every request, never free-ing any memory, and letting the memory allocation die with the process.
nah bro, warp is quite performant. think there were some consultancies that wrote haskal web app for their clients.
I thought it was a bit odd that the author claims there are no mutexes in sight - the TVar is effectively a mutex guard, unless I'm misunderstanding this? (I've written exactly 0 lines of Haskell.) Or is the claim that the lack of ceremony and accidental complexity around threading is the real win for concurrency here?
No, a TVar is not a mutex guard. A TVar is a software transactional memory (STM) variable. STM works just like a database: you batch together a sequence of operations into a transaction and then execute them. During execution of a transaction, all changes made to the contents of the TVar are stored in a transaction log. If some other transaction occurs during the execution then the whole thing is aborted and re-run.
This can take any ordinary Haskell data structure and give you a lock-free concurrent data structure with easy-to-use transactional semantics. How it performs is another matter! That depends on the amount of contention and the cost of re-playing transactions.
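A minimal example of that transactional flavour (my own sketch, not from the article):

import Control.Concurrent.STM

-- Move funds between two balances. Other threads can never observe a state
-- where the money has left one account but not arrived at the other;
-- conflicting transactions are simply re-run.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)
  (,) <$> readTVarIO a <*> readTVarIO b >>= print  -- (60,40)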
https://hackage.haskell.org/package/stm-containers
This library is full of STM-oriented data structures. They perform better than a simple `TVar (Map k v)`.
It's kind of a fun trick actually. The stock Map is just a tree. The STM Map is also a tree [1] but with TVars at each node. So this helps a lot with contention - you only contend along a "spine" instead of across the whole tree, which is O(log n).
[1] Technically a HAMT a la unordered-containers - trie, tree, you get the idea :)
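For contrast, the "simple" version being compared against is just one big TVar holding an ordinary Map, so every write contends on the same variable (a sketch):

import Control.Concurrent.STM
import qualified Data.Map.Strict as M

type Cache v = TVar (M.Map String v)

newCache :: IO (Cache v)
newCache = newTVarIO M.empty

-- Every insert rewrites the single TVar, so any two concurrent writers
-- conflict even when they touch unrelated keys.
insertCache :: String -> v -> Cache v -> STM ()
insertCache k v cache = modifyTVar' cache (M.insert k v)

lookupCache :: String -> Cache v -> STM (Maybe v)
lookupCache k cache = M.lookup k <$> readTVar cache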
> How it performs is another matter!
I know you say it depends on how much contention one sees but I'm interested in the performance hit. Also, is STM the "standard" (or accepted) way to do async in Haskell?
You are correct, Haskell has quite a few mutex-like types. MVar is one of them.
However, if memory serves me right, TVar is a building block for the transactional memory subsystem. The guard on TVar with, say, modifyTVar is not really stopping execution at entrance but simply indicating that the block modifies the variable. In my mental model, some magic happens in an STM block that checks if two concurrent STM blocks acted upon the same data at the same time, and if so, it reverts the computations of one of the blocks and repeats them with new data.
To my knowledge, Haskell is the only programming language (+runtime) that has a working transactional memory subsystem. It has been in the language for about 20 years, and in that time many have tried (and failed) to also implement STM.
I think Clojure has some kind of STM too?
Haskell's STM is pretty world-class though. That's fair to say :)
Clojure's STM never really took off because, for various reasons, it's not as easy to compose as Haskell's (where you can build up a big library of STM blocks and piece them together at the very edges of your program). As such Clojure's STM implementation doesn't actually have a great reputation within the Clojure ecosystem where it isn't usually used in most production codebases (whereas in Haskell STM is often one of the first tools used in any production codebase with concurrency).
Basically it's the difference between focusing only on transactional variables without having a good way of marking what is and isn't part of a larger transaction and having a higher-order abstraction of an `STM` action that clearly delineates what things are transactions and what aren't.
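Concretely, the kind of composition being described looks like this (my own sketch): small STM actions defined independently, glued into bigger transactions, with `atomically` applied only once at the edge.

import Control.Concurrent.STM

withdraw :: TVar Int -> Int -> STM ()
withdraw acct amount = do
  balance <- readTVar acct
  check (balance >= amount)  -- block (retry) until funds are available
  writeTVar acct (balance - amount)

deposit :: TVar Int -> Int -> STM ()
deposit acct amount = modifyTVar' acct (+ amount)

-- Composed from the pieces above; still just an STM action.
payFromEither :: TVar Int -> TVar Int -> TVar Int -> Int -> STM ()
payFromEither primary backup payee amount =
  (withdraw primary amount `orElse` withdraw backup amount) >> deposit payee amount

settle :: TVar Int -> TVar Int -> TVar Int -> Int -> IO ()
settle primary backup payee amount =
  atomically (payFromEither primary backup payee amount)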
My impression, at least watching chatter over the last several years, isn't that it has a bad reputation but rather that people haven't found a need for it: atoms are good enough for the vast bulk of shared mutable state. Heck, even Datomic, an actual bona fide database, doesn't need STM - it's apparently all just an atom.
But I’ve never heard someone say it messed up in any way, that it was buggy or hard to use or failed to deliver on its promises.
Clojure atoms use STM, though. I've been writing Clojure for almost a decade now, it's not that STM isn't great, it's just that immutable data will carry you a very long way - you just don't need coordinated mutation except in very narrow circumstances. In those circumstances STM is great! I have no complaints. But it just doesn't come up very often.
I would say to the contrary it would come up all the time if the right idioms were in place.
For example, when it comes to concurrent access to a map, the Clojure community generally forces a dichotomy: either stick a standard Clojure map in an atom and get fully atomic semantics at the expense of serial write performance, or use a Java ConcurrentMap at the expense of inter-key atomicity (or do a more gnarly atom around a map itself containing atoms, which gets quite messy quite fast).
Such a stark tradeoff doesn't need to exist! In theory STM gives you exactly the granularity you need where you can access the keys that you need atomicity for and only those keys together while allowing concurrent writes to anything else that doesn't touch those keys (this is exactly how e.g. the stm-containers library for Haskell works that's linked elsewhere).
That’s incorrect. Only refs+dosync use stm. https://clojure.org/reference/refs
Not atoms.
From Hickey’s History of Clojure paper:
“ Taking on the design and implementation of an STM was a lot to add atop designing a programming language. In practice, the STM is rarely needed or used. It is quite common for Clojure programs to use only atoms for state, and even then only one or a handful of atoms in an entire program. But when a program needs coordinated state it really needs it, and without the STM I did not think Clojure would be fully practical.”
https://dl.acm.org/doi/pdf/10.1145/3386321
Atoms do an atomic compare and swap. It’s not the same thing.
Haha, I read The Joy of Clojure way back in 2013 and conflated the different reference types with STM. So thanks for mentioning that, I always thought it weird that you'd need STM for vars and atoms too.
That said, I have never used a ref, nor seen one in use outside of a demo blogpost.
PS I agree atoms and stm are both solid though — and that you can go a very long way without touching either!
You missed a very important detail, the language runtime.
While Haskell's runtime is designed for Haskell's needs, Clojure has to be happy with whatever the JVM designers considered relevant for Java the language - and the same goes for the other platforms targeted by Clojure.
This is yet another example of a platform being designed for a language, and being a guest language on a platform.
I don't think this is a limitation of the JVM. When I've used Clojure's STM implementation it's been perfectly serviceable (barring the composability issues I mentioned). Likewise when I've used the various STM libraries in Scala. Eta (basically a Haskell implementation on the JVM that unfortunately stalled in development) also had a fine STM implementation.
It's more of a combination of API and language decisions rather than the underlying JVM.
https://zio.dev/reference/stm/
> Implication of Using STM
Running I/O Inside STM — There is a strict boundary between the STM world and the ZIO world. This boundary propagates even deeper because we are not allowed to execute arbitrary effects in the STM universe. Performing side effects and I/O operations inside a transaction is problematic. In the STM the only effect that exists is the STM itself. We cannot print something or launch a missile inside a transaction as it will nondeterministically get printed on every retry of that transaction.
Does Zio actually offer any protection here, or is it just telling the reader that they're on their own and should be wary of footguns?
STM happens inside the STM monad while regular effects happen in the ZIO monad. If you try to do ZIO effects inside an STM transaction you'll get a type error.
Scala doesn't enforce purity like Haskell though, so it won't stop you if you call some normal Scala or Java code with side effects. In practice it's not a problem, because you're wrapping any effectful outside APIs before introducing them into your code.
I am not super familiar with it, great question - my guess would be it's the latter! Does Haskell's provide protections?
Mutexes lock code, TVars lock data.
If you lock a section of code (to protect data), there's no guarantee against mutations of that data from other sections of code.
If you lock the data itself, you can freely pass it around and anyone can operate on it concurrently (and reason about it as if it were single-threaded).
It's the same approach as a transactional database, where you share one gigantic bucket of mutable state with many callers, yet no-one has to put acquire/release/synchronise into their SQL statements.
No, a TVar isn't a mutex guard. As a sibling comment points out, it gives you transactional semantics similar to most relational databases.
Here's an example in perhaps more familiar pseudocode.
In the above example it's perfectly possible, depending on how the forked code blocks interleave, to end up with `x` and `y` disagreeing, because we have no guarantee of atomicity/transactionality in what runs within the `forkAndRun` blocks. The equivalent of what that Haskell code is doing is replacing `var` with a new keyword `transactional_var` and introducing another keyword `atomically`, such that we can do
and never end up with a scenario where `x` and `y` disagree with each other, because all their updates happen atomically together: `x` and `y` are specifically marked so that, in an atomic block, all changes to them either happen together or are rolled back together (and retried), just like in a database. `transactional_var` is the equivalent of a `TVar`, and `atomically` is just `atomically`.
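The pseudocode blocks themselves seem to have been lost to the formatting, but the idea translates to real Haskell roughly like this (a hedged sketch; `x` and `y` here are just plain counters):

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.STM
    import Control.Monad (replicateM_)

    main :: IO ()
    main = do
      x <- newTVarIO (0 :: Int)
      y <- newTVarIO (0 :: Int)
      -- x and y are only ever changed together inside 'atomically',
      -- so no reader can observe them disagreeing.
      let bumpBoth = atomically $ do
            modifyTVar' x (+ 1)
            modifyTVar' y (+ 1)
      _ <- forkIO (replicateM_ 1000 bumpBoth)
      _ <- forkIO (replicateM_ 1000 bumpBoth)
      threadDelay 100000
      (a, b) <- atomically ((,) <$> readTVar x <*> readTVar y)
      print (a == b)  -- True on every run, regardless of interleaving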
As siblings note, TVar is a transactional variable. However, it doesn't just protect against concurrent writes; it also protects against reads of variables that another transaction has altered, so it offers true atomicity across all the state a transaction touches.
So if you have one thread altering `foo` and checking that `foo + bar` isn't greater than 5, and another thread altering `bar` and checking the same, then it's guaranteed that `foo + bar` never exceeds 5. Whereas if only write conflicts were detected (as is the default with most databases), `foo + bar` could end up greater than 5 through parallel changes.
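A hedged sketch of that `foo`/`bar` invariant (the function name is made up; `check` comes from the stm package and retries the transaction until the condition holds):

    import Control.Concurrent.STM

    -- Because the read of the *other* variable is part of the transaction,
    -- a concurrent change to it invalidates this transaction and forces a
    -- retry, so foo + bar can never be committed above 5.
    bumpIfAllowed :: TVar Int -> TVar Int -> STM ()
    bumpIfAllowed this other = do
      a <- readTVar this
      b <- readTVar other
      check (a + 1 + b <= 5)
      writeTVar this (a + 1)

One thread would run `atomically (bumpIfAllowed foo bar)` while another runs `atomically (bumpIfAllowed bar foo)`.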
A mutex blocks the thread, which doesn't happen with a TVar. For mutex-like behaviour you'd use a TMVar.
My favourite thing about Haskell concurrency is that there are no colored functions [0]. Writing code in IO, or Async, or the next big thing (an asynchronous higher-order effect system of the future??) doesn't require language support the way it does in Python or Rust.
The one construct that unlocks this lack of colored functions, STM, did require runtime support (as opposed to language support), which at least is transparent to downstream developers.
[0]: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
Coloured functions are a feature, not a bug. Haskell is full of them, and they are exactly what makes STM safe in Haskell yet abandonware in the other languages that have tried it.
In Haskell you call an IO function with `<-` or `>>=` rather than `=`; this should sound pretty familiar! You can only call an IO function from within another IO function. STM in this case makes a third colour: calling into an STM block from IO is what makes it 'happen for real', via `atomically`, which has type `STM a -> IO a`. Having these coloured functions is what made STM achievable back in the mid-to-late 2000s, since the mechanism to prevent STM or pure functions from calling IO was already in place.
Other languages either tried to figure out how to contain the side-effects and gave up, or just released STM and put the onus on the user not to use side effects.
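A small sketch of those three 'colours' in practice (the names are illustrative): pure code, STM code that cannot do IO, and IO code that crosses the boundary with `atomically`.

    import Control.Concurrent.STM

    deposit :: TVar Int -> Int -> STM ()
    deposit acct amount = do
      modifyTVar' acct (+ amount)
      -- putStrLn "depositing..."  -- would not compile: IO () where STM () is expected

    main :: IO ()
    main = do
      acct <- newTVarIO (0 :: Int)
      atomically (deposit acct 10)  -- atomically :: STM a -> IO a crosses the colour boundary
      print =<< readTVarIO acct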
It's a shame that the person you are answering is being downvoted. I also understand the importance of coloring functions, but look at the examples that person gave: Python and Rust. In those, executing a colored function (at least the async-related ones) propagates up to the top of the program; that is a cost we have to internalize, but I'd be lying if I said I wouldn't be happy with such behaviour. I do a lot of JS/TS, and I would love to be able to just "inline" an await without making my current scope async recursively up to the top of the program, like you can do in F# with Async.StartAsTask.
This is also an advantage of blocking code. It’s just regular code. The async stuff is handled by the operating system.
Correct? Anyone who has worked with concurrency in Haskell is probably laughing... :)
Haskell's IO type system doesn't model concurrency at all. `IO a` could be a fork and join, an infinite event loop, really anything; it's a black box in terms of "correctness". Better than using JavaScript maybe, but hardly "correct" in any sort of formal, tractable sense.
In another life I will be a Haskell programmer
Yeah me too, I'll invest in bitcoin early, live like a hermit off a coast somewhere, and school kids on HN, "cabal hell! I'll scream, no, Conda hell with powershell hooks in vscode you ingrates, my llm"
There's no time like the present. Feel free to reach out if I can help you along your journey
Not the person you're replying to, but I'll bite:
I've written low thousands of lines of Haskell. Similar to mikojan, I love Haskell in theory, but ended up not enjoying it as much in practice.
1. The multitude of string-y types. I end up converting between String, Text, lazy Text, ByteString, lazy ByteString, and I forget what else. Each library wants me to pass in a specific string type, and each other library returns a different one. LLMs are good at this, and for a while I had a ton of helper functions to convert between any two string types. Still, perhaps there's a better way?
2. The error messages. I come from Elm, so I'm spoiled. Yes, I understand a language with HKTs will never have as nice error messages. Yes, LLMs are pretty good at explaining GHC error messages.
3. The stdlib. Haskell gets a lot of credit for safety, but `head` blows up instead of returning a `Maybe` (a tiny sketch of the safer alternative follows below). I know there are other, safer preludes, but I don't know how to choose between them, or how using a different prelude would impact my projects.
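For what it's worth, base already ships a total alternative, so for this particular case you don't strictly need a different prelude (a minimal sketch):

    import Data.Maybe (listToMaybe)

    -- listToMaybe is effectively the "safe head" that many alternative preludes re-export.
    safeHead :: [a] -> Maybe a
    safeHead = listToMaybe

    main :: IO ()
    main = print (safeHead ([] :: [Int]))  -- Nothing, instead of a runtime error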
I feel like my next step is either towards Idris, with its polished standard library (and dependent types baked into the language!), or towards something simpler and more Elm-like (Gleam perhaps, or Roc). But if you can sell me on Haskell, I'm all ears!
The manual string conversions are annoying. But with https://hackage.haskell.org/package/string-conversions-0.4.0... you just add a `cs` and don't think about it any more.
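Roughly how that looks in use (a sketch; `cs` comes from `Data.String.Conversions` and picks the conversion from the types at the call site, going through UTF-8 where bytes are involved):

    {-# LANGUAGE OverloadedStrings #-}
    import           Data.String.Conversions (cs)
    import qualified Data.ByteString.Lazy as BL
    import qualified Data.Text as T

    greeting :: T.Text
    greeting = "hello"

    -- The same 'cs' call adapts to whatever string type the context wants.
    asString :: String
    asString = cs greeting

    asLazyBytes :: BL.ByteString
    asLazyBytes = cs greeting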
Oooh this is very nice!
I'm not going to sell you on anything. All of the things you've mentioned are true. Loosely, the multitude of string types and the state of the standard library come from the same place: the language is 30+ years old! There are many warts to be found.
However, if you decide to start learning, the path is hard, especially if you come from a non-computer-science background like me. I attempted to learn Haskell twice; I bounced off the first time, quite hard, and didn't try again for years.
What worked for me is a combination of two things:
* Having a goal in mind that has nothing to do with the choice of language. For me, it was building a personal website.
* The book Haskell Programming from First Principles [0]
and if you have more questions, reach out.
[0]: https://haskellbook.com/
> Having a goal in mind that has nothing to do with the choice of language
Yes, yes, that's exactly what my encounters with Haskell looked like. The last one is ~1k lines of code backend for a personal project. I feel that's about as much as I could manage at this point.
> The book Haskell Programming from First Principles
That book gets recommended all the time! I'm concerned it might be a little too basic for me. (I understand monads, monad transformers, and have some notion of final tagless and free monads. Yet I get perpetually confused by various relatively simple things.)
I guess what I'm missing is haskell-language-server to help me a little. Here I'm confused about the interplay between `haskell-stack` (which is in Debian repos and which I think I'd like to use), ghcup, cabal, and haskell-language-server.
Hence GHC extensions? Doesn't OverloadedStrings help? It's been about 20 years since I wrote Haskell in production.
Choose this life instead!
It's a lot of fun!
I fear that in another life I will be a JavaScript programmer
Why not python ?
I’m not the OP, but static types, Hindley-Milner type inference, algebraic data types, and pattern matching can be quite ergonomic. I have also come to appreciate functional programming and how it makes reasoning about and testing code easier.
Yeah, just last week I updated the numeric precision of a value across an entire user journey in a big enterprise application, spanning many functions and data types. Thanks to Haskell, I could do it in a day.
In any other language I've used (barring maybe Ada) that is a refactoring that would take at least days, if not weeks, to manually track down all the places it interacts with the system directly or indirectly, because mixing e.g. int with long long is not a type error.
In Haskell, I change the type and the compiler spits out a list of locations that needs to change. This is repeated for a few iterations until all transitive interactions are worked out. Done!
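Obviously not the parent's codebase, but a toy version of that workflow (names invented; `Centi` is a fixed-precision type from `Data.Fixed` in base): change the representation in one place, then follow the compiler errors until they run out.

    import Data.Fixed (Centi)

    -- Previously: newtype Amount = Amount Double
    -- After swapping the representation, every site that still assumes the
    -- old type surfaces as a compile error, which becomes the to-do list.
    newtype Amount = Amount Centi
      deriving (Show, Eq, Ord)

    applyFee :: Amount -> Amount
    applyFee (Amount a) = Amount (a - 0.50)

    main :: IO ()
    main = print (applyFee (Amount 10.00))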
No one wants to be a python programmer. It's a practical language to get things done. It isn't a language to make you feel proud of yourself nor about the current state of our industry.
I’ve enjoyed making it go faster by finding quirks, but at this point it’s mostly become “OK, what else can I offload to C?”
I should really learn Rust. Or Zig. I tried Nim (best of both worlds, Python-esque code that compiles to C!), but it wasn’t nearly as fast as my Python + C for my specific use case.
What exactly do you write, where your Python+C is faster than Nim which compiles to optimized C?
Generating millions of rows of synthetic data for testing RDBMS.
Tbf, I didn't spend much time trying to optimize the Nim code once I got a working PoC, so it's entirely possible that I could've made it faster.
You may no longer be interested in this kind of thing, but if you are there might be some ideas of note over at https://github.com/c-blake/nio/blob/main/db-bench.md (in particular the demo/gbyGen.nim program).
The concept of "the Pythonic Way" indicates to me that there are people who are proud of being Python programmers.
May God have mercy on their souls.
Could you be more specific?
Can’t stand Python. Have to use Python. Wish it would DIAF to be honest.
They really aren't anything alike at all.
I love Haskell because I can write provably correct code that still doesn’t work
If that is what you want to do, you can do that in any language. It's just that when you do it in e.g. Java, you have to spend a lot longer proving correctness before discovering that it doesn't work.
I love it because I can spend all my time noodling over types and never ship a product that would have been great shipped in a late night wine-fueled session of 1999 PHP.
Now you would vibe it and ship it during the ol' drink.
A Haskell quote I like is: “I’ve only proven this correct, I haven’t tried it.”
Isn’t that one of Dijkstra’s supposed comments?
It's Knuth! [1]
"Beware of bugs in the above code; I have only proved it correct, not tried it."
[1]: https://www-cs-faculty.stanford.edu/~knuth/faq.html
It's incredible that given how fuzzy and inaccurate human memory is, we treat any LLM that can't perfectly recite volumes of information as somehow beneath us.
Oh, so this could be the inspiration for Virtual Threads and Structured Concurrency in Java.
Rust has a bunch of these while being maintainable.
What about Haskell concurrency isn't maintainable?
The concurrency stuff in the stdlib plus the mainstays in the ecosystem are pretty stable and noncontroversial. There's stuff in Haskell that churns, but this isn't one of those areas.
Isn't concurrency in Rust a notorious pain point? Or am I confusing it with async, which is different? [I am stuck in an era before parallelism, so I don't really understand these things]
This is why I haven't fully embraced Rust yet. Whenever I ask about safely mutating shared state (see the sibling comment), I'm met with silence, or some comment like: Rust guarantees that you aren't mutating shared state.
I'm surprised you say this. The core type system guarantee is that there is no aliasing while mutating, and no mutation while aliased. You get single-writer/multiple-reader for free, without overhead.
If you want multiple writers, you can always wrap the data in an Arc with a lock inside (e.g. Arc<Mutex<T>>).
>> or some comment like: Rust guarantees that you aren't mutating shared state.
> The core type system guarantees that there is no [sharing] while mutating, and no mutation while [sharing]
Not sure what you are pointing out, so let me spell out what I said earlier.
1. You get single-writer multiple reader for free from the type system, without any runtime overhead.
2. For the same reason, the type system does not allow multiple writers. If you want multiple writers, then you are forced to use locks. Once you use locks, the runtime guarantees safety for this case.
Either way, you get 100% safety.
You're not adding any new information. I already understand you completely.
> 1. You get single-writer multiple reader for free from the type system, without any runtime overhead.
Take the example from the article which is accepted by the Haskell type system:
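The code block appears to have been dropped by the formatting; as a hedged stand-in for the shape of the article's example (not its exact code), something like:

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.STM
    import Control.Monad (forever)

    main :: IO ()
    main = do
      acctA <- newTVarIO (100 :: Int)
      acctB <- newTVarIO (100 :: Int)
      let move from to n = atomically $ do
            modifyTVar' from (subtract n)
            modifyTVar' to (+ n)
      _ <- forkIO $ forever (move acctA acctB 10)   -- one writer shuffling money one way
      _ <- forkIO $ forever (move acctB acctA 10)   -- a second writer going the other way
      _ <- forkIO $ forever $ do                    -- reader: gets a consistent snapshot
            total <- atomically ((+) <$> readTVar acctA <*> readTVar acctB)
            print total                             -- always 200
      threadDelay 1000000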
Rust would reject this because of multiple writers. Also, thinking about it more, I'm now very skeptical of Rust even providing 'single-writer multiple-reader'. Is it in fact single-reader-writer xor multiple-reader? In other words, how does it handle a goblin constantly moving money between accounts while a gnome is constantly trying to count the total amount?
Yes, Rust is 100% safe because it would reject this code, so it would never run. Not running code is also guaranteed to have no overhead!

> 2. For the same reason, the type system does not allow multiple writers. If you want multiple writers, then you are forced to use locks
* Locks are problematic, which is why I chose STM over locks in the first place.
* Locks are in all languages. Does your comment about 100% safety really apply to all languages?
Your first point is not comparing like with like. STM is wonderful, but as you no doubt know, it is meant for many TVars to be read and modified. This necessarily has overhead (transaction logs), performs poorly under contention, is subject to livelock, and has no fairness.
In your goblin example, I believe the gnomeAccountantThread would have to constantly retry, because a successful writer produces a new version of two accounts, which trips up the reader and forces it to start again. In general, Haskell's STM is built for short-lived transactions; for longer-running transactions, or ones that touch a lot of objects, you'd need something like the multi-versioned objects or epochs seen in databases to get a consistent snapshot. Neither Rust nor Haskell is suited to this example out of the box.
For your second question, you assume axiomatically that locks are problematic. They aren't in Rust (except for deadlocks; see below). Unlike any other language with in-place mutation, Rust will force you to use a mutex in order to share something for read-write (in a multiple-writer scenario); otherwise it won't compile. You have to call lock() to get access to the underlying object, and once you have that object, the type system makes sure only the owner can mutate it. In C/C++/Java/Go you don't get this guarantee at all; it is possible to mistakenly use the object without taking the mutex. So there is no guarantee of safety in the other languages. There is a 100% guarantee in Rust.
---
That said, the problematic part about locks (whether it is mutexes or MVars in Haskell) is deadlocks, which is solved by having a deterministic lock order. In your Haskell example, if acctA and acctB were MVars, you'd do
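Presumably something along these lines (a sketch, assuming `acctA`/`acctB :: MVar Int`): both transfer directions acquire the MVars in the same fixed order, so transfers going in opposite directions can't deadlock.

    import Control.Concurrent.MVar

    transferAtoB :: MVar Int -> MVar Int -> Int -> IO ()
    transferAtoB acctA acctB n = do
      a <- takeMVar acctA        -- always acquired first
      b <- takeMVar acctB        -- always acquired second
      putMVar acctB (b + n)
      putMVar acctA (a - n)

    transferBtoA :: MVar Int -> MVar Int -> Int -> IO ()
    transferBtoA acctA acctB n = do
      a <- takeMVar acctA        -- same order, even though B is the source
      b <- takeMVar acctB
      putMVar acctB (b - n)
      putMVar acctA (a + n)

(In real code you'd probably reach for modifyMVar plus masking for exception safety, but the fixed acquisition order is the point here.)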
Correctness in concurrency is actually one of Rust's strong suits. Any pain felt is because Rust is low-level and does not prescribe a single concurrency mechanism, leaving each coder to figure out the benefits and constraints of each library.
STM is Haskell's feature for safe shared&mutable state.
What is Rust's feature for safe shared&mutable state?
Relevant: https://github.com/Marthog/rust-stm which has usage instructions. It's memory safe as defined in Safe Rust, but unlike the Haskell implementation it's not "safe" in a correctness sense, because Rust does not afford the same control over mutability and purity. (At least, not yet; future additions to the language may improve this somewhat.)
That would be Arc<Mutex<T>>, which works but is no STM.
Additionally, the actor model is pretty easy to implement: https://ryhl.io/blog/actors-with-tokio/
Rust is nearly the answer if anyone understood the question.