> “We’re heading into a new age of AI-assisted coding, and right now, it’s difficult to predict how that will play out. But if I had to place a bet, I would say that in the long run, AIs are more likely to generate high-quality code in a language like Gleam. Gleam makes it quick and easy for AIs to check their code, get instant feedback, and iterate. That should be an advantage compared to languages that are slow to build, have cryptic error messages, and can’t catch mistakes at build-time.”
Interesting point and one I haven't seen before. Almost like arguing that AI will work best with things it can learn quickly, rather than things that have lots of examples.
I feel like, now that LLMs are getting better, the quality of the examples matters more than the quantity.
Garbage in, garbage out. If you confuse it with a lot of junior-level code and a language that constantly changes its best practices, the output might not be great.
On the other hand, if you have a language that was carefully designed from the start, avoids making breaking changes, has great first-party documentation, and has a unified code style everyone adheres to, the LLM will have an easier time.
The latter also happens to be better for humans. Honestly, the best bet is to make a good language for humans. Generative AI is still evolving rapidly, so there's no point in designing the language around its current weaknesses.
If the main win of starting over with a new language is that you don't have a giant glut of legacy example code and documentation targeting no-longer-best-practice, maybe there's a solution where you take an established modern language like Rust or Go and feed the LLM a more curated set of material to learn from.
Like instead of "the entire internet", here's a few hundred best-practice projects, some known up-to-date documentation/tutorials, and a whitelist of third-party modules that you're allowed to consider using.
It feels like it should be true that a referentially transparent, type-safe language would be the 'right' language for AI coding: since each code block is stateless, you should be able to decompose problems in parallel and test them all the way down.
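A tiny sketch of that idea, with Python standing in for a pure language and made-up functions for illustration: because the pieces share no state, each sub-problem can be checked in isolation, and the checks can run in parallel without the order mattering.

```python
from concurrent.futures import ThreadPoolExecutor

# Two pure sub-problems of a larger task: no shared state, so each
# can be tested on its own without setting up the other.
def parse_digits(s: str) -> list[int]:
    return [int(ch) for ch in s if ch.isdigit()]

def checksum(digits: list[int]) -> int:
    return sum(digits) % 10

# Independent property checks, run in parallel -- interleaving cannot
# matter because nothing is mutated.
checks = [
    lambda: parse_digits("a1b2c3") == [1, 2, 3],
    lambda: checksum([1, 2, 3]) == 6,
    lambda: checksum(parse_digits("a1b2c3")) == 6,  # composition holds too
]
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda check: check(), checks))
print(all(results))  # True
```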
If you have good enough tooling (LSP + MCP), I'd expect that "the LLM can learn quickly" and "the LLM has lots of examples" would converge towards being the same thing. At the very least it could generate many potential examples, put them all through the tooling to deterministically get many "true" examples, and then learn from those.
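That generate-then-filter loop can be sketched in a few lines. This is a hedged toy version: Python's own `compile()` stands in for a real language's build/typecheck tooling, and the hard-coded candidate strings stand in for LLM output.

```python
# "Generate many candidates, keep only the ones the tooling accepts."
# compile() is the stand-in for a deterministic build/typecheck step.

def passes_tooling(source: str) -> bool:
    """Deterministic check: does the snippet at least compile?"""
    try:
        compile(source, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

candidates = [
    "def double(x):\n    return x * 2",   # valid
    "def double(x) return x * 2",         # invalid: missing colon
    "total = sum(range(10))",             # valid
]

# Only snippets that survive the tooling become "true" examples
# worth learning from.
verified = [c for c in candidates if passes_tooling(c)]
print(len(verified))  # 2 of the 3 candidates pass
```

A real pipeline would shell out to the actual compiler and also run the type checker and tests, but the shape of the loop is the same.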
> […] rather than things that have lots of examples.
Well, one glaring issue with the assumption that the quality of LLM output mostly depends on a large volume of online examples is Sturgeon's law.
I can’t say where AI will end up but I firmly believe it will pick winners and losers in the next generations of programming languages. Not always for the better.
Any language that is difficult for an AI to understand will only get popular if it needs far less boilerplate code for AIs to write in the first place. We may finally start designing better APIs. Or we may lean into it and make much worse ones that necessitate AI. Look especially to an AI company to create a free razor and sell you the blades.
Beautiful. I’ve taken a few cracks at learning Gleam, but I found I quickly get stuck in abstraction hell—building types on types in types without coding any behavior. I would probably have more success learning Erlang first, just to get a handle on those functional patterns the BEAM was built for. I should take another crack at it.
Just FYI: unlike in many pure FP languages, building types on types is generally not a pattern you use in Erlang (or Elixir), and it's largely considered an anti-pattern in both communities.
You might not get the "handle" you're looking for?
For what it’s worth, I don’t think there’s much about Gleam’s design that is specific to “the functional patterns the BEAM was built for.” If you’re getting stuck in abstraction hell, consider asking the community for advice on what would be more idiomatic.
Amazing to hear success stories of Gleam in production! Running on the BEAM really feels like a superpower.
For Gleam/Erlang, is there an easy way to package up an executable you can distribute without also shipping Erlang?
Yes, I've created single-file Gleam executables by compiling to JavaScript and then using Node's experimental SEA (single executable application) feature. As a bonus, I've typically found the JavaScript target to run a good deal faster for number-crunching tasks.
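For the curious, the rough shape of that pipeline looks like the commands below. This is a sketch under assumptions, not the commenter's exact setup: the project name `app`, the entry file names, and the output paths are all illustrative, and it assumes Node 20+ with `esbuild` and `postject` available via `npx`.

```shell
# 1. Compile the Gleam project to JavaScript (ES modules).
gleam build --target javascript

# 2. Bundle into a single CommonJS file -- Node's SEA loader needs CJS.
#    (Assumes a project named `app` exporting a public `main` function.)
echo 'import { main } from "./build/dev/javascript/app/app.mjs"; main();' > entry.mjs
npx esbuild entry.mjs --bundle --platform=node --format=cjs --outfile=bundle.js

# 3. Prepare the SEA blob from the bundle.
echo '{ "main": "bundle.js", "output": "sea-prep.blob" }' > sea-config.json
node --experimental-sea-config sea-config.json

# 4. Copy the node binary and inject the blob into it.
cp "$(command -v node)" app
npx postject app NODE_SEA_BLOB sea-prep.blob \
    --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2
```

Step 4 is why the result is hefty: the output is the whole Node binary with your bundle stuffed inside.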
How big is a hello world executable in that case?
Hefty. The process is effectively just injecting all of the JS into the Node interpreter executable, so it's the size of the interpreter plus whatever you stuff inside. It's close to 50MB.
Oof, well that’s not ideal.
I can't speak to Gleam, but for Elixir I just used Burrito to create a single executable: https://github.com/burrito-elixir/burrito. I think it works for plain Erlang too.
I haven't used it, but from the docs, I don't see why this wouldn't work for any language that compiles to BEAM files. You might need to adjust the build setup a bit.
Personally, I think I'd prefer something that worked without unpacking, but I don't actually need something like this, so my preferences aren't super important :D
No, the VM needs to be installed on the machine, similar to C#, Java, Python, etc.
There have been some projects for creating self-extracting executable archives for the VM, and some projects for compiling BEAM programs to native code, but nothing has become well established yet.
You can compile to JavaScript as well.