It's not in the abstract, but towards the end of the paper:
> Our analysis suggests that the language of A-normal forms is a good intermediate representation for compilers.
I messed around with both CPS and ANF transforms for a while. Here are more approachable treatments of them:
I wasn't sure which one I'd end up using. I suspected CPS would make it easier to do tail-call optimisation, which is very important to me. But once I implemented CPS, I found I could not figure out how to assign a type to the continuation term that CPS introduces. I had no such typing problem with ANF, so that's what I continued with. I haven't tried the tail-call optimisation in ANF yet.

Also worth reading: Compiling with Continuations, Continued (2007) [1]
[1] https://www.microsoft.com/en-us/research/wp-content/uploads/...
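To make that typing problem concrete, here is a minimal sketch (my own toy code, not from either paper) of both transforms over a tiny lambda calculus. All the names here (expr, anf, cps, fresh) are made up for illustration. Note how ANF stays inside the original term language, while CPS introduces object-level continuation terms that every function type must now account for:

    (* A toy lambda calculus with let, plus ANF and CPS transforms. *)
    type expr =
      | Var of string
      | Lam of string * expr
      | App of expr * expr
      | Let of string * expr * expr

    let fresh =
      let n = ref 0 in
      fun () -> incr n; Printf.sprintf "_t%d" !n

    (* ANF: name every intermediate result with a let, so applications
       only ever see variables or lambdas. The output is still a plain
       expr, so the source type system carries over unchanged. *)
    let rec anf (e : expr) (k : expr -> expr) : expr =
      match e with
      | Var _ -> k e
      | Lam (x, b) -> k (Lam (x, anf b (fun v -> v)))
      | Let (x, e1, e2) -> anf e1 (fun v -> Let (x, v, anf e2 k))
      | App (f, a) ->
          anf f (fun fv ->
            anf a (fun av ->
              let t = fresh () in
              Let (t, App (fv, av), k (Var t))))

    (* CPS: every function gets an extra continuation parameter, and the
       continuation k is itself a term in the output language. Giving k
       a type forces an "answer type" into every function type, which is
       the wrinkle described above. *)
    let rec cps (e : expr) (k : expr) : expr =
      match e with
      | Var _ -> App (k, e)
      | Lam (x, b) ->
          let c = fresh () in
          App (k, Lam (x, Lam (c, cps b (Var c))))
      | Let (x, e1, e2) -> cps e1 (Lam (x, cps e2 k))
      | App (f, a) ->
          let fv = fresh () and av = fresh () in
          cps f (Lam (fv,
            cps a (Lam (av,
              App (App (Var fv, Var av), k)))))

In the ANF version the meta-level continuation disappears in the output; in the CPS version the continuation survives as a term, and that's the thing that needs a type.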
I work on an open source System F-level lambda calculus named System R (https://github.com/olsonjeffery/system_r).
I’m interested in this because I’m writing this LC dialect as a deliberate, extensible IR with a targeted “bottom-dialect” (a System F, if you will) that can be translated to any number of lower representations.
My goal is to use it as an extensible base that can be built up into arbitrarily complex dialects of LC (Calculus of Constructions, Algebraic Effects, etc.). Those “higher dialects” can themselves be targets for some “front end” language that’s palatable to end-programmers.
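As a hypothetical sketch of that layering (not System R's actual API): the bottom dialect is one plain datatype, a higher dialect embeds it and adds a construct, and lowering is a total function that compiles the extension away.

    (* Bottom dialect: plain System F terms. *)
    type sysf =
      | Var of string
      | Lam of string * sysf
      | App of sysf * sysf
      | TyLam of string * sysf   (* type abstraction *)
      | TyApp of sysf * string   (* type application *)

    (* A higher dialect embeds the core and adds, say, let-bindings. *)
    type higher =
      | Core of sysf
      | HLam of string * higher
      | HApp of higher * higher
      | HLet of string * higher * higher

    (* Lowering expresses every extension construct in the core, so any
       backend that understands the bottom dialect can consume it. *)
    let rec lower (e : higher) : sysf =
      match e with
      | Core t -> t
      | HLam (x, b) -> Lam (x, lower b)
      | HApp (f, a) -> App (lower f, lower a)
      | HLet (x, rhs, body) -> App (Lam (x, lower body), lower rhs)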
There are a lot of thoughtful insights in this paper, notably the idea of “combining steps” to optimize away or remove IRs. The same practice shows up in e.g. Koka (which compiles straight to C or wasm/js; heck, maybe they cite this paper?)
I reflect on this personally because I think an extensible IR is just a stop-gap on the way to an optimized pipeline: it lets you “grow” the language semantics you want, but ultimately you’ll have to rewrite for performance. This happened with rustc as well.
Thanks for sharing!
I've stared at that paper before, and I still don't understand it even though I think I've done pretty much exactly what it describes: https://github.com/bablr-lang/language-cstml/blob/trunk/lib/...
Why is the text rendering in this PDF so bad? The curves in the letters don't look smooth (apparent after zooming in). It's not a scanned PDF, but it looks like it was converted from PostScript.
So maybe the question is: why does text in .ps files look so bad? Does it look bad after being printed, too?