> - with transformations between them
>
> - including a ‘do nothing’ transformation
>
> - that can be composed associatively (a sequence of statements `{a; b;}; c` transforms the state in the same way as a sequence of statements `a; {b; c;}`)
And this right here is that monoid in the famous "A monad is just a monoid in the category of endofunctors" meme.
This is fun. But the bit at the beginning about philosophy is not correct. Parmenides did not believe in what we would call essences, but really did believe that nothing ever changes (along with his fellow Eleatic philosopher Zeno, of paradox fame). The idea that change is an illusion is pretty silly, and so Plato and especially Aristotle worked out what's wrong with that and proposed the idea of _forms_ in part to account for the nature of change. Aristotle extended Plato's idea and grounded it in material reality, which we observe via the senses, and that's where the concept of essence really comes from - "essence" comes from the Latin "essentia", which was coined to deal with the tricky Greek οὐσία (ousia - "being") that Aristotle uses in his discussions of change.
Anyone who likes this might also like Stefan Miller’s paper, “a simple category theoretical understanding of category theory diagrams“, appearing in SIGBOVIK 2014. See https://sigbovik.org/2014/proceedings.pdf (starts on PDF page 65, or page 57 if you go by margin page numbers)
What's the thing with category theory? I see this topic discussed quite frequently here but I don't get why people are so into it.
Category theory is what you get when you take mappings instead of sets as the primitive objects of your universe. At first this might seem a perverse thing to do as mappings seem more complex than sets, but that is just because traditionally mappings have usually been defined in terms of sets.
In set theory you can specify that two sets be equal and you can also specify that one set be an element of another.
In category theory you can specify that two mappings be equal and you can also specify that two mappings compose end to end to produce a third mapping.
Category theory can be used to express some requirements in a very concise way.
> Category theory is what you get when you take mappings instead of sets as the primitive objects of your universe.
I'm not sure about that, because you still need some concept of set (or collection or class) to define a category, because you need a set of objects and mappings between them (technically that's a "small" category, but to define any larger category would require at least as much set-theoretical complication).
More exactly, whereas in set theory it's the membership relation between sets and their elements that is basic, in category theory it's the mapping between objects.
Nevertheless, the basic concepts of set theory can also be defined within category theory, so in that sense they're inter-translatable. In each case though, you need some ambient idea of a collection (or class or set) of the basic objects. Tom Leinster has a brilliantly clear and succinct (8 pages) exposition of how this is done here https://arxiv.org/abs/1212.6543
The thing is, even defining first-order logic requires a (potentially infinite) collection of variables and constant terms; and set theory is embedded in first-order logic, so both set theory and category theory are on the same footing in seemingly requiring a prior conception of some kind of potentially infinite "collection". To be honest I'm a bit puzzled as to how that works logically
So category theory is really the theory of composition of mappings. I conjecture that all programming can be seen as just the composition of mappings. If this is correct then category theory is a theory of programming.
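To make that concrete, here is a minimal Haskell sketch of a "program as a composition of mappings" (the pipeline and its names are made up purely for illustration):

```haskell
import Control.Arrow ((>>>))  -- forward composition: (f >>> g) x = g (f x)

-- A hypothetical "program" assembled only by composing mappings.
-- Each stage is an ordinary function; the only glue is composition.
wordLengthReport :: String -> String
wordLengthReport =
      words                  -- String   -> [String]
  >>> map length             -- [String] -> [Int]
  >>> sum                    -- [Int]    -> Int
  >>> show                   -- Int      -> String
  >>> ("total letters: " ++) -- String   -> String

main :: IO ()
main = putStrLn (wordLengthReport "category theory is composition")
-- prints: total letters: 27
```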
You don't need category theory to connect dots with arrows, graph theory is enough for this.
Category theory is actually a ‘simplified’ graph theory, i.e. you can see categories as a restricted class of graphs. E.g., ‘Category Theory for Computing Science’ introduces categories this way (a category is a directed graph with associative composition and identity; the free category on a graph is the graph with all identities and compositions filled in). But the restrictions (associative composition and identity) are harmless and natural for programming applications, where there's always a notion of ‘do nothing’ or ‘do one thing after another’, and they unlock a lot of higher structure.
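A rough Haskell sketch of that free-category construction (the types and example data are mine, not taken from the book): morphisms of the free category on a graph are composable paths, the identity on a node is the empty path, and composition is path concatenation, which is automatically associative.

```haskell
-- A tiny directed graph: nodes and labelled edges (hypothetical example data).
data Node = A | B | C deriving (Eq, Show)
data Edge = Edge { src :: Node, tgt :: Node, label :: String } deriving Show

-- A morphism of the free category on the graph is a path of edges.
newtype Path = Path [Edge] deriving Show

-- The identity on a node is the empty path.
identityPath :: Path
identityPath = Path []

-- Forward composition of paths is concatenation; a fuller version would also
-- check that the end of the first path matches the start of the second.
composePath :: Path -> Path -> Path
composePath (Path p) (Path q) = Path (p ++ q)

-- Associativity and identity hold for free, because (++) is associative
-- and [] is its unit.
```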
If you allowed infinite graphs maybe. How would you define a functor or natural transformation in graph theory? Seems like you would need to construct a conceptual system that is just equivalent to category theory
It can very much be. Here’s one example of this phenomenon (there are many others but this is the most famous): https://wiki.haskell.org/Curry-Howard-Lambek_correspondence
> you can also specify that two mappings compose
Two mappings with corresponding domain/codomain have to compose by definition of a category. It's not something you can specify.
Yes. When you are specifying a system you are building the category that you want it to live in.
That is probably what they mean by specifying that they compose.
If all you know is that you have two mappings you don't know they compose, until you get the additional information about their sources and targets. In a way that's what the sources and targets are: just labels of what you can compose them with.
> Category theory can be used to express some requirements in a very concise way.
Could you give an example in programming, what can be easier expressed in CT than with sets and functions?
> Category theory can be used to express some requirements in a very concise way.
Can you give an example?
Take a mapping a and precompose it with the identity mapping i. By the definition of the identity mapping the resulting composition is equal to a.

i;a = a

(Here ';' represents forward composition. Mathematicians tend to use backward composition represented by '∘' but I find backward composition awkward and error-prone and so avoid using it.)

Now, if there is another mapping j that is different from i, such that

j;a = a

then the mapping a loses information. By this I mean that if you are given the value of a(x) you cannot always determine what x was. To understand this properly you may need to work through a simple example by drawing circles, dots and arrows on a piece of paper.

If there is no such j then mapping a is said to be a monomorphism or injection (the set theoretic term) and it does not lose information.

This specification of the property 'loses information' only involves mapping equality and mapping composition. It does not involve sets or elements of sets.

An example of a mapping that loses information would be the capitalization of strings of letters. An example of a mapping that you would not want to lose information would be zip file compression.

If you alter the above specification to use post-composition (a;i = a and a;j = a) instead of pre-composition you get epimorphisms or surjections which capture the idea that a mapping constrains all the values in its codomain. I like to think of this as the mapping does not return uninitialized values or 'junk' as it is sometimes called.
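A concrete Haskell version of the 'loses information' example (the `swapCase` helper is made up, purely to exhibit a non-identity j with j;a = a):

```haskell
import Data.Char (toUpper, toLower, isLower)

-- a: capitalization, the mapping that loses information.
capitalize :: String -> String
capitalize = map toUpper

-- j: a mapping different from the identity (it flips the case of letters),
-- yet pre-composing it with capitalize changes nothing: j;a = a.
swapCase :: String -> String
swapCase = map (\c -> if isLower c then toUpper c else toLower c)

main :: IO ()
main = do
  print (capitalize "Hello")             -- "HELLO"
  print (capitalize (swapCase "Hello"))  -- "HELLO" again, although swapCase /= id
  -- Given only the output "HELLO" you cannot tell which input produced it,
  -- which is exactly the sense in which capitalize loses information.
```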
Bartosz Milewski works through this in more detail (including from the set-theoretic side) in the last 10 minutes of https://www.youtube.com/watch?v=O2lZkr-aAqk&list=PLbgaMIhjbm... and the first 10 minutes of https://www.youtube.com/watch?v=NcT7CGPICzo&list=PLbgaMIhjbm....
> Category theory is what you get when you take mappings instead of sets as the primitive objects of your universe
Why have I never seen it explained like this before. Wow, thank you!
At the 2018(?) ICFP, I sat between John Wiegley and Conal Elliott. They talked about expressing and solving a programming problem in category theory, and then mapping the solution into whatever programming language their employer was using. From what they said, they were having great success producing efficient and effective solutions following this process.
I decided to look for other cases where this process worked.
I found several, but one off the top of my head is high dimensional analysis, where t-SNE was doing okay, and a group decided to start with CT and try to build something better, and produced UMAP, which is much better.
In short, this does work, and you can find much better solutions this way.
(random link https://stats.stackexchange.com/questions/402668/intuitive-e... )
It's of course the theory behind monads, which since Eugenio Moggi's work have been used to model computational effects in pure functional languages: effects such as state (the State monad), optional return values used for error handling (the Maybe monad), input/output (the IO monad), read-only environments and accumulated output (the Reader and Writer monads), and others. Beyond effects, Wadler used monads for parsers (monadic parsing).
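As a small, standard illustration of the effect modelling, here is the Maybe monad threading possible failure through a computation (the config table is made up):

```haskell
-- A made-up configuration table.
config :: [(String, String)]
config = [("host", "example.org"), ("port", "8080")]

-- Each `lookup` may fail (it returns Maybe); the Maybe monad threads the
-- possibility of failure through the whole computation, so the error
-- handling stays implicit.
hostAndPort :: Maybe String
hostAndPort = do
  h <- lookup "host" config
  p <- lookup "port" config
  pure (h ++ ":" ++ p)

main :: IO ()
main = print hostAndPort
-- Just "example.org:8080"; Nothing if either key were missing.
```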
The Curry-Howard "isomorphism" (slogan: propositions are types, proofs are programs/functions) maps code to logic in a categorical way first described in the book by Lambek and Scott, with uses in formal software verification.
Categories provide abstraction. You first distill how Haskell (or your other pet functional language) works into Hask, the category of Haskell types, and then you can apply your abstract distillate to other categories and obtain task-oriented, tailored computing concepts that enrich bare language capabilities, with applications including 1) probabilistic programs and 2) automatic differentiation. Conal Elliott has very concrete work along these lines. When he speaks of CCCs (following Lambek) he means cartesian closed categories, whose crucial property is having a type constructor for function spaces and higher-order functions. See his "Compiling to Categories" for a very concrete, hands-on feel. Another application he shows is hardware synthesis (compiling your functional algorithm to a netlist of logic gates, automating the design of custom hardware accelerators).
In short, why categories? Computational effects, formal verification, and the equivalence of the simply-typed lambda calculus with cartesian closed categories, the lambda calculus being the backbone of functional programming language semantics.
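A toy version of the "write the program once, interpret it in several categories" idea, loosely in the spirit of Compiling to Categories but with my own simplified names rather than Elliott's actual API: the same categorical program is run once as an ordinary function and once as a pipeline description (a stand-in for a netlist).

```haskell
import Prelude hiding (id, (.))
import Control.Category (Category (..), (>>>))

-- A "program" written only against the Category interface: it has not yet
-- committed to any particular category of mappings.
program :: Category k => k Int Int -> k Int Int -> k Int Int
program double addOne = double >>> addOne >>> double

-- Interpretation 1: the ordinary category of Haskell functions.
runAsFunction :: Int -> Int
runAsFunction = program (* 2) (+ 1)

-- Interpretation 2: a category whose morphisms are just pipeline descriptions.
newtype Doc a b = Doc [String]

instance Category Doc where
  id = Doc []
  Doc g . Doc f = Doc (f ++ g)   -- (.) is backward composition

runAsDoc :: [String]
runAsDoc = case program (Doc ["double"]) (Doc ["addOne"]) of Doc steps -> steps

main :: IO ()
main = do
  print (runAsFunction 5)   -- 22
  print runAsDoc            -- ["double","addOne","double"]
```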
I'd phrase this a tiny bit differently: monads give a model of effects in _impure_ languages and are important for that reason. The fact that Haskell chooses to emphasize monads in the language itself is cool, but their utility is not restricted to pure functional languages; quite the opposite! In a pure functional language you don't need to think about effects at all, and the only model you need is mathematical functions, which are much simpler.
one of the things i took away about category theory is that it allows you to avoid repeating certain arguments which boil down to so-called “abstract nonsense” ie they have nothing to do with the specific objects you’re dealing with but rather are a consequence of very generic mapping relationships between them. maybe better versed people can give specifics.
as a very broad example there are multiple ways to define a “homology” (ex simplicial, singular, etc) functor associating certain groups to topological spaces as invariants. but the arguments needed to prove properties of the relationships between those groups can be derived from very general properties of the definitions and don’t need to be re-argued from the very fine definitions of each type of homology.
i think.
Category theory is popular in computer science because, at a fundamental level, they're very compatible ways of seeing the world.
In computing, we think about:
- a set of states
- with transformations between them
- including a ‘do nothing’ transformation
- that can be composed associatively (a sequence of statements `{a; b;}; c` transforms the state in the same way as a sequence of statements `a; {b; c;}`)
- but only in certain ways: some states are unreachable from other states
This is exactly the sort of thing category theory studies, so there's a lot of cross-pollination between the disciplines. Computation defines interesting properties of certain categories like ‘computation’ or ‘polynomial efficiency’ that can help category theorists track down interesting beasts to study in category theory and other branches of mathematics that have their own relationships to category theory. Meanwhile, category theory can give suggestions to computer science both about what sort of things the states and transformations can mean and also what the consequences are of defining them in different ways, i.e. how we can capture more expressive power or efficiency without straying too far from the comfort of our ‘do this then do that’ mental model.
This latter is really helpful in computer science, especially in programming language or API design, because in general it's a really hard problem to say, given a particular set of basic building blocks, what properties they'll have when combined in all the possible ways. Results in category theory usually look like that: given a set of building blocks of a particular form, you will always be able to compose them in such a way that the result has a desired property; or, no matter how they're combined, the result will never have a particular undesired property.
As an aside, it's common in a certain computer science subculture (mostly the one that likes category theory) to talk about computing in the language of typed functional programming, but if you don't already have a deep understanding of how functional programming represents computation this can hide the forest behind the trees: when a functional programmer says ‘type’ or ‘typing context’ you can think about sets of potential (sub)states of the computer.
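A tiny Haskell illustration of those bullet points (a toy example of my own): state transformations compose associatively and have a 'do nothing' element, which is exactly a one-object category, i.e. a monoid.

```haskell
-- A toy machine state (hypothetical example).
data S = S { counter :: Int, events :: [String] } deriving Show

-- Two transformations between states; `id` would be the 'do nothing' one.
incr, stamp :: S -> S
incr  s = s { counter = counter s + 1 }
stamp s = s { events = "tick" : events s }

-- Forward composition, to match the `{a; b;}; c` reading above.
andThen :: (s -> s) -> (s -> s) -> (s -> s)
andThen f g = g . f

main :: IO ()
main = do
  let s0 = S 0 []
  print (((incr `andThen` stamp) `andThen` incr) s0)  -- S {counter = 2, events = ["tick"]}
  print ((incr `andThen` (stamp `andThen` incr)) s0)  -- the same: composition is associative
```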
Still, what is, in your opinion, the advantage of thinking in category theory rather than set theory? (For programming, not algebraic geometry.)
I mean, all the examples I've heard can be directly treated with groups, monoids, and regular functions.
I know some abstract concepts can be defined in a nicer way with CT than with set theory, e.g. the (abstract) tensor product. Yet for other concepts, including quantum mechanics, I have found that the "abstract overhead" of CT adds little value.
It's just a good set of models to use to think about all sorts of different mathematical systems, kind of like a unified vocabulary. Beyond undergraduate level, category theory these days plays a huge role within many vast fields - e.g., algebraic geometry, algebraic topology, or representation theory.
I think your reply overstates the importance of category theory in mathematics and doesn't give any hint on what it is about.
IMO a better reply would be: category theory appeared to unify the concepts around using discrete objects to prove the properties of continuous objects in topology, like fundamental groups, homology groups and homotopy groups. It is only practically useful for very advanced proofs like the 2nd Weil conjecture. Any usage of it in programming is only an analogy and is not mathematically rigorous (see https://math.andrej.com/2016/08/06/hask-is-not-a-category/)
Wasn't that corrected already? I mean the categorical definition of Hask?
For me, it's a very useful mental model for thinking about architecture & logic.
Examples? I haven't really seen many applications of CT, even though I looked for them since I find the idea of CT interesting
You have a function that does A() and another function that does B().
Upon careful inspection or after just writing/using them 10,000s of times[1] you realize they are both special cases of one general function f()[2]. Congrats, you're likely doing CT now, though barely scratching the surface.
Let's say you find a way to build a function factory that generates explicit instances of f() -> A() and f() -> B() at runtime for your different use cases as they are needed. You do this 100 times, 1,000 times[1] with many different functions, in many different contexts. You eventually realize that if all your functions and their signatures had the same structure[3] it would be quite easy to mix some (or all?) of them with each other, allowing you to handle a perhaps infinite amount of complexity in a way that's very clean to conceptualize and visualize. Isn't this just FP? Yes, they're very intimately related.
By this point you're 99.9999% doing CT now, but remember to shower regularly, touch grass etc.
CT formalized these structures with mathematical language, and it turns out that this line of thinking is very useful in many fields like ours (CS), Math, Physics, etc.
1. Which is what happened to me.
2. Which sometimes is a way more elegant and simple solution.
3. This term is fundamental and has way more meaning than what I could write here and what one would think on a first approach to it.
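A concrete (if tiny) Haskell instance of the A()/B()/f() progression described in the comment above, with made-up names: two seemingly separate functions turn out to be the general fold specialized with different parameters, and the "function factory" just picks those parameters.

```haskell
-- Two "separate" functions...
sumInts, productInts :: [Int] -> Int
sumInts     = foldr (+) 0
productInts = foldr (*) 1

-- ...that are both special cases of one general function, the fold.
-- A "function factory" in the comment's sense just supplies the parameters:
foldWith :: (Int -> Int -> Int) -> Int -> ([Int] -> Int)
foldWith op unit = foldr op unit

main :: IO ()
main = do
  print (sumInts [1, 2, 3, 4])             -- 10
  print (productInts [1, 2, 3, 4])         -- 24
  print (foldWith max minBound [1, 7, 3])  -- 7, a new special case for free
```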
Why not simply use UML?
UML doesn't give ideas for how to actually structure things. Category theory is primarily a theory of nice ways things can be put together or form relationships while maintaining invariants.
Very different thing.
CT is more of a way to abstract all mathematics.
Lets software designers use fancy words and ask for a raise.
Had to reduce the page to 67% to get out of "Fisher Price" font size, but otherwise quite interesting.
this has surfaced on the HN front page at least 5 times
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
It has been updated as per the author's Mastodon https://mathstodon.xyz/@abuseofnotation/115298450513159834
> In the course of this book, we learned that programming/computer science is the study of the category of types in programming languages.
This is a golden quote.
It's also wrong, since computer science is traditionally mostly about computation, which has nothing to do with CT
Insofar as ‘computation’ is about mapping one state or value to another state or value, it has a lot to do with CT!
The question of whether CT is _useful_ for studying computation is different, and there are certainly other lenses you can see computation through that some people would argue are better. But it's hard to deny that they're _related_.
I mean, technically almost all of math can be related to other math one way or another. To say that CT has a lot to do with computation is definitely a stretch. CT is not a recognized Computer Science subject. It's mostly used in the functional programming community to name certain concepts and theorems, but then applied to a specific type system (so it's not actually doing CT, since you're restricting yourself to a single category, whereas CT is really about connecting different categories by generalizing over them).
When I read this title I thought it was going to be rings and groups in bikinis. I’m so dumb.
It makes it so much more complicated than what is needed to understand natural transformations. A natural transformation is just a mapping between two functors. You can discover the laws yourself just from this.
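For example, in Haskell a natural transformation between the list functor and the Maybe functor is just a polymorphic function `[a] -> Maybe a`; here is the standard example, with the naturality law checked on one input:

```haskell
-- safeHead is a natural transformation from the list functor to Maybe:
-- a mapping between two functors, with one component for every type a.
safeHead :: [a] -> Maybe a
safeHead []      = Nothing
safeHead (x : _) = Just x

main :: IO ()
main = do
  -- Naturality: fmap f . safeHead == safeHead . fmap f, here with f = (* 10).
  print (fmap (* 10) (safeHead [1, 2, 3]))  -- Just 10
  print (safeHead (fmap (* 10) [1, 2, 3]))  -- Just 10
```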
I hate this particular mix of prose and formalism. Too complicated to be pop-sci, too informal to be, well, formal. I got to this part:
> We know that two orders are isomorphic if there are two functors, such that going from one to the other and back again leads you to the same object.
And I have no clue what a functor is, nor what an order is. "Functor" wasn't defined, and "order" is defined as a "thin category", which in turn remains undefined.
Seems to me like in order to understand this text you already need to understand category theory. If that's the case, then why would you be reading it?
There are too many pictures in this for my taste. I am currently reading this one, and I like it better so far: https://doi.org/10.1142/13670
It's author's intention, since the title explicitly says "Illustrated" =)
Fair enough.