Related: How to Build an Origami Computer (63 points, 2024, 15 comments) https://news.ycombinator.com/item?id=39191627
Related: Origami-Constructible Numbers[1] & Folding Primes[2]
[1] https://www.cs.mcgill.ca/~jking/papers/origami.pdf
[2] https://www.pythabacus.com/Origami%20Fractions/folding.htm
> we prove that flat origami, when viewed as a computational device, is Turing complete, or more specifically P-complete
...aren't those mutually exclusive?
I feel a mix of "those are obviously different complexity levels" and "is it like the C preprocessor Turing-completeness situation?"
My understanding of this is that P-completeness for a problem implies that any problem in P can be transformed into it with a polynomial-time reduction. Deterministic Turing machines (more precisely, the problem of determining the state of a deterministic Turing machine after polynomially many steps) are in P.
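(A rough sketch of why that's in P; this is my own illustration, not from the article, and all names in it are mine. Each step of a deterministic Turing machine is a constant-time table lookup plus one tape read/write, so running it for polynomially many steps takes polynomial time.)

    # Hypothetical sketch: stepping a deterministic Turing machine.
    # delta maps (state, symbol) -> (new_state, written_symbol, move).
    def run_dtm(delta, state, tape, head, steps):
        tape = dict(enumerate(tape))          # sparse tape, '_' is blank
        for _ in range(steps):                # steps = poly(n) by assumption
            symbol = tape.get(head, '_')
            if (state, symbol) not in delta:  # no transition: halt
                break
            state, write, move = delta[(state, symbol)]
            tape[head] = write
            head += move
        return state, tape, head

    # Example: one state that overwrites 1s with 0s while moving right.
    delta = {('s', '1'): ('s', '0', +1)}
    print(run_dtm(delta, 's', '111', 0, 10))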
Not with a polynomial-time reduction though. Quoting from [1]:
> Generically, reductions stronger than polynomial-time reductions are used, since all languages in P (except the empty language and the language of all strings) are P-complete under polynomial-time reductions.
[1] https://en.wikipedia.org/wiki/P-complete
Turing-completeness and P-completeness are completely different things. There is no sense in which P-completeness is a "more specific" version of Turing-completeness.
Honestly wild how you can get Turing completeness outta folding paper; never thought I'd read that today.
That's why I have always preferred Church's approach to computation to Turing machines.
The lambda calculus, by its simplicity as just a rewriting language, makes it "obvious" how effective computability emerges from very little (see the sketch below).
It's a big controversy in CS education, isn't it?
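(A tiny sketch of that "very little machinery" point, using Python lambdas as a stand-in for lambda terms; my own illustration, nothing from the thread.)

    # Church numerals: a numeral n is "apply f n times". Nothing here
    # but abstraction and application -- no built-in arithmetic needed.
    zero = lambda f: lambda x: x
    succ = lambda n: lambda f: lambda x: f(n(f)(x))
    add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
    mul  = lambda m: lambda n: lambda f: m(n(f))

    two, three = succ(succ(zero)), succ(succ(succ(zero)))

    # Decode a numeral by applying "+1" to 0.
    to_int = lambda n: n(lambda k: k + 1)(0)
    print(to_int(add(two)(three)), to_int(mul(two)(three)))  # 5 6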
Knuth's Art of Computer Programming was built around assembly language for a fantasy computer which is inspired more or less by the Turing machine (the program counter is an index into a program 'state', instructions transform a data 'state' and transition to a different program 'state'), whereas Structure and Interpretation of Computer Programs is more inspired by Church.
The pinnacle of undergraduate CS education, I think, is compilers, which is where those approaches are ultimately unified on a practical level (you make a machine that transforms one to the other), but the introductory course for the non-professional programmer or the person who aspires to write compilers someday is still pretty controversial.
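(To make the "machine that transforms one to the other" concrete, here's a toy sketch of my own, nowhere near a real course project: an expression tree on the Church-ish side becomes a numbered instruction list driven by a program counter on the Knuth-ish side.)

    # Hypothetical toy compiler: expression trees -> stack machine code.
    def compile_expr(expr, code):
        if isinstance(expr, int):
            code.append(('PUSH', expr))
        else:
            op, left, right = expr
            compile_expr(left, code)
            compile_expr(right, code)
            code.append((op, None))
        return code

    def run(code):
        stack, pc = [], 0                   # pc indexes the program 'state'
        while pc < len(code):
            op, arg = code[pc]
            if op == 'PUSH':
                stack.append(arg)
            else:                           # 'ADD' or 'MUL'
                b, a = stack.pop(), stack.pop()
                stack.append(a + b if op == 'ADD' else a * b)
            pc += 1                         # transition to the next 'state'
        return stack[-1]

    print(run(compile_expr(('MUL', ('ADD', 1, 2), 4), [])))  # 12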
> It's a big controversy in CS education, isn't it?
Is it?
I think most people who have heard of the topic are familiar with the Church-Turing thesis and know that both definitions of effective calculability are equivalent.
My preference is mostly a matter of taste, I think. I admire how little there is to the lambda calculus definition and how computability somehow emerges through construction and definition (which admittedly are not simple). It nicely shows that you need very little "machinery" to get a powerful computational system.
Turing machines by comparison seem somewhat contrived with their infinite tape, head and register, even if I realise that in a lot of ways they are closer to an actual computer.
The reduction in the article boils down to origami crease patterns simulating rule 110 simulating a cyclic tag system simulating a clockwise Turing machine simulating an arbitrary Turing machine (and specific Turing machines simulating the lambda calculus are known).
Do you think there is an "obvious" way to simulate the lambda calculus using origami crease patterns more directly? For example, a cyclic tag system or even rule 110 configuration simulating the lambda calculus without indirection through Turing machines.
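(For readers who haven't seen it, rule 110 itself is tiny; a sketch of my own, not the article's construction: each cell's next value is a table lookup on its left/self/right neighbours, with the table given by the bits of the number 110.)

    # Rule 110 on a cyclic row: neighbourhood (l, c, r) indexes a bit
    # of 110 = 0b01101110, which is the whole transition table.
    def step(cells):
        n = len(cells)
        return [(110 >> (cells[i - 1] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    row = [0] * 20 + [1]                # a single live cell
    for _ in range(10):
        print(''.join('#' if c else '.' for c in row))
        row = step(row)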
If I may chip in, I wouldn't call it obvious or straightforward, but multiset rewriting[1] can be implemented in terms of multiplication alone (like in Fractran), and multiplication can be implemented in origami[2], so there might be something there (see the Fractran sketch after the links).
[1] https://wiki.xxiivv.com/site/pocket_rewriting
[2] https://wiki.xxiivv.com/site/paper_product.html
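(To make the Fractran connection concrete, a minimal sketch of my own: a Fractran step is just "multiply the current integer by the first fraction in the program that gives an integer", so multiplication really is the only primitive.)

    from fractions import Fraction

    # Minimal Fractran interpreter: state is one integer n; its prime
    # factorisation plays the role of the multiset being rewritten.
    def fractran(program, n, max_steps=1000):
        for _ in range(max_steps):
            for f in program:
                if (n * f).denominator == 1:   # first applicable fraction
                    n = int(n * f)
                    break
            else:
                return n                       # nothing applies: halt
        return n

    # Conway's addition program [3/2]: 2**a * 3**b  ->  3**(a + b).
    print(fractran([Fraction(3, 2)], 2**3 * 3**4))  # 2187 == 3**7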