In the early eighties, for example, he brought to Oxford the creators of two major formal specification languages: Cliff Jones with VDM, Jean-Raymond Abrial with Z. At Oxford, Z actually underwent a systematic rework, reminiscent of the Goethe quip reproduced above, with Frenchmen and mathematicians replaced by English mathematicians (or computer scientists). The new version enjoyed immense success in Britain, won a Queen’s Award and was used not only academically but in many mission-critical applications in industry, leading to a number of startups and work by such researchers (all having gone through Oxford at some point) as Jim Woodcock, Carroll Morgan, Jim Davies, J. Michael Spivey, Ian Hayes and Ib Holm Sørensen.
This was the world I walked into in 1986 as an undergraduate studying Mathematics and Computation. I was quite quickly indoctrinated in the ways of Z notation [1] and CSP [2] and had to learn to program in ML. I still have all the lecture and class notes and they are quite fascinating to look at so many years later. Funny to read the names of celebrated researchers that I just thought of as "the person who teaches subject X". I do recall Carroll Morgan's teaching being very entertaining and interesting. And I interacted quite a bit with Jim Davies, Jim Woodcock and Mike Spivey.
Having decided I wanted to stay and do a DPhil, I managed to get through the interview with Tony Hoare (hardest question: "Where else have you applied to study?" answer: "Nowhere, I want to stay here") and that led to my DPhil being all CSP and occam [3]. I seem to remember we had an array of 16(?) transputers [4] that the university had managed to get because of a manufacturing problem (I think the dies were incorrectly placed, making the pinouts weird, but someone had made a custom PCB for it).
Imagine my delight when Go came around and I got to see CSP in a new language.
[1] https://en.wikipedia.org/wiki/Z_notation
[2] https://en.wikipedia.org/wiki/Communicating_sequential_proce...
[3] https://en.wikipedia.org/wiki/Occam_(programming_language)
[4] https://en.wikipedia.org/wiki/Transputer
These are all great contributions; I even briefly used Occam/Transputer.
Most people, however, will be most familiar with Quicksort [0] and NULL.
[0] https://en.wikipedia.org/wiki/Quicksort
I can only imagine what it must've been like throughout the 90s and teens seeing people whack their heads against concurrency.
Well, of course, a lot had already been done by the time I got there. But what really struck me about CSP was how easy it was to reason about concurrent programs because synchronization and communication were the same thing. In my doctoral thesis I created a provably secure multi-user file system which went from specification to CSP to occam.
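The point about synchronization and communication being the same thing can be illustrated even on the JVM: a SynchronousQueue has zero capacity, so handing a value over and synchronizing with the receiver are a single operation, much like an unbuffered CSP/occam channel. A minimal sketch (class and message are illustrative, not from the thesis):

```java
import java.util.concurrent.SynchronousQueue;

public class Rendezvous {
    // Pass one message from a producer thread to the caller through a
    // zero-capacity queue: put() blocks until take() arrives, so the
    // hand-off IS the synchronization point, as in a CSP channel.
    static String exchange() throws InterruptedException {
        SynchronousQueue<String> channel = new SynchronousQueue<>();
        Thread producer = new Thread(() -> {
            try {
                channel.put("hello from CSP-land");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        String msg = channel.take(); // the matching half of the rendezvous
        producer.join();
        return msg;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(exchange());
    }
}
```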
It was not a fun time. In Java, for example, the concurrency & threading primitives were so low level it was almost impossible for anyone to use them and get it right. The concurrency package introduced in 2004 brought higher level concepts and mostly eliminated the need to risk the footguns present in the thread/runnable/synchronized constructs.
As far back as 1995, people were warning against using threads. See for example John Ousterhout's "Why Threads are a Bad Idea (for most purposes)" <https://blog.acolyer.org/2014/12/09/why-threads-are-a-bad-id...>
> In Java, for example, the concurrency & threading primitives were so low level it was almost impossible for anyone to use them and get it right.
I disagree with this. As long as you had an understanding of critical sections and notify & wait, typical use cases were reasonably straightforward. The issues were largely when you ventured outside of critical sections, or when you didn’t understand the extent of your shared mutable state that needed to be protected by critical sections (which would still be a problem today, for example when you move references to mutable objects between threads — the concurrent package doesn’t really help you there).
The problem with Java pre-1.5 was that the memory model wasn’t very well-defined outside of locks, and that the guarantees that were specified weren’t actually assured by most implementations [0]. That changed with the new memory model in Java 1.5, which also enabled important parts of the new concurrency package.
[0] https://www.cs.tufts.edu/~nr/cs257/archive/bill-pugh/jmm2.pd...
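For concreteness, the two styles under discussion (hand-rolled synchronized/wait/notify versus the 1.5-era java.util.concurrent package) can be sketched side by side; the buffer class below is illustrative only, not code from either era's libraries:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class TwoStyles {
    // Pre-1.5 style: a critical section plus wait/notifyAll. Workable,
    // but every invariant (capacity, wake-up conditions, visibility)
    // is the programmer's responsibility.
    static class HandRolledBuffer<T> {
        private final Queue<T> items = new ArrayDeque<>();
        private final int capacity;
        HandRolledBuffer(int capacity) { this.capacity = capacity; }

        synchronized void put(T item) throws InterruptedException {
            while (items.size() == capacity) wait(); // guard loop, never "if"
            items.add(item);
            notifyAll();
        }

        synchronized T take() throws InterruptedException {
            while (items.isEmpty()) wait();
            T item = items.remove();
            notifyAll();
            return item;
        }
    }

    public static void main(String[] args) throws Exception {
        HandRolledBuffer<Integer> oldStyle = new HandRolledBuffer<>(2);
        oldStyle.put(42);
        System.out.println(oldStyle.take());

        // Post-1.5 style: the same behaviour from the library, with the
        // revised memory-model guarantees already handled for you.
        BlockingQueue<Integer> newStyle = new ArrayBlockingQueue<>(2);
        newStyle.put(42);
        System.out.println(newStyle.take());
    }
}
```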
Now I see it written down I realize how lucky I've been to have spent time at the Programming Research Group at Oxford when Tony Hoare was running it and then to have worked for and founded a company with John Ousterhout. And, yeah, when I worked with him he wasn't a fan of threads.
> First, the null pointer is essentially inevitable if you want to model the world, which has references (things containing denotations of other things).
I took major exception to this. The real world doesn't have non-things, and references do not demand to refer to non-things.
If your domain does actually have the concept of null, just make a type for it. Then you won't accidentally use a 6 or a "foo" where a null was demanded.
Tony Hoare himself said that it wasn't needed in the type system he'd created. He added it because it was so easy to do so and couldn't resist the temptation.
OTOH seeing undefined method 'blah' for nil:NilClass isn't really any better.
And I would just add:
Do we want to model the "real world"? This seems to hark back to that long-forgotten world of "animal-cat-dog" OO programming.
I spent the bulk of my career modelling a small part of the real world and OOP absolutely has a place in that.
The real world is full of relationships that may or may not exist. What’s the referent of “my spouse” if I’m unmarried?
Is your point here that every pointer type for which this can be the case should include an explicitly typed null value?
> What’s the referent of “my spouse” if I’m unmarried?
* It's the same referent as all the other things you don't have. Your struct has a Spouse, why does it not also have a RivieraChateau, a SantaClaus, BankAccountInSwissFrancs. If you can answer why you left those out, you know why to leave out Spouse.
* Why stick to 0-1 Spouses? As soon as you do that you're gonna need n Spouses, where some of them will have an ex- flag set and married from & to timestamps.
> Is your point here that every pointer type for which this can be the case should include an explicitly typed null value?
* It shouldn't make a difference whether it's a pointer or a reference or a value or a class. If you believe null is one of the Integers, you should also believe that null is one of the ints. Why should your domain change to accommodate C idioms?
Making it explicit wouldn’t be particularly problematic no? Option<&Spouse> in Rust terms. Or for this specific case, a GADT (Single | Married<&Spouse>)?
It could even use a special “NULL” address. Just don’t pollute every type with it.
Polluting every type with it is just a very bad implementation which has nothing in common with the concept of NULL as proposed by Tony Hoare in the paper "Record Handling".
The concept as proposed by Hoare is strictly necessary for things like partial relations, which are encountered very frequently in practice.
It is true however that a large number of programming languages have misused the concept of a NULL reference proposed by Hoare.
As you say, there must be distinct types that may have or may not have a "nothing" value.
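A sketch of the "make a distinct type for it" idea from this sub-thread, using a Java 21 sealed interface in place of the Rust GADT (Single | Married<&Spouse>); all names here are illustrative:

```java
public class MaritalStatus {
    record Person(String name) {}

    // Absence is a case of THIS type, not a null that infects every
    // reference type in the program.
    sealed interface Status permits Single, Married {}
    record Single() implements Status {}
    record Married(Person spouse) implements Status {}

    static String describe(Status s) {
        // The compiler knows these are the only two cases; there is no
        // hidden third "null" case to forget.
        return switch (s) {
            case Single ignored -> "unmarried";
            case Married m -> "married to " + m.spouse().name();
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Single()));
        System.out.println(describe(new Married(new Person("Robin"))));
    }
}
```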
Another example: a web link that points to a page that no longer exists.
It's a null pointer exception.
One of CS's heroes lauding another. I feel I know both author and subject better for reading this.
We are all very lucky to have lived through the foundation of a new science and new engineering over the last 50 years.
A bit more than 50 years. Grace Hopper retired in 1966. It's true that Grace kept un-retiring, but the most crucial stuff is all before she retired. Invention of what we'd think of as a linker-loader (which Grace called a compiler) and of the broad concept of high level programming languages all happens in the 1950s.
"To explain what I was doing in logic-driven software architecture I looked for a good metaphor and, on the spot, proposed that there was a kind of “contract” between caller and callee. He did not say anything, but his mere presence had enabled me to make my incipient ideas jell."
I hadn't realised that Hoare was present when Meyer first used the term 'contract' to describe his ideas.
Thank you for sharing this fantastic tribute
Tony Hoare has died https://news.ycombinator.com/item?id=47324054 (2038 points, 275 comments)
Meyer makes an important point that often gets lost: the null pointer predates Hoare. NIL existed in McCarthy's Lisp in 1959, six years before Hoare added null references to ALGOL W. The "mistake," if it was one, was already widespread.
What made Hoare's 2009 confession so impactful wasn't that he was solely responsible — it's that he was the first person with that level of authority to publicly say "this was wrong."
That's what gave Rust, Swift, and Kotlin permission to design around it.
LISP was special because it had only 2 data types: atoms and lists.
Both lists and atoms could appear in any place in any function or special form.
NIL was a special atom, which was used to stand for an empty list. Because it could appear in any place in a LISP program, it could be used anywhere where one had to write that something does not exist.
In a programming language with a more complicated and also extensible system of data types the handling of "nothing" values must also be more complex.
Any optional type can be viewed as a variable-length array of its base type whose maximum length is 1, with length zero indicating a non-existent value.
This is equivalent to the use of NIL in LISP.
However, it is better to consider optional types as a method of deriving types from base types distinct from arrays, structures or unions, because in most cases more efficient implementations are possible for optional types than implementing them as variable-length arrays that may have length zero, or as tagged unions where one of the alternative types is the empty set (a.k.a. void).
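That correspondence can be made concrete; here is a sketch in Java, with Optional standing in for the optional type and an immutable List for the bounded variable-length array (method names are illustrative):

```java
import java.util.List;
import java.util.Optional;

public class OptionalAsArray {
    // The "array of maximum length 1" view: an empty list is the
    // missing value, a singleton list is the present value.
    static <T> List<T> toBoundedArray(Optional<T> opt) {
        return opt.map(List::of).orElseGet(List::of);
    }

    static <T> Optional<T> fromBoundedArray(List<T> arr) {
        if (arr.size() > 1) {
            throw new IllegalArgumentException("length must be <= 1");
        }
        return arr.isEmpty() ? Optional.empty() : Optional.of(arr.get(0));
    }

    public static void main(String[] args) {
        System.out.println(toBoundedArray(Optional.of("x")));
        System.out.println(toBoundedArray(Optional.empty()));
        System.out.println(fromBoundedArray(List.of("x")));
        System.out.println(fromBoundedArray(List.of()));
    }
}
```

The two methods are inverses, which is the sense in which the optional type and the length-at-most-1 array carry the same information.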
I don’t know much about ALGOL, but in Lisp, nil represents the empty list. And because the list is a recursive data structure, it always contains the empty list. It’s not the same as Java where null is its own value that is part of every type. In Lisp, an atom can’t be nil because an atom is not a list.
What you say may be true for some modern LISPs, but it was false in most early LISPs, certainly in any LISP that preceded Tony Hoare's paper "Record Handling".
I quote from the manual of LISP I: "Here NIL is an atomic symbol used to terminate lists".
I am not sure which is the rule in Common LISP, but in many early LISPs the predicate (atom NIL) was true.
In early LISPs, the end of a list was recognized when its CDR was an atom, instead of being another list. The atom could be different from NIL, because that final list could have been an association pair, pointing towards two associated atoms.
The fact that in early LISPs NIL was an atom, but it was also used to stand for an empty list caused some ambiguities.
EDIT: I have searched now, and in Common LISP the predicate (atom NIL) remains true, as in all early LISPs; therefore NIL is still an atom, even if using it almost anywhere is equivalent to an empty list.
RIP Tony H - I am reading this article with immense gratitude for someone I never met but who has affected & benefited me.
I mentioned this in the other thread on Tony Hoare;
Theories of Programming: The Life and Works of Tony Hoare, published by ACM in 2021 - https://dl.acm.org/doi/book/10.1145/3477355
See the "preface" for details of the book - https://dl.acm.org/doi/10.1145/3477355.3477356
Review of the above book - https://www.researchgate.net/publication/365933441_Review_on...
PS: You can check with some lady named "Anna" on the interweb for book access :-)
this matches my experience exactly
Another small tribute recently: the latest FFmpeg release, 8.1, is codenamed "Hoare", following their tradition of historical codenames.
https://news.ycombinator.com/item?id=47413525