I
fondly remember Dijkstra! He inspired important advances in my own
professional development as related in this posting. I was fortunate to
have stimulating conversations with him. Interactions with Dijkstra were
generally focused on solutions he had recently found or on problems he
was trying to solve.
Dijkstra was a physicist turned programmer: a pragmatist to the core who used mathematical tools when needed to solve problems. My background in mathematics has led me to
work more in the framework of mathematical abstraction applied to practical issues.
When we worked on the same issues, our approaches (and solutions) came from
different perspectives. Pragmatism and abstraction are both useful in research.
Listening to Dijkstra and reading his publications was always worthwhile even though we often disagreed. I learned a great deal from him and very much appreciate having interacted with him and observed him in action.
One of the finest tributes to Dijkstra is to take his work seriously. One on one, Dijkstra loved a good debate! This posting further develops some of his most important work.
Dijkstra's semaphores were a major advance in concurrent programming. Dijkstra used semaphores to implement critical sections of code, in which an activity executing in the section excludes all other activities from executing in it. In large programs, semaphores are not easy to manage: bugs in the use of semaphores (also known as "locks") have caused many irreproducible crashes in large software systems. A region of mutual exclusion is a generalization of a critical section that protects local variables in addition to code. As initially developed, regions of mutual exclusion were less powerful than semaphores. A "hole" in a region of mutual exclusion allows an activity to temporarily leave the region and later return, once again excluding other activities. Regions with holes provide a structured way to obtain the full power of semaphores without allowing their dangerous uses. Consequently, system crashes can be prevented by using regions of mutual exclusion with holes.
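To make the idea concrete, here is a minimal sketch, assuming Python's standard threading module (my own illustration, not Dijkstra's code or notation): the lock implements a region of mutual exclusion protecting a local variable, and the condition variable's wait acts as a hole, letting an activity leave the region and later return with exclusion restored. The names produce, consume, and pending are hypothetical.
```
import threading

# The lock implements a region of mutual exclusion that protects the
# local variable `pending` as well as the code that manipulates it.
region = threading.Lock()
item_available = threading.Condition(region)
pending = []

def consume():
    with region:                     # enter the region of mutual exclusion
        while not pending:
            item_available.wait()    # the "hole": temporarily leave the region,
                                     # letting other activities enter, then return
        return pending.pop(0)        # back inside: exclusive access again

def produce(item):
    with region:                     # an ordinary critical section
        pending.append(item)
        item_available.notify()      # wake an activity waiting in the hole
```
Because entry to and exit from the region are structured by the with blocks, the usual semaphore bugs (forgetting to release, releasing twice) cannot arise, which is the sense in which regions with holes are safer than raw semaphores.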
Global states are central to the Church/Turing model of computation. In this model, there is a global state with a nondeterministic transition to the next state: a computation step can specify that any one of a number of specified steps may be executed next. The model has the property of bounded nondeterminism, meaning that there is a bound, determined in advance, on the number of possibilities that a system can explore given that it must always come back with an answer. In the development of a theory of concurrent computation, Dijkstra remained committed to a global-state model of computation. This commitment gave rise to the "unbounded nondeterminism" controversy. Digital systems can be physically indeterminate because they use arbiters to resolve the order in which communications between systems are received. Because of physical indeterminacy, a digital system can have the property of unbounded nondeterminism: no bound exists, when the system is started, on the number of possibilities that an always-responding digital system can explore. Being restricted to bounded nondeterminism is one of the reasons that the Church/Turing model of computation is inadequate for digital computation, and more general models of computation had to be developed beyond it. The upshot was that Dijkstra and I agreed to disagree on the issue of physical indeterminacy and the consequent unbounded nondeterminism.
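As a concrete, hedged illustration of the difference (my own example, not a formal model), the classic counter below always terminates with an answer, yet no bound on that answer can be given in advance: the moment at which the stop request is observed depends on how the hardware and operating system arbitrate the interleaving of the two threads.
```
import threading

# The program always stops and prints an answer, but no bound on the answer
# exists in advance: it depends on physically indeterminate thread arbitration.
stop_requested = threading.Event()
count = 0

def counter():
    global count
    while not stop_requested.is_set():   # reception of "stop" is indeterminate
        count += 1

worker = threading.Thread(target=counter)
worker.start()
stop_requested.set()                     # request that the counting stop
worker.join()                            # always completes
print(count)                             # an answer every run, unbounded across runs
```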
Following 1951 work by Alan Turing, Dijkstra developed a theory of computation based on attaching predicates to global states in his wonderful book "A Discipline of Programming". He formulated preconditions for predicates in the nondeterministic transition from one global state to its successor, which systematized Turing's earlier work.
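As a small worked example (mine, not one taken from the book): for the assignment x := x + 1 and the postcondition x ≥ 1, the weakest precondition is obtained by substituting x + 1 for x in the postcondition, giving wp(x := x + 1, x ≥ 1) = (x + 1 ≥ 1) = (x ≥ 0). Any starting state satisfying x ≥ 0 is guaranteed to end in a state satisfying x ≥ 1, and no weaker condition gives that guarantee.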
An induction principle was implicit in Dijkstra's theory of computation. Because global states do not exist in the theory of digital computation, the induction principle for proving properties of digital computation must be based on computational events rather than global states. The induction principle for digital computation is analogous to the one that Dedekind developed for the natural numbers. To prove that a predicate P holds for every natural number, Dedekind's induction principle has two steps as follows:
- Base step: Prove that P holds for 0.
- Induction step: Prove that if P holds for a natural number n, then it holds for n+1.
Dedekind proved that his induction principle rigorously characterizes the natural numbers. Analogously, to prove that a predicate P holds for every event of a digital system, event induction has two steps as follows:
- Base step: Prove that P holds for every incoming event of the system.
- Induction step: Prove that if P holds for an event e, then it holds for each event that immediately follows e.
Many kinds of properties of digital computation can be proved using event induction. For example, event induction can be used to prove that a particular digital system will always respond to a request. It can also be used to prove that a particular digital system will always maintain some invariant property, viewed from the outside, even as the system changes its behavior over time. Unlike the Church/Turing model, the event induction principle rigorously characterizes digital computation.
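A small worked example (mine, purely for illustration): consider a system whose incoming events are requests carrying the number 0, and in which every event that immediately follows an event carrying n carries either n or n+1. Let P(e) be "the number carried by event e is nonnegative."
- Base step: every incoming event carries 0, and 0 ≥ 0.
- Induction step: if event e carries a nonnegative number n, then every event immediately following e carries n or n+1, which is also nonnegative.
By event induction, every event of the system carries a nonnegative number, and the proof never appeals to a global state.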
Dijkstra's concurrent automatic storage reclamation algorithm is a marvel. It causes no pauses in the operation of a system, not even between the end of one reclamation phase and the beginning of the next. However, Dijkstra's algorithm does not compact storage. Instead, storage is reclaimed in pieces throughout memory. Consequently, the algorithm does not maintain locality of reference, which is crucial to performance on modern computer cores. Inspired by his algorithm, Henry Lieberman and I published a paper on generational automatic storage reclamation, in which storage is divided into generations according to time of creation and use. In practice, the most recent generation has much better locality of use.
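To give a flavor of the bookkeeping, here is a simplified, sequential sketch of the white/grey/black coloring in which Dijkstra's on-the-fly collector is usually explained. It shows only the marking discipline, not the concurrent cooperation with the running program, and the object representation is my own simplification.
```
# Objects start white (possibly garbage), turn grey when discovered reachable,
# and turn black once all of their references have been scanned. Whatever is
# still white at the end is reclaimed where it sits, without compaction.
WHITE, GREY, BLACK = "white", "grey", "black"

def mark(roots, references):
    """references maps each object name to the objects it points to."""
    color = {obj: WHITE for obj in references}
    worklist = []
    for root in roots:
        color[root] = GREY
        worklist.append(root)
    while worklist:
        obj = worklist.pop()
        for child in references[obj]:
            if color[child] == WHITE:          # newly discovered reachable object
                color[child] = GREY
                worklist.append(child)
        color[obj] = BLACK                     # fully scanned
    return [obj for obj, c in color.items() if c == WHITE]

# Example: "c" is unreachable and would be reclaimed in place.
print(mark(roots=["a"], references={"a": ["b"], "b": [], "c": ["a"]}))
```
Because the white objects are freed wherever they happen to lie, the survivors stay scattered through memory, which is exactly the locality problem that generational reclamation addresses by keeping recently created objects together.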
Dijkstra's work on self-stabilizing systems helped inspire new developments in Intelligent Systems. The intuition is that a system should be stable even in the face of disagreement about empirical information, which is extremely common in an Intelligent System. Classical logic can make absurd inferences using such information even if the disagreement is deeply hidden, and the absurd inferences can be used in cyberattacks against Intelligent Systems. Unlike classical logic, robust inference can prevent absurd inferences from being made using empirical information. Consequently, robust inference is crucial to the development and universal deployment of Intelligent Systems in this decade.