# Normalization by realizability also evaluates


Pierre-Évariste Dagand¹ & Gabriel Scherer²

1: Gallium, INRIA Paris-Rocquencourt
2: Gallium, INRIA Paris-Rocquencourt

**Abstract.** For those of us who generally live in the world of syntax, semantic proof techniques such as realizability or logical relations/parametricity sometimes feel like magic. Why do they work? At which point in the proof is the real work done? Bernardy and Lasson [4] express realizability and parametricity models as syntactic models, but the abstraction/adequacy theorems are still explained as meta-level proofs. Hoping to better understand the proof technique, we look at those proofs as programs themselves. How does a normalization argument using realizability actually compute those normal forms? This detective work is at an early stage, and we propose a first attempt in a simple setting: instead of arbitrary Pure Type Systems, we use the simply-typed lambda-calculus.

## 1. Introduction

Realizability, logical relations and parametricity are tools to study the meta-theory of syntactic notions of computation or logic; typically, typed lambda-calculi. Starting from a syntactic type system (or system of inference rules for a given logic), they assign to each type/formula a predicate, or relation, that captures a semantic property, such as normalization, consistency, or some canonicity properties. Proofs using such techniques rely crucially on an adequacy lemma, or fundamental lemma, which asserts that any term accepted by the syntactic system of inference also verifies the semantic property corresponding to its type: it is accepted by the predicate, or is in the diagonal of the relation. There are many variants of these techniques, which have scaled to powerful logics [1] (e.g., the Calculus of Constructions, or predicative type theories with a countable universe hierarchy) and advanced type-system features [12] (second-order polymorphism, general references, equi-recursive types...).
However, to some they have a feel of magic: while the proofs are convincing, one is never quite sure why they really work. Our long-term goal is to better understand the computational behavior of these techniques. Where should we start looking, if we are interested in all three of realizability, logical relations and parametricity? Our reasoning was the following. First, we ruled out parametricity: it is arguably a specific form of logical relation. Besides, it is motivated by applications such as characterizing polymorphism, or extracting invariants from specific types, whose computational interpretation is less clear than a proof of normalization. Second, general logical relations seem harder to work with than realizability. They are binary (or n-ary) while realizability is unary, and previous work [4] suggests that logical relations and parametricity can be built from realizability by an iterative process. While the binary aspect of logical relations seems essential to prove normalization of dependently-typed languages with a typed equality [2] (conveniently

modeled by the logical relation itself) or to formulate representation-invariance theorems [3], it does not come into play for normalization proofs of simpler languages such as the simply-typed lambda-calculus. Among the various approaches to realizability (such as intuitionistic realizability, à la Prawitz, or classical realizability, à la Krivine), we decided to use classical realizability, which has two sets of realizers for each type A playing a symmetric role, namely the truth witnesses ⟦A⟧ (or "proofs") and the falsity witnesses ‖A‖ (or "counter-proofs"). While this choice was mostly out of familiarity, we think its symmetry creates a good setting to study both negative and positive connectives. Keeping in mind that this work is of an exploratory nature, we would summarize our early findings as follows:

- The computational content of the adequacy lemma is an evaluation function.
- The polarity of object types seems to determine the flow of values in the interpreter.
- The way we define the sets of truth and falsity witnesses determines the evaluation order. As could be expected, the bi-orthogonal injects a set of (co)-values inside a set of unevaluated (co)-terms.
- This evaluation order can be exposed by recovering the machine transitions from the typing constraints that appear when we move from a simply-typed version of the adequacy lemma to a dependently-typed version capturing the full content of the theorem.

### 1.1. The silhouette of realizability

Classical realizability is a technique to reason about the behavior of (abstract) machines (or processes, configurations) c ∈ M, built as pairs ⟨ t ‖ e ⟩ of a term t ∈ T and a co-term e ∈ E. Those sets can be instantiated in many ways; for example, by defining T to be the terms of a lambda-calculus, and suitably choosing the co-terms, one may obtain results about the various evaluation strategies for this calculus.
In this subsection, we will keep the framework abstract; to study the simply-typed lambda-calculus we will instantiate it in several slightly different ways. Machines compute by reducing to other machines, as represented by a relation (→) ⊆ M × M between machines. They are closed objects, which have no other interaction with the outside world. The property of machines we want to reason about is represented by a pole ⊥ ⊆ M, a set of machines exhibiting a specific, good or bad, behavior; typically, machines that reduce to a normal form without raising an error. To be a valid notion of pole, the subset ⊥ must be closed by anti-reduction: if c → c′ and c′ ∈ ⊥, we must have c ∈ ⊥. While machines are closed objects, a term needs a co-term to compute, and vice-versa. If T is a set of terms, we define its orthogonal T⊥ as the set of co-terms that compute well against terms in T, in the sense that they end up in the pole. The orthogonal of a set E of co-terms is defined by symmetry:

T⊥ ≜ { e ∈ E | ∀t ∈ T, ⟨ t ‖ e ⟩ ∈ ⊥ }

E⊥ ≜ { t ∈ T | ∀e ∈ E, ⟨ t ‖ e ⟩ ∈ ⊥ }

Orthogonality plays on intuitions of linear algebra, but it also behaves in a way that is similar to intuitionistic negation: for any set S of either terms or co-terms we have S ⊆ S⊥⊥ (just as A ⟹ ¬¬A), and this inclusion is strict in general, except for sets that are themselves orthogonals: S⊥⊥⊥ ⊆ S⊥. These properties are easily proved; consider them an exercise for the reader. If one considers the pole to define a notion of observation, the bi-orthogonals of a given (co)term are those that behave in the same way. Being stable by bi-orthogonality, that is, being equal to its bi-orthogonal, is thus an important property; the sets we manipulate are often defined as orthogonals or bi-orthogonals of some more atomic sets [11].
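To make the orthogonality operations concrete, here is a small executable toy model in OCaml. It is our own construction, not the paper's: terms and co-terms are drawn from finite sets, and the pole is an arbitrary predicate on machines, so that the inclusion S ⊆ S⊥⊥ and the triple-orthogonal property can be checked by enumeration.

```ocaml
(* A finite toy model of orthogonality (our construction, not the
   paper's): three terms, two co-terms, and an arbitrary pole. *)
type term = T1 | T2 | T3
type coterm = E1 | E2

let all_terms = [T1; T2; T3]
let all_coterms = [E1; E2]

(* The pole: the set of "well-behaved" machines <t || e>. *)
let pole (t, e) = match t, e with
  | T1, _ | _, E2 -> true
  | _, _ -> false

(* Orthogonal of a set of terms: the co-terms that land in the pole
   against every term of the set; and symmetrically for co-terms. *)
let orth_t ts =
  List.filter (fun e -> List.for_all (fun t -> pole (t, e)) ts) all_coterms
let orth_e es =
  List.filter (fun t -> List.for_all (fun e -> pole (t, e)) es) all_terms

let subset s s' = List.for_all (fun x -> List.mem x s') s
let biorth ts = orth_e (orth_t ts)
```

On this model one can observe that `subset [T2] (biorth [T2])` holds while the converse inclusion fails (the bi-orthogonal of `[T2]` contains all three terms), and that `orth_t (biorth [T2]) = orth_t [T2]`: orthogonals are stable by bi-orthogonality.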

While orthogonality is a way to describe the interface of a term or co-term in a semantic way, it can be hard to exhibit specific properties of (co)-terms by direct inspection of their orthogonal. It is useful to have a more explicit language to describe the interface along which a term and a co-term may interact to form a machine. Given a syntax for types (or logical properties) A, B..., the terms interacting along the interface A are called the truth witnesses (or truth values, or "proofs") of A, and written ⟦A⟧. The co-terms interacting along the interface A are called the falsity witnesses (or falsity values, or "counter-proofs", or "refutations") of A, and written ‖A‖. From a programming point of view, it also makes sense to think of ⟦A⟧ as a set of producers of A, and ‖A‖ as consumers of A. For example, one may define the falsity values of the arrow type, ‖A → B‖, as the (bi-orthogonal of the) product ⟦A⟧ × ‖B‖: to consume a function A → B, one shall produce an A and consume a B. What is all this used for? Two categories of applications are the following:

- We can use realizability to prove semantic properties of a syntactic system of judgments (type system, logic). Typically, the pole ⊥ is defined according to some (simple?) semantic intuition ("normalizes to a valid machine"), while a syntactic system of inference Γ ⊢ t : A is given by rules that specify no intrinsic correctness property. If all well-typed terms ∅ ⊢ t : A happen to also be truth witnesses in ⟦A⟧, this may tell us that our syntactic judgment does what it is supposed to. For example, we will prove normalization of the simply-typed lambda-calculus by showing that the machines c built from well-typed terms are in the pole ⊥ of normalizing machines.
- We can extend the language of machines, terms and co-terms with new constructs and reduction rules, in order to give a computational meaning to interesting logical properties which had no witnesses in the base language.
Typically, extending the base calculus with a suitably-defined callcc construct provides a truth witness for double-negation elimination (and, thus, for the excluded middle). When possible, this gives us a deeper (and often surprising) understanding of logical axioms. We are concerned, in this article, only with the first aspect: using realizability to prove interesting properties of (relatively simple) type systems that classify untyped calculi. If the typing judgment for terms is of the form Γ ⊢ t : A, and the typing judgment for co-terms of the form Γ | e : B ⊢, a well-typed machine c : (Γ ⊢) is the pair of a well-typed term and a well-typed co-term, interacting along the same type A. Finally, one should note that truth witnesses (and, respectively, falsity witnesses and the pole) contain closed terms, which have no free variables. To interpret an open term Γ ⊢ t : A, one should first be passed a substitution ρ that maps the free variables of t to closed terms; it is compatible with the typing environment Γ (which, in the judgments of the simply-typed lambda-calculus for example, maps term variables to types) if, for each mapping x : B in Γ, the substitution maps x to a truth witness for B: ρ(x) ∈ ⟦B⟧. We write ⟦Γ⟧ for the set of substitutions satisfying this property. For a particular syntactic system of inference, the typical argument goes by proving an adequacy lemma, itself formed of three mutually recursive results of the following form:

- for any t and ρ such that Γ ⊢ t : A and ρ ∈ ⟦Γ⟧, we have t[ρ] ∈ ⟦A⟧
- for any e and ρ such that Γ | e : A ⊢ and ρ ∈ ⟦Γ⟧, we have e[ρ] ∈ ‖A‖
- for any c and ρ such that c : (Γ ⊢) and ρ ∈ ⟦Γ⟧, we have c[ρ] ∈ ⊥

Interestingly, this can usually be done without fixing a single pole ⊥: adequacy lemmas tend to work for any pole closed under anti-reduction.
Specializing it to different poles can then give different meta-theoretic results (weak and strong normalization, some canonicity results, etc.); in this article, we start simple and mostly use poles of the form "normalizes to a valid value".

### 1.2. Witnesses, value witnesses, and adequacy

As previously mentioned, we prefer sets of (co)terms to be stable by bi-orthogonality. In particular, we shall always define truth and falsity witnesses such that ⟦A⟧⊥⊥ ≈ ⟦A⟧ and ‖A‖⊥⊥ ≈ ‖A‖, for any A. We write S ≈ T to say that the sets S and T are in bijection, and reserve the equality symbol for definitional equality. To verify this property, we define those sets as themselves orthogonals of some more atomic sets of witnesses, dubbed value witnesses. We assume a classification of our syntactic types A into a category P of positive types, characterized by their truth value witnesses ⟦P⟧_V, and a category N of negative types, characterized by their falsity value witnesses ‖N‖_V. Truth and falsity witnesses for constructed types can then be defined by case distinction on the polarity:

‖P‖ ≜ (⟦P⟧_V)⊥    ⟦P⟧ ≜ (⟦P⟧_V)⊥⊥    ‖N‖ ≜ (‖N‖_V)⊥⊥    ⟦N⟧ ≜ (‖N‖_V)⊥

These definitions will be used in all variants of realizability proofs in this article: only ‖N‖_V and ⟦P⟧_V need to be defined, the rest is built on top of that. For consistency, we will also define ‖P‖_V ≜ ‖P‖ and ⟦N⟧_V ≜ ⟦N⟧: positives have no specific falsity value witnesses, they are just falsity witnesses, and conversely negatives have only truth witnesses. Still, extending ⟦A⟧_V and ‖A‖_V to any type A, regardless of its polarity, gives us the following useful properties (which hold for any A):

⟦A⟧ ≈ (‖A‖_V)⊥    ‖A‖ ≈ (⟦A⟧_V)⊥

This is directly checked by case distinction on the polarity of A. For a negative type for example, we have ⟦N⟧ ≈ (‖N‖_V)⊥ by definition, but also ⟦N⟧_V ≜ ⟦N⟧ ≈ (‖N‖_V)⊥ and ‖N‖ ≈ ((‖N‖_V)⊥)⊥, thus ‖N‖ ≈ (⟦N⟧_V)⊥. As we remarked before, for any set S we have (S⊥)⊥⊥ ≈ S⊥. The above properties thus imply that truth and falsity witnesses, defined as orthogonals, are stable by bi-orthogonality.

### 1.3. Our approach

To distill the computational content of this proof technique, we implement the adequacy lemma as a program in a rather standard type theory¹.
As such, the result of this program is a proof of a set-membership t ∈ ⟦A⟧, whose computational status is unclear (and is not primitively supported in most type theories). Our core idea is to redefine these types in a proof-relevant way: we will redefine the predicate _ ∈ _ as a type of witnesses: the inhabitants of t ∈ ⟦A⟧ are the different syntactic justifications that the program t indeed realizes A. Note that _ ∈ _ is here an atomic construct; we do not define truth witnesses independently from their membership predicate. In particular, we can pick the following proof-relevant definition of _ ∈ ⊥; informally,

c ∈ ⊥ ≜ { c → c₁ → ... → cₙ | cₙ ∈ M_N }

where M_N is the set of normal machines, machines that are not stuck on an error but cannot reduce further. With this definition, an adequacy lemma proving c ∈ ⊥ for a well-typed c is exactly a normalization program; the question is how it computes. While the definition of truth and falsity witness membership needs to be instantiated in each specific calculus, we can already give the general proof-relevant presentation of orthogonality, defining the types _ ∈ ⟦_⟧⊥ and _ ∈ ‖_‖⊥ as dependent function types:

e ∈ ⟦A⟧⊥ ≜ Π(t : T). t ∈ ⟦A⟧ → ⟨ t ‖ e ⟩ ∈ ⊥

t ∈ ‖A‖⊥ ≜ Π(e : E). e ∈ ‖A‖ → ⟨ t ‖ e ⟩ ∈ ⊥

¹ Martin-Löf type theory with one universe suffices for our development. The universe is required to define the type of truth and falsity witnesses by recursion on the syntax of object-language types.

Finally, the semantic correctness of an environment, ρ ∈ ⟦Γ⟧, is itself defined as a substitution that maps any term variable x bound to the type A in Γ to a proof of membership ρ(x) ∈ ⟦A⟧.

### 1.4. Simplifying further

As our very first step in the next section, we will use a simpler notion of pole that is not as orthodox, but results in simpler programs. This is crucial to make sense of their computational behavior. We will simply define (c ∈ ⊥) ≜ M_N: a witness that c is well-behaved is not a reduction sequence to some value v, but only that value v. This is a weaker type, as it does not guarantee that the value we get in return of the adequacy program is indeed obtained from c (it may be any other value). At some point in the following section, the more informative definition of ⊥ will be reinstated, and we will show how the simple programs we have written can be enriched to keep track of this reduction sequence, incidentally demonstrating that they were indeed returning the correct value. A pleasant side-effect of this simplification is that the set-membership types are not dependent anymore: the definition of c ∈ ⊥ does not depend on c; the definitions of t ∈ ⟦A⟧ and e ∈ ‖B‖ do not mention t, e; ρ ∈ ⟦Γ⟧ does not mention ρ; and orthogonality is defined with non-dependent types. To make that simplification explicit, we rename those types J(⊥), J(⟦A⟧), J(‖B‖) and J(⟦Γ⟧): they are justifications of membership of some term (the type system does not remember which one), co-term or machine in the respective set. The definition of orthogonality becomes:

J(⟦A⟧⊥) ≜ J(⟦A⟧) → J(⊥)

J(‖A‖⊥) ≜ J(‖A‖) → J(⊥)

In particular, this allows us to study the lambda-calculus without having to explicitly define the set of co-terms E first, which would steal some of the suspense away by forcing us to decide upon an evaluation order in advance.
The realizability program will very much depend on the structure of falsity witnesses J(‖A‖), which we will define carefully, but not on the syntax of co-terms themselves, at first. In fact, we will show that we can deduce the syntax of co-terms from the dependent typing requirements, starting from the programs of the simplified version, so they need not be fixed in advance.

## 2. Normalization by realizability for the simply-typed lambda-calculus

We give a first example of a realizability-based proof of weak normalization of the simply-typed lambda-calculus with arrows and products.

Terms:
t, u ::= x, y, z (variables) | λx. t (abstractions) | t u (applications) | (t, u) (pairs) | let (x, y) = t in u (pair eliminations)

Types:
A, B ::= X, Y, Z (atoms) | P | N (constructed types)
N ::= A → B (negative: function type)
P ::= A × B (positive: product type)

As mentioned in Section 1.2, we split our types into positives and negatives: our arrows are negative and our products are positive². While this distinction has a clear logical status (it refers to which of the

² In intuitionistic logic, there is little difference between the positive and the negative product, so a product is always suspect of being negative in hiding. We could (and we have, in our pen-and-paper work) use the sum instead, which is

left or right introduction rules, in a sequent presentation, are non-invertible), we will simply postulate it and use it to define truth and falsity witnesses. The type system describing the correspondence between untyped terms and our small language of types is the expected one:

- Γ, x:A ⊢ x : A
- if Γ, x:A ⊢ t : B then Γ ⊢ λx. t : A → B
- if Γ ⊢ t : A → B and Γ ⊢ u : A then Γ ⊢ t u : B
- if Γ ⊢ t : A and Γ ⊢ u : B then Γ ⊢ (t, u) : A × B
- if Γ ⊢ t : A × B and Γ, x:A, y:B ⊢ u : C then Γ ⊢ let (x, y) = t in u : C

Those inductive definitions of syntactic terms, types and type derivations should be understood as being part of the type theory used to express the adequacy lemma as a program: this program will manipulate syntactic representations of terms t, types A, and well-formed derivations (dependently) typed by a judgment Γ ⊢ t : A. We color those syntactic objects (and the meta-language variables used to denote them) in blue, to make it easier to distinguish, for example, the λ-abstraction λx. t of the object language from the λx. t of the meta-language.
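For readers who like to see the object syntax as data, the grammar and polarity split above can be transcribed as OCaml datatypes. This is a routine transcription of ours; the paper itself works inside type theory.

```ocaml
(* Object-language syntax of Section 2, rendered as OCaml datatypes. *)
type tm =
  | Var of string                          (* x, y, z *)
  | Lam of string * tm                     (* λx. t *)
  | App of tm * tm                         (* t u *)
  | Pair of tm * tm                        (* (t, u) *)
  | LetPair of string * string * tm * tm   (* let (x, y) = t in u *)

type ty =
  | Atom of string        (* atoms X, Y, Z *)
  | Arrow of ty * ty      (* negative: A → B *)
  | Prod of ty * ty       (* positive: A × B *)

(* The polarity classification: arrows are negative, products positive;
   we arbitrarily put atoms on the positive side in this sketch. *)
let negative = function Arrow _ -> true | _ -> false
```

The `negative` predicate is the classification on which all the case distinctions of Section 1.2 (cut, bi-orthogonal injection, value embedding) will dispatch.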
We will never mix identifiers differing only by their color; you lose little if you don't see the colors, as we first wrote the article without them.

### 2.1. A realization program

Let us start with the following definitions of ⟦A⟧_V and ‖A‖_V as sets of terms and co-terms:

⟦A × B⟧_V ≜ ⟦A⟧ × ⟦B⟧    ‖A → B‖_V ≜ ⟦A⟧ × ‖B‖

In the simplified setting (defining datatypes of justifications J(_) instead of membership predicates _ ∈ _), it is easy to make these proof-relevant, simply by interpreting the Cartesian product as a type of pairs, pairs of the meta-language:

J(⟦A × B⟧_V) ≜ J(⟦A⟧) × J(⟦B⟧)    J(‖A → B‖_V) ≜ J(⟦A⟧) × J(‖B‖)

Note that the types J(⟦A⟧_V), J(‖A‖_V) are not to be understood as inductive types indexed by an object-language type A, for they would contain fatal recursive occurrences in negative positions; for example,

J(‖N → A‖_V) = J(⟦N⟧) × J(‖A‖) = J((‖N‖_V)⊥) × J(‖A‖) = (J(‖N‖_V) → J(⊥)) × J(‖A‖)

Instead, one should interpret J(⟦A⟧_V) and J(‖A‖_V) as mutually recursive type-returning functions defined by induction over the syntactic structure of A. We justified this with a Coq formalization³. With this in place, we can now write our adequacy lemma as a program. The adequacy lemma corresponds to providing a function of the following type:

rea : ∀{Γ} t {A} {ρ}. (Γ ⊢ t : A) → ρ ∈ ⟦Γ⟧ → t[ρ] ∈ ⟦A⟧

which can be simplified further using the non-dependent version:

rea : ∀{Γ} t {A}. {Γ ⊢ t : A} → J(⟦Γ⟧) → J(⟦A⟧)

necessarily positive in single-conclusion presentations, but this adds a non-negligible syntactic overhead (the elimination form has two premises, etc.) where we rather want to be light and concise for comprehensibility. You need to trust that we really do treat them positively, and it will be self-evident in the definition of truth witnesses.

The brackets around some arguments mean we will omit those arguments when writing calls to the rea function, as they can be unambiguously inferred from the other arguments (at least in the fully-dependent version) or from the ambient environment. We will present the code case by case, and discuss why each is well-typed. We will respect a specific naming convention for arguments whose dependent type is an inhabitation property: a hypothesis of type t ∈ ⟦A⟧ (or J(⟦A⟧) in the simplified version) will be named t̄; v̄ for the type t ∈ ⟦A⟧_V, ē for the type e ∈ ‖A‖, π̄ for the type π ∈ ‖A‖_V, and finally ρ̄ for the type ρ ∈ ⟦Γ⟧. Names like x, t, u, A, B, Γ will be used to name syntactic term variables, terms, types, and contexts. Finally, to help the reader type-check the code, we will write M^S if the expression M has type J(S). For example, t̄^⟦A⟧ can be interpreted as t̄ : J(⟦A⟧).

**Variable**

rea x {Γ, x:A ⊢ x : A} ρ̄^⟦Γ, x:A⟧ ≜ ρ̄(x)^⟦A⟧

By hypothesis, the binding x:A is in Γ, and we have ρ̄ : J(⟦Γ, x:A⟧), so in particular ρ̄(x) : J(⟦A⟧) as expected.

**λ-abstraction**

We have structural information on the return type J(⟦A → B⟧), which unfolds by definition to a function type J(‖A → B‖_V) → J(⊥). It is thus natural to start with a λ-abstraction over J(‖A → B‖_V), that is, taking a pair of a ū : J(⟦A⟧) and an ē : J(‖B‖) to return a J(⊥).

rea (λx. t) {Γ, x:A ⊢ t : B / Γ ⊢ λx. t : A → B} ρ̄^⟦Γ⟧ ≜ λ(ū^⟦A⟧, ē^‖B‖). ⟨ rea t ρ̄[x ↦ ū]^⟦Γ, x:A⟧ ‖ ē ⟩_B

The recursive call on t : B gives a value of type J(⟦B⟧); we combine it with a value of type J(‖B‖) to give a J(⊥) by using the auxiliary cut function ⟨ _ ‖ _ ⟩_A : ∀{t e}. t ∈ ⟦A⟧ → e ∈ ‖A‖ → ⟨ t ‖ e ⟩ ∈ ⊥, defined by case distinction on the polarity of the type A:

⟨ t̄ ‖ ē ⟩_P ≜ t̄ ē    ⟨ t̄ ‖ ē ⟩_N ≜ ē t̄

For example, in the positive case with t̄^⟦P⟧ and ē^‖P‖, we have ⟦P⟧ ≈ ‖P‖⊥, so the application t̄ ē is well-typed.
**Pair construction**

rea (t, u) {Γ ⊢ t : A, Γ ⊢ u : B / Γ ⊢ (t, u) : A × B} ρ̄ ≜ (rea t ρ̄, rea u ρ̄)⊥⊥

The pair of recursive calls has type J(⟦A⟧) × J(⟦B⟧), that is, J(⟦A × B⟧_V). This is not the expected type of the rea function, which is J(⟦A × B⟧) = J((⟦A × B⟧_V)⊥⊥). We used an auxiliary definition _⊥⊥ that turns any expression of type J(S) into J(S⊥⊥); it witnesses the set inclusion S ⊆ S⊥⊥. In fact, we shall define two distinct functions with this syntax, one acting on truth justifications (term witnesses), the other on falsity justifications (co-term witnesses):

(v̄^⟦P⟧_V)⊥⊥ ≜ λē^‖P‖. ē v̄    (π̄^‖N‖_V)⊥⊥ ≜ λt̄^⟦N⟧. t̄ π̄

Notice that we have only defined _⊥⊥ on positive terms and negative contexts. We do not need it on negative terms, because it is not the case that (⟦N⟧_V)⊥⊥ = ⟦N⟧ (definitionally). Instead, we will extend _⊥⊥ into a function (_)_V that embeds any ⟦A⟧_V into ⟦A⟧, and any ‖B‖_V into ‖B‖:

(v̄^⟦P⟧_V)_V ≜ v̄⊥⊥    (t̄^⟦N⟧_V)_V ≜ t̄    (ē^‖P‖_V)_V ≜ ē    (π̄^‖N‖_V)_V ≜ π̄⊥⊥

**Application**

rea (t u) {Γ ⊢ t : A → B, Γ ⊢ u : A / Γ ⊢ t u : B} ρ̄ ≜ λπ̄^‖B‖_V. rea t ρ̄ (rea u ρ̄, (π̄)_V)

Unlike the previous cases, we know nothing about the structure of the expected return type J(⟦B⟧), so it seems unclear at first how to proceed. In such situations, the trick is to use the property ⟦B⟧ ≈ (‖B‖_V)⊥: the expected type is a function type J(‖B‖_V) → J(⊥). The function parameter π̄, at type J(‖B‖_V), is injected into J(‖B‖) by the (_)_V function defined previously, and then paired with a recursive call on u, at type J(⟦A⟧), to build a J(‖A → B‖_V). This co-value justification can then be directly applied to the recursive call on t, of type J(⟦A → B⟧), using the property that ⟦N⟧ ≈ (‖N‖_V)⊥ for negative types N, that is, J(⟦N⟧) = J(‖N‖_V) → J(⊥).

**Pair destruction**

rea (let (x, y) = t in u) {Γ ⊢ t : A × B, Γ, x:A, y:B ⊢ u : C / Γ ⊢ let (x, y) = t in u : C} ρ̄ ≜ λπ̄^‖C‖_V. ⟨ rea t ρ̄ ‖ λ(x̄, ȳ)^⟦A × B⟧_V. rea u ρ̄[x ↦ x̄, y ↦ ȳ] π̄ ⟩_{A × B}

Here we use one last trick: it is not in general possible to turn a witness in ⟦A⟧ into a witness in ⟦A⟧_V (respectively, a witness in ‖B‖ into a witness in ‖B‖_V), but it is when the return type of the whole expression is J(⊥): we can combine our t̄ : J(⟦A⟧) with an abstraction λx̄^⟦A⟧_V. ... (thus providing a name x̄ at type J(⟦A⟧_V) for the rest of the expression) returning a J(⊥), using the cut function: ⟨ t̄ ‖ λx̄. M^⊥ ⟩_A : J(⊥).

**Summing up**

⟨ _ ‖ _ ⟩_A : J(⟦A⟧) → J(‖A‖) → J(⊥)
⟨ t̄ ‖ ē ⟩_P ≜ t̄ ē
⟨ t̄ ‖ ē ⟩_N ≜ ē t̄

_⊥⊥ : J(⟦P⟧_V) → J(⟦P⟧)
(v̄^⟦P⟧_V)⊥⊥ ≜ λē^‖P‖. ē v̄

_⊥⊥ : J(‖N‖_V) → J(‖N‖)
(π̄^‖N‖_V)⊥⊥ ≜ λt̄^⟦N⟧. t̄ π̄

(_)_V : J(⟦A⟧_V) → J(⟦A⟧)
(v̄^⟦P⟧_V)_V ≜ v̄⊥⊥
(t̄^⟦N⟧_V)_V ≜ t̄

(_)_V : J(‖A‖_V) → J(‖A‖)
(ē^‖P‖_V)_V ≜ ē
(π̄^‖N‖_V)_V ≜ π̄⊥⊥

rea x^A ρ̄ ≜ ρ̄(x)
rea (λx^A. t^B) ρ̄ ≜ λ(ū^⟦A⟧, ē^‖B‖). ⟨ rea t ρ̄[x ↦ ū] ‖ ē ⟩_B
rea (t^{A→B} u^A) ρ̄ ≜ λπ̄^‖B‖_V. rea t ρ̄ (rea u ρ̄, (π̄)_V)
rea (t^A, u^B) ρ̄ ≜ (rea t ρ̄, rea u ρ̄)⊥⊥
rea (let (x, y) = t^{A×B} in u^C) ρ̄ ≜ λπ̄^‖C‖_V. ⟨ rea t ρ̄ ‖ λ(x̄, ȳ). rea u ρ̄[x ↦ x̄, y ↦ ȳ] π̄ ⟩_{A×B}

It should be clear at this point that the computational behavior of this proof is, as expected, a normalization function. The details of how it works, however, are rather unclear to the non-specialist, in part due to the relative complexity of the types involved: the call rea t ρ̄ returns a function type, so it may not evaluate its argument immediately. We shall answer this question in the next sections, in

which we present a systematic study of the various design choices available in the realizability proof. Finally, the next section moves from the simplified, non-dependent types to the more informative dependent types, which requires specifying a grammar of co-terms, absent so far.

## 3. The many ways to realize the lambda-calculus

We made two arbitrary choices when we decided to define ‖A → B‖_V ≜ ⟦A⟧ × ‖B‖. There are four possibilities with the same structure:

(1) ⟦A⟧ × ‖B‖    (2) ⟦A⟧ × ‖B‖_V    (3) ⟦A⟧_V × ‖B‖    (4) ⟦A⟧_V × ‖B‖_V

In this section, we would like to study the four possibilities, and see that each of them gives a slightly different realization program, but they all work. To reduce verbosity, we will omit the positive product types from these variants; they can be handled separately (and would also give rise to four different choices, but those are less interesting as the construction is symmetric). In Subsection 3.2, we shall also move from our simplified non-dependent definitions to the real thing, which forces us to make our language of co-terms explicit. This reveals the evaluation order from the typing constraints corresponding to the stability of the (dependently typed) pole by anti-reduction: two variants correspond to call-by-name, two to call-by-value.

### 3.1. Four realizations of the arrow connective

**(1) ‖A → B‖_V ≜ ⟦A⟧ × ‖B‖**

rea x^A ρ̄ ≜ ρ̄(x)
rea (λx^A. t^B)^{A→B} ρ̄ ≜ λ(ū^⟦A⟧, ē^‖B‖). ⟨ rea t ρ̄[x ↦ ū] ‖ ē ⟩_B
rea (t^{A→B} u^A) ρ̄ ≜ λπ̄^‖B‖_V. rea t ρ̄ (rea u ρ̄, (π̄)_V)

**(2) ‖A → B‖_V ≜ ⟦A⟧ × ‖B‖_V**

rea x^A ρ̄ ≜ ρ̄(x)
rea (λx^A. t^B)^{A→B} ρ̄ ≜ λ(ū^⟦A⟧, π̄^‖B‖_V). rea t ρ̄[x ↦ ū] π̄
rea (t^{A→B} u^A) ρ̄ ≜ λπ̄^‖B‖_V. rea t ρ̄ (rea u ρ̄, π̄)

We should remark that there would be a different yet natural way to write the λ-case, closer to the ⟦A⟧ × ‖B‖ version: λ(ū^⟦A⟧, π̄^‖B‖_V).
⟨ rea t ρ̄[x ↦ ū] ‖ (π̄)_V ⟩_B

The good news is that those two versions are equivalent: simple equational reasoning shows that ⟨ t̄ ‖ (π̄)_V ⟩_A is equivalent to t̄ π̄, and conversely that ⟨ (v̄)_V ‖ ē ⟩_A is equivalent to ē v̄. The fact that a result can be computed in several, equivalent ways is not specific to this case. This phenomenon occurs in the three other variants as well. For example, in the application case of variant (1) above, it is possible to start with λē^‖B‖. _ instead of λπ̄^‖B‖_V. _, and then take the resulting term of type J(‖B‖) → J(⊥), that is J(‖B‖⊥), and coerce it into a J(⟦B⟧); this is an equality for positive B, but not for negatives. Again, the resulting program is equivalent to the version we gave. We will not mention each such situation (but we checked that they preserve equivalence), and will systematically use the presentation that results in the simpler code. We do not claim to have explored the entire design space, but would be tempted to conjecture that there is a unique pure program (modulo βη-equivalence) inhabiting the dependent type of rea.
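To see the evaluator hiding in variant (1) concretely, we can specialize it to arrows only (so every type is negative) and read J(⊥) as a small type of answers. The rea program then becomes, almost keystroke for keystroke, the following CPS interpreter. This is our own untyped OCaml rendering under a simplified pole: the `Top` co-value, the `NormalLam` and `Stuck` answers are our additions to make the sketch executable, not constructs of the paper.

```ocaml
(* Untyped OCaml rendering of the rea program of variant (1),
   restricted to arrows. A truth witness consumes a falsity value;
   falsity values of an arrow are pairs (argument witness, falsity
   value), as in ‖A → B‖_V = ⟦A⟧ × ‖B‖. *)
type tm = Var of string | Lam of string * tm | App of tm * tm

type wit = W of (cowit -> ans)          (* J⟦A⟧ for a negative A *)
and cowit = Top | Push of wit * cowit   (* J‖A‖_V: argument stacks *)
and ans = NormalLam | Stuck of string   (* stand-ins for J(⊥) *)

let rec rea (t : tm) (rho : (string * wit) list) : wit =
  match t with
  | Var x ->                       (* rea x ρ ≜ ρ(x) *)
    (try List.assoc x rho with Not_found -> W (fun _ -> Stuck x))
  | Lam (x, body) ->               (* abstract over the pair (ū, ē) *)
    W (function
       | Top -> NormalLam          (* a λ facing no co-value: normal *)
       | Push (u, e) -> let W f = rea body ((x, u) :: rho) in f e)
  | App (t, u) ->                  (* push the (unevaluated) argument *)
    W (fun pi -> let W f = rea t rho in f (Push (rea u rho, pi)))

let run t = let W f = rea t [] in f Top
```

Running `run (App (Lam ("x", Var "x"), Lam ("y", Var "y")))` yields `NormalLam`: arguments are pushed unevaluated, so the interpreter behaves as a call-by-name evaluator, which is precisely the Krivine-machine behavior that Section 3.2 reverse-engineers from variant (1).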

**(3) ‖A → B‖_V ≜ ⟦A⟧_V × ‖B‖**

A notable difference between this variant and the following one is that we can restrict the typing of our environment witnesses to store value witnesses: rather than ρ̄ : ρ ∈ ⟦Γ⟧, we have ρ̄ : ρ ∈ ⟦Γ⟧_V, that is, for each binding x:A in Γ, we assume ρ̄(x) : ρ(x) ∈ ⟦A⟧_V. The function would be typable with a weaker argument ρ̄ : ρ ∈ ⟦Γ⟧ (and would have an equivalent behavior when starting from the empty environment), but that would make the type less precise about the dynamics of evaluation.

rea x^A ρ̄^⟦Γ⟧_V ≜ (ρ̄(x))_V
rea (λx^A. t^B)^{A→B} ρ̄ ≜ λ(v̄^⟦A⟧_V, ē^‖B‖). ⟨ rea t ρ̄[x ↦ v̄] ‖ ē ⟩_B
rea (t^{A→B} u^A) ρ̄ ≜ λπ̄^‖B‖_V. ⟨ rea u ρ̄ ‖ λv̄^⟦A⟧_V. rea t ρ̄ (v̄, (π̄)_V) ⟩_A

Had we kept ρ̄ : ρ ∈ ⟦Γ⟧, we would have only ρ̄(x) in the variable case, but ρ̄[x ↦ (v̄)_V] in the abstraction case, which is equivalent if we do not consider other connectives pushing in the environment. It is interesting to compare the application case with the one of variant (1):

λπ̄^‖B‖_V. rea t ρ̄ (rea u ρ̄, (π̄)_V)

The relation between variant (1) and variant (3) seems to be an inlining of rea u ρ̄, yet the shapes of the terms hint at a distinction between call-by-name and call-by-value evaluation. For now, we will only remark that those two versions are equivalent whenever A is negative, as in that case the cut ⟨ rea u ρ̄ ‖ λv̄. ... ⟩_A applies its left-hand side as an argument to its right-hand side, giving exactly the definition of (1) after β-reduction.

**(4) ‖A → B‖_V ≜ ⟦A⟧_V × ‖B‖_V**

rea x^A ρ̄^⟦Γ⟧_V ≜ (ρ̄(x))_V
rea (λx^A. t^B)^{A→B} ρ̄ ≜ λ(v̄^⟦A⟧_V, π̄^‖B‖_V). rea t ρ̄[x ↦ v̄] π̄
rea (t^{A→B} u^A) ρ̄ ≜ λπ̄^‖B‖_V. ⟨ rea u ρ̄ ‖ λv̄^⟦A⟧_V. rea t ρ̄ (v̄, π̄) ⟩_A

In this variant, like in the previous one (3), it seems rather tempting to strengthen the return type of the rea t^A ρ̄ program to return not a t ∈ ⟦A⟧, but a t ∈ ⟦A⟧_V. In particular, this would allow us to remove the cut ⟨ rea u ρ̄ ‖ λv̄. ... ⟩_A from the application case, instead directly writing (rea u ρ̄, π̄)^‖A→B‖_V.
This strengthening, which seemed to coincide with the strengthening of the environment substitution from ρ ∈ ⟦Γ⟧ to ρ ∈ ⟦Γ⟧_V, does not lead to a well-typed program. The problem is with the application case t^{A→B} u: while we can return a J(⟦B⟧) by using the trick that ⟦B⟧ ≈ (‖B‖_V)⊥, there is no way to build a J(⟦B⟧_V) without any knowledge of B.

### 3.2. Co-terms and stronger, un-simplified types

It is now time to undo the simplification presented in Section 1.4, by moving back to dependent types using the following definitions and types:

c ∈ ⊥ ≜ { c → c₁ → ... → cₙ | cₙ ∈ M_N }

e ∈ ⟦A⟧⊥ ≜ Π(t : T). t ∈ ⟦A⟧ → ⟨ t ‖ e ⟩ ∈ ⊥

t ∈ ‖A‖⊥ ≜ Π(e : E). e ∈ ‖A‖ → ⟨ t ‖ e ⟩ ∈ ⊥

rea : ∀{Γ} t {A} {ρ}. (Γ ⊢ t : A) → ρ ∈ ⟦Γ⟧ → t[ρ] ∈ ⟦A⟧

Instead of starting from a known abstract machine for the lambda-calculus and its type system,

11 Normalization by realizability we propose to reverse-engineer the required machine from the code we have written in the previous subsection. Indeed, the typing constraints of the dependent version force us to exhibit a reduction sequence, that is to define the necessary machine transitions. Consider then the case of λ-abstraction in variant (1): rea pλx A. t B q AÑB ρ fi λpū, ēq }AÑB} V. x rea t ρrx ÞÑ ūs ē y B We transform this code to a case of the dependently-typed, by adding annotations (and otherwise leaving the code structure unchanged). We can first look at the types: we need to lift the definition J p}a Ñ B} V q fi J p A q J p}b}q of variant (1). As we started from a set-theoretic definition }A Ñ B} V fi A ˆ }B}, it is natural to define the co-terms that inhabit }A Ñ B} V, that is, the co-term indexes of the type _ P }A Ñ B} V, as pairs of a term in A and a co-term in B: pu, eq P }A Ñ B} V fi pu P A q pe P }B}q For this definition to make sense, we must specify that pairs pu, eq of a u : T and e : E form valid co-terms. Then, the definition means that those pairs are the only co-terms that satisfy the predicate _ P }A Ñ B} V (defined by recursion on the syntactic type in }_} V ). If this looks strange (we would get three inductive definitions that are defined by mutual recursion on one of their term-level indices), it can also be represented by the following, more heavyweight encoding without inductives, just a recursive functions from syntactic types of the object language to dependent products in our metalanguage, writing a b for the propositional equality this is what our mechanized formalization does: e 0 P }A Ñ B} V fi Σpu : T, e : Eq. pe 0 pu, eqq pu P A q pe P }B}q For presentation purposes, we will keep using the inductive-like definition, which results in more readable programs. For example, the case rea pλx. tq in variant (1) can now be rewritten into the following dependently-typed version: rea tγu pλx A. 
t^B)^{A→B} {ρ} (ρ̄ : ρ ∈ |Γ|) ≔
  λ(u, e) : E. λ((ū : u ∈ |A|, ē : e ∈ ‖B‖) : (u, e) ∈ ‖A → B‖_V).
  ⟨rea t ρ[x ↦ ū] ‖ ē⟩^lam_B

(Under the reading of _ ∈ ‖A → B‖_V as an inductive datatype, the reason why we can abstract over a pair-in-the-object-language λ((u, e) : E) instead of the more general λ(e₀ : E) is that the next pattern-matching necessarily teaches us that e₀ must be equal to a pair, so any other case would be an absurd pattern: matching on a pair covers all possible cases.)

The cut function was defined from the start with a dependent type:

⟨_ ‖ _⟩_A : {t e}. t ∈ |A| → e ∈ ‖A‖ → ⟨t ‖ e⟩ ∈ ⊥⊥

but this is not enough to make the whole term type-check. Indeed, this gives the expression ⟨rea t ρ[x ↦ ū] ‖ ē⟩^lam_B the type ⟨t[ρ, x ↦ u] ‖ e⟩ ∈ ⊥⊥. But we just abstracted on (u, e) ∈ ‖A → B‖_V, in order to form a complete term of type (λx. t)[ρ] ∈ ‖A → B‖_V^⊥, so the expected type is ⟨(λx. t)[ρ] ‖ (u, e)⟩ ∈ ⊥⊥. This explains the small _^lam annotation in the code above: we define it as an auxiliary function of type

_^lam : {x t u e}. ⟨t[x ↦ u] ‖ e⟩ ∈ ⊥⊥ → ⟨λx. t ‖ (u, e)⟩ ∈ ⊥⊥

While we could think of several ways of inhabiting this function, by far the most natural is to specify our yet-unspecified machine reduction c ↝ c′ as containing at least the following family of

reductions:

⟨λx. t ‖ (u, e)⟩ ↝ ⟨t[x ↦ u] ‖ e⟩

Similarly, the application case can be dependently typed as

rea {Γ} (t^{A→B} u^A) ρ ≔ λπ : E. λπ̄ : π ∈ ‖B‖_V. ⟨rea t ρ ‖ (rea u ρ, (π̄)^V)⟩^app_{A→B}

The function (_)^V, which we had defined at the type ⟦‖A‖_V⟧ → ⟦‖A‖⟧, can be given, without changing its implementation, the more precise type {π}. π ∈ ‖A‖_V → π ∈ ‖A‖. This means that ⟨rea t ρ ‖ (rea u ρ, (π̄)^V)⟩_{A→B} has type ⟨t[ρ] ‖ (u[ρ], π)⟩ ∈ ⊥⊥. For the whole term to be well-typed, the auxiliary function _^app needs to have the type {t u π}. ⟨t ‖ (u, π)⟩ ∈ ⊥⊥ → ⟨t u ‖ π⟩ ∈ ⊥⊥, which is inhabited by adding the family of reductions

⟨t u ‖ π⟩ ↝ ⟨t ‖ (u, π)⟩

Experts will have long recognized the Krivine abstract machine for the call-by-name λ-calculus. We would like to point out that some of this was to be expected, and some of it is perhaps more surprising. It is not a surprise that we recovered the same shape of co-terms as in the Krivine machine: we started from the exact same definition of realizers, and realizers precisely determine what the co-terms are, even though our dependent, proof-relevant types partly obscure this relation. We argue that it is, however, less automatic that the transitions themselves would be suggested to us by typing alone; it means they are, in some sense, solely determined by the type of the adequacy lemma. In particular, we recovered the fact that the application transition needs to be specified only for linear co-terms π (in our setting, those that belong not only to ‖N‖, but also to ‖N‖_V).

3.3. Machine reductions and their variants

We will not repeat the process for all four variants of the arrow connective, but directly jump to the conclusion of what syntax for (arrow-related) co-terms they suggest, and which reduction rules they require to be well-typed.
We observe that the first two variants correspond to a call-by-name evaluation strategy, while the latter two correspond to call-by-value; in particular, the latter directly suggest the introduction of a μ̃x.c construction in co-terms.

(1) ‖A → B‖_V ≔ |A| × ‖B‖
    e ::= (u, e) | …
    ⟨λx. t ‖ (u, e)⟩ ↝ ⟨t[x ↦ u] ‖ e⟩
    ⟨t u ‖ π⟩ ↝ ⟨t ‖ (u, π)⟩

(2) ‖A → B‖_V ≔ |A| × ‖B‖_V
    e ::= (u, π) | …
    ⟨λx. t ‖ (u, π)⟩ ↝ ⟨t[x ↦ u] ‖ π⟩
    ⟨t u ‖ π⟩ ↝ ⟨t ‖ (u, π)⟩

(3) ‖A → B‖_V ≔ |A|_V × ‖B‖

The application case is interesting in this setting. Its simplified version is the following:

rea (t^{A→B} u^A) ρ ≔ λπ̄^{‖B‖_V}. ⟨rea u ρ ‖ λv̄_u^{|A|_V}. ⟨rea t ρ ‖ (v̄_u, (π̄)^V)⟩_{A→B}⟩_A

This can be enriched to the following dependent version:

rea {Γ} (t^{A→B} u^A) {ρ} (ρ̄ : ρ ∈ |Γ|_V) ≔
  λπ : E. λπ̄ : π ∈ ‖B‖_V.
  ⟨rea u ρ ‖ λ(v_u : T). λ(v̄_u : v_u ∈ |A|_V). ⟨rea t ρ ‖ (v_u, (π̄)^V)⟩^{app_body}_{A→B}⟩^{app_arg}_A

The annotation _^{app_body} corresponds to a reduction to the machine configuration ⟨t[ρ] ‖ (v_u, π)⟩. Its return type, which determines the machine configuration that should reduce to this one, is constrained not by the expected return type of the rea function as in previous examples, but by the typing of the outer cut

function, which expects a u[ρ] ∈ |A| on the left, and thus on the right an e ∈ ‖A‖ for some co-term e. By definition of e ∈ ‖A‖, the return type of _^{app_body} should thus be of the form ⟨v_u ‖ e⟩ ∈ ⊥⊥. What would be a term e which would respect the reduction equation³ ⟨v_u ‖ e⟩ ↝ ⟨t[ρ] ‖ (v_u, π)⟩, allowing us to define _^{app_body}? This forces us to introduce System L's μ̃ binder [5], to define e ≔ μ̃v_u. ⟨t[ρ] ‖ (v_u, π)⟩, along with the expected reduction ⟨v ‖ μ̃x. c⟩ ↝ c[x ↦ v]. From here, the input and output types of _^{app_arg} are fully determined, and require the following reduction: ⟨t u ‖ π⟩ ↝ ⟨u ‖ μ̃v_u. ⟨t ‖ (v_u, π)⟩⟩. Summing up:

    e ::= (v, e) | μ̃x. c | …
    ⟨λx. t ‖ (v, e)⟩ ↝ ⟨t[x ↦ v] ‖ e⟩
    ⟨v ‖ μ̃x. c⟩ ↝ c[x ↦ v]
    ⟨t u ‖ π⟩ ↝ ⟨u ‖ μ̃x. ⟨t ‖ (x, π)⟩⟩

(4) ‖A → B‖_V ≔ |A|_V × ‖B‖_V
    e ::= (v, π) | μ̃x. c | …
    ⟨λx. t ‖ (v, π)⟩ ↝ ⟨t[x ↦ v] ‖ π⟩
    ⟨v ‖ μ̃x. c⟩ ↝ c[x ↦ v]
    ⟨t u ‖ π⟩ ↝ ⟨u ‖ μ̃x. ⟨t ‖ (x, π)⟩⟩

3.4. Preliminary conclusions

The code itself may not be clear enough to dissect its evaluation order, but the machine transitions, which were imposed on us by the dependent typing requirement, are deafeningly explicit. When interpreting the function type ‖A → B‖_V, requiring truth value witnesses for A gives us call-by-value transitions, while just requiring arbitrary witnesses gives us call-by-name transitions. The significance of choosing ‖B‖ or ‖B‖_V as a second component is much less clear. It appears that the reduction of an application t u always happens against a linear co-term (one in ‖B‖_V); it is only the reduction of the λ-abstraction that is made more liberal, to accept a ‖B‖ rather than a ‖B‖_V. In particular, notice that the versions with ‖B‖_V always apply the continuation π̄ ∈ ‖B‖_V directly to the recursive call on t ∈ |B| in the λx. t case, while the e ∈ ‖B‖ versions use a cut, meaning that the polarity of B will determine whether the realization of t is given the continuation as parameter, or the other way around.
We conjecture that this difference would be significant if our source calculus had some form of control effects. Finally, an interesting thing to note is that we have explored the design space of the semantics of the arrow connective independently from other aspects of our language, such as its product type. Any of these choices for ‖A → B‖_V may be combined with any choice for |A × B|_V (basically, lazy or strict pairs; the Coq formalization has a variant with strict pairs) to form a complete proof⁴. We see this as yet another manifestation of the claimed modularity of realizability and logical-relation approaches, which allows one to study connectives independently from each other once a notion of computation powerful enough to support them all has been fixed.

3.5. Partially mechanized formalization

The best way to make sure a program is type-correct is to run a type-checker on it. We formalized a part of the content of this article in a proof assistant. A Coq development that covers the non-dependent part (all four variants, with the pole left abstract) is available at

³ We stole the idea of seeing the introduction of new co-term constructions as the resolution of equations from Guillaume Munch-Maccagnoni's PhD thesis [9].
⁴ With the exception that it is only possible to strengthen the type of environments from |Γ| to |Γ|_V if all computation rules performing substitution substitute only values; that is a global change.

4. Related work

4.1. Intuitionistic realizability

Paulo Oliva and Thomas Streicher [10] factor the use of classical realizability for a negative fragment of second-order logic as the composition of intuitionistic realizability after a CPS translation. The simplicity of this decomposition may unfortunately not carry over to positive types, in particular the positive sum. In recent work, Danko Ilik [7] proposes a modified version of intuitionistic realizability to work with intuitionistic sums, by embedding the CPS translation inside the definition of realizability. The resulting notion of realizability is in fact quite close to our classical realizability presentation, with a strong forcing relation (corresponding to our set of value witnesses), and a forcing relation defined by double-negation (bi-orthogonality) of the strong forcing. In particular, Danko Ilik remarks that by varying the interplay of these two notions (notably, requiring a strong forcing hypothesis in the semantics of function types), one can move from a call-by-name interpretation to a call-by-value setting, which seems to correspond to our findings; we were not aware of this remark until late in the work on this article.

4.2. Normalization by Evaluation

There are evidently strong links with normalization by evaluation (NbE). The previously cited work of Danko Ilik [7] constructs, as is now a folklore method, NbE as the composition of a soundness proof (embedding of the source calculus into a mathematical statement parametrized on concrete models) followed by a strong completeness proof (instantiation of the mathematical statement into the syntactic model to obtain a normal form). In personal communication, Hugo Herbelin presented orthogonality and NbE through Kripke models as two facets of the same construction. Orthogonality focuses on the (co-)terms realizing the mathematical statement, exhibiting an untyped reduction sequence.
NbE focuses on the typing judgments; its statement guarantees that the resulting normal form is at the expected type, but it does not exhibit any relation between the input and output terms.

4.3. System L

It is not so surprising that the evaluation function exhibited in Section 2.1 embeds a CPS translation, given our observation that the definition of truth and value witnesses determines the evaluation order. Indeed, the latter fact implies that the evaluation order should remain independent from the evaluation order of the meta-language (the dependent λ-calculus used to implement adequacy), and this property is usually obtained by CPS or monadic translations. System L is a direct-style calculus that also has this property of forcing us to be explicit about the evaluation order, while being nicer to work with than the results of CPS encodings. In particular, Guillaume Munch-Maccagnoni shows that it is a good calculus to study classical realizability [8]. Our different variants of truth/falsity witnesses correspond to targeting different subsets of System L, which also determine the evaluation order. The principle of giving control of the evaluation order to the term or the co-term according to polarity is also found in Munch-Maccagnoni's PhD thesis [9]. The computational content of adequacy for System L has been studied, on the λ̄μμ̃ fragment, by Hugo Herbelin in his habilitation thesis [6]. The reduction strategy is fixed to be call-by-name. We note the elegant regularity of the adequacy statements for classical logic, each of the three versions (configurations, terms and co-terms) taking an environment of truth witnesses (for term variables) and an environment of falsity witnesses (for co-term variables).

4.4. Realizability in PTS

Jean-Philippe Bernardy and Marc Lasson [4] have produced some of the most intriguing work on realizability and parametricity we know of. In many ways they go above and beyond this work: they capture realizability predicates not as an ad-hoc membership, but as a rich type in a pure type system (PTS) that is built on top of the source language of realizers, itself a PTS. They also establish a deep connection between realizability and parametricity; we hope that parametricity would be amenable to a treatment resembling ours, but that is purely future work. One thing we wanted to focus on was the computational content of realizability techniques, and this is not described in their work; adequacy is seen as a mapping from a well-typed term to a realizer inhabiting the realizability predicate, but it is a meta-level operation (not described as a program), described solely as an annotation process. We tried to see more in it, though of course it is a composability property of denotational model constructions that they respect the input's structure and can be presented as trivial mappings (e.g., ⟦t u⟧ ≔ app(⟦t⟧, ⟦u⟧)) given enough auxiliary functions.

Conclusion

At which point in a classical realizability proof is the real work done? We have seen that the computational content of adequacy is a normalization function, which can be annotated to reconstruct the reduction sequence. Yet we have shown that there is very little leeway in proofs of adequacy: their computational content is determined by the types of the adequacy lemma and of the truth and falsity witnesses. Finally, we have seen that polarity plays an important role in adequacy programs, even when they ultimately correspond to well-known untyped reduction strategies such as call-by-name or call-by-value. This is yet another argument to study the design space of type-directed or, more generally, polarity-directed reduction strategies.
Acknowledgements

We had the pleasure of discussing this work with Guillaume Munch-Maccagnoni, Jean-Philippe Bernardy, Hugo Herbelin and Marc Lasson; not content with having spread the ideas at the heart of this work, they were generous with their time and explanations. Adrien Guatto's proofreading brought an indispensable outside perspective, and the comments of the anonymous reviewers and of Guillaume Munch-Maccagnoni allowed us to further improve the article considerably.

References

[1] A. Abel. Normalization by Evaluation: Dependent Types and Impredicativity. Habilitation thesis, Ludwig-Maximilians-Universität München.
[2] A. Abel and G. Scherer. On irrelevance and algorithmic equality in predicative type theory. Logical Methods in Computer Science, 8(1).
[3] R. Atkey, N. Ghani, and P. Johann. A relationally parametric model of dependent type theory. In The 41st Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '14, San Diego, CA, USA, January 20-21, 2014.
[4] J. Bernardy and M. Lasson. Realizability and parametricity in pure type systems. In Foundations of Software Science and Computational Structures, 14th International Conference, FOSSACS 2011, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2011, Saarbrücken, Germany, March 26-April 3, 2011.

[5] P. Curien and H. Herbelin. The duality of computation. In Proceedings of the Fifth ACM SIGPLAN International Conference on Functional Programming (ICFP '00), Montreal, Canada, September 18-21, 2000.
[6] H. Herbelin. C'est maintenant qu'on calcule, au cœur de la dualité. Thèse d'habilitation à diriger les recherches (in French).
[7] D. Ilik. Continuation-passing style models complete for intuitionistic logic. Ann. Pure Appl. Logic, 164(6).
[8] G. Munch-Maccagnoni. Focalisation and Classical Realisability (version with appendices). In E. Grädel and R. Kahle, editors, 18th EACSL Annual Conference on Computer Science Logic (CSL '09), volume 5771, Coimbra, Portugal. Springer-Verlag.
[9] G. Munch-Maccagnoni. Syntax and Models of a Non-Associative Composition of Programs and Proofs. PhD thesis, Université Paris Diderot (Paris VII).
[10] P. Oliva and T. Streicher. On Krivine's realizability interpretation of classical second-order arithmetic. Fundam. Inform., 84(2).
[11] A. M. Pitts. Operational semantics and program equivalence. In Applied Semantics, International Summer School, APPSEM 2000, Caminha, Portugal, September 9-15, 2000, Advanced Lectures.
[12] A. J. Turon, J. Thamsborg, A. Ahmed, L. Birkedal, and D. Dreyer. Logical relations for fine-grained concurrency. In The 40th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '13, Rome, Italy, January 23-25, 2013.


### 2. Methods of Proof Types of Proofs. Suppose we wish to prove an implication p q. Here are some strategies we have available to try.

2. METHODS OF PROOF 69 2. Methods of Proof 2.1. Types of Proofs. Suppose we wish to prove an implication p q. Here are some strategies we have available to try. Trivial Proof: If we know q is true then

### In mathematics you don t understand things. You just get used to them. (Attributed to John von Neumann)

Chapter 1 Sets and Functions We understand a set to be any collection M of certain distinct objects of our thought or intuition (called the elements of M) into a whole. (Georg Cantor, 1895) In mathematics

### Introducing Formal Methods. Software Engineering and Formal Methods

Introducing Formal Methods Formal Methods for Software Specification and Analysis: An Overview 1 Software Engineering and Formal Methods Every Software engineering methodology is based on a recommended

### Lecture 3. Mathematical Induction

Lecture 3 Mathematical Induction Induction is a fundamental reasoning process in which general conclusion is based on particular cases It contrasts with deduction, the reasoning process in which conclusion

### Functional Programming. Functional Programming Languages. Chapter 14. Introduction

Functional Programming Languages Chapter 14 Introduction Functional programming paradigm History Features and concepts Examples: Lisp ML 1 2 Functional Programming Functional Programming Languages The

### U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009. Notes on Algebra

U.C. Berkeley CS276: Cryptography Handout 0.1 Luca Trevisan January, 2009 Notes on Algebra These notes contain as little theory as possible, and most results are stated without proof. Any introductory

### Induction. Margaret M. Fleck. 10 October These notes cover mathematical induction and recursive definition

Induction Margaret M. Fleck 10 October 011 These notes cover mathematical induction and recursive definition 1 Introduction to induction At the start of the term, we saw the following formula for computing

### AURA: A language with authorization and audit

AURA: A language with authorization and audit Steve Zdancewic University of Pennsylvania WG 2.8 2008 Security-oriented Languages Limin Jia, Karl Mazurak, Jeff Vaughan, Jianzhou Zhao Joey Schorr and Luke

### Mathematics for Computer Science

mcs 01/1/19 1:3 page i #1 Mathematics for Computer Science revised Thursday 19 th January, 01, 1:3 Eric Lehman Google Inc. F Thomson Leighton Department of Mathematics and the Computer Science and AI Laboratory,

### Chapter 7: Functional Programming Languages

Chapter 7: Functional Programming Languages Aarne Ranta Slides for the book Implementing Programming Languages. An Introduction to Compilers and Interpreters, College Publications, 2012. Fun: a language

### Likewise, we have contradictions: formulas that can only be false, e.g. (p p).

CHAPTER 4. STATEMENT LOGIC 59 The rightmost column of this truth table contains instances of T and instances of F. Notice that there are no degrees of contingency. If both values are possible, the formula

### Satisfiability Checking

Satisfiability Checking First-Order Logic Prof. Dr. Erika Ábrahám RWTH Aachen University Informatik 2 LuFG Theory of Hybrid Systems WS 14/15 Satisfiability Checking Prof. Dr. Erika Ábrahám (RWTH Aachen

### CONTRIBUTIONS TO ZERO SUM PROBLEMS

CONTRIBUTIONS TO ZERO SUM PROBLEMS S. D. ADHIKARI, Y. G. CHEN, J. B. FRIEDLANDER, S. V. KONYAGIN AND F. PAPPALARDI Abstract. A prototype of zero sum theorems, the well known theorem of Erdős, Ginzburg

### Prime Numbers and Irreducible Polynomials

Prime Numbers and Irreducible Polynomials M. Ram Murty The similarity between prime numbers and irreducible polynomials has been a dominant theme in the development of number theory and algebraic geometry.

### Neighborhood Data and Database Security

Neighborhood Data and Database Security Kioumars Yazdanian, FrkdCric Cuppens e-mail: yaz@ tls-cs.cert.fr - cuppens@ tls-cs.cert.fr CERT / ONERA, Dept. of Computer Science 2 avenue E. Belin, B.P. 4025,31055

### 4 Domain Relational Calculus

4 Domain Relational Calculus We now present two relational calculi that we will compare to RA. First, what is the difference between an algebra and a calculus? The usual story is that the algebra RA is

### Lecture 4 -- Sets, Relations, Functions 1

Lecture 4 Sets, Relations, Functions Pat Place Carnegie Mellon University Models of Software Systems 17-651 Fall 1999 Lecture 4 -- Sets, Relations, Functions 1 The Story So Far Formal Systems > Syntax»

### CMCS 312: Programming Languages Lecture 3: Lambda Calculus (Syntax, Substitution, Beta Reduction) Acar & Ahmed 17 January 2008

CMCS 312: Programming Languages Lecture 3: Lambda Calculus (Syntax, Substitution, Beta Reduction) Acar & Ahmed 17 January 2008 Contents 1 Announcements 1 2 Introduction 1 3 Abstract Syntax 1 4 Bound and

### POLYTYPIC PROGRAMMING OR: Programming Language Theory is Helpful

POLYTYPIC PROGRAMMING OR: Programming Language Theory is Helpful RALF HINZE Institute of Information and Computing Sciences Utrecht University Email: ralf@cs.uu.nl Homepage: http://www.cs.uu.nl/~ralf/

### Prime Numbers. Chapter Primes and Composites

Chapter 2 Prime Numbers The term factoring or factorization refers to the process of expressing an integer as the product of two or more integers in a nontrivial way, e.g., 42 = 6 7. Prime numbers are

### CHAPTER 5: MODULAR ARITHMETIC

CHAPTER 5: MODULAR ARITHMETIC LECTURE NOTES FOR MATH 378 (CSUSM, SPRING 2009). WAYNE AITKEN 1. Introduction In this chapter we will consider congruence modulo m, and explore the associated arithmetic called

### DISCRETE MATHEMATICS W W L CHEN

DISCRETE MATHEMATICS W W L CHEN c W W L Chen, 1982, 2008. This chapter originates from material used by the author at Imperial College, University of London, between 1981 and 1990. It is available free

### A first step towards modeling semistructured data in hybrid multimodal logic

A first step towards modeling semistructured data in hybrid multimodal logic Nicole Bidoit * Serenella Cerrito ** Virginie Thion * * LRI UMR CNRS 8623, Université Paris 11, Centre d Orsay. ** LaMI UMR

### Termination Checking: Comparing Structural Recursion and Sized Types by Examples

Termination Checking: Comparing Structural Recursion and Sized Types by Examples David Thibodeau Decemer 3, 2011 Abstract Termination is an important property for programs and is necessary for formal proofs

### Journal of Functional Programming, volume 3 number 4, 1993. Dynamics in ML

Journal of Functional Programming, volume 3 number 4, 1993. Dynamics in ML Xavier Leroy École Normale Supérieure Michel Mauny INRIA Rocquencourt Abstract Objects with dynamic types allow the integration

### 2.1 The Algebra of Sets

Chapter 2 Abstract Algebra 83 part of abstract algebra, sets are fundamental to all areas of mathematics and we need to establish a precise language for sets. We also explore operations on sets and relations

### Mathematical Induction

Mathematical Induction Victor Adamchik Fall of 2005 Lecture 2 (out of three) Plan 1. Strong Induction 2. Faulty Inductions 3. Induction and the Least Element Principal Strong Induction Fibonacci Numbers

### The Epsilon-Delta Limit Definition:

The Epsilon-Delta Limit Definition: A Few Examples Nick Rauh 1. Prove that lim x a x 2 = a 2. (Since we leave a arbitrary, this is the same as showing x 2 is continuous.) Proof: Let > 0. We wish to find

### KNOWLEDGE FACTORING USING NORMALIZATION THEORY

KNOWLEDGE FACTORING USING NORMALIZATION THEORY J. VANTHIENEN M. SNOECK Katholieke Universiteit Leuven Department of Applied Economic Sciences Dekenstraat 2, 3000 Leuven (Belgium) tel. (+32) 16 28 58 09

### 11 Ideals. 11.1 Revisiting Z

11 Ideals The presentation here is somewhat different than the text. In particular, the sections do not match up. We have seen issues with the failure of unique factorization already, e.g., Z[ 5] = O Q(