Combinatory logic
Not to be confused with combinational logic, a topic in digital electronics.

Combinatory logic is a notation to eliminate the need for quantified variables in mathematical logic. It was introduced by Moses Schönfinkel ("Über die Bausteine der mathematischen Logik", Mathematische Annalen 92 (1924): 305–316, doi:10.1007/bf01448013; translated by Stefan Bauer-Mengelberg as "On the building blocks of mathematical logic" in Jean van Heijenoort, 1967, A Source Book in Mathematical Logic, 1879–1931, Harvard Univ. Press: 355–66) and Haskell Curry ("Grundlagen der Kombinatorischen Logik" [Foundations of combinatorial logic], American Journal of Mathematics 52 (3), 1930: 509–536, doi:10.2307/2370619), and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators, which were introduced by Schönfinkel in 1920 with the idea of providing an analogous way to build up functions, and to remove any mention of variables, particularly in predicate logic (Stephen Wolfram, A New Kind of Science, Wolfram Media, 2002, p. 1121, ISBN 1-57955-008-8). A combinator is a higher-order function that uses only function application and earlier defined combinators to define a result from its arguments.

In mathematics

Combinatory logic was originally intended as a 'pre-logic' that would clarify the role of quantified variables in logic, essentially by eliminating them. Another way of eliminating quantified variables is Quine's predicate functor logic. While the expressive power of combinatory logic typically exceeds that of first-order logic, the expressive power of predicate functor logic is identical to that of first-order logic (Quine 1960, 1966, 1976).

The original inventor of combinatory logic, Moses Schönfinkel, published nothing on combinatory logic after his original 1924 paper. Haskell Curry rediscovered the combinators while working as an instructor at Princeton University in late 1927 (Seldin, "The Logic of Curry and Church", 2008). In the late 1930s, Alonzo Church and his students at Princeton invented a rival formalism for functional abstraction, the lambda calculus, which proved more popular than combinatory logic. The upshot of these historical contingencies was that until theoretical computer science began taking an interest in combinatory logic in the 1960s and 1970s, nearly all work on the subject was by Haskell Curry and his students, or by Robert Feys in Belgium. Curry and Feys (1958) and Curry et al. (1972) survey the early history of combinatory logic. For a more modern treatment of combinatory logic and the lambda calculus together, see the book by Barendregt (1984), which reviews the models Dana Scott devised for combinatory logic in the 1960s and 1970s.

In computing

In computer science, combinatory logic is used as a simplified model of computation, used in computability theory and proof theory. Despite its simplicity, combinatory logic captures many essential features of computation.

Combinatory logic can be viewed as a variant of the lambda calculus, in which lambda expressions (representing functional abstraction) are replaced by a limited set of combinators, primitive functions from which bound variables are absent. It is easy to transform lambda expressions into combinator expressions, and combinator reduction is much simpler than lambda reduction. Hence combinatory logic has been used to model some non-strict functional programming languages and hardware. The purest form of this view is the programming language Unlambda, whose sole primitives are the S and K combinators augmented with character input/output. Although not a practical programming language, Unlambda is of some theoretical interest.

Combinatory logic can be given a variety of interpretations. Many early papers by Curry showed how to translate axiom sets for conventional logic into combinatory logic equations (Hindley and Meredith 1990). Dana Scott in the 1960s and 1970s showed how to marry model theory and combinatory logic.

Summary of lambda calculus

Lambda calculus is concerned with objects called lambda-terms, which can be represented by the following three forms of strings:
  • v
  • λv.E₁
  • (E₁ E₂)
where v is a variable name drawn from a predefined infinite set of variable names, and E₁ and E₂ are lambda-terms.

Terms of the form λv.E₁ are called abstractions. The variable v is called the formal parameter of the abstraction, and E₁ is the body of the abstraction. The term λv.E₁ represents the function which, applied to an argument, binds the formal parameter v to the argument and then computes the resulting value of E₁; that is, it returns E₁, with every occurrence of v replaced by the argument.

Terms of the form (E₁ E₂) are called applications. Applications model function invocation or execution: the function represented by E₁ is to be invoked, with E₂ as its argument, and the result is computed. If E₁ (sometimes called the applicand) is an abstraction, the term may be reduced: E₂, the argument, may be substituted into the body of E₁ in place of the formal parameter of E₁, and the result is a new lambda term which is equivalent to the old one. If a lambda term contains no subterms of the form ((λv.E₁) E₂) then it cannot be reduced, and is said to be in normal form.

The expression E[v := a] represents the result of taking the term E and replacing all free occurrences of v in it with a. Thus we write
((λv.E) a) ⇒ E[v := a]
By convention, we take (a b c d ... z) as shorthand for (...(((a b) c) d) ... z) (i.e., application is left-associative).

The motivation for this definition of reduction is that it captures the essential behavior of all mathematical functions. For example, consider the function that computes the square of a number. We might write
The square of x is x*x
(Using "{{tmath|*}}" to indicate multiplication.) x here is the formal parameter of the function. To evaluate the square for a particularargument, say 3, we insert it into the definition in place of theformal parameter:
The square of 3 is 3*3
To evaluate the resulting expression 3*3, we would have to resort to our knowledge of multiplication and the number 3. Since any computation is simply a composition of the evaluation of suitable functions on suitable primitive arguments, this simple substitution principle suffices to capture the essential mechanism of computation. Moreover, in lambda calculus, notions such as '3' and '*' can be represented without any need for externally defined primitive operators or constants. It is possible to identify terms in lambda calculus which, when suitably interpreted, behave like the number 3 and like the multiplication operator (see Church encoding).

Lambda calculus is known to be computationally equivalent in power to many other plausible models of computation (including Turing machines); that is, any calculation that can be accomplished in any of these other models can be expressed in lambda calculus, and vice versa. According to the Church–Turing thesis, both models can express any possible computation.

It is perhaps surprising that lambda calculus can represent any conceivable computation using only the simple notions of function abstraction and application based on simple textual substitution of terms for variables. But even more remarkable is that abstraction is not even required. Combinatory logic is a model of computation equivalent to lambda calculus, but without abstraction. The advantage of this is that evaluating expressions in lambda calculus is quite complicated, because the semantics of substitution must be specified with great care to avoid variable capture problems. In contrast, evaluating expressions in combinatory logic is much simpler, because there is no notion of substitution.
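To make the preceding definitions concrete, the following is a minimal Haskell sketch of lambda terms and of the substitution E[v := a] described above (the data type and the names LVar, Lam, LApp, subst and betaStep are illustrative, not taken from any standard library); it deliberately sidesteps variable capture by assuming that bound variable names never clash with the free variables of the substituted term, which is exactly the complication that combinatory logic avoids altogether.

-- A sketch of lambda terms.
data Lambda
  = LVar String          -- v
  | Lam String Lambda    -- λv.E
  | LApp Lambda Lambda   -- (E1 E2)
  deriving (Show, Eq)

-- Naive substitution E[v := a]: assumes the bound variables of the term
-- never clash with the free variables of a, so no capture-avoiding
-- renaming is attempted.
subst :: String -> Lambda -> Lambda -> Lambda
subst v a (LVar x)     = if x == v then a else LVar x
subst v a (Lam x e)    = if x == v then Lam x e else Lam x (subst v a e)
subst v a (LApp e1 e2) = LApp (subst v a e1) (subst v a e2)

-- One step of beta-reduction at the root: ((λv.E) a) ⇒ E[v := a].
betaStep :: Lambda -> Maybe Lambda
betaStep (LApp (Lam v e) a) = Just (subst v a e)
betaStep _                  = Nothing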

Combinatory calculi

Since abstraction is the only way to manufacture functions in the lambda calculus, something must replace it in the combinatory calculus. Instead of abstraction, combinatory calculus provides a limited set of primitive functions out of which other functions may be built.

Combinatory terms

A combinatory term has one of the following forms:
  • x
  • P
  • (E₁ E₂)
where x is a variable, P is one of the primitive functions, and (E₁ E₂) is the application of combinatory terms E₁ and E₂. The primitive functions themselves are combinators, or functions that, when seen as lambda terms, contain no free variables.

To shorten the notations, a general convention is that (E₁ E₂ E₃ ... Eₙ), or even E₁ E₂ E₃ ... Eₙ, denotes the term (...((E₁ E₂) E₃) ... Eₙ). This is the same general convention (left-associativity) as for multiple application in lambda calculus.
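As a running illustration for the rest of the article, combinatory terms can be represented by a small Haskell data type such as the following sketch (the names are ours; B and C are primitives that only appear in a later section, and I is redundant given S and K but convenient to keep):

-- A sketch of combinatory terms: variables, primitive combinators, application.
data Term
  = Var String       -- x
  | S | K | I        -- primitive combinators (defined below)
  | B | C            -- further primitives, used in a later section
  | App Term Term    -- (E1 E2)
  deriving (Show, Eq)

-- The left-associativity convention: spine [e1, e2, e3] is ((e1 e2) e3).
spine :: [Term] -> Term
spine (t:ts) = foldl App t ts
spine []     = error "spine: empty application"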

Reduction in combinatory logic

In combinatory logic, each primitive combinator comes with a reduction rule of the form
(P x₁ ... xₙ) = E
where E is a term mentioning only variables from the set {x₁ ... xₙ}. It is in this way that primitive combinators behave as functions.

Examples of combinators

The simplest example of a combinator is I, the identity combinator, defined by
(I x) = x
for all terms x. Another simple combinator is K, which manufactures constant functions: (K x) is the function which, for any argument, returns x, so we say
((K x) y) = x
for all terms x and y. Or, following the convention for multiple application,
(K x y) = x
A third combinator is S, which is a generalized version of application:
(S x y z) = (x z (y z))
S applies x to y after first substituting z into each of them. Or put another way, x is applied to y inside the environment z.

Given S and K, I itself is unnecessary, since it can be built from the other two:
((S K K) x)
= (S K K x) = (K x (K x)) = x
for any term x. Note that although ((S K K) x) = (I x) for any x, (S K K) itself is not equal to I. We say the terms are extensionally equal. Extensional equality captures the mathematical notion of the equality of functions: that two functions are equal if they always produce the same results for the same arguments. In contrast, the terms themselves, together with the reduction of primitive combinators, capture the notion of intensional equality of functions: that two functions are equal only if they have identical implementations up to the expansion of primitive combinators when these are applied to enough arguments. There are many ways to implement an identity function; (S K K) and I are among these ways. (S K S) is yet another. We will use the word equivalent to indicate extensional equality, reserving equal for identical combinatorial terms.

A more interesting combinator is the fixed point combinator or Y combinator, which can be used to implement recursion.
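Continuing the Haskell sketch, the reduction rules for I, K and S can be written as a step function that rewrites the outermost available redex, preferring the applicand; repeatedly applying it normalises terms when a normal form exists (again, the names step and normalForm are illustrative):

-- One reduction step; Nothing if no I, K or S redex is found.
step :: Term -> Maybe Term
step (App I x)                 = Just x                          -- (I x) = x
step (App (App K x) _)         = Just x                          -- (K x y) = x
step (App (App (App S x) y) z) = Just (App (App x z) (App y z))  -- (S x y z) = (x z (y z))
step (App f a) = case step f of
  Just f' -> Just (App f' a)     -- reduce inside the applicand first
  Nothing -> App f <$> step a    -- otherwise reduce inside the argument
step _ = Nothing

-- Iterate until no redex remains (may diverge on terms with no normal form).
normalForm :: Term -> Term
normalForm t = maybe t normalForm (step t)

-- Example: normalForm (App (App (App S K) K) (Var "x")) == Var "x",
-- i.e. (S K K x) reduces to x, so (S K K) behaves like I.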

Completeness of the S-K basis

S and K can be composed to produce combinators that are extensionally equal to any lambda term, and therefore, by Church's thesis, to any computable function whatsoever. The proof is to present a transformation, T[ ], which converts an arbitrary lambda term into an equivalent combinator.

T[ ] may be defined as follows:
  1. T[x] ⇒ x
  2. T[(E₁ E₂)] ⇒ (T[E₁] T[E₂])
  3. T[λx.E] ⇒ (K T[E]) (if x does not occur free in E)
  4. T[λx.x] ⇒ I
  5. T[λx.λy.E] ⇒ T[λx.T[λy.E]] (if x occurs free in E)
  6. T[λx.(E₁ E₂)] ⇒ (S T[λx.E₁] T[λx.E₂]) (if x occurs free in E₁ or E₂)
This process is also known as abstraction elimination. This definition is exhaustive: any lambda expression will be subject to exactly one of these rules (see Summary of lambda calculus above).

It is related to the process of bracket abstraction, which takes an expression E built from variables and application and produces a combinator expression [x]E in which the variable x is not free, such that [x]E x = E holds. A very simple algorithm for bracket abstraction is defined by induction on the structure of expressions as follows (David Turner, "Another Algorithm for Bracket Abstraction", The Journal of Symbolic Logic 44 (1979): 267–270, doi:10.2307/2273733):
  1. [x]y := K y
  2. [x]x := I
  3. [x](E₁ E₂) := S([x]E₁)([x]E₂)
Bracket abstraction induces a translation from lambda terms to combinator expressions, by interpreting lambda-abstractions using the bracket abstraction algorithm.
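Using the illustrative Lambda and Term types from the earlier sketches, the T[ ] rules can be transcribed directly; the helper abstract plays the role of bracket abstraction over combinatory terms (this is the plain, unoptimised translation):

-- T[ ]: translate a lambda term into an extensionally equal combinatory term.
translate :: Lambda -> Term
translate (LVar x)   = Var x                             -- rule 1
translate (LApp a b) = App (translate a) (translate b)   -- rule 2
translate (Lam x e)  = abstract x (translate e)          -- rules 3-6, below

-- Bracket abstraction [x]E on combinatory terms: the result contains no
-- occurrence of x and satisfies ([x]E) x = E.
abstract :: String -> Term -> Term
abstract x (Var y) | x == y = I                                          -- rule 4
abstract x (App a b)
  | occurs x (App a b)      = App (App S (abstract x a)) (abstract x b)  -- rule 6
abstract x e                = App K e                                    -- rule 3 (x not free in e)

-- Does the variable x occur in a combinatory term?
occurs :: String -> Term -> Bool
occurs x (Var y)   = x == y
occurs x (App a b) = occurs x a || occurs x b
occurs _ _         = False

For instance, translate (Lam "x" (Lam "y" (LApp (LVar "y") (LVar "x")))) yields the combinator (S (K (S I)) (S (K K) I)) that is derived step by step in the next section.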

Conversion of a lambda term to an equivalent combinatorial term

For example, we will convert the lambda term λx.λy.(y x) to a combinatorial term:
T[λx.λy.(y x)]
= T[λx.T[λy.(y x)]] (by 5)
= T[λx.(S T[λy.y] T[λy.x])] (by 6)
= T[λx.(S I T[λy.x])] (by 4)
= T[λx.(S I (K T[x]))] (by 3)
= T[λx.(S I (K x))] (by 1)
= (S T[λx.(S I)] T[λx.(K x)]) (by 6)
= (S (K (S I)) T[λx.(K x)]) (by 3)
= (S (K (S I)) (S T[λx.K] T[λx.x])) (by 6)
= (S (K (S I)) (S (K K) T[λx.x])) (by 3)
= (S (K (S I)) (S (K K) I)) (by 4)
If we apply this combinatorial term to any two terms x and y, it reduces as follows:
(S (K (S I)) (S (K K) I) x y)
= (K (S I) x (S (K K) I x) y)
= (S I (S (K K) I x) y)
= (I y (S (K K) I x y))
= (y (S (K K) I x y))
= (y (K K x (I x) y))
= (y (K (I x) y))
= (y (I x))
= (y x)
The combinatory representation, (S (K (S I)) (S (K K) I)), is much longer than the representation as a lambda term, λx.λy.(y x). This is typical. In general, the T[ ] construction may expand a lambda term of length n to a combinatorial term of length Θ(n³) (Łukasz Lachowski, "On the Complexity of the Standard Translation of Lambda Calculus into Combinatory Logic", Reports on Mathematical Logic 53 (2018): 19–42, doi:10.4467/20842589RM.18.002.8835).
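Assuming the translate and normalForm sketches given earlier, this reduction can also be checked mechanically:

-- λx.λy.(y x) as a value of the illustrative Lambda type.
flipTerm :: Lambda
flipTerm = Lam "x" (Lam "y" (LApp (LVar "y") (LVar "x")))

-- Applying the translated combinator to the variables x and y and then
-- normalising gives (y x) back:
--   normalForm (App (App (translate flipTerm) (Var "x")) (Var "y"))
--     == App (Var "y") (Var "x")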

Explanation of the T[ ] transformation

The T[ ] transformation is motivated by a desire to eliminate abstraction. Two special cases, rules 3 and 4, are trivial: λx.x is clearly equivalent to I, and λx.E is clearly equivalent to (K T[E]) if x does not appear free in E.

The first two rules are also simple: variables convert to themselves, and applications, which are allowed in combinatory terms, are converted to combinators simply by converting the applicand and the argument to combinators.

It is rules 5 and 6 that are of interest. Rule 5 simply says that to convert a complex abstraction to a combinator, we must first convert its body to a combinator, and then eliminate the abstraction. Rule 6 actually eliminates the abstraction.

λx.(E₁ E₂) is a function which takes an argument, say a, and substitutes it into the lambda term (E₁ E₂) in place of x, yielding (E₁ E₂)[x := a]. But substituting a into (E₁ E₂) in place of x is just the same as substituting it into both E₁ and E₂, so
(E₁ E₂)[x := a] = (E₁[x := a] E₂[x := a])
(λx.(E₁ E₂) a) = ((λx.E₁ a) (λx.E₂ a))
= (S λx.E₁ λx.E₂ a) = ((S λx.E₁ λx.E₂) a)
By extensional equality,
λx.(E₁ E₂) = (S λx.E₁ λx.E₂)
Therefore, to find a combinator equivalent to λx.(E₁ E₂), it is sufficient to find a combinator equivalent to (S λx.E₁ λx.E₂), and
(S T[λx.E₁] T[λx.E₂])
evidently fits the bill. E₁ and E₂ each contain strictly fewer applications than (E₁ E₂), so the recursion must terminate in a lambda term with no applications at all: either a variable, or a term of the form λx.E.

Simplifications of the transformation

η-reduction

The combinators generated by the T[ ] transformation can be made smaller if we take into account the η-reduction rule:
T[λx.(E x)] = T[E] (if x is not free in E)
λx.(E x) is the function which takes an argument, x, and applies the function E to it; this is extensionally equal to the function E itself. It is therefore sufficient to convert E to combinatorial form.

Taking this simplification into account, the example above becomes:
  T[λx.λy.(y x)]
  = ...
  = (S (K (S I)) T[λx.(K x)])
  = (S (K (S I)) K) (by η-reduction)
This combinator is equivalent to the earlier, longer one:
  (S (K (S I)) K x y)
  = (K (S I) x (K x) y)
  = (S I (K x) y)
  = (I y (K x y))
  = (y (K x y))
  = (y x)
Similarly, the original version of the T[ ] transformation transformed the identity function λf.λx.(f x) into (S (S (K S) (S (K K) I)) (K I)). With the η-reduction rule, λf.λx.(f x) is transformed into I.
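In the abstraction-elimination sketch above, the η-rule corresponds to one extra clause, tried before the general application case; a minimal illustrative version, which only catches the η-pattern at the point where an abstraction is being eliminated, could look like this:

-- η-optimised bracket abstraction: [x](E x) := E  when x does not occur in E.
-- A fuller implementation would build this clause into `abstract` itself;
-- here it is a thin wrapper for brevity.
abstractEta :: String -> Term -> Term
abstractEta x (App e (Var y))
  | x == y && not (occurs x e) = e
abstractEta x e                = abstract x e   -- otherwise rules 3, 4, 6 as before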

One-point basis

There are one-point bases from which every combinator, and hence a combinator extensionally equal to any lambda term, can be composed. The simplest example of such a basis is {X} where:
X ≡ λx.((x S) K)
It is not difficult to verify that:
X (X (X X)) =ηβ K and X (X (X (X X))) =ηβ S.
Since {K, S} is a basis, it follows that {X} is a basis too. The Iota programming language uses X as its sole combinator.

Another simple example of a one-point basis is:
X' ≡ λx.(x K S K) with (X' X') X' =β K and X' (X' X') =β S
X' does not need η contraction in order to produce K and S. In fact, there exist infinitely many such bases (Mayer Goldberg, "A construction of one-point bases in extended lambda calculi", Information Processing Letters 89 (2004): 281–286, doi:10.1016/j.ipl.2003.12.005).

Combinators B, C

In addition to S and K, Schönfinkel's paper included two combinators which are now called B and C, with the following reductions:
(C f g x) = ((f x) g)
(B f g x) = (f (g x))
He also explains how they in turn can be expressed using only S and K:
B = (S (K S) K)
C = (S (S (K (S (K S) K)) S) (K K))
These combinators are extremely useful when translating predicate logic or lambda calculus into combinator expressions. They were also used by Curry, and much later by David Turner, whose name has been associated with their computational use. Using them, we can extend the rules for the transformation as follows:
  1. T[x] ⇒ x
  2. T[(E₁ E₂)] ⇒ (T[E₁] T[E₂])
  3. T[λx.E] ⇒ (K T[E]) (if x is not free in E)
  4. T[λx.x] ⇒ I
  5. T[λx.λy.E] ⇒ T[λx.T[λy.E]] (if x is free in E)
  6. T[λx.(E₁ E₂)] ⇒ (S T[λx.E₁] T[λx.E₂]) (if x is free in both E₁ and E₂)
  7. T[λx.(E₁ E₂)] ⇒ (C T[λx.E₁] T[E₂]) (if x is free in E₁ but not E₂)
  8. T[λx.(E₁ E₂)] ⇒ (B T[E₁] T[λx.E₂]) (if x is free in E₂ but not E₁)
Using B and C combinators, the transformation of λx.λy.(y x) looks like this:
   T[λx.λy.(y x)]
   = T[λx.T[λy.(y x)]]
   = T[λx.(C T[λy.y] x)] (by rule 7)
   = T[λx.(C I x)]
   = (C I) (η-reduction)
   = C* (traditional canonical notation: X* = X I)
   = I′ (traditional canonical notation: X′ = C X)
And indeed, (C I x y) does reduce to (y x):
   (C I x y) = (I y x) = (y x)
The motivation here is that B and C are limited versions of S. Whereas S takes a value and substitutes it into both the applicand and its argument before performing the application, C performs the substitution only in the applicand, and B only in the argument.

The modern names for the combinators come from Haskell Curry's doctoral thesis of 1930 (see B, C, K, W System). In Schönfinkel's original paper, what we now call S, K, I, B and C were called S, C, I, Z, and T respectively.

The reduction in combinator size that results from the new transformation rules can also be achieved without introducing B and C, as demonstrated in Section 3.2 of John Tromp, "Binary Lambda Calculus and Combinatory Logic", in Cristian S. Calude (ed.), Randomness and Complexity, from Leibniz to Chaitin, World Scientific Publishing Company, 2008.
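In the Haskell sketch, rules 7 and 8 become two extra cases in the abstraction step; the illustrative Term type above already carries constructors B and C, whose intended reductions are the ones displayed in this section:

-- Abstraction elimination using rules 3, 4 and 6-8.
-- Assumed reductions for the new primitives:
--   (C f g x) = ((f x) g)        (B f g x) = (f (g x))
abstractBC :: String -> Term -> Term
abstractBC x (Var y) | x == y = I                                       -- rule 4
abstractBC x (App a b) =
  case (occurs x a, occurs x b) of
    (True,  True ) -> App (App S (abstractBC x a)) (abstractBC x b)     -- rule 6
    (True,  False) -> App (App C (abstractBC x a)) b                    -- rule 7
    (False, True ) -> App (App B a) (abstractBC x b)                    -- rule 8
    (False, False) -> App K (App a b)                                   -- rule 3
abstractBC x e = App K e                                                -- rule 3

Without the η-rule, this version translates λx.λy.(y x) to (B (C I) I), which is extensionally equal to the (C I) obtained above.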

CLK versus CLI calculus

A distinction must be made between the CLK as described in this article and the CLI calculus. The distinction corresponds to that between the λK and the λI calculus. Unlike the λK calculus, the λI calculus restricts abstractions to:
λx.E where x has at least one free occurrence in E.
As a consequence, combinator K is not present in the λI calculus nor in the CLI calculus. The constants of CLI are: I, B, C and S, which form a basis from which all CLI terms can be composed (modulo equality). Every λI term can be converted into an equal CLI combinator according to rules similar to those presented above for the conversion of λK terms into CLK combinators. See chapter 9 in Barendregt (1984).

Reverse conversion

The conversion L[ ] from combinatorial terms to lambda terms is trivial:
L[I] = λx.x
L[K] = λx.λy.x
L[C] = λx.λy.λz.(x z y)
L[B] = λx.λy.λz.(x (y z))
L[S] = λx.λy.λz.(x z (y z))
L[(E₁ E₂)] = (L[E₁] L[E₂])
Note, however, that this transformation is not the inverse transformation of any of the versions of T[ ] that we have seen.
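Over the illustrative types used throughout, the same conversion is a one-line-per-combinator Haskell function (a variable case is added so that open terms can be converted as well):

-- L[ ]: convert a combinatory term back into a lambda term.
toLambda :: Term -> Lambda
toLambda I         = Lam "x" (LVar "x")
toLambda K         = Lam "x" (Lam "y" (LVar "x"))
toLambda C         = Lam "x" (Lam "y" (Lam "z"
                       (LApp (LApp (LVar "x") (LVar "z")) (LVar "y"))))
toLambda B         = Lam "x" (Lam "y" (Lam "z"
                       (LApp (LVar "x") (LApp (LVar "y") (LVar "z")))))
toLambda S         = Lam "x" (Lam "y" (Lam "z"
                       (LApp (LApp (LVar "x") (LVar "z"))
                             (LApp (LVar "y") (LVar "z")))))
toLambda (App a b) = LApp (toLambda a) (toLambda b)
toLambda (Var x)   = LVar x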

Undecidability of combinatorial calculus

A normal form is any combinatory term in which the primitive combinators that occur, if any, are not applied to enough arguments to be simplified. It is undecidable whether a general combinatory term has a normal form, whether two combinatory terms are equivalent, and so on. This is equivalent to the undecidability of the corresponding problems for lambda terms. However, a direct proof is as follows:

First, observe that the term
Ω = (S I I (S I I))
has no normal form, because it reduces to itself after three steps, as follows:
  (S I I (S I I))
  = (I (S I I) (I (S I I)))
  = (S I I (I (S I I)))
  = (S I I (S I I))
and clearly no other reduction order can make the expression shorter.

Now, suppose N were a combinator for detecting normal forms, such that
  (N x) = T, if x has a normal form
  (N x) = F, otherwise.
(Where T and F represent the conventional Church encodings of true and false, λx.λy.x and λx.λy.y, transformed into combinatory logic. The combinatory versions have T = K and F = (K I).)
Now let
Z = (C (C (B N (S I I)) Ω) I)
and consider the term (S I I Z). Does (S I I Z) have a normal form? It does if and only if the following do also:
  (S I I Z)
  = (I Z (I Z))
  = (Z (I Z))
  = (Z Z)
  = (C (C (B N (S I I)) Ω) I Z) (definition of Z)
  = (C (B N (S I I)) Ω Z I)
  = (B N (S I I) Z Ω I)
  = (N (S I I Z) Ω I)
Now we need to apply N to (S I I Z). Either (S I I Z) has a normal form, or it does not. If it does have a normal form, then the foregoing reduces as follows:
  (N (S I I Z) Ω I)
  = (K Ω I) (definition of N)
  = Ω
but Ω does not have a normal form, so we have a contradiction. But if (S I I Z) does not have a normal form, the foregoing reduces as follows:
  (N (S I I Z) Ω I)
  = (K I Ω I) (definition of N)
  = (I I)
  = I
which means that the normal form of (S I I Z) is simply I, another contradiction. Therefore, the hypothetical normal-form combinator N cannot exist.

The combinatory logic analogue of Rice's theorem says that there is no complete nontrivial predicate. A predicate is a combinator that, when applied, returns either T or F. A predicate N is nontrivial if there are two arguments A and B such that N A = T and N B = F. A combinator N is complete if and only if N M has a normal form for every argument M. The analogue of Rice's theorem then says that every complete predicate is trivial. The proof of this theorem is rather simple.

Proof: By reductio ad absurdum. Suppose there is a complete nontrivial predicate, say N. Because N is supposed to be nontrivial there are combinators A and B such that
(N A) = T and (N B) = F.
Define NEGATION ≡ λx.(if (N x) then B else A) ≡ λx.((N x) B A)
Define ABSURDUM ≡ (Y NEGATION)
The fixed point theorem gives: ABSURDUM = (NEGATION ABSURDUM), for
ABSURDUM ≡ (Y NEGATION) = (NEGATION (Y NEGATION)) ≡ (NEGATION ABSURDUM).
Because N is supposed to be complete either:
  1. (N ABSURDUM) = F or
  2. (N ABSURDUM) = T
  • Case 1: F = (N ABSURDUM) = N (NEGATION ABSURDUM) = (N A) = T, a contradiction.
  • Case 2: T = (N ABSURDUM) = N (NEGATION ABSURDUM) = (N B) = F, again a contradiction.
Hence (N ABSURDUM) is neither T nor F, which contradicts the presupposition that N would be a complete nontrivial predicate. Q.E.D.

From this undecidability theorem it immediately follows that there is no complete predicate that can discriminate between terms that have a normal form and terms that do not have a normal form. It also follows that there is no complete predicate, say EQUAL, such that:
(EQUAL A B) = T if A = B and (EQUAL A B) = F if A ≠ B.
If EQUAL existed, then for all A, λx.(EQUAL x A) would have to be a complete nontrivial predicate.
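As a small sanity check on the first part of this argument, the term Ω can be written with the illustrative Term type and reducer sketched earlier; the point is only that it has no normal form, so any attempt to normalise it diverges:

-- Ω = (S I I (S I I)), in the Term sketch from earlier sections.
sii, omegaTerm :: Term
sii       = App (App S I) I    -- (S I I)
omegaTerm = App sii sii        -- (S I I (S I I))

-- `normalForm omegaTerm` never terminates: every term reachable from Ω
-- still contains a redex. (The three-step cycle shown above picks its
-- redexes by hand; the sketched `step` follows a different infinite
-- reduction sequence, but likewise never reaches a normal form.)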

Applications

Compilation of functional languages

David Turner used his combinators to implement the SASL programming language.

Kenneth E. Iverson used primitives based on Curry's combinators in his J programming language, a successor to APL. This enabled what Iverson called tacit programming, that is, programming in functional expressions containing no variables, along with powerful tools for working with such programs. It turns out that tacit programming is possible in any APL-like language with user-defined operators (Edward Cherlin, "Pure Functions in APL and J", Proceedings of the International Conference on APL '91, 1991: 88–93, doi:10.1145/114054.114065).

Logic

The Curry–Howard isomorphism implies a connection between logic and programming: every proof of a theorem of intuitionistic logic corresponds to a reduction of a typed lambda term, and conversely. Moreover, theorems can be identified with function type signatures. Specifically, a typed combinatory logic corresponds to a Hilbert system in proof theory.

The K and S combinators correspond to the axioms
AK: A → (B → A)
AS: (A → (B → C)) → ((A → B) → (A → C))
and function application corresponds to the detachment (modus ponens) rule
MP: from A and A → B infer B.
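This correspondence can be seen concretely in a typed functional language: the most general types of K and S, written as ordinary Haskell functions, are exactly the axiom schemes AK and AS read as types, and applying one function to another plays the role of modus ponens (a brief sketch, independent of the formal development in this section):

-- K corresponds to AK: A → (B → A)
k :: a -> b -> a
k x _ = x

-- S corresponds to AS: (A → (B → C)) → ((A → B) → (A → C))
s :: (a -> b -> c) -> (a -> b) -> a -> c
s x y z = x z (y z)

-- Function application is modus ponens: from proofs of A → B and A we get
-- a proof of B. For example, S K K is a proof of A → A:
skk :: a -> a
skk = s k k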
The calculus consisting of AK, AS, and MP is complete for the implicational fragment of intuitionistic logic, which can be seen as follows. Consider the set W of all deductively closed sets of formulas, ordered by inclusion. Then ⟨W, ⊆⟩ is an intuitionistic Kripke frame, and we define a model ⊩ in this frame by
X ⊩ A if and only if A ∈ X.
This definition obeys the conditions on satisfaction of →: on one hand, if X ⊩ A → B, and Y ∈ W is such that Y ⊇ X and Y ⊩ A, then Y ⊩ B by modus ponens. On the other hand, if X ⊮ A → B, then X, A ⊬ B by the deduction theorem, thus the deductive closure of X ∪ {A} is an element Y ∈ W such that Y ⊇ X, Y ⊩ A, and Y ⊮ B.

Let A be any formula which is not provable in the calculus. Then A does not belong to the deductive closure X of the empty set, thus X ⊮ A, and A is not intuitionistically valid.

References

{{reflist}}

Further reading

  • Barendregt, Hendrik Pieter (1984). The Lambda Calculus, Its Syntax and Semantics. Studies in Logic and the Foundations of Mathematics, Volume 103. North Holland. ISBN 0-444-87508-5.
  • Curry, Haskell B.; Feys, Robert (1958). Combinatory Logic, Vol. I. Amsterdam: North Holland. ISBN 0-7204-2208-6.
  • Curry, Haskell B.; Hindley, J. Roger; Seldin, Jonathan P. (1972). Combinatory Logic, Vol. II. Amsterdam: North Holland. ISBN 0-7204-2208-6.
  • Field, Anthony J.; Harrison, Peter G. (1998). Functional Programming. Addison-Wesley. ISBN 0-201-19249-7.
  • Hindley, J. Roger; Meredith, David (1990). "Principal type-schemes and condensed detachment". Journal of Symbolic Logic 55 (1): 90–105. doi:10.2307/2274956.
  • Hindley, J. R.; Seldin, J. P. (2008). λ-calculus and Combinators: An Introduction. Cambridge University Press.
  • Paulson, Lawrence C. (1995). Foundations of Functional Programming. University of Cambridge.
  • Quine, W. V. (1960). "Variables explained away". Proceedings of the American Philosophical Society 104 (3): 343–347. Reprinted as Chapter 23 of Quine's Selected Logic Papers (1966), pp. 227–235.
  • Schönfinkel, Moses (1924). "Über die Bausteine der mathematischen Logik", translated as "On the Building Blocks of Mathematical Logic" in From Frege to Gödel: a source book in mathematical logic, 1879–1931, Jean van Heijenoort, ed. Harvard University Press, 1967. ISBN 0-674-32449-8. The article that founded combinatory logic.
  • Smullyan, Raymond (1985). To Mock a Mockingbird. Knopf. ISBN 0-394-53491-3. A gentle introduction to combinatory logic, presented as a series of recreational puzzles using bird watching metaphors.
  • Smullyan, Raymond (1994). Diagonalization and Self-Reference. Oxford Univ. Press. Chapters 17–20 are a more formal introduction to combinatory logic, with a special emphasis on fixed point results.
  • Sørensen, Morten Heine B.; Urzyczyn, Paweł (1999). Lectures on the Curry–Howard Isomorphism. University of Copenhagen and University of Warsaw.
  • Wolfengagen, V. E. (2003). Combinatory logic in programming: Computations with objects through examples and exercises, 2nd ed. Moscow: "Center JurInfoR" Ltd. ISBN 5-89158-101-9.

