1 Linguistic Introduction
The Lambek calculus [23] is a substructural, noncommutative logical system (a variant of linear logic [15] in its intuitionistic noncommutative version [1]) that serves as the logical base for categorial grammars, a formalism that aims to describe natural language by means of logical derivability (see Buszkowski [9], Carpenter [11], Morrill [30], Moot and Retoré [28], etc.). The idea of categorial grammar goes back to the works of Ajdukiewicz [2] and Bar-Hillel [3], and it afterwards developed into several closely related frameworks, including combinatory categorial grammars (CCG, Steedman [39]), categorial dependency grammars (CDG, Dikovsky and Dekhtyar [12]), and Lambek categorial grammars. A categorial grammar assigns syntactic categories (types) to words of the language. In the Lambek setting, types are constructed using two division operations, \ and /, and the product, ·. Intuitively, A\B denotes the type of a syntactic object that lacks something of type A on the left side to become an object of type B; B/A is symmetric; the product stands for concatenation. The Lambek calculus provides a system of rules for reasoning about syntactic types.
For a quick example, consider the sentence “John loves Mary.” Let “John” and “Mary” be of type N (noun), and let “loves” receive the type of the transitive verb, (N\S)/N: it takes a noun from the left and a noun from the right, yielding a sentence, S. This sentence is judged grammatical, because N, (N\S)/N, N → S is a theorem in the Lambek calculus (and even in the Ajdukiewicz–Bar-Hillel logic for basic categorial grammars).
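To illustrate, here is a minimal sketch of this reduction check in the basic (AB) setting, using only forward and backward application; the encoding is ours and it does not implement full Lambek derivability, only the applicative fragment sufficient for this example.

```python
# A sketch of basic (AB) categorial reduction for "John loves Mary",
# assuming the standard types N and (N\S)/N.  This checks only forward
# and backward application, greedily, left to right.

from dataclasses import dataclass

@dataclass(frozen=True)
class Prim:
    name: str

@dataclass(frozen=True)
class Over:      # B/A: seeks an A on the right, yields B
    res: object
    arg: object

@dataclass(frozen=True)
class Under:     # A\B: seeks an A on the left, yields B
    arg: object
    res: object

def reduce_once(types):
    """Try one application step; return the new list or None."""
    for i in range(len(types) - 1):
        l, r = types[i], types[i + 1]
        if isinstance(l, Over) and l.arg == r:    # B/A, A  =>  B
            return types[:i] + [l.res] + types[i + 2:]
        if isinstance(r, Under) and r.arg == l:   # A, A\B  =>  B
            return types[:i] + [r.res] + types[i + 2:]
    return None

def reduces_to(types, goal):
    while len(types) > 1:
        nxt = reduce_once(types)
        if nxt is None:
            return False
        types = nxt
    return types == [goal]

N, S = Prim("N"), Prim("S")
tv = Over(Under(N, S), N)          # (N\S)/N, the type of "loves"
print(reduces_to([N, tv, N], S))   # True: "John loves Mary" is a sentence
```

Greedy reduction suffices here because each step is forced; a complete Lambek-style prover would need genuine proof search.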
The Lambek calculus is capable of handling more complicated situations, including dependent clauses: “the girl whom John loves,” parsed as N using the types “the”: N/CN, “girl”: CN, “whom”: (CN\CN)/(S/N), “loves”: (N\S)/N (here CN stands for “common noun,” a noun without an article), and coordination: “John loves Mary and Pete loves Kate,” where “and” is (S\S)/S.
There are, however, even more sophisticated cases for which the pure Lambek calculus is known to be insufficient (see, for example, [30][28]). On the one hand, for a noun phrase like “the girl whom John met yesterday” it is problematic to find a correct type for “whom,” since the dependent clause “John met yesterday” expects the lacking noun in the middle (“John met … yesterday”; the “…” place is called a gap), and is therefore neither of type S/N nor of type N\S. This phenomenon is called medial extraction. On the other hand, the grammar sketched above generates, for example, *“the girl whom John loves Mary and Pete loves.” The asterisk indicates ungrammaticality, but “John loves Mary and Pete loves” is nevertheless of type S/N. To avoid this, one needs to block extraction from certain syntactic structures (e.g., compound sentences), called islands [38][30].
These issues can be addressed by extending the Lambek calculus with extra connectives (which allow more theorems to be derived) and also with a more sophisticated syntactic structure (which allows blocking unwanted derivations). In the next section, we follow Morrill and Valentín [30][33] and define an extension of the Lambek calculus with a subexponential modality (which allows medial and also so-called parasitic extraction) and brackets (for creating islands).
2 Logical Introduction
In order to block ungrammatical extractions, such as those discussed above, Morrill [29] and Moortgat [27] introduced an extension of the Lambek calculus with brackets that create islands. For the second issue, medial extraction, Morrill and Valentín [4][33] suggest using a modality which they call “exponential,” in the spirit of Girard’s exponential in linear logic [15]. We rather use the term “subexponential,” due to Nigam and Miller [34], since this modality allows only some of the structural rules (permutation and contraction, but not weakening). The difference from [34], however, lies in the noncommutativity of the whole system and the nonstandard nature of the contraction rule.
We consider the Lambek calculus with the unit constant [24], extended with brackets and a subexponential ! controlled by the rules from [33]. The resulting calculus is a conservative fragment of the system of Morrill and Valentín [33].
Due to brackets, the syntax of this calculus is more involved than that of a standard sequent calculus. Derivable objects are sequents of the form Π → A. The antecedent Π is a structure called a metaformula (or configuration); the succedent A is a formula. Metaformulae are built from formulae (types) using two metasyntactic operators: comma and brackets. Formulae, in their turn, are built from primitive types (variables) and the unit constant using Lambek’s binary connectives, \, /, and ·, and three unary connectives, ⟨⟩, []^{-1}, and !. The first two unary connectives operate on brackets; the last one is the subexponential used for medial extraction.
Metaformulae are denoted by capital Greek letters; Γ(Δ) stands for Γ with a designated occurrence of a metaformula (in particular, a formula) Δ. Metaformulae are allowed to be empty; the empty metaformula is denoted by Λ.
The axioms of the calculus are A → A and Λ → 1, and the rules are as follows:
The permutation rules for ! allow medial extraction. The relative pronoun “whom” now receives the type (CN\CN)/(S/!N), and the noun phrase “the girl whom John met yesterday” becomes derivable (the type for “yesterday” is (N\S)\(N\S), a modifier of the verb phrase):
The permutation rule puts !N into the correct place (“John met … yesterday”).
For brackets, consider the following ungrammatical example: *“the book which John laughed without reading.” In the original Lambek calculus, it would be generated by the following derivable sequent:
In the grammar with brackets, however, “without” receives a syntactic type making the without-clause an island that cannot be penetrated by extraction. Thus, the following sequent is not derivable
and the ungrammatical example gets ruled out.
Finally, the nonstandard contraction rule, which governs both ! and brackets, was designed for handling a rarer phenomenon called parasitic extraction. It appears in examples like “the paper that John signed without reading.” Compare with the ungrammatical example considered before: now the dependent clause contains two gaps, and one of them is inside an island (“John signed … [without reading …]”); both gaps are filled with the same !N:
This construction allows potentially infinite recursion, nesting islands with parasitic extraction. On the other hand, ungrammatical examples like *“the book that John gave to,” with two gaps outside islands (“John gave … to …”), are not derived with the bracket-aware contraction rule, but could be derived using contraction in its standard, bracket-unaware form: from Γ(!A, !A) → B derive Γ(!A) → B.
The system with the standard contraction rule instead of the bracket-aware one is a conservative extension of its fragment without brackets. In an earlier paper [20] we showed that the latter is undecidable. With the bracket-aware contraction rule, however, the bracket-free fragment has only permutation rules for !, and this fragment is decidable (in fact, it belongs to NP). Therefore, in contrast to [20], the undecidability proof in this paper (Section 5) crucially depends on brackets. On the other hand, in [20] we also proved decidability of a fragment of a calculus with !, but without brackets. In the calculus considered in this paper, brackets control the number of contraction applications, and hence we are now able to show membership in NP for a different, broad fragment (Section 6), which includes brackets.
It can easily be seen that the calculus with bracket modalities but without ! also belongs to the class NP. Moreover, as shown in [21], there exists even a polynomial algorithm for deriving formulae of bounded order (connective alternation and bracket nesting depth) in the calculus with brackets but without !. This algorithm uses proof nets, following the ideas of Pentus [36]. As opposed to [21], as we show here, in the presence of ! the derivability problem is undecidable.
In short, [20] is about the calculus with !, but without brackets; [21] is about the calculus with brackets, but without !. This paper is about the calculus with both ! and brackets, interacting with each other via the bracket-aware contraction rule.
The rest of this paper is organised as follows. In Section 3 we formulate the cut elimination theorem and sketch the proof strategy; the detailed proof is given in Appendix I. In Section 4 we define two intermediate calculi used in our undecidability proof. In Section 5 we prove the main result of this paper: the calculus is undecidable. This solves an open question posed by Morrill and Valentín [33] (the other open question from [33], undecidability for the case without brackets, is solved in our previous paper [20]). In Section 6 we consider a practically interesting fragment for which Morrill and Valentín [33] present an exponential time algorithm, and strengthen their result by proving an NP upper bound for the derivability problem in this fragment. Section 7 concludes and outlines future research.
3 Cut Elimination in
Cut elimination is a natural property that one expects a decent logical system to have. For example, cut elimination entails the subformula property: each formula that appears somewhere in a cut-free derivation is a subformula of the goal sequent. (Note that for metaformulae this doesn’t hold, since brackets get removed by applications of some of the rules.)
Theorem 3.1 is claimed in [33], but without a detailed proof. In this section we give a sketch of the proof strategy; the complete proof is in Appendix I.
For the original Lambek calculus, cut elimination was shown by Lambek [23] and goes straightforwardly by induction; Moortgat [27] extended Lambek’s proof to the Lambek calculus with brackets (but without !). It is well-known, however, that in the presence of a contraction rule direct induction doesn’t work. Therefore, one needs more sophisticated cut elimination strategies.
The standard strategy, going back to Gentzen’s Hauptsatz [14], replaces the cut (Schnitt) rule with a more general rule called mix (Mischung). Mix is a combination of cut and contraction, and this more general rule can be eliminated by straightforward induction. For linear logic with the exponential obeying standard rules, cut elimination is due to Girard [15]; a detailed exposition of the cut elimination procedure using mix is presented in [25, Appendix A].
Due to the subtle nature of our contraction rule, however, formulating the mix rule is problematic. Therefore, here we follow another strategy, “deep cut elimination” by Braüner and de Paiva [5][6]; similar ideas are also used in [7] and [13]. As usual, we eliminate one cut and then proceed by induction.
Lemma 1
Let be derived from and using the cut rule, and and have cut-free derivations and . Then also has a cut-free derivation.
We proceed by nested induction on two parameters: (1) the complexity of the formula being cut; (2) the total number of rule applications in and . Induction goes smoothly for all cases, except the case where the last rule in is and the last rule in is :
(Here stands for , if .) The naïve attempt,
fails, since for the lower cut the first parameter (the complexity of the formula being cut) is the same, while the second one is uncontrolled. Instead, the “deep” cut elimination strategy goes inside the right derivation and traces the active occurrences up to the applications of the rule which introduced them. Instead of these applications we put cuts with the left premise, and replace the cut formula down the traces. The new cut instances have a smaller first parameter (the formula under ! is simpler than the !-formula itself) and can be eliminated by induction.
Theorem 3.1
Every sequent derivable in the calculus has a derivation without the cut rule.
4 Calculi Without Brackets: , ,
In this section we consider more traditional versions of the Lambek calculus with ! that don’t include bracket modalities. This is needed as a technical step in our undecidability proof (Section 5). Types (formulae) of these calculi are built from primitive types using Lambek’s connectives, \, /, and ·, and the subexponential, !. Unlike before, metaformulae are now merely linearly ordered sequences of formulae (possibly empty), without brackets, and the sequent notation simplifies accordingly.
First we define the base calculus. It includes the standard axioms and rules for the Lambek connectives and the unit constant (see the rules in Section 2). For the subexponential modality !, the introduction rules and the permutation rules are also the same, with the natural modification due to the simpler antecedent syntax. The contraction rule, however, is significantly different, since it is no longer controlled by brackets:
The full set of axioms and rules of is presented in Appendix II.
This calculus is a conservative fragment of another system by Morrill and Valentín [33]. It could also be used for modelling medial and parasitic extraction, but it is not as fine-grained as the bracketed system: it derives ungrammatical examples like *“the paper that John sent to” (see Section 2).
In order to construct a mapping of the bracketed calculus into the bracket-free one, we define the bracket-forgetting projection (BFP) of formulae and metaformulae, which removes all brackets and bracket modalities (⟨⟩ and []^{-1}). The BFP of a formula is again a formula, but in the language without ⟨⟩ and []^{-1}; the BFP of a metaformula is a sequence of formulae. The following lemma is proved by induction on the derivation.
Lemma 2
If , then .
Note that the converse implication doesn’t hold, i.e., this mapping is not conservative. Moreover, the bracket-free calculus is not a conservative fragment of the bracketed one: in the fragment of the latter without brackets, contraction is not admissible.
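As an illustration, here is a minimal sketch of such a bracket-forgetting projection on a toy formula representation; the encoding and the constructor names are ours, not the paper’s.

```python
# A sketch of the bracket-forgetting projection (BFP) on a toy formula
# representation (our own encoding, chosen for illustration only).

from dataclasses import dataclass
from typing import List, Union

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Bin:                 # binary connective: "\\", "/" or "*"
    op: str
    left: "Formula"
    right: "Formula"

@dataclass(frozen=True)
class Un:                  # unary connective: "!", "<>" or "[]^-1"
    op: str
    arg: "Formula"

Formula = Union[Var, Bin, Un]

def bfp(f: Formula) -> Formula:
    """Remove bracket modalities from a formula; keep everything else."""
    if isinstance(f, Var):
        return f
    if isinstance(f, Bin):
        return Bin(f.op, bfp(f.left), bfp(f.right))
    if f.op in ("<>", "[]^-1"):    # bracket modalities vanish
        return bfp(f.arg)
    return Un(f.op, bfp(f.arg))    # "!" is kept

# A metaformula is a nested list, an inner list standing for a bracketed
# part; its BFP is the flat sequence of the BFPs of its formulae.
Meta = List[Union[Formula, list]]

def bfp_meta(m: Meta) -> List[Formula]:
    out: List[Formula] = []
    for item in m:
        if isinstance(item, list):
            out.extend(bfp_meta(item))   # brackets are forgotten
        else:
            out.append(bfp(item))
    return out

a, b = Var("a"), Var("b")
print(bfp(Un("!", Un("<>", a))) == Un("!", a))        # prints True
print(bfp_meta([a, [Un("[]^-1", b), a]]) == [a, b, a])  # prints True
```

The non-conservativity noted above means that `bfp` may turn an underivable bracketed sequent into a derivable bracket-free one, so the projection is sound in one direction only (Lemma 2).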
The second calculus is obtained from the first one by adding weakening for !:
In this calculus, ! is equipped with a full set of structural rules (permutation, contraction, and weakening), i.e., it is the exponential of linear logic [15].
The cut rule in both of these calculi can be eliminated by the same “deep” strategy as before. On the other hand, since the contraction rule in these calculi is standard, one can also use the traditional way with mix, as in [25, Appendix A].
Finally, if we remove ! with all its rules, we get the Lambek calculus with the unit constant [24].
5 Undecidability of
The main result of this paper is:
Theorem 5.1
The derivability problem for is undecidable.
As a byproduct of our proof we also obtain undecidability of the calculus with the relevant subexponential (permutation and contraction only), which was proved in [20] by a different method, and of the calculus with the full exponential, which also follows from the results of [25], as shown in [17] and [16].
We prove Theorem 5.1 by encoding derivations in generative grammars, or semi-Thue [40] systems. A generative grammar is a quadruple G = (N, Σ, P, s), where N and Σ are two disjoint alphabets (of nonterminal and terminal symbols), s ∈ N is the starting symbol, and P is a finite set of productions (rules) of the form α ⇒ β, where α and β are words over N ∪ Σ. A production can be applied in the following way: η α θ ⇒ η β θ, where η and θ are arbitrary (possibly empty) words over N ∪ Σ. The language generated by G is the set of all words w over Σ such that s ⇒* w, where ⇒* is the reflexive-transitive closure of ⇒.
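As an illustration of these definitions, here is a small sketch of one-step rewriting and of bounded enumeration of the generated language; the toy grammar below (with productions S ⇒ aSb and S ⇒ ab) is ours, chosen only for illustration.

```python
# A sketch of the rewriting relation of a generative (semi-Thue) grammar:
# one production step, and breadth-first enumeration of the generated
# language among words up to a length bound.

from collections import deque

def successors(word, productions):
    """All words obtained from `word` by one production application."""
    result = set()
    for alpha, beta in productions:
        start = word.find(alpha)
        while start != -1:
            result.add(word[:start] + beta + word[start + len(alpha):])
            start = word.find(alpha, start + 1)   # overlapping matches too
    return result

def language(start, productions, terminals, max_len):
    """Terminal words derivable from `start`, exploring words up to max_len."""
    seen, out = {start}, set()
    queue = deque([start])
    while queue:
        w = queue.popleft()
        if all(c in terminals for c in w):
            out.add(w)
        for v in successors(w, productions):
            if len(v) <= max_len and v not in seen:
                seen.add(v)
                queue.append(v)
    return out

# S => aSb | ab generates {a^n b^n : n >= 1}.
prods = [("S", "aSb"), ("S", "ab")]
print(sorted(language("S", prods, set("ab"), max_len=8)))
# prints ['aaaabbbb', 'aaabbb', 'aabb', 'ab']
```

Of course, for a grammar generating an undecidable language no such length bound exists in general; the enumeration only shows how the relation ⇒* unfolds.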
Theorem 5.2
There exists a generative grammar that generates an algorithmically undecidable language [26][37]. In our presentation we require, for every production, that both its left- and right-hand sides be nonempty. This class still includes grammars generating undecidable languages (cf. [10]).
In what follows we use two trivial lemmas about derivations in a generative grammar:
Lemma 3
If and , then .
Lemma 4
If and , then .
The second ingredient we need for our undecidability proof is the concept of theories. Given a finite set of sequents in the language of the calculus, the corresponding theory is the calculus obtained by adding these sequents as extra axioms.
In general, the cut rule in such a theory is not eliminable. However, the standard cut elimination procedure (see [23]) yields the following cut normalization lemma:
Lemma 5
If a sequent is derivable in the theory, then it has a derivation in which every application of the cut rule has one of the extra axioms as one of its premises.
This lemma yields a weak version of the subformula property:
Lemma 6
If a sequent is derivable in the theory, and both this sequent and the extra axioms include no occurrences of \, /, and !, then it has a derivation that includes no occurrences of \, /, and !.
The third core element of the construction is a rule which allows placing a specific formula at an arbitrary position in the sequent.
Lemma 7
The following rule is admissible in :
Proof
Now we are ready to prove Theorem 5.1. Let the grammar be the one provided by Theorem 5.2, and let the set of variables of our calculus include all its alphabet symbols.
We convert the productions of the grammar into Lambek formulae in the following natural way:
For , we define the following sequences of formulae:
(Since in all our calculi we have permutation rules for formulae under !, the ordering doesn’t matter.) We also define a theory associated with the grammar, as follows:
Lemma 8
The following are equivalent:

(i.e., belongs to the language defined by );

;

;

;

.
Proof
Proceed by induction on the derivation in the grammar. The base case is handled as follows:
For the induction step let the last production be , i.e.,
Then, since the corresponding sequent is in our theory, we obtain the following:
Here the premise is derivable by the induction hypothesis, and the rule is admissible due to Lemma 7.
Immediately by Lemma 2, since and .
For each formula from the sequent is derivable in by consecutive application of , , and to the axiom. The sequent is derivable in and therefore in , and applying for each formula of yields .
In this part of our proof we follow [25] and [17]. Consider the derivation of in (recall that by default all derivations are cut-free) and remove all the formulae of the form from all sequents in this derivation. After this transformation the rules not operating with remain valid. Applications of , , and do not alter the sequent. The rule is never applied in the original derivation, since our sequents never have formulae of the form in their succedents. Finally, an application of ,
is simulated in in the following way:
In this part we follow [25]. By Lemma 6, the sequent has a derivation without occurrences of \, /, and !. In other words, all formulae in this derivation are built from variables using only the product. Since the product is associative, we can omit parentheses in the formulae; we shall also omit the “·”s. The rules used in this derivation can now be written as follows:
The rule is trivial. The axioms are productions of with the arrows reversed, and . By induction, using Lemmas 3 and 4, we show that if is derivable using these rules and axioms, then . Now the derivability of implies .
6 A Decidable Fragment
The undecidability results from the previous section are somewhat unfortunate, since the new operations added to the Lambek calculus have good linguistic motivations [33][30]. As a compensation, in this section we show NP-decidability for a substantial fragment, introduced by Morrill and Valentín [33] (see Definition 1 below). This complexity upper bound is tight, since the original Lambek calculus is already NP-complete [35]. Notice that Morrill and Valentín present an exponential time algorithm for deciding derivability in this fragment; their algorithm was implemented as part of a parser called CatLog [31].
First we recall the standard notion of polarity of occurrences of subformulae in a formula. Every formula occurs positively in itself; subformula polarities get inverted (positive becomes negative and vice versa) when descending into denominators of \ and /, and also on the left-hand side of the sequent; brackets and all unary operations don’t change polarity. All inference rules respect polarity: a positive (resp., negative) occurrence of a subformula in the premise(s) of a rule translates into a positive (resp., negative) occurrence in the goal.
Definition 1
A sequent obeys the bracket non-negative condition if any negative occurrence of a subformula of the form !A includes neither a positive occurrence of a subformula of the form ⟨⟩B, nor a negative occurrence of a subformula of the form []^{-1}B.
Note that the sequents used in our undecidability proof are exactly minimal violations of this bracket non-negative condition.
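To make the definition concrete, here is a sketch of the polarity computation and of a checker for the condition; the formula encoding and the exact polarity assignment to the bracket modalities reflect our reading of Definition 1 and should be taken as assumptions, not as the paper’s official formulation.

```python
# A sketch of polarity computation and a bracket non-negative checker.
# The encoding and the modality polarities are our assumptions.

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Bin:            # op in {"\\", "/", "*"}
    op: str
    left: "Fm"
    right: "Fm"

@dataclass(frozen=True)
class Un:             # op in {"!", "<>", "[]^-1"}
    op: str
    arg: "Fm"

Fm = Union[Var, Bin, Un]

def occurrences(f: Fm, pol: int):
    """Yield (subformula, polarity) pairs; pol is +1 or -1."""
    yield f, pol
    if isinstance(f, Bin):
        if f.op == "\\":                     # in A\B, A is the denominator
            yield from occurrences(f.left, -pol)
            yield from occurrences(f.right, pol)
        elif f.op == "/":                    # in B/A, A is the denominator
            yield from occurrences(f.left, pol)
            yield from occurrences(f.right, -pol)
        else:                                # product keeps polarity
            yield from occurrences(f.left, pol)
            yield from occurrences(f.right, pol)
    elif isinstance(f, Un):
        yield from occurrences(f.arg, pol)   # unary ops keep polarity

def bracket_nonneg(antecedent, succedent) -> bool:
    """No negative !A may contain a positive <>B or a negative []^-1 B."""
    occs = [p for a in antecedent for p in occurrences(a, -1)]
    occs += list(occurrences(succedent, +1))
    for f, pol in occs:
        if isinstance(f, Un) and f.op == "!" and pol == -1:
            for g, q in occurrences(f.arg, pol):
                if isinstance(g, Un) and (
                        (g.op == "<>" and q == +1) or
                        (g.op == "[]^-1" and q == -1)):
                    return False
    return True

s = Var("s")
ok = Un("!", s)                     # !s in the antecedent: fine
bad = Un("!", Un("[]^-1", s))       # negative []^-1 under a negative !
print(bracket_nonneg([ok], s), bracket_nonneg([bad], s))  # prints True False
```

Since the rules respect polarity, a check of the goal sequent alone suffices: the condition then propagates to every sequent in the derivation.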
Theorem 6.1
The derivability problem for sequents that obey the bracket non-negative condition belongs to the class NP.
Derivations are a bit inconvenient for complexity estimations, since redundant applications of permutation rules could make a proof arbitrarily large without increasing its “real” complexity. In order to get rid of that, we introduce a generalised form of the permutation rule, in which the antecedent of the conclusion coincides with that of the premise up to moving !-formulae around. Obviously, the generalised rule is admissible and subsumes the original permutation rules, so further on we consider the formulation with the generalised rule instead. Several consecutive applications of this rule can be merged into one. We call a derivation normal if it doesn’t contain consecutive applications of the generalised permutation rule. If a sequent is derivable, then it has a normal cut-free derivation.
Lemma 9
Every normal cut-free derivation of a sequent that obeys the bracket non-negative condition is of quadratic size (number of rule applications) w.r.t. the size of the goal sequent.
Proof
Let us call the permutation and contraction rules structural, and all the other rules logical.
First, we track all the pairs of brackets that occur in the derivation. Pairs of brackets are in one-to-one correspondence with the applications of the rules that introduce them. Each pair of brackets either traces down to the goal sequent, or gets destroyed by an application of a bracket-removing rule. Therefore, the total number of bracket-removing applications is less than or equal to the number of bracket-introducing ones. One of the introducing rules creates a negative occurrence of a formula; the other creates a positive occurrence. Due to the bracket non-negative condition these formulae are never contracted (i.e., they cannot occur in a part of the sequent to which contraction is applied), and therefore they trace down to distinct subformula occurrences in the goal sequent. Hence, the total number of such applications is bounded by the number of subformulae of a special kind in the goal sequent; in other words, it is bounded by the size of the sequent.
Second, we bound the number of logical rule applications. Each logical rule introduces exactly one connective occurrence. Such an occurrence traces down either to a connective occurrence in the goal sequent, or to an application of contraction that merges this occurrence with the corresponding occurrence in the other copy of the contracted formula. If n is the size of the goal sequent, then the number of occurrences of the first kind is bounded by n; for the second kind, notice that each application of contraction merges not more than n occurrences (since the size of the formula being contracted is bounded by n due to the subformula property), and the total number of contraction applications is also bounded by n. Thus, we get a quadratic bound on the number of logical rule applications.
Third, the derivation is a tree with binary branching, so the number of leaves (axiom instances) in this tree is equal to the number of branching points plus one. Each branching point is an application of a two-premise logical rule. Hence, the number of axiom instances is bounded quadratically.
Finally, the number of generalised permutation applications is also quadratically bounded, since each such application in a normal proof is immediately preceded by an application of another rule or by an axiom instance.
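The counting in this proof can be summarized as follows, writing n for the size of the goal sequent (notation ours):

```latex
% Bounds from the proof of Lemma 9, with $n$ the size of the goal sequent.
\begin{align*}
\#\{\text{contractions}\} &\le n
  && \text{(each is matched by a distinct subformula occurrence)},\\
\#\{\text{logical rules}\} &\le n + n \cdot \#\{\text{contractions}\}
  \le n + n^2,\\
\#\{\text{axiom leaves}\} &= \#\{\text{branchings}\} + 1
  \le \#\{\text{logical rules}\} + 1,\\
\#\{\text{permutations}\} &\le \#\{\text{axiom leaves}\}
  + \#\{\text{other rule applications}\} = O(n^2).
\end{align*}
```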
Proof (of Theorem 6.1)
The normal derivation of a sequent obeying the bracket non-negative condition is an NP-witness for derivability: it is of polynomial size, and its correctness is checked in linear time (w.r.t. the size of the derivation).
For the case without brackets, considered in our earlier paper [20], the NP-decidable fragment is substantially smaller. Namely, it includes only sequents in which ! is applied only to variables. Indeed, as soon as we allow formulae of implication nesting depth at least 2 under !, the derivability problem becomes undecidable [20]. In contrast, in the bracketed calculus, due to the nonstandard contraction rule, brackets control the number of contraction applications in a proof, and this allows constructing an effective decision algorithm for derivability of a broad class of sequents, in which, for example, any formula without bracket modalities can be used under !. Essentially, the only problematic situation that gives rise to undecidability (Theorem 5.1) is the construction where one is forced to remove the brackets that appear in the contraction rule (as in our undecidability proof). The idea of the bracket non-negative condition is to rule out such situations while keeping all other constructions allowed, as they don’t violate decidability [33].
7 Conclusions and Future Work
In this paper we study an extension of the Lambek calculus with subexponential and bracket modalities. Bracket modalities were introduced by Morrill [29] and Moortgat [27] in order to represent the linguistic phenomenon of islands [38]. The interaction of subexponential and bracket modalities was recently studied by Morrill and Valentín [33] in order to represent correctly the phenomena of medial and parasitic extraction [38][4]. We prove that the calculus of Morrill and Valentín is undecidable, thus solving a problem left open in [33]. Morrill and Valentín also considered the so-called bracket non-negative fragment of this calculus, for which they presented an exponential time derivability decision procedure. We improve their result by showing that this problem is in NP.
Our undecidability proof is based on encoding semi-Thue systems by means of sequents that lie just outside the bracket non-negative fragment. More precisely, the formulae used in our encoding are of the form !A, where A is a pure Lambek formula of order 2. It remains for further investigation whether these formulae could be simplified.
Our undecidability proof could potentially be made stronger by restricting the language. Currently we use three connectives of the original Lambek calculus, \, /, and ·, plus ! and the bracket modalities. One could get rid of the product by means of the substitution from [22]. Going further, one might also employ a more clever construction by Buszkowski [8] in order to restrict ourselves to the product-free one-division fragment. Finally, one could adopt substitutions from [18] and obtain undecidability for the language with only one variable.
References
 [1] V. M. Abrusci. A comparison between Lambek syntactic calculus and intuitionistic linear propositional logic. Zeitschr. für math. Log. Grundl. Math. (Math. Logic Quart.), 36:11–15, 1990.
 [2] K. Ajdukiewicz. Die syntaktische Konnexität. Studia Philosophica, Vol. 1, 1–27, 1935.
 [3] Y. Bar-Hillel. A quasi-arithmetical notation for syntactic description. Language, Vol. 29, 47–58, 1953.
 [4] G. Barry, M. Hepple, N. Leslie, G. Morrill. Proof figures and structural operators for categorial grammar. Proc. 5th Conference of the European Chapter of ACL, Berlin, 1991.
 [5] T. Braüner, V. de Paiva. Cut elimination for full intuitionistic linear logic. BRICS Report RS-96-10, April 1996.
 [6] T. Braüner, V. de Paiva. A formulation of linear logic based on dependency relations. Proc. CSL 1997, LNCS vol. 1414, Springer, 1998, 129–148.
 [7] T. Braüner. A cut-free Gentzen formulation of modal logic S5. Log. J. IGPL, 8(5):629–643, 2000.
 [8] W. Buszkowski. Some decision problems in the theory of syntactic categories. Zeitschr. für math. Logik und Grundl. der Math. (Math. Logic Quart.), Vol. 28, 539–548, 1982.
 [9] W. Buszkowski. Type logics in grammar. In: Trends in Logic: 50 Years of Studia Logica, Springer, 2003, 337–382.
 [10] W. Buszkowski. Lambek calculus with nonlogical axioms. Language and Grammar. CSLI Lect. Notes vol. 168, 2005, 77–93.
 [11] B. Carpenter. Typelogical semantics. MIT Press, 1997.
 [12] M. Dekhtyar, A. Dikovsky. Generalized categorial dependency grammars. Trakhtenbrot/Festschrift, LNCS vol. 4800, Springer, 2008, 230–255.
 [13] H. Eades III, V. de Paiva. Multiple conclusion linear logic: cut elimination and more. Proc. LFCS 2016. LNCS vol. 9537, 2015, 90–105.
 [14] G. Gentzen. Untersuchungen über das logische Schließen I. Mathematische Zeitschrift, Vol. 39, 176–210, 1935.
 [15] J.Y. Girard. Linear logic. Theor. Comput. Sci. 50:1–102, 1987.
 [16] Ph. de Groote. On the expressive power of the Lambek calculus extended with a structural modality. Language and Grammar. CSLI Lect. Notes, vol. 168, 2005, 95–111.
 [17] M. Kanazawa. Lambek calculus: Recognizing power and complexity. In: J. Gerbrandy et al. (eds.). JFAK. Essays dedicated to Johan van Benthem on the occasion of his 50th birthday. Vossiuspers, Amsterdam Univ. Press, 1999.
 [18] M. Kanovich. The complexity of neutrals in linear logic. Proc. LICS ’95, 1995, 486–495.
 [19] M. Kanovich, S. Kuznetsov, A. Scedrov. On Lambek’s restriction in the presence of exponential modalities. Proc. LFCS ’16. LNCS vol. 9537, 2015, 146–158.
 [20] M. Kanovich, S. Kuznetsov, A. Scedrov. Undecidability of the Lambek calculus with a relevant modality. Proc. FG ’15 and FG ’16. LNCS vol. 9804, 2016, 240–256 (arXiv: 1601.06303).
 [21] M. Kanovich, S. Kuznetsov, G. Morrill, A. Scedrov. A polynomial time algorithm for the Lambek calculus with brackets of bounded order. arXiv preprint 1705.00694, 2017. Submitted for publication.
 [22] S. L. Kuznetsov. On the Lambek calculus with a unit and one division. Moscow Univ. Math. Bull., 66:4 (2011), 173–175.
 [23] J. Lambek. The mathematics of sentence structure. Amer. Math. Monthly, Vol. 65, No. 3, 154–170, 1958.
 [24] J. Lambek. Deductive systems and categories II: Standard constructions and closed categories. Category Theory, Homology Theory and their Applications I. Lect. Notes Math. vol. 86, Springer, 1969, 76–122.
 [25] P. Lincoln, J. Mitchell, A. Scedrov, N. Shankar. Decision problems for propositional linear logic. APAL, 56:239–311, 1992.
 [26] A. Markov. On the impossibility of certain algorithms in the theory of associative systems. Doklady Acad. Sci. USSR (N. S.), 55 (1947), 583–586.
 [27] M. Moortgat. Multimodal linguistic inference. J. Log. Lang. Inform., 5(3,4):349–385, 1996.
 [28] R. Moot, C. Retoré. The logic of categorial grammars: a deductive account of natural language syntax and semantics. Springer, 2012.
 [29] G. Morrill. Categorial formalisation of relativisation: pied piping, islands, and extraction sites. Technical Report LSI9223R, Universitat Politècnica de Catalunya, 1992.
 [30] G. V. Morrill. Categorial grammar: logical syntax, semantics, and processing. Oxford University Press, 2011.
 [31] G. Morrill. CatLog: a categorial parser/theorem-prover. System demonstration, LACL 2012, Nantes, 2012.
 [32] G. Morrill. Grammar logicised: relativisation. Linguistics and Philosophy, 40(2): 119–163, 2017.
 [33] G. Morrill, O. Valentín. Computational coverage of TLG: Nonlinearity. Proc. NLCS ’15 (EPiC Series, vol. 32), 2015. P. 51–63.
 [34] V. Nigam, D. Miller. Algorithmic specifications in linear logic with subexponentials. Proc. PPDP ’09, ACM, 2009. P. 129–140.
 [35] M. Pentus. Lambek calculus is NPcomplete. Theor. Comput. Sci. 357(1):186–201, 2006.
 [36] M. Pentus. A polynomial time algorithm for Lambek grammars of bounded order. Linguistic Analysis, 36(1–4):441–471, 2010.
 [37] E. L. Post. Recursive unsolvability of a problem of Thue. J. Symb. Log., 12 (1947), 1–11.
 [38] J. R. Ross. Constraints on variables in syntax. Ph. D. Thesis, MIT, 1967.
 [39] M. Steedman. The syntactic process. MIT Press, 2000.
 [40] A. Thue. Probleme über Veränderungen von Zeichenreihen nach gegebenen Regeln. Kra. Vidensk. Selsk. Skrifter., 10, 1914. (In: Selected Math. Papers, Univ. Forlaget, Oslo, 1977, pp. 493–524.)
Appendix I. Cut Elimination Proof for
In this section we give a complete proof of Lemma 1, which is the main step of cut elimination (Theorem 3.1).
Proof
Proceed by nested induction on two parameters:

The complexity of the formula being cut.

The total number of rule applications in and .
In each case, either the first parameter gets reduced, or it remains the same while the second one gets reduced.
Case 1 (axiomatic). One of the premises of is an axiom of the form . Then the other premise coincides with the goal, and cut disappears.
Case 2 (left nonprincipal).
Subcase 2.a. The last rule in is one of the one-premise rules operating only on the left-hand side of the sequent: , , , , , , . Denote this rule by . Notice that can be applied in any context, and transform the derivation in the following way:
The parameter gets reduced, therefore the new cut is eliminable by induction hypothesis.
Subcase 2.b. The last rule in is or . Then the derivation fragment
is transformed into
Again, decreases. The case is handled symmetrically.
Case 3 (deep). The last rule applied on the left is . Then the cut rule application has the following form:
The right premise, , has a cutfree derivation tree . Let us trace the designated occurrence of in . The trace can branch if is applied to this formula. Each branch of the trace ends either with an axiom () leaf or with an application of that introduces .
The axiom can be reduced to by consecutive application of and . Therefore, without loss of generality, we can assume that all branches lead to applications of . The whole picture is shown in Figure 1.
In we replace the designated occurrences of with along the traces. The applications of remain valid; if there were permutation rules applied, we replace such a rule with a series of permutations for each formula in . Other rules do not operate and therefore remain intact. After this replacement applications of transform into applications of with as the left premise (Figure 2). One trace could go through several instances of with the active , like and in the example; in this case we go from top to bottom.
The new cuts have lower (the cut formula is instead of ), and therefore they are eliminable by induction hypothesis.
Case 4 (principal). In the so-called principal case, the last rules both in and in introduce the main connective of the formula being cut. Note that here is not of the form (this is the previous case). In the principal case, the parameter gets reduced, and therefore the induction hypothesis can be applied to eliminate the new cut(s) that arise after the transformation.
Subcase 4.a: vs. or vs. . In this case (the case is handled symmetrically), and the derivation fragment
transforms into
Subcase 4.b. vs. . In this case , and the derivation fragment
transforms into
Subcase 4.c. vs. . In this case :
The cut disappears, since its goal coincides with the premise of .
Subcase 4.d. vs. . In this case , and the derivation fragment
transforms into
Subcase 4.e. vs. . In this case , and the derivation fragment
transforms into
Case 5 (right nonprincipal). In the remaining cases, is not of the form (therefore the last rule of is not ; it is also not , since there is nothing to cut in an empty antecedent) and the last rule of does not operate on . In this case, the cut gets propagated upwards to , decreasing with the same .