diff --git a/doc/gf-logo.png b/doc/gf-logo.png
new file mode 100644
index 000000000..4d29b7d8e
Binary files /dev/null and b/doc/gf-logo.png differ
diff --git a/doc/quick-editor.png b/doc/quick-editor.png
new file mode 100644
index 000000000..c840a8108
Binary files /dev/null and b/doc/quick-editor.png differ
diff --git a/doc/tutorial/Makefile b/doc/tutorial/Makefile
new file mode 100644
index 000000000..d226e7348
--- /dev/null
+++ b/doc/tutorial/Makefile
@@ -0,0 +1,6 @@
+html:
+	txt2tags -thtml --toc gf-tutorial2.txt
+tex:
+	txt2tags -ttex --toc gf-tutorial2.txt
+	pdflatex gf-tutorial2.tex
+	pdflatex gf-tutorial2.tex
diff --git a/doc/tutorial/gf-tutorial2.html b/doc/tutorial/gf-tutorial2.html
index 804ed1969..5576428b5 100644
--- a/doc/tutorial/gf-tutorial2.html
+++ b/doc/tutorial/gf-tutorial2.html
@@ -2,12 +2,13 @@
+ Grammatical Framework Tutorial

Grammatical Framework Tutorial

-Author: Aarne Ranta <aarne (at) cs.chalmers.se>
-Last update: Fri Jun 16 17:28:39 2006 +Author: Aarne Ranta aarne (at) cs.chalmers.se
+Last update: Wed May 30 21:26:11 2007

@@ -31,108 +32,106 @@ Last update: Fri Jun 16 17:28:39 2006
  • Systematic generation
  • More on pipes; tracing
  • Writing and reading files -
  • Labelled context-free grammars -
  • The labelled context-free format -
  • The .gf grammar format +
  • The .gf grammar format -
  • Multilingual grammars and translation +
  • Multilingual grammars and translation -
  • Grammar architecture +
  • Grammar architecture -
  • System commands -
  • Resource modules +
  • Resource modules -
  • Morphology +
  • Morphology -
  • Using morphology in concrete syntax +
  • Using parameters in concrete syntax -
  • More constructs for concrete syntax +
  • Using the resource grammar library TODO -
  • More concepts of abstract syntax +
  • More constructs for concrete syntax -
  • More features of the module system +
  • More concepts of abstract syntax -
  • Using the standard resource library +
  • Transfer modules TODO +
  • Practical issues TODO -
  • Transfer modules -
  • Practical issues +
  • Larger case studies TODO -
  • Case studies - @@ -140,7 +139,7 @@ Last update: Fri Jun 16 17:28:39 2006

    - +

    Introduction

    @@ -199,7 +198,7 @@ A typical GF application is based on a multilingual grammar involving translation on a special domain. Existing applications of this idea include

    -The CF format fuses these two things together, but it is possible -to take them apart. For instance, the sentence formation rule +The context-free format fuses these two things together, but it is always +possible to take them apart. For instance, the sentence formation rule

         Is. S ::= Item "is" Quality ;
     

    -is interpreted as the following pair of rules: +is interpreted as the following pair of GF rules:

         fun Is : Item -> Quality -> S ;
    @@ -731,7 +678,7 @@ The latter rule, with the keyword lin, belongs to the concrete synt
     It defines the linearization function for
     syntax trees of form (Is item quality). 
     

    - +

    Judgement forms

    Rules in a GF grammar are called judgements, and the keywords @@ -759,7 +706,6 @@ judgement forms: -

    - -

    Record types, records, and ``Str``s

    + +

    Records and strings

The linearization type of a category is a record type, with zero or more fields of different types. The simplest record @@ -861,7 +806,7 @@ can be used for lists of tokens. The expression

    denotes the empty token list.

    - +

    An abstract syntax example

    To express the abstract syntax of food.cf in @@ -874,7 +819,7 @@ a file Food.gf, we write two kinds of judgements:

    -  abstract Food = {
    +    abstract Food = {
       
         cat
           S ; Item ; Kind ; Quality ;
    @@ -886,14 +831,27 @@ a file Food.gf, we write two kinds of judgements:
           Wine, Cheese, Fish : Kind ;
           Very : Quality -> Quality ;
           Fresh, Warm, Italian, Expensive, Delicious, Boring : Quality ;
    -  }
    +    }
     

    Notice the use of shorthands permitting the sharing of -the keyword in subsequent judgements, and of the type -in subsequent fun judgements. +the keyword in subsequent judgements,

    - +
    +    cat S ; Item ;   ===   cat S ; cat Item ; 
    +
    +

    +and of the type in subsequent fun judgements, +

    +
    +    fun Wine, Fish : Kind ;            ===
    +    fun Wine : Kind ; Fish : Kind ;    ===
    +    fun Wine : Kind ; fun Fish : Kind ;
    +
    +

    +The order of judgements in a module is free. +

    +

    A concrete syntax example

    Each category introduced in Food.gf is @@ -902,7 +860,7 @@ function is given a lin rule. Similar shorthands apply as in abstract modules.

    -  concrete FoodEng of Food = {
    +    concrete FoodEng of Food = {
       
         lincat
           S, Item, Kind, Quality = {s : Str} ;
    @@ -922,16 +880,16 @@ apply as in abstract modules.
           Expensive = {s = "expensive"} ;
           Delicious = {s = "delicious"} ;
           Boring = {s = "boring"} ;
    -  }
    +    }
     

    - +

    Modules and files

    -Module name + .gf = file name +Source files: Module name + .gf = file name

    -Each module is compiled into a .gfc file. +Target files: each module is compiled into a .gfc file.

    Import FoodEng.gf and see what happens @@ -952,7 +910,7 @@ GF source files. When reading a module, GF decides whether to use an existing .gfc file or to generate a new one, by looking at modification times.

    - +

    Multilingual grammars and translation

    The main advantage of separating abstract from concrete syntax is that @@ -965,7 +923,7 @@ translation. Let us build an Italian concrete syntax for Food and then test the resulting multilingual grammar.

    - +

    An Italian concrete syntax

       concrete FoodIta of Food = {
    @@ -993,7 +951,7 @@ multilingual grammar.
       
     

    - +

    Using a multilingual grammar

    Import the two grammars in the same GF session. @@ -1032,7 +990,7 @@ To see what grammars are in scope and which is the main one, use the command actual concretes : FoodIta FoodEng

    - +

    Translation session

    If translation is what you want to do with a set of grammars, a convenient @@ -1055,7 +1013,7 @@ A dot . terminates the translation session. >

    - +

    Translation quiz

    This is a simple language exercise that can be automatically @@ -1095,9 +1053,9 @@ file for later use, by the command translation_list = tl

    The number flag gives the number of sentences generated.

    - +

    Grammar architecture

    - +

    Extending a grammar

    The module system of GF makes it possible to extend a @@ -1132,7 +1090,7 @@ be built for concrete syntaxes: The effect of extension is that all of the contents of the extended and extending module are put together.

    - +

    Multiple inheritance

    Specialized vocabularies can be represented as small grammars that @@ -1167,7 +1125,7 @@ At this point, you would perhaps like to go back to Food and take apart Wine to build a special Drink module.

    - +

    Visualizing module structure

    When you have created all the abstract syntaxes and @@ -1195,8 +1153,8 @@ The graph uses

    - -

    System commands

    + +

    System commands

    To document your grammar, you may want to print the graph into a file, e.g. a .png file that @@ -1223,9 +1181,9 @@ are available: > help -printer

    - +

    Resource modules

    - +

    The golden rule of functional programming

    In comparison to the .cf format, the .gf format looks rather @@ -1247,7 +1205,7 @@ changing parts, parameters. In functional programming languages, such as Haskell, it is possible to share much more than in languages such as C and Java.

    - +

    Operation definitions

    GF is a functional programming language, not only in the sense that @@ -1277,7 +1235,7 @@ its type, and an expression defining it. As for the syntax of the defining expression, notice the lambda abstraction form \x -> t of the function.
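The oper judgement itself is elided by the diff at this point; as a reconstruction sketch, the ss operation that later examples in this tutorial rely on (e.g. ss (street.s ++ ...)) is a typical operation definition using lambda abstraction:

```gf
    -- wrap a string into the record type {s : Str},
    -- the simplest linearization type
    oper ss : Str -> {s : Str} = \x -> {s = x} ;
```

Applied as ss "cheese", it yields the record {s = "cheese"}.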

    - +

    The ``resource`` module type

    Operator definitions can be included in a concrete syntax. @@ -1305,7 +1263,7 @@ Resource modules can extend other resource modules, in the same way as modules of other types can extend modules of the same type. Thus it is possible to build resource hierarchies.

    - +

    Opening a ``resource``

    Any number of resource modules can be @@ -1340,22 +1298,22 @@ opened in a new version of FoodEng. }

    -The same string operations could be use to write FoodIta +The same string operations could be used to write FoodIta more concisely.

    - +

    Division of labour

    Using operations defined in resource modules is a way to avoid repetitive code. In addition, it enables a new kind of modularity and division of labour in grammar writing: grammarians familiar with -the linguistic details of a language can put this knowledge +the linguistic details of a language can make this knowledge available through resource grammar modules, whose users only need to pick the right operations and not to know their implementation details.

    - +

    Morphology

    Suppose we want to say, with the vocabulary included in @@ -1373,9 +1331,9 @@ singular forms. The introduction of plural forms requires two things:

    @@ -1390,7 +1348,7 @@ and many new expression forms. We also need to generalize linearization types from strings to more complex types.

    - +

    Parameters and tables

We define the parameter type of number in English by @@ -1422,6 +1380,10 @@ example shows such a table: } ;

    +The table consists of branches, where a pattern on the +left of the arrow => is assigned a value on the right. +

    +

    The application of a table to a parameter is done by the selection operator !. For instance,

    @@ -1429,19 +1391,22 @@ operator !. For instance, table {Sg => "cheese" ; Pl => "cheeses"} ! Pl

-is a selection, whose value is "cheeses". +is a selection that computes to the value "cheeses". +This computation is performed by pattern matching: return +the value from the first branch whose pattern matches the +selection argument.

    - +

    Inflection tables, paradigms, and ``oper`` definitions

    All English common nouns are inflected in number, most of them in the -same way: the plural form is formed from the singular form by adding the +same way: the plural form is obtained from the singular by adding the ending s. This rule is an example of a paradigm - a formula telling how the inflection forms of a word are formed.

    -From GF point of view, a paradigm is a function that takes a lemma - +From the GF point of view, a paradigm is a function that takes a lemma - also known as a dictionary form - and returns an inflection table of desired type. Paradigms are not functions in the sense of the fun judgements of abstract syntax (which operate on trees and not @@ -1465,7 +1430,7 @@ are written together to form one token. Thus, for instance, (regNoun "cheese").s ! Pl ---> "cheese" + "s" ---> "cheeses"

    - +

    Worst-case functions and data abstraction

    Some English nouns, such as mouse, are so irregular that @@ -1506,7 +1471,7 @@ interface (i.e. the system of type signatures) that makes it correct to use these functions in concrete modules. In programming terms, Noun is then treated as an abstract datatype.
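The worst-case function discussed here is elided by the diff; a reconstructed sketch, assuming the Number parameter type (Sg, Pl) defined earlier:

```gf
    oper Noun : Type = {s : Number => Str} ;

    -- worst-case function: both forms are given explicitly
    oper mkNoun : Str -> Str -> Noun = \sg,pl -> {
      s = table {Sg => sg ; Pl => pl}
      } ;

    -- even fully irregular nouns are covered by the worst case
    oper mouseNoun : Noun = mkNoun "mouse" "mice" ;
```

All other paradigms can then be defined in terms of mkNoun, so that the definition of Noun never needs to be inspected directly.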

    - +

    A system of paradigms using Prelude operations

    In addition to the completely regular noun paradigm regNoun, @@ -1534,11 +1499,11 @@ all characters but the last) of a string: yNoun : Str -> Noun = \fly -> mkNoun fly (init fly + "ies") ;

    -The operator init belongs to a set of operations in the +The operation init belongs to a set of operations in the resource module Prelude, which therefore has to be opened so that init can be used.

    - +

    An intelligent noun paradigm using ``case`` expressions

    It may be hard for the user of a resource morphology to pick the right @@ -1568,15 +1533,13 @@ this, either use mkNoun or modify regNoun so that the "y" case does not apply if the second-last character is a vowel.
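The case-based paradigm itself is elided here; a sketch of what such a paradigm could look like, assuming mkNoun as above (the name smartNoun and the exact set of patterns are assumptions):

```gf
    oper smartNoun : Str -> Noun = \w -> case w of {
      _ + ("s" | "sh" | "ch" | "x" | "z") => mkNoun w (w + "es") ;      -- bus, brush
      _ + "y"                             => mkNoun w (init w + "ies") ; -- fly
      _                                   => mkNoun w (w + "s")          -- cheese
      } ;
```

As the surrounding text notes, the "y" branch overshoots for words like "toy"; either use mkNoun for those or refine the pattern.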

    - +

    Pattern matching

    -Expressions of the table form are built from lists of -argument-value pairs. These pairs are called the branches -of the table. In addition to constants introduced in -param definitions, the left-hand side of a branch can more -generally be a pattern, and the computation of selection is -then performed by pattern matching: +We have so far built all expressions of the table form +from branches whose patterns are constants introduced in +param definitions, as well as constant strings. +But there are more expressive patterns. Here is a summary of the possible forms:

    - +

    Prefix-dependent choices

    Sometimes a token has different forms depending on the token @@ -2156,7 +2332,7 @@ Thus

         artIndef ++ "cheese"  --->  "a" ++ "cheese"
    -    artIndef ++ "apple"   --->  "an" ++ "cheese"
    +    artIndef ++ "apple"   --->  "an" ++ "apple"
     

    This very example does not work in all situations: the prefix @@ -2171,7 +2347,7 @@ This very example does not work in all situations: the prefix } ;
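The definition of artIndef is elided by the diff; a reconstructed sketch using GF's pre construct, which chooses a token depending on the prefix of the following token:

```gf
    oper artIndef : Str = pre {
      "a" ;                                 -- default form
      "an" / strs {"a" ; "e" ; "i" ; "o"}   -- used before these prefixes
      } ;
```

As the text points out, testing the first character is only an approximation ("an hour", "a union" go wrong), so a real grammar needs finer prefixes.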

    - +

    Predefined types and operations

    GF has the following predefined categories in abstract syntax: @@ -2194,11 +2370,17 @@ they can be used as arguments. For example: -- e.g. (StreetAddress 10 "Downing Street") : Address

    -The linearization type is {s : Str} for all these categories. +FIXME: The linearization type is {s : Str} for all these categories.

    - +

    More concepts of abstract syntax

    - +

    +This section is about the use of the type theory part of GF for +including more semantics in grammars. Some of the subsections present +ideas that have not yet been used in real-world applications, and whose +tool support outside the GF program is not complete. +

    +

    GF as a logical framework

    In this section, we will show how @@ -2217,8 +2399,8 @@ of such a theory, represented as an abstract module in GF.

       abstract Arithm = {
         cat
    -      Prop ;    -- proposition
    -      Nat ;     -- natural number
    +      Prop ;                        -- proposition
    +      Nat ;                         -- natural number
         fun
           Zero : Nat ;                  -- 0
           Succ : Nat -> Nat ;           -- successor of x
    @@ -2230,7 +2412,7 @@ of such a theory, represented as an abstract module in GF.
     A concrete syntax is given below, as an example of using the resource grammar
     library.
     

    - +

    Dependent types

    Dependent types are a characteristic feature of GF, @@ -2266,12 +2448,10 @@ a street, a city, and a country. }

    -The linearization rules -are straightforward, +The linearization rules are straightforward,

         lin
    -  
           mkAddress country city street = 
             ss (street.s ++ "," ++ city.s ++ "," ++ country.s) ;
           UK = ss ("U.K.") ;
    @@ -2286,11 +2466,11 @@ are straightforward,
           AvAlsaceLorraine = ss ("avenue" ++ "Alsace-Lorraine") ;
     

    -with the exception of mkAddress, where we have +Notice that, in mkAddress, we have reversed the order of the constituents. The type of mkAddress in the abstract syntax takes its arguments in a "logical" order, -with increasing precision. (This order is sometimes even used in the concrete -syntax of addresses, e.g. in Russia). +with increasing precision. (This order is sometimes even used in the +concrete syntax of addresses, e.g. in Russia).

    Both existing and non-existing addresses are recognized by this @@ -2314,10 +2494,11 @@ well-formed. What we do is to include contexts in cat judgements:

    -    cat Address ; 
    -    cat Country ; 
    -    cat City Country ; 
    -    cat Street (x : Country)(y : City x) ;
    +    cat 
    +      Address ; 
    +      Country ; 
    +      City Country ; 
    +      Street (x : Country)(City x) ;
     

    The first two judgements are as before, but the third one makes @@ -2342,19 +2523,18 @@ The fun judgements of the grammar are modified accordingly:

         fun
    +      mkAddress : (x : Country) -> (y : City x) -> Street x y -> Address ;
         
    -    mkAddress : (x : Country) -> (y : City x) -> Street x y -> Address ;
    -    
    -    UK : Country ;
    -    France : Country ;
    -    Paris : City France ; 
    -    London : City UK ; 
    -    Grenoble : City France ;
    -    OxfordSt : Street UK London ; 
    -    ShaftesburyAve : Street UK London ;
    -    BdRaspail : Street France Paris ; 
    -    RueBlondel : Street France Paris ; 
    -    AvAlsaceLorraine : Street France Grenoble ;
    +      UK : Country ;
    +      France : Country ;
    +      Paris : City France ; 
    +      London : City UK ; 
    +      Grenoble : City France ;
    +      OxfordSt : Street UK London ; 
    +      ShaftesburyAve : Street UK London ;
    +      BdRaspail : Street France Paris ; 
    +      RueBlondel : Street France Paris ; 
    +      AvAlsaceLorraine : Street France Grenoble ;
     

    Since the type of mkAddress now has dependencies among @@ -2394,11 +2574,17 @@ or any other naming of the variables. Actually the use of variables sometimes shortens the code, since we can write e.g.

    -    fun  ConjNP : Conj -> (x,y : NP) -> NP ;
    -    oper triple : (x,y,z : Str) -> Str = \x,y,z -> x ++ y ++ z ;
    +    oper triple : (x,y,z : Str) -> Str = ...
    +
    +

+If a bound variable is not used, it can here, as elsewhere in GF, be replaced by +a wildcard: +

    +
    +    oper triple : (_,_,_ : Str) -> Str = ...
     

    - +

    Dependent types in concrete syntax

    The functional fragment of GF @@ -2443,7 +2629,7 @@ When the operations are used, the type checker requires them to be equipped with all their arguments; this may be a nuisance for a Haskell or ML programmer.

    - +

    Expressing selectional restrictions

    This section introduces a way of using dependent types to @@ -2467,8 +2653,8 @@ For instance, the sentence is syntactically well-formed but semantically ill-formed. It is well-formed because it combines a well-formed noun phrase ("the number 2") with a well-formed -verb phrase ("is equilateral") in accordance with the -rule that the verb phrase is inflected in the +verb phrase ("is equilateral") and satisfies the agreement +rule saying that the verb phrase is inflected in the number of the noun phrase:

    @@ -2523,6 +2709,7 @@ but no proposition linearized to
     

    since Equilateral two is not a well-formed type-theoretical object. +It is not even accepted by the context-free parser.

When formalizing mathematics, e.g. for the purpose of @@ -2559,64 +2746,15 @@ and dependencies of other categories on this:

         cat 
           S ;            -- sentence
    -      V1 Dom ;       -- one-place verb
    -      V2 Dom Dom ;   -- two-place verb
    +      V1 Dom ;       -- one-place verb with specific subject type
    +      V2 Dom Dom ;   -- two-place verb with specific subject and object types
           A1 Dom ;       -- one-place adjective
           A2 Dom Dom ;   -- two-place adjective
    -      PN Dom ;       -- proper name
    -      NP Dom ;       -- noun phrase
    +      NP Dom ;       -- noun phrase for an object of specific type
           Conj ;         -- conjunction
           Det ;          -- determiner
     

    -The number of Dom arguments depends on the semantic type -corresponding to the category: one-place verbs and adjectives -correspond to types of the form -

    -
    -    A -> Prop
    -
    -

    -whereas two-place verbs and adjectives correspond to types of the form -

    -
    -    A -> B -> Prop
    -
    -

    -where the domains A and B can be distinct. -Proper names correspond to types of the form -

    -
    -    A
    -
    -

    -that is, individual objects of the domain A. Noun phrases -correspond to -

    -
    -    (A -> Prop) -> Prop
    -
    -

    -that is, quantifiers over the domain A. -Sentences, conjunctions, and determiners correspond to -

    -
    -    Prop
    -    Prop -> Prop -> Prop
    -    (A : Dom) -> (A -> Prop) -> Prop
    -
    -

    -respectively, -and are thus independent of domain. As for common nouns CN, -the simplest semantics is that they correspond to -

    -
    -    Dom
    -
    -

    -In this section, we will, in fact, write Dom instead of CN. -

    -

    Having thus parametrized categories on domains, we have to reformulate the rules of predication, etc, accordingly. This is straightforward:

    @@ -2624,7 +2762,6 @@ the rules of predication, etc, accordingly. This is straightforward: fun PredV1 : (A : Dom) -> NP A -> V1 A -> S ; ComplV2 : (A,B : Dom) -> V2 A B -> NP B -> V1 A ; - UsePN : (A : Dom) -> PN A -> NP A ; DetCN : Det -> (A : Dom) -> NP A ; ModA1 : (A : Dom) -> A1 A -> Dom ; ConjS : Conj -> S -> S -> S ; @@ -2632,14 +2769,13 @@ the rules of predication, etc, accordingly. This is straightforward:

    In linearization rules, -we typically use wildcards for the domain arguments, -to get arities right: +we use wildcards for the domain arguments, +because they don't affect linearization:

         lin
           PredV1 _ np vp = ss (np.s ++ vp.s) ;
           ComplV2 _ _ v2 np = ss (v2.s ++ np.s) ;
    -      UsePN _ pn = pn ;
           DetCN det cn = ss (det.s ++ cn.s) ;
           ModA1 cn a1 = ss (a1.s ++ cn.s) ;
           ConjS conj s1 s2 = ss (s1.s ++ conj.s ++ s2.s) ;
    @@ -2666,24 +2802,23 @@ To explain the contrast, we introduce the functions
         human : Dom ; 
         game : Dom ;
         play : V2 human game ;
    -    John : PN human ;
    -    Golf : PN game ;
    +    John : NP human ;
    +    Golf : NP game ;
     

    Both sentences still pass the context-free parser, returning trees with lots of metavariables of type Dom:

    -    PredV1 ?0 (UsePN ?1 John) (ComplV2 ?2 ?3 play (UsePN ?4 Golf))
    -  
    -    PredV1 ?0 (UsePN ?1 Golf) (ComplV2 ?2 ?3 play (UsePN ?4 John))
    +    PredV1 ?0 John (ComplV2 ?1 ?2 play Golf)
    +    PredV1 ?0 Golf (ComplV2 ?1 ?2 play John)
     

    But only the former sentence passes the type checker, which moreover infers the domain arguments:

    -    PredV1 human (UsePN human John) (ComplV2 human game play (UsePN game Golf))
    +    PredV1 human John (ComplV2 human game play Golf)
     

    To try this out in GF, use pt = put_term with the tree transformation @@ -2705,7 +2840,7 @@ or less liberal. For instance, John loves golf

    -both make sense, even though Mary and golf +should both make sense, even though Mary and golf are of different types. A natural solution in this case is to formalize love as a polymorphic verb, which takes a human as its first argument but an object of any type as its second @@ -2716,16 +2851,21 @@ argument: lin love _ = ss "loves" ;

    -Problems remain, such as subtyping (e.g. what -is meaningful for a human is also meaningful for -a man and a woman, but not the other way round) -and the extended use of expressions (e.g. a metaphoric use that -makes sense of "golf plays John"). +In other words, it is possible for a human to love anything.

    - +

    +A problem related to polymorphism is subtyping: what +is meaningful for a human is also meaningful for +a man and a woman, but not the other way round. +One solution to this is coercions: functions that +"lift" objects from subtypes to supertypes. +

    + +

    Case study: selectional restrictions and statistical language models TODO

    +

    Proof objects

    -Perhaps the most well-known feature of constructive type theory is +Perhaps the most well-known idea in constructive type theory is the Curry-Howard isomorphism, also known as the propositions as types principle. Its earliest formulations were attempts to give semantics to the logical systems of @@ -2747,61 +2887,109 @@ The successor function Succ generates an infinite sequence of natural numbers, beginning from Zero.

    -We then define what it means for a number x to be less than +We then define what it means for a number x to be less than a number y. Our definition is based on two axioms:

    + +

The most straightforward way of expressing these axioms in type theory -is as typing judgements that introduce objects of a type Less x y: +is as typing judgements that introduce objects of a type Less x y: +

         cat Less Nat Nat ; 
         fun lessZ : (y : Nat) -> Less Zero (Succ y) ;
         fun lessS : (x,y : Nat) -> Less x y -> Less (Succ x) (Succ y) ;
     
    +

    Objects formed by lessZ and lessS are called proof objects: they establish the truth of certain mathematical propositions. For instance, the fact that 2 is less that 4 has the proof object +

         lessS (Succ Zero) (Succ (Succ (Succ Zero)))
               (lessS Zero (Succ (Succ Zero)) (lessZ (Succ Zero)))
     
    +

    whose type is +

         Less (Succ (Succ Zero)) (Succ (Succ (Succ (Succ Zero))))
     
    -which is the same thing as the proposition that 2 is less than 4. -

    +

    +which is the formalization of the proposition that 2 is less than 4. +

    +

    GF grammars can be used to provide a semantic control of well-formedness of expressions. We have already seen examples of this: the grammar of well-formed addresses and the grammar with selectional restrictions above. By introducing proof objects -we have now added a very powerful -technique of expressing semantic conditions. -

    +we have now added a very powerful technique of expressing semantic conditions. +

    +

    A simple example of the use of proof objects is the definition of well-formed time spans: a time span is expected to be from an earlier to a later time: +

         from 3 to 8
     
    +

    is thus well-formed, whereas +

         from 8 to 3
     
    +

    is not. The following rules for spans impose this condition by using the Less predicate: +

         cat Span ;
         fun span : (m,n : Nat) -> Less m n -> Span ;
     
    - - - +

    +A possible practical application of this idea is proof-carrying documents: +to be semantically well-formed, the abstract syntax of a document must contain a proof +of some property, although the proof is not shown in the concrete document. +Think, for instance, of small documents describing flight connections: +

    +

    +To fly from Gothenburg to Prague, first take LH3043 to Frankfurt, then OK0537 to Prague. +

    +

    +The well-formedness of this text is partly expressible by dependent typing: +

    +
    +    cat
    +      City ;
    +      Flight City City ;
    +    fun
    +      Gothenburg, Frankfurt, Prague : City ;
    +      LH3043 : Flight Gothenburg Frankfurt ;
    +      OK0537 : Flight Frankfurt Prague ;
    +
    +

    +This rules out texts saying take OK0537 from Gothenburg to Prague. However, there is a +further condition saying that it must be possible to change from LH3043 to OK0537 in Frankfurt. +This can be modelled as a proof object of a suitable type, which is required by the constructor +that connects flights. +

    +
    +    cat
    +      IsPossible (x,y,z : City)(Flight x y)(Flight y z) ;
    +    fun
    +      Connect : (x,y,z : City) -> 
    +        (u : Flight x y) -> (v : Flight y z) -> 
    +          IsPossible x y z u v -> Flight x z ;
    +
    +

    +

    Variable bindings

Mathematical notation and programming languages have lots of @@ -2813,8 +3001,8 @@ a universally quantified proposition

    consists of the binding (All x) of the variable x, -and the body B(x), where the variable x is -said to occur bound. +and the body B(x), where the variable x can have +bound occurrences.

    Variable bindings appear in informal mathematical language as well, for @@ -2901,7 +3089,6 @@ since the linearization type of Prop is {s : Str}

    -(we remind that the order of fields in a record does not matter). In other words, the linearization of a function consists of a linearization of the body together with a field for a linearization of the bound variable. @@ -2911,16 +3098,16 @@ should notice that GF requires trees to be in any function of type

    -    A -> C
    +    A -> B
     

    always has a syntax tree of the form

    -    \x -> c
    +    \x -> b
     

    -where c : C under the assumption x : A. +where b : B under the assumption x : A. It is in this form that an expression can be analysed as having a bound variable and a body.

    @@ -2957,8 +3144,7 @@ linearized into the same strings that represent them in the print-out of the abstract syntax.

    -To be able to -parse variable symbols, however, GF needs to know what +To be able to parse variable symbols, however, GF needs to know what to look for (instead of e.g. trying to parse any string as a variable). What strings are parsed as variable symbols is defined in the lexical analysis part of GF parsing @@ -2968,11 +3154,10 @@ is defined in the lexical analysis part of GF parsing All (\x -> Eq x x)

    -(see more details on lexers below). -If several variables are bound in the same argument, the -labels are $0, $1, $2, etc. +(see more details on lexers below). If several variables are bound in the +same argument, the labels are $0, $1, $2, etc.

    - +

    Semantic definitions

We have seen that, @@ -2993,7 +3178,7 @@ recognized by the keyword def. At its simplest, it is just the definition of one constant, e.g.

    -    def one = succ zero ;
    +    def one = Succ Zero ;
     

    We can also define a function with arguments, @@ -3006,8 +3191,9 @@ which is still a special case of the most general notion of definition, that of a group of pattern equations:

    -    def sum x zero = x ;
    -    def sum x (succ y) = succ (sum x y) ;
    +    def 
    +      sum x Zero = x ;
    +      sum x (Succ y) = Succ (sum x y) ;
     

    To compute a term is, as in functional programming languages, @@ -3015,10 +3201,10 @@ simply to follow a chain of reductions until no definition can be applied. For instance, we compute

    -    sum one one -->
    -    sum (succ zero) (succ zero) -->
    -    succ (sum (succ zero) zero) -->
    -    succ (succ zero)
    +    sum one one -->
    +    sum (Succ Zero) (Succ Zero) -->
    +    Succ (sum (Succ Zero) Zero) -->
    +    Succ (Succ Zero)
     

    Computation in GF is performed with the pt command and the @@ -3027,7 +3213,7 @@ Computation in GF is performed with the pt command and the

         > p -tr "1 + 1" | pt -transform=compute -tr | l
         sum one one
    -    succ (succ zero)
    +    Succ (Succ Zero)
         s(s(0))
     

    @@ -3040,9 +3226,9 @@ Thus, trivially, all trees in a chain of computation are definitionally equal to each other. So are the trees

    -    sum zero (succ one)
    -    succ one
    -    sum (sum zero zero) (sum (succ zero) one)
    +    sum Zero (Succ one)
    +    Succ one
    +    sum (sum Zero Zero) (sum (Succ Zero) one)
     

    and infinitely many other trees. @@ -3052,8 +3238,8 @@ A fact that has to be emphasized about def definitions is that they are not performed as a first step of linearization. We say that linearization is intensional, which means that the definitional equality of two trees does not imply that -they have the same linearizations. For instance, the seven terms -above all have different linearizations in arithmetic notation: +they have the same linearizations. For instance, each of the seven terms +shown above has a different linearizations in arithmetic notation:

         1 + 1
    @@ -3085,7 +3271,7 @@ equal types. For instance,
     

         Proof (Odd one)
    -    Proof (Odd (succ zero))
    +    Proof (Odd (Succ Zero))
     

    are equal types. Hence, any tree that type checks as a proof that @@ -3116,7 +3302,7 @@ and other functions, GF has a judgement form data to tell that certain functions are canonical, e.g.

    -    data Nat = succ | zero ;
    +    data Nat = Succ | Zero ;
     

    Unlike in Haskell, but similarly to ALF (where constructor functions @@ -3127,269 +3313,20 @@ are given separately, in ordinary fun judgements. One can also write directly

    -    data succ : Nat -> Nat ;
    +    data Succ : Nat -> Nat ;
     

    which is equivalent to the two judgements

    -    fun succ : Nat -> Nat ;
    -    data Nat = succ ;
    -
    -

    - -

    More features of the module system

    - -

    Interfaces, instances, and functors

    - -

    Resource grammars and their reuse

    -

    -A resource grammar is a grammar built on linguistic grounds, -to describe a language rather than a domain. -The GF resource grammar library, which contains resource grammars for -10 languages, is described more closely in the following -documents: -

    - - -

    -However, to give a flavour of both using and writing resource grammars, -we have created a miniature resource, which resides in the -subdirectory resource. Its API consists of the following -three modules: -

    -

    -Syntax - syntactic structures, language-independent: -

    -
    -  
    -
    -

    -LexEng - lexical paradigms, English: -

    -
    -  
    -
    -

    -LexIta - lexical paradigms, Italian: -

    -
    -  
    -
    -

    -

    -Only these three modules should be opened in applications. -The implementations of the resource are given in the following four modules: -

    -

    -MorphoEng, -

    -
    -  
    -
    -

    -MorphoIta: low-level morphology -

    - - -

    -An example use of the resource resides in the -subdirectory applications. -It implements the abstract syntax -FoodComments for English and Italian. -The following diagram shows the module structure, indicating by -colours which modules are written by the grammarian. The two blue modules -form the abstract syntax. The three red modules form the concrete syntax. -The two green modules are trivial instantiations of a functor. -The rest of the modules (black) come from the resource. -

    -

    - -

    - -

    Restricted inheritance and qualified opening

    - -

    Using the standard resource library

    -

    -The example files of this chapter can be found in -the directory arithm. -

    - -

    The simplest way

    -

    -The simplest way is to open a top-level Lang module -and a Paradigms module: -

    -
    -    abstract Foo = ...
    -  
    -    concrete FooEng = open LangEng, ParadigmsEng in ...
    -    concrete FooSwe = open LangSwe, ParadigmsSwe in ...
    -
    -

    -Here is an example. -

    -
    -  abstract Arithm = {
    -    cat
    -      Prop ;
    -      Nat ;
    -    fun
    -      Zero : Nat ;
    -      Succ : Nat -> Nat ;
    -      Even : Nat -> Prop ;
    -      And  : Prop -> Prop -> Prop ;
    -  }
    -  
    -  --# -path=.:alltenses:prelude
    -  
    -  concrete ArithmEng of Arithm = open LangEng, ParadigmsEng in {
    -    lincat
    -      Prop = S ;
    -      Nat  = NP ;
    -    lin
    -      Zero = 
    -        UsePN (regPN "zero" nonhuman) ;
    -      Succ n = 
    -        DetCN (DetSg (SgQuant DefArt) NoOrd) (ComplN2 (regN2 "successor") n) ;
    -      Even n = 
    -        UseCl TPres ASimul PPos 
    -          (PredVP n (UseComp (CompAP (PositA (regA "even"))))) ;
    -      And x y = 
    -        ConjS and_Conj (BaseS x y) ;
    -  
    -  }
    -  
    -  --# -path=.:alltenses:prelude
    -  
    -  concrete ArithmSwe of Arithm = open LangSwe, ParadigmsSwe in {
    -    lincat
    -      Prop = S ;
    -      Nat  = NP ;
    -    lin
    -      Zero = 
    -        UsePN (regPN "noll" neutrum) ;
    -      Succ n = 
    -        DetCN (DetSg (SgQuant DefArt) NoOrd) 
    -          (ComplN2 (mkN2 (mk2N "efterföljare" "efterföljare") 
    -             (mkPreposition "till")) n) ;
    -      Even n = 
    -        UseCl TPres ASimul PPos 
    -          (PredVP n (UseComp (CompAP (PositA (regA "jämn"))))) ;
    -      And x y = 
    -        ConjS and_Conj (BaseS x y) ;
    -  }
    +    fun Succ : Nat -> Nat ;
    +    data Nat = Succ ;
     

    -

    How to find resource functions

    -

    -The definitions in this example were found by parsing: -

    -
    -    > i LangEng.gf
    -  
    -    -- for Successor:
    -    > p -cat=NP -mcfg -parser=topdown "the mother of Paris"
    -  
    -    -- for Even:
    -    > p -cat=S -mcfg -parser=topdown "Paris is old"
    -  
    -    -- for And:
    -    > p -cat=S -mcfg -parser=topdown "Paris is old and I am old"
    -
    -

    -The use of parsing can be systematized by example-based grammar writing, -to which we will return later. -

    +

    Case study: representing anaphoric reference TODO

    -

    A functor implementation

    -

    -The interesting thing now is that the -code in ArithmSwe is similar to the code in ArithmEng, except for -some lexical items ("noll" vs. "zero", "efterföljare" vs. "successor", -"jämn" vs. "even"). How can we exploit the similarities and -actually share code between the languages? -

    -

    -The solution is to use a functor: an incomplete module that opens -an abstract as an interface, and then instantiate it to different -languages that implement the interface. The structure is as follows: -

    -
    -    abstract Foo ...
    -  
    -    incomplete concrete FooI = open Lang, Lex in ...
    -  
    -    concrete FooEng of Foo = FooI with (Lang=LangEng), (Lex=LexEng) ;
    -    concrete FooSwe of Foo = FooI with (Lang=LangSwe), (Lex=LexSwe) ;
    -
    -

    -where Lex is an abstract lexicon that includes the vocabulary -specific to this application: -

    -
    -    abstract Lex = Cat ** ...
    -  
    -    concrete LexEng of Lex = CatEng ** open ParadigmsEng in ...
    -    concrete LexSwe of Lex = CatSwe ** open ParadigmsSwe in ...  
    -
    -

    -Here, again, a complete example (abstract Arithm is as above): -

    -
    -  incomplete concrete ArithmI of Arithm = open Lang, Lex in {
    -    lincat
    -      Prop = S ;
    -      Nat  = NP ;
    -    lin
    -      Zero = 
    -        UsePN zero_PN ;
    -      Succ n = 
    -        DetCN (DetSg (SgQuant DefArt) NoOrd) (ComplN2 successor_N2 n) ;
    -      Even n = 
    -        UseCl TPres ASimul PPos 
    -          (PredVP n (UseComp (CompAP (PositA even_A)))) ;
    -      And x y = 
    -        ConjS and_Conj (BaseS x y) ;
    -  }
    -  
    -  --# -path=.:alltenses:prelude
    -  concrete ArithmEng of Arithm = ArithmI with
    -    (Lang = LangEng),
    -    (Lex = LexEng) ;
    -  
    -  --# -path=.:alltenses:prelude
    -  concrete ArithmSwe of Arithm = ArithmI with
    -    (Lang = LangSwe),
    -    (Lex = LexSwe) ;
    -  
    -  abstract Lex = Cat ** {
    -    fun
    -      zero_PN : PN ;
    -      successor_N2 : N2 ;  
    -      even_A : A ;
    -  }
    -  
    -  concrete LexSwe of Lex = CatSwe ** open ParadigmsSwe in {
    -    lin 
    -      zero_PN = regPN "noll" neutrum ;
    -      successor_N2 = 
    -        mkN2 (mk2N "efterföljare" "efterföljare") (mkPreposition "till") ;
    -      even_A = regA "jämn" ;
    -  }
    -
    -

    - -

    Transfer modules

    +

    Transfer modules TODO

    Transfer means noncompositional tree-transforming operations. The command apply_transfer = at is typically used in a pipe: @@ -3407,9 +3344,9 @@ See the transfer language documentation for more information.

    + +

    Practical issues TODO

    -

    Practical issues

    -

    Lexers and unlexers

    Lexers and unlexers can be chosen from @@ -3442,10 +3379,9 @@ Given by help -lexer, help -unlexer: -unlexer=codelit like code, but remove string literal quotes -unlexer=concat remove all spaces -unlexer=bind like identity, but bind at "&+" -

    - +

    Efficiency of grammars

    Issues: @@ -3456,7 +3392,7 @@ Issues:

  • parsing efficiency: -fcfg vs. others - +

    Speech input and output

The speak_aloud = sa command sends a string to the speech @@ -3486,7 +3422,7 @@ The method works only for grammars of English. Both Flite and ATK are freely available through the links above, but they are not distributed together with GF.

    - +

    Multilingual syntax editor

    The @@ -3497,18 +3433,18 @@ describes the use of the editor, which works for any multilingual GF grammar. Here is a snapshot of the editor:

    - +

    The grammars of the snapshot are from the Letter grammar package.

    - +

    Interactive Development Environment (IDE)

    Forthcoming.

    - +

    Communicating with GF

    Other processes can communicate with the GF command interpreter, @@ -3525,7 +3461,7 @@ Thus the most silent way to invoke GF is - +

    Embedded grammars in Haskell, Java, and Prolog

    GF grammars can be used as parts of programs written in the @@ -3537,15 +3473,15 @@ following languages. The links give more documentation.

  • Prolog - +

    Alternative input and output grammar formats

    A summary is given in the following chart of GF grammar compiler phases:

    + +

    Larger case studies TODO

    -

    Case studies

    -

    Interfacing formal and natural languages

    Formal and Informal Software Specifications, @@ -3557,7 +3493,12 @@ English and German.

    A simpler example will be explained here.

    + +

    A multimodal dialogue system

    +

    +See TALK project deliverables, TALK homepage +

    - - + + diff --git a/doc/tutorial/gf-tutorial2.txt b/doc/tutorial/gf-tutorial2.txt index 4b20f38cd..9c3ae71b2 100644 --- a/doc/tutorial/gf-tutorial2.txt +++ b/doc/tutorial/gf-tutorial2.txt @@ -1,5 +1,5 @@ Grammatical Framework Tutorial -Author: Aarne Ranta +Author: Aarne Ranta aarne (at) cs.chalmers.se Last update: %%date(%c) % NOTE: this is a txt2tags file. @@ -20,7 +20,7 @@ Last update: %%date(%c) -[../gf-logo.gif] +[../gf-logo.png] @@ -552,7 +552,7 @@ module forms are %--! -===Record types, records, and ``Str``s=== +===Records and strings=== The linearization type of a category is a **record type**, with zero of more **fields** of different types. The simplest record @@ -922,7 +922,7 @@ The graph uses %--! -==System commands== +===System commands=== To document your grammar, you may want to print the graph into a file, e.g. a ``.png`` file that @@ -1385,7 +1385,8 @@ Why does the command also show the operations that form the same as the value of ``Noun``. -==Using morphology in concrete syntax== + +==Using parameters in concrete syntax== We can now enrich the concrete syntax definitions to comprise morphology. This will involve a more radical @@ -1595,34 +1596,6 @@ are not a good idea in top-level categories accessed by the users of a grammar application. -%--! -==More constructs for concrete syntax== - - -%--! -===Local definitions=== - -Local definitions ("``let`` expressions") are used in functional -programming for two reasons: to structure the code into smaller -expressions, and to avoid repeated computation of one and -the same expression. Here is an example, from -[``MorphoIta`` resource/MorphoIta.gf]: -``` - oper regNoun : Str -> Noun = \vino -> - let - vin = init vino ; - o = last vino - in - case o of { - "a" => mkNoun Fem vino (vin + "e") ; - "o" | "e" => mkNoun Masc vino (vin + "i") ; - _ => mkNoun Masc vino vino - } ; -``` - - - - %--! ===Free variation=== @@ -1644,1004 +1617,46 @@ In general, ``variants`` should be used cautiously. 
It is not recommended for modules aimed to be libraries, because the user of the library has no way to choose among the variants. -%Moreover, ``variants`` is only defined for basic types (``Str`` -%and parameter types). The grammar compiler will admit -%``variants`` for any types, but it will push it to the -%level of basic types in a way that may be unwanted. -%For instance, German has two words meaning "car", -%//Wagen//, which is Masculine, and //Auto//, which is Neuter. -%However, if one writes -%``` -% variants {{s = "Wagen" ; g = Masc} ; {s = "Auto" ; g = Neutr}} -%``` -%this will compute to -%``` -% {s = variants {"Wagen" ; "Auto"} ; g = variants {Masc ; Neutr}} -%``` -%which will also accept erroneous combinations of strings and genders. +===Overloading of operations=== +Large libraries, such as the GF Resource Grammar Library, may define +hundreds of names, which can be impractical +for both the library writer and the user. The writer has to invent longer +and longer names, which are not always intuitive, +and the user has to learn or at least be able to find all these names. +A solution to this problem, adopted by languages such as C++, is **overloading**: +the same name can be used for several functions. When such a name is used, the +compiler performs **overload resolution** to find out which of the possible functions +is meant. The resolution is based on the types of the functions: all functions that +have the same name must have different types. - -===Record extension and subtyping=== - -Record types and records can be **extended** with new fields. For instance, -in German it is natural to see transitive verbs as verbs with a case. -The symbol ``**`` is used for both constructs. 
Here is an example +of an overload group, defining four ways to define nouns in Italian: ``` - lincat TV = Verb ** {c : Case} ; - - lin Follow = regVerb "folgen" ** {c = Dative} ; -``` -To extend a record type or a record with a field whose label it -already has is a type error. - -A record type //T// is a **subtype** of another one //R//, if //T// has -all the fields of //R// and possibly other fields. For instance, -an extension of a record type is always a subtype of it. - -If //T// is a subtype of //R//, an object of //T// can be used whenever -an object of //R// is required. For instance, a transitive verb can -be used whenever a verb is required. - -**Contravariance** means that a function taking an //R// as argument -can also be applied to any object of a subtype //T//. - - - -===Tuples and product types=== - -Product types and tuples are syntactic sugar for record types and records: -``` - T1 * ... * Tn === {p1 : T1 ; ... ; pn : Tn} - === {p1 = T1 ; ... ; pn = Tn} -``` -Thus the labels ``p1, p2,...`` are hard-coded. - - -===Record and tuple patterns=== - -Record types of parameter types are also parameter types. -A typical example is a record of agreement features, e.g. French -``` - oper Agr : PType = {g : Gender ; n : Number ; p : Person} ; -``` -Notice the term ``PType`` rather than just ``Type`` referring to -parameter types. Every ``PType`` is also a ``Type``. - -Pattern matching is done in the expected way, but it can moreover -utilize partial records: the branch -``` - {g = Fem} => t -``` -in a table of type ``Agr => T`` means the same as -``` - {g = Fem ; n = _ ; p = _} => t -``` -Tuple patterns are translated to record patterns in the -same way as tuples to records; partial patterns make it -possible to write, slightly surprisingly, -``` - case of { - => t - ... - } -``` - -%--! 
-===Regular expression patterns=== - -To define string operations computed at compile time, such -as in morphology, it is handy to use regular expression patterns: - - //p// ``+`` //q// : token consisting of //p// followed by //q// - - //p// ``*`` : token //p// repeated 0 or more times - (max the length of the string to be matched) - - ``-`` //p// : matches anything that //p// does not match - - //x// ``@`` //p// : bind to //x// what //p// matches - - //p// ``|`` //q// : matches what either //p// or //q// matches - - -The last three apply to all types of patterns, the first two only to token strings. -Example: plural formation in Swedish 2nd declension -(//pojke-pojkar, nyckel-nycklar, seger-segrar, bil-bilar//): -``` - plural2 : Str -> Str = \w -> case w of { - pojk + "e" => pojk + "ar" ; - nyck + "e" + l@("l" | "r" | "n") => nyck + l + "ar" ; - bil => bil + "ar" - } ; -``` -Another example: English noun plural formation. -``` - plural : Str -> Str = \w -> case w of { - _ + ("s" | "z" | "x" | "sh") => w + "es" ; - _ + ("a" | "o" | "u" | "e") + "y" => w + "s" ; - x + "y" => x + "ies" ; - _ => w + "s" - } ; -``` -Semantics: variables are always bound to the **first match**, which is the first -in the sequence of binding lists ``Match p v`` defined as follows. In the definition, -``p`` is a pattern and ``v`` is a value. -``` - Match (p1|p2) v = Match p1 v ++ Match p2 v - Match (p1+p2) s = [Match p1 s1 ++ Match p2 s2 | - i <- [0..length s], (s1,s2) = splitAt i s] - Match p* s = [[]] if Match "" s ++ Match p s ++ Match (p+p) s ++... /= [] - Match -p v = [[]] if Match p v = [] - Match c v = [[]] if c == v -- for constant and literal patterns c - Match x v = [[(x,v)]] -- for variable patterns x - Match x@p v = [[(x,v)]] + M if M = Match p v /= [] - Match p v = [] otherwise -- failure -``` -Examples: -- ``x + "e" + y`` matches ``"peter"`` with ``x = "p", y = "ter"`` -- ``x + "er"*`` matches ``"burgerer"`` with ``x = "burg" - - - - - -%--! 
-===Prefix-dependent choices=== - -Sometimes a token has different forms depending on the token -that follows. An example is the English indefinite article, -which is //an// if a vowel follows, //a// otherwise. -Which form is chosen can only be decided at run time, i.e. -when a string is actually build. GF has a special construct for -such tokens, the ``pre`` construct exemplified in -``` - oper artIndef : Str = - pre {"a" ; "an" / strs {"a" ; "e" ; "i" ; "o"}} ; -``` -Thus -``` - artIndef ++ "cheese" ---> "a" ++ "cheese" - artIndef ++ "apple" ---> "an" ++ "apple" -``` -This very example does not work in all situations: the prefix -//u// has no general rules, and some problematic words are -//euphemism, one-eyed, n-gram//. It is possible to write -``` - oper artIndef : Str = - pre {"a" ; - "a" / strs {"eu" ; "one"} ; - "an" / strs {"a" ; "e" ; "i" ; "o" ; "n-"} - } ; -``` - - -===Predefined types and operations=== - -GF has the following predefined categories in abstract syntax: -``` - cat Int ; -- integers, e.g. 0, 5, 743145151019 - cat Float ; -- floats, e.g. 0.0, 3.1415926 - cat String ; -- strings, e.g. "", "foo", "123" -``` -The objects of each of these categories are **literals** -as indicated in the comments above. No ``fun`` definition -can have a predefined category as its value type, but -they can be used as arguments. For example: -``` - fun StreetAddress : Int -> String -> Address ; - lin StreetAddress number street = {s = number.s ++ street.s} ; - - -- e.g. (StreetAddress 10 "Downing Street") : Address -``` -FIXME: The linearization type is ``{s : Str}`` for all these categories. - - - -==More concepts of abstract syntax== - -===GF as a logical framework=== - -In this section, we will show how -to encode advanced semantic concepts in an abstract syntax. -We use concepts inherited from **type theory**. 
Type theory -is the basis of many systems known as **logical frameworks**, which are -used for representing mathematical theorems and their proofs on a computer. -In fact, GF has a logical framework as its proper part: -this part is the abstract syntax. - -In a logical framework, the formalization of a mathematical theory -is a set of type and function declarations. The following is an example -of such a theory, represented as an ``abstract`` module in GF. -``` -abstract Arithm = { - cat - Prop ; -- proposition - Nat ; -- natural number - fun - Zero : Nat ; -- 0 - Succ : Nat -> Nat ; -- successor of x - Even : Nat -> Prop ; -- x is even - And : Prop -> Prop -> Prop ; -- A and B - } -``` -A concrete syntax is given below, as an example of using the resource grammar -library. - - - -===Dependent types=== - -**Dependent types** are a characteristic feature of GF, -inherited from the -**constructive type theory** of Martin-Löf and -distinguishing GF from most other grammar formalisms and -functional programming languages. -The initial main motivation for developing GF was, indeed, -to have a grammar formalism with dependent types. -As can be inferred from the fact that we introduce them only now, -after having written lots of grammars without them, -dependent types are no longer the only motivation for GF. -But they are still important and interesting. - - -Dependent types can be used for stating stronger -**conditions of well-formedness** than non-dependent types. -A simple example is postal addresses. Ignoring the other details, -let us take a look at addresses consisting of -a street, a city, and a country. 
-``` -abstract Address = { - cat - Address ; Country ; City ; Street ; - - fun - mkAddress : Country -> City -> Street -> Address ; - - UK, France : Country ; - Paris, London, Grenoble : City ; - OxfordSt, ShaftesburyAve, BdRaspail, RueBlondel, AvAlsaceLorraine : Street ; + oper mkN = overload { + mkN : Str -> N = -- regular nouns + mkN : Str -> Gender -> N = -- regular nouns with unexpected gender + mkN : Str -> Str -> N = -- irregular nouns + mkN : Str -> Str -> Gender -> N = -- irregular nouns with unexpected gender } ``` -The linearization rules -are straightforward, +All of the following uses of ``mkN`` are easy to resolve: ``` - lin - - mkAddress country city street = - ss (street.s ++ "," ++ city.s ++ "," ++ country.s) ; - UK = ss ("U.K.") ; - France = ss ("France") ; - Paris = ss ("Paris") ; - London = ss ("London") ; - Grenoble = ss ("Grenoble") ; - OxfordSt = ss ("Oxford" ++ "Street") ; - ShaftesburyAve = ss ("Shaftesbury" ++ "Avenue") ; - BdRaspail = ss ("boulevard" ++ "Raspail") ; - RueBlondel = ss ("rue" ++ "Blondel") ; - AvAlsaceLorraine = ss ("avenue" ++ "Alsace-Lorraine") ; -``` -with the exception of ``mkAddress``, where we have -reversed the order of the constituents. The type of ``mkAddress`` -in the abstract syntax takes its arguments in a "logical" order, -with increasing precision. (This order is sometimes even used in the concrete -syntax of addresses, e.g. in Russia). - - - -Both existing and non-existing addresses are recognized by this -grammar. The non-existing ones in the following randomly generated -list have afterwards been marked by *: -``` - > gr -cat=Address -number=7 | l - - * Oxford Street , Paris , France - * Shaftesbury Avenue , Grenoble , U.K. - boulevard Raspail , Paris , France - * rue Blondel , Grenoble , U.K. - * Shaftesbury Avenue , Grenoble , France - * Oxford Street , London , France - * Shaftesbury Avenue , Grenoble , France -``` -Dependent types provide a way to guarantee that addresses are -well-formed. 
What we do is to include **contexts** in -``cat`` judgements: -``` - cat Address ; - cat Country ; - cat City Country ; - cat Street (x : Country)(y : City x) ; -``` -The first two judgements are as before, but the third one makes -``City`` dependent on ``Country``: there are no longer just cities, -but cities of the U.K. and cities of France. The fourth judgement -makes ``Street`` dependent on ``City``; but since -``City`` is itself dependent on ``Country``, we must -include them both in the context, moreover guaranteeing that -the city is one of the given country. Since the context itself -is built by using a dependent type, we have to use variables -to indicate the dependencies. The judgement we used for ``City`` -is actually shorthand for -``` - cat City (x : Country) -``` -which is only possible if the subsequent context does not depend on ``x``. - -The ``fun`` judgements of the grammar are modified accordingly: -``` - fun - - mkAddress : (x : Country) -> (y : City x) -> Street x y -> Address ; - - UK : Country ; - France : Country ; - Paris : City France ; - London : City UK ; - Grenoble : City France ; - OxfordSt : Street UK London ; - ShaftesburyAve : Street UK London ; - BdRaspail : Street France Paris ; - RueBlondel : Street France Paris ; - AvAlsaceLorraine : Street France Grenoble ; -``` -Since the type of ``mkAddress`` now has dependencies among -its argument types, we have to use variables just like we used in -the context of ``Street`` above. What we claimed to be the -"logical" order of the arguments is now forced by the type system -of GF: a variable must be declared (=bound) before it can be -referenced (=used). - -The effect of dependent types is that the *-marked addresses above are -no longer well-formed. What the GF parser actually does is that it -initially accepts them (by using a context-free parsing algorithm) -and then rejects them (by type checking). 
The random generator does not produce -illegal addresses (this could be useful in bulk mailing!). -The linearization algorithm does not care about type dependencies; -actually, since the //categories// (ignoring their arguments) -are the same in both abstract syntaxes, -we use the same concrete syntax -for both of them. - -**Remark**. Function types //without// -variables are actually a shorthand notation: writing -``` - fun PredV1 : NP -> V1 -> S -``` -is shorthand for -``` - fun PredV1 : (x : NP) -> (y : V1) -> S -``` -or any other naming of the variables. Actually the use of variables -sometimes shortens the code, since we can write e.g. -``` - oper triple : (x,y,z : Str) -> Str = ... -``` - - -===Dependent types in concrete syntax=== - -The **functional fragment** of GF -terms and types comprises function types, applications, lambda -abstracts, constants, and variables. This fragment is similar in -abstract and concrete syntax. In particular, -dependent types are also available in concrete syntax. -We have not made use of them yet, -but we will now look at one example of how they -can be used. - -Those readers who are familiar with functional programming languages -like ML and Haskell, may already have missed **polymorphic** -functions. For instance, Haskell programmers have access to -the functions -``` - const :: a -> b -> a - const c _ = c - - flip :: (a -> b -> c) -> b -> a -> c - flip f y x = f x y -``` -which can be used for any given types ``a``,``b``, and ``c``. - -The GF counterpart of polymorphic functions are **monomorphic** -functions with explicit **type variables**. Thus the above -definitions can be written -``` - oper const :(a,b : Type) -> a -> b -> a = - \_,_,c,_ -> c ; - - oper flip : (a,b,c : Type) -> (a -> b ->c) -> b -> a -> c = - \_,_,_,f,x,y -> f y x ; -``` -When the operations are used, the type checker requires -them to be equipped with all their arguments; this may be a nuisance -for a Haskell or ML programmer. 
- - - -===Expressing selectional restrictions=== - -This section introduces a way of using dependent types to -formalize a notion known as **selectional restrictions** -in linguistics. We first present a mathematical model -of the notion, and then integrate it in a linguistically -motivated syntax. - -In linguistics, a -grammar is usually thought of as being about **syntactic well-formedness** -in a rather liberal sense: an expression can be well-formed without -being meaningful, in other words, without being -**semantically well-formed**. -For instance, the sentence -``` - the number 2 is equilateral -``` -is syntactically well-formed but semantically ill-formed. -It is well-formed because it combines a well-formed -noun phrase ("the number 2") with a well-formed -verb phrase ("is equilateral") in accordance with the -rule that the verb phrase is inflected in the -number of the noun phrase: -``` - fun PredVP : NP -> VP -> S ; - lin PredVP np v = {s = np.s ++ vp.s ! np.n} ; -``` -It is ill-formed because the predicate "is equilateral" -is only defined for triangles, not for numbers. - -In a straightforward type-theoretical formalization of -mathematics, domains of mathematical objects -are defined as types. In GF, we could write -``` - cat Nat ; - cat Triangle ; - cat Prop ; -``` -for the types of natural numbers, triangles, and propositions, -respectively. -Noun phrases are typed as objects of basic types other than -``Prop``, whereas verb phrases are functions from basic types -to ``Prop``. 
For instance, -``` - fun two : Nat ; - fun Even : Nat -> Prop ; - fun Equilateral : Triangle -> Prop ; -``` -With these judgements, and the linearization rules -``` - lin two = ss ["the number 2"] ; - lin Even x = ss (x.s ++ ["is even"]) ; - lin Equilateral x = ss (x.s ++ ["is equilateral"]) ; -``` -we can form the proposition ``Even two`` -``` - the number 2 is even -``` -but no proposition linearized to -``` - the number 2 is equilateral -``` -since ``Equilateral two`` is not a well-formed type-theoretical object. - -When formalizing mathematics, e.g. in the purpose of -computer-assisted theorem proving, we are certainly interested -in semantic well-formedness: we want to be sure that a proposition makes -sense before we make the effort of proving it. The straightforward typing -of nouns and predicates shown above is the way in which this -is guaranteed in various proof systems based on type theory. -(Notice that it is still possible to form //false// propositions, -e.g. "the number 3 is even". -False and meaningless are different things.) - -As shown by the linearization rules for ``two``, ``Even``, -etc, it //is// possible to use straightforward mathematical typings -as the abstract syntax of a grammar. However, this syntax is not very -expressive linguistically: for instance, there is no distinction between -adjectives and verbs. It is hard to give rules for structures like -adjectival modification ("even number") and conjunction of predicates -("even or odd"). - -By using dependent types, it is simple to combine a linguistically -motivated system of categories with mathematically motivated -type restrictions. 
What we need is a category of domains of -individual objects, -``` - cat Dom -``` -and dependencies of other categories on this: -``` - cat - S ; -- sentence - V1 Dom ; -- one-place verb - V2 Dom Dom ; -- two-place verb - A1 Dom ; -- one-place adjective - A2 Dom Dom ; -- two-place adjective - PN Dom ; -- proper name - NP Dom ; -- noun phrase - Conj ; -- conjunction - Det ; -- determiner -``` -The number of ``Dom`` arguments depends on the semantic type -corresponding to the category: one-place verbs and adjectives -correspond to types of the form -``` - A -> Prop -``` -whereas two-place verbs and adjectives correspond to types of the form -``` - A -> B -> Prop -``` -where the domains ``A`` and ``B`` can be distinct. -Proper names correspond to types of the form -``` - A -``` -that is, individual objects of the domain ``A``. Noun phrases -correspond to -``` - (A -> Prop) -> Prop -``` -that is, **quantifiers** over the domain ``A``. -Sentences, conjunctions, and determiners correspond to -``` - Prop - Prop -> Prop -> Prop - (A : Dom) -> (A -> Prop) -> Prop -``` -respectively, -and are thus independent of domain. As for common nouns ``CN``, -the simplest semantics is that they correspond to -``` - Dom -``` -In this section, we will, in fact, write ``Dom`` instead of ``CN``. - -Having thus parametrized categories on domains, we have to reformulate -the rules of predication, etc, accordingly. 
This is straightforward: -``` - fun - PredV1 : (A : Dom) -> NP A -> V1 A -> S ; - ComplV2 : (A,B : Dom) -> V2 A B -> NP B -> V1 A ; - UsePN : (A : Dom) -> PN A -> NP A ; - DetCN : Det -> (A : Dom) -> NP A ; - ModA1 : (A : Dom) -> A1 A -> Dom ; - ConjS : Conj -> S -> S -> S ; - ConjV1 : (A : Dom) -> Conj -> V1 A -> V1 A -> V1 A ; -``` -In linearization rules, -we typically use wildcards for the domain arguments, -to get arities right: -``` - lin - PredV1 _ np vp = ss (np.s ++ vp.s) ; - ComplV2 _ _ v2 np = ss (v2.s ++ np.s) ; - UsePN _ pn = pn ; - DetCN det cn = ss (det.s ++ cn.s) ; - ModA1 cn a1 = ss (a1.s ++ cn.s) ; - ConjS conj s1 s2 = ss (s1.s ++ conj.s ++ s2.s) ; - ConjV1 _ conj v1 v2 = ss (v1.s ++ conj.s ++ v2.s) ; -``` -The domain arguments thus get suppressed in linearization. -Parsing initially returns metavariables for them, -but type checking can usually restore them -by inference from those arguments that are not suppressed. - -One traditional linguistic example of domain restrictions -(= selectional restrictions) is the contrast between the two sentences -``` - John plays golf - golf plays John -``` -To explain the contrast, we introduce the functions -``` - human : Dom ; - game : Dom ; - play : V2 human game ; - John : PN human ; - Golf : PN game ; -``` -Both sentences still pass the context-free parser, -returning trees with lots of metavariables of type ``Dom``: -``` - PredV1 ?0 (UsePN ?1 John) (ComplV2 ?2 ?3 play (UsePN ?4 Golf)) - - PredV1 ?0 (UsePN ?1 Golf) (ComplV2 ?2 ?3 play (UsePN ?4 John)) -``` -But only the former sentence passes the type checker, which moreover -infers the domain arguments: -``` - PredV1 human (UsePN human John) (ComplV2 human game play (UsePN game Golf)) -``` -To try this out in GF, use ``pt = put_term`` with the **tree transformation** -that solves the metavariables by type checking: -``` - > p -tr "John plays golf" | pt -transform=solve - > p -tr "golf plays John" | pt -transform=solve -``` -In the latter case, no 
solutions are found. - -A known problem with selectional restrictions is that they can be more -or less liberal. For instance, -``` - John loves Mary - John loves golf -``` -both make sense, even though ``Mary`` and ``golf`` -are of different types. A natural solution in this case is to -formalize ``love`` as a **polymorphic** verb, which takes -a human as its first argument but an object of any type as its second -argument: -``` - fun love : (A : Dom) -> V2 human A ; - lin love _ = ss "loves" ; -``` -Problems remain, such as **subtyping** (e.g. what -is meaningful for a ``human`` is also meaningful for -a ``man`` and a ``woman``, but not the other way round) -and the **extended use** of expressions (e.g. a metaphoric use that -makes sense of "golf plays John"). - - - - - -===Proof objects=== - -Perhaps the most well-known feature of constructive type theory is -the **Curry-Howard isomorphism**, also known as the -**propositions as types principle**. Its earliest formulations -were attempts to give semantics to the logical systems of -propositional and predicate calculus. In this section, we will consider -a more elementary example, showing how the notion of proof is useful -outside mathematics, as well. - -We first define the category of unary (also known as Peano-style) -natural numbers: -``` - cat Nat ; - fun Zero : Nat ; - fun Succ : Nat -> Nat ; -``` -The **successor function** ``Succ`` generates an infinite -sequence of natural numbers, beginning from ``Zero``. - -We then define what it means for a number //x// to be less than -a number //y//. Our definition is based on two axioms: -- ``Zero`` is less than ``Succ y`` for any ``y``. -- If ``x`` is less than ``y``, then``Succ x`` is less than ``Succ y``. 
- - -The most straightforward way of expressing these axioms in type theory -is as typing judgements that introduce objects of a type ``Less x y``: -``` - cat Less Nat Nat ; - fun lessZ : (y : Nat) -> Less Zero (Succ y) ; - fun lessS : (x,y : Nat) -> Less x y -> Less (Succ x) (Succ y) ; -``` -Objects formed by ``lessZ`` and ``lessS`` are -called **proof objects**: they establish the truth of certain -mathematical propositions. -For instance, the fact that 2 is less that -4 has the proof object -``` - lessS (Succ Zero) (Succ (Succ (Succ Zero))) - (lessS Zero (Succ (Succ Zero)) (lessZ (Succ Zero))) -``` -whose type is -``` - Less (Succ (Succ Zero)) (Succ (Succ (Succ (Succ Zero)))) -``` -which is the same thing as the proposition that 2 is less than 4. - -GF grammars can be used to provide a **semantic control** of -well-formedness of expressions. We have already seen examples of this: -the grammar of well-formed addresses and the grammar with -selectional restrictions above. By introducing proof objects -we have now added a very powerful -technique of expressing semantic conditions. - -A simple example of the use of proof objects is the definition of -well-formed //time spans//: a time span is expected to be from an earlier to -a later time: -``` - from 3 to 8 -``` -is thus well-formed, whereas -``` - from 8 to 3 -``` -is not. The following rules for spans impose this condition -by using the ``Less`` predicate: -``` - cat Span ; - fun span : (m,n : Nat) -> Less m n -> Span ; -``` - - - - -===Variable bindings=== - -Mathematical notation and programming languages have lots of -expressions that **bind** variables. For instance, -a universally quantifier proposition -``` - (All x)B(x) -``` -consists of the **binding** ``(All x)`` of the variable ``x``, -and the **body** ``B(x)``, where the variable ``x`` is -said to occur bound. 
- -Variable bindings appear in informal mathematical language as well, for -instance, -``` - for all x, x is equal to x - - the function that for any numbers x and y returns the maximum of x+y - and x*y -``` -In type theory, variable-binding expression forms can be formalized -as functions that take functions as arguments. The universal -quantifier is defined -``` - fun All : (Ind -> Prop) -> Prop -``` -where ``Ind`` is the type of individuals and ``Prop``, -the type of propositions. If we have, for instance, the equality predicate -``` - fun Eq : Ind -> Ind -> Prop -``` -we may form the tree -``` - All (\x -> Eq x x) -``` -which corresponds to the ordinary notation -``` - (All x)(x = x). -``` - - -An abstract syntax where trees have functions as arguments, as in -the two examples above, has turned out to be precisely the right -thing for the semantics and computer implementation of -variable-binding expressions. The advantage lies in the fact that -only one variable-binding expression form is needed, the lambda abstract -``\x -> b``, and all other bindings can be reduced to it. -This makes it easier to implement mathematical theories and reason -about them, since variable binding is tricky to implement and -to reason about. The idea of using functions as arguments of -syntactic constructors is known as **higher-order abstract syntax**. - -The question now arises: how to define linearization rules -for variable-binding expressions? -Let us first consider universal quantification, -``` - fun All : (Ind -> Prop) -> Prop -``` -We write -``` - lin All B = {s = "(" ++ "All" ++ B.$0 ++ ")" ++ B.s} -``` -to obtain the form shown above. -This linearization rule brings in a new GF concept - the ``$0`` -field of ``B`` containing a bound variable symbol. -The general rule is that, if an argument type of a function is -itself a function type ``A -> C``, the linearization type of -this argument is the linearization type of ``C`` -together with a new field ``$0 : Str``. 
In the linearization rule -for ``All``, the argument ``B`` thus has the linearization -type -``` - {$0 : Str ; s : Str}, -``` -since the linearization type of ``Prop`` is -``` - {s : Str} -``` -(we remind that the order of fields in a record does not matter). -In other words, the linearization of a function -consists of a linearization of the body together with a -field for a linearization of the bound variable. -Those familiar with type theory or lambda calculus -should notice that GF requires trees to be in -**eta-expanded** form in order to be linearizable: -any function of type -``` - A -> C -``` -always has a syntax tree of the form -``` - \x -> c -``` -where ``c : C`` under the assumption ``x : A``. -It is in this form that an expression can be analysed -as having a bound variable and a body. - - -Given the linearization rule -``` - lin Eq a b = {s = "(" ++ a.s ++ "=" ++ b.s ++ ")"} -``` -the linearization of -``` - \x -> Eq x x -``` -is the record -``` - {$0 = "x", s = ["( x = x )"]} -``` -Thus we can compute the linearization of the formula, -``` - All (\x -> Eq x x) --> {s = "[( All x ) ( x = x )]"}. -``` - -How did we get the //linearization// of the variable ``x`` -into the string ``"x"``? GF grammars have no rules for -this: it is just hard-wired in GF that variable symbols are -linearized into the same strings that represent them in -the print-out of the abstract syntax. - - -To be able to -//parse// variable symbols, however, GF needs to know what -to look for (instead of e.g. trying to parse //any// -string as a variable). What strings are parsed as variable symbols -is defined in the lexical analysis part of GF parsing -``` - > p -cat=Prop -lexer=codevars "(All x)(x = x)" - All (\x -> Eq x x) -``` -(see more details on lexers below). -If several variables are bound in the same argument, the -labels are ``$0, $1, $2``, etc. 
- - - -===Semantic definitions=== - -We have seen that, -just like functional programming languages, GF has declarations -of functions, telling what the type of a function is. -But we have not yet shown how to **compute** -these functions: all we can do is provide them with arguments -and linearize the resulting terms. -Since our main interest is the well-formedness of expressions, -this has not yet bothered -us very much. As we will see, however, computation does play a role -even in the well-formedness of expressions when dependent types are -present. - - -GF has a form of judgement for **semantic definitions**, -recognized by the key word ``def``. At its simplest, it is just -the definition of one constant, e.g. -``` - def one = succ zero ; -``` -We can also define a function with arguments, -``` - def Neg A = Impl A Abs ; -``` -which is still a special case of the most general notion of -definition, that of a group of **pattern equations**: -``` - def sum x zero = x ; - def sum x (succ y) = succ (sum x y) ; -``` -To compute a term is, as in functional programming languages, -simply to follow a chain of reductions until no definition -can be applied. For instance, we compute -``` - sum one one --> - sum (succ zero) (succ zero) --> - succ (sum (succ zero) zero) --> - succ (succ zero) -``` -Computation in GF is performed with the ``pt`` command and the -``compute`` transformation, e.g. -``` - > p -tr "1 + 1" | pt -transform=compute -tr | l - sum one one - succ (succ zero) - s(s(0)) + lin Pizza = mkN "pizza" ; -- Str -> N + lin Hand = mkN "mano" Fem ; -- Str -> Gender -> N + lin Man = mkN "uomo" "uomini" ; -- Str -> Str -> N ``` -The ``def`` definitions of a grammar induce a notion of -**definitional equality** among trees: two trees are -definitionally equal if they compute into the same tree. -Thus, trivially, all trees in a chain of computation -(such as the one above) -are definitionally equal to each other. 
So are the trees -``` - sum zero (succ one) - succ one - sum (sum zero zero) (sum (succ zero) one) -``` -and infinitely many other trees. - -A fact that has to be emphasized about ``def`` definitions is that -they are //not// performed as a first step of linearization. -We say that **linearization is intensional**, which means that -the definitional equality of two trees does not imply that -they have the same linearizations. For instance, the seven terms -above all have different linearizations in arithmetic notation: -``` - 1 + 1 - s(0) + s(0) - s(s(0) + 0) - s(s(0)) - 0 + s(0) - s(1) - 0 + 0 + s(0) + 1 -``` -This notion of intensionality is -no more exotic than the intensionality of any **pretty-printing** -function of a programming language (function that shows -the expressions of the language as strings). It is vital for -pretty-printing to be intensional in this sense - if we want, -for instance, to trace a chain of computation by pretty-printing each -intermediate step, what we want to see is a sequence of different -expression, which are definitionally equal. - -What is more exotic is that GF has two ways of referring to the -abstract syntax objects. In the concrete syntax, the reference is intensional. -In the abstract syntax, the reference is extensional, since -**type checking is extensional**. The reason is that, -in the type theory with dependent types, types may depend on terms. -Two types depending on terms that are definitionally equal are -equal types. For instance, -``` - Proof (Odd one) - Proof (Odd (succ zero)) -``` -are equal types. Hence, any tree that type checks as a proof that -1 is odd also type checks as a proof that the successor of 0 is odd. -(Recall, in this connection, that the -arguments a category depends on never play any role -in the linearization of trees of that category, -nor in the definition of the linearization type.) 
- -In addition to computation, definitions impose a -**paraphrase** relation on expressions: -two strings are paraphrases if they -are linearizations of trees that are -definitionally equal. -Paraphrases are sometimes interesting for -translation: the **direct translation** -of a string, which is the linearization of the same tree -in the targer language, may be inadequate because it is e.g. -unidiomatic or ambiguous. In such a case, -the translation algorithm may be made to consider -translation by a paraphrase. -To stress express the distinction between -**constructors** (=**canonical** functions) -and other functions, GF has a judgement form -``data`` to tell that certain functions are canonical, e.g. -``` - data Nat = succ | zero ; -``` -Unlike in Haskell, but similarly to ALF (where constructor functions -are marked with a flag ``C``), -new constructors can be added to -a type with new ``data`` judgements. The type signatures of constructors -are given separately, in ordinary ``fun`` judgements. -One can also write directly -``` - data succ : Nat -> Nat ; -``` -which is equivalent to the two judgements -``` - fun succ : Nat -> Nat ; - data Nat = succ ; -``` %--! -==More features of the module system== - -===Interfaces, instances, and functors=== - - -===Resource grammars and their reuse=== +==Using the resource grammar library TODO== A resource grammar is a grammar built on linguistic grounds, to describe a language rather than a domain. @@ -2654,59 +1669,7 @@ documents: for resource grammarians developing the resource. -However, to give a flavour of both using and writing resource grammars, -we have created a miniature resource, which resides in the -subdirectory [``resource`` resource]. 
Its API consists of the following -three modules: - -[Syntax resource/Syntax.gf] - syntactic structures, language-independent: -``` - -``` -[LexEng resource/LexEng.gf] - lexical paradigms, English: -``` - -``` -[LexIta resource/LexIta.gf] - lexical paradigms, Italian: -``` - -``` - - -Only these three modules should be ``open``ed in applications. -The implementations of the resource are given in the following four modules: - -[MorphoEng resource/MorphoEng.gf], -``` - -``` -[MorphoIta resource/MorphoIta.gf]: low-level morphology -- [SyntaxEng resource/SyntaxEng.gf]. - [SyntaxIta resource/SyntaxIta.gf]: definitions of syntactic structures - - -An example use of the resource resides in the -subdirectory [``applications`` applications]. -It implements the abstract syntax -[``FoodComments`` applications/FoodComments.gf] for English and Italian. -The following diagram shows the module structure, indicating by -colours which modules are written by the grammarian. The two blue modules -form the abstract syntax. The three red modules form the concrete syntax. -The two green modules are trivial instantiations of a functor. -The rest of the modules (black) come from the resource. - -[Multi.png] - - - -===Restricted inheritance and qualified opening=== - - -==Using the standard resource library== - -The example files of this chapter can be found in -the directory [``arithm`` ./arithm]. - +===Interfaces, instances, and functors=== ===The simplest way=== @@ -2862,9 +1825,1019 @@ concrete LexSwe of Lex = CatSwe ** open ParadigmsSwe in { } ``` +===Restricted inheritance and qualified opening=== -==Transfer modules== + + +%--! +==More constructs for concrete syntax== + +In this chapter, we go through constructs that are not necessary in simple grammars +or when the concrete syntax relies on libraries, but very useful when writing advanced +concrete syntax implementations, such as resource grammar libraries. + + +%--! 
+===Local definitions===
+
+Local definitions ("``let`` expressions") are used in functional
+programming for two reasons: to structure the code into smaller
+expressions, and to avoid repeated computation of one and
+the same expression. Here is an example, from
+[``MorphoIta`` resource/MorphoIta.gf]:
+```
+  oper regNoun : Str -> Noun = \vino ->
+    let
+      vin = init vino ;
+      o = last vino
+    in
+    case o of {
+      "a" => mkNoun Fem vino (vin + "e") ;
+      "o" | "e" => mkNoun Masc vino (vin + "i") ;
+      _ => mkNoun Masc vino vino
+      } ;
+```
+
+
+===Record extension and subtyping===
+
+Record types and records can be **extended** with new fields. For instance,
+in German it is natural to see transitive verbs as verbs with a case.
+The symbol ``**`` is used for both constructs.
+```
+  lincat TV = Verb ** {c : Case} ;
+
+  lin Follow = regVerb "folgen" ** {c = Dative} ;
+```
+To extend a record type or a record with a field whose label it
+already has is a type error.
+
+A record type //T// is a **subtype** of another one //R//, if //T// has
+all the fields of //R// and possibly other fields. For instance,
+an extension of a record type is always a subtype of it.
+
+If //T// is a subtype of //R//, an object of //T// can be used whenever
+an object of //R// is required. For instance, a transitive verb can
+be used whenever a verb is required.
+
+**Contravariance** means that a function taking an //R// as argument
+can also be applied to any object of a subtype //T//.
+
+
+
+===Tuples and product types===
+
+Product types and tuples are syntactic sugar for record types and records:
+```
+  T1 * ... * Tn === {p1 : T1 ; ... ; pn : Tn}
+  <T1, ..., Tn> === {p1 = T1 ; ... ; pn = Tn}
+```
+Thus the labels ``p1, p2,...`` are hard-coded.
+
+
+===Record and tuple patterns===
+
+Record types of parameter types are also parameter types.
+A typical example is a record of agreement features, e.g.
French
+```
+  oper Agr : PType = {g : Gender ; n : Number ; p : Person} ;
+```
+Notice the term ``PType`` rather than just ``Type`` referring to
+parameter types. Every ``PType`` is also a ``Type``, but not vice-versa.
+
+Pattern matching is done in the expected way, but it can moreover
+utilize partial records: the branch
+```
+  {g = Fem} => t
+```
+in a table of type ``Agr => T`` means the same as
+```
+  {g = Fem ; n = _ ; p = _} => t
+```
+Tuple patterns are translated to record patterns in the
+same way as tuples to records; partial patterns make it
+possible to write, slightly surprisingly,
+```
+  case <g,n,p> of {
+    <Fem> => t
+    ...
+    }
+```
+
+
+%--!
+===Regular expression patterns===
+
+To define string operations computed at compile time, such
+as in morphology, it is handy to use regular expression patterns:
+ - //p// ``+`` //q// : token consisting of //p// followed by //q//
+ - //p// ``*`` : token //p// repeated 0 or more times
+   (max the length of the string to be matched)
+ - ``-`` //p// : matches anything that //p// does not match
+ - //x// ``@`` //p// : bind to //x// what //p// matches
+ - //p// ``|`` //q// : matches what either //p// or //q// matches
+
+
+The last three apply to all types of patterns, the first two only to token strings.
+As an example, we give a rule for the formation of English word forms
+ending with an //s// and used in the formation of both plural nouns and
+third-person present-tense verbs.
+```
+  add_s : Str -> Str = \w -> case w of {
+    _ + "oo" => w + "s" ; -- bamboo
+    _ + ("s" | "z" | "x" | "sh" | "o") => w + "es" ; -- bus, hero
+    _ + ("a" | "o" | "u" | "e") + "y" => w + "s" ; -- boy
+    x + "y" => x + "ies" ; -- fly
+    _ => w + "s" -- car
+    } ;
+```
+Here is another example, the plural formation in Swedish 2nd declension.
The second branch uses a variable binding with ``@`` to cover the cases where an
+unstressed pre-final vowel //e// disappears in the plural
+(//nyckel-nycklar, seger-segrar, bil-bilar//):
+```
+  plural2 : Str -> Str = \w -> case w of {
+    pojk + "e" => pojk + "ar" ;
+    nyck + "e" + l@("l" | "r" | "n") => nyck + l + "ar" ;
+    bil => bil + "ar"
+    } ;
+```
+
+
+Semantics: variables are always bound to the **first match**, which is the first
+in the sequence of binding lists ``Match p v`` defined as follows. In the definition,
+``p`` is a pattern and ``v`` is a value.
+```
+  Match (p1|p2) v = Match p1 v ++ Match p2 v
+  Match (p1+p2) s = [Match p1 s1 ++ Match p2 s2 |
+                       i <- [0..length s], (s1,s2) = splitAt i s]
+  Match p* s = [[]] if Match "" s ++ Match p s ++ Match (p+p) s ++... /= []
+  Match -p v = [[]] if Match p v = []
+  Match c v = [[]] if c == v -- for constant and literal patterns c
+  Match x v = [[(x,v)]] -- for variable patterns x
+  Match x@p v = [[(x,v)]] + M if M = Match p v /= []
+  Match p v = [] otherwise -- failure
+```
+Examples:
+- ``x + "e" + y`` matches ``"peter"`` with ``x = "p", y = "ter"``
+- ``x + "er"*`` matches ``"burgerer"`` with ``x = "burg"``
+
+
+
+
+
+%--!
+===Prefix-dependent choices===
+
+Sometimes a token has different forms depending on the token
+that follows. An example is the English indefinite article,
+which is //an// if a vowel follows, //a// otherwise.
+Which form is chosen can only be decided at run time, i.e.
+when a string is actually built. GF has a special construct for
+such tokens, the ``pre`` construct exemplified in
+```
+  oper artIndef : Str =
+    pre {"a" ; "an" / strs {"a" ; "e" ; "i" ; "o"}} ;
+```
+Thus
+```
+  artIndef ++ "cheese" ---> "a" ++ "cheese"
+  artIndef ++ "apple" ---> "an" ++ "apple"
+```
+This very example does not work in all situations: the prefix
+//u// has no general rules, and some problematic words are
+//euphemism, one-eyed, n-gram//.
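+
+Given the ``pre`` definition above, such words receive the wrong article,
+because the choice only inspects the first characters of the following
+token:
+```
+  artIndef ++ "euphemism" ---> "an" ++ "euphemism" -- should be "a"
+  artIndef ++ "n-gram" ---> "a" ++ "n-gram" -- should be "an"
+```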
It is possible to write +``` + oper artIndef : Str = + pre {"a" ; + "a" / strs {"eu" ; "one"} ; + "an" / strs {"a" ; "e" ; "i" ; "o" ; "n-"} + } ; +``` + + +===Predefined types and operations=== + +GF has the following predefined categories in abstract syntax: +``` + cat Int ; -- integers, e.g. 0, 5, 743145151019 + cat Float ; -- floats, e.g. 0.0, 3.1415926 + cat String ; -- strings, e.g. "", "foo", "123" +``` +The objects of each of these categories are **literals** +as indicated in the comments above. No ``fun`` definition +can have a predefined category as its value type, but +they can be used as arguments. For example: +``` + fun StreetAddress : Int -> String -> Address ; + lin StreetAddress number street = {s = number.s ++ street.s} ; + + -- e.g. (StreetAddress 10 "Downing Street") : Address +``` +FIXME: The linearization type is ``{s : Str}`` for all these categories. + + + + +==More concepts of abstract syntax== + +This section is about the use of the type theory part of GF for +including more semantics in grammars. Some of the subsections present +ideas that have not yet been used in real-world applications, and whose +tool support outside the GF program is not complete. + + +===GF as a logical framework=== + +In this section, we will show how +to encode advanced semantic concepts in an abstract syntax. +We use concepts inherited from **type theory**. Type theory +is the basis of many systems known as **logical frameworks**, which are +used for representing mathematical theorems and their proofs on a computer. +In fact, GF has a logical framework as its proper part: +this part is the abstract syntax. + +In a logical framework, the formalization of a mathematical theory +is a set of type and function declarations. The following is an example +of such a theory, represented as an ``abstract`` module in GF. 
+``` +abstract Arithm = { + cat + Prop ; -- proposition + Nat ; -- natural number + fun + Zero : Nat ; -- 0 + Succ : Nat -> Nat ; -- successor of x + Even : Nat -> Prop ; -- x is even + And : Prop -> Prop -> Prop ; -- A and B + } +``` +A concrete syntax is given below, as an example of using the resource grammar +library. + + + +===Dependent types=== + +**Dependent types** are a characteristic feature of GF, +inherited from the +**constructive type theory** of Martin-Löf and +distinguishing GF from most other grammar formalisms and +functional programming languages. +The initial main motivation for developing GF was, indeed, +to have a grammar formalism with dependent types. +As can be inferred from the fact that we introduce them only now, +after having written lots of grammars without them, +dependent types are no longer the only motivation for GF. +But they are still important and interesting. + + +Dependent types can be used for stating stronger +**conditions of well-formedness** than non-dependent types. +A simple example is postal addresses. Ignoring the other details, +let us take a look at addresses consisting of +a street, a city, and a country. 
+``` +abstract Address = { + cat + Address ; Country ; City ; Street ; + + fun + mkAddress : Country -> City -> Street -> Address ; + + UK, France : Country ; + Paris, London, Grenoble : City ; + OxfordSt, ShaftesburyAve, BdRaspail, RueBlondel, AvAlsaceLorraine : Street ; + } +``` +The linearization rules are straightforward, +``` + lin + mkAddress country city street = + ss (street.s ++ "," ++ city.s ++ "," ++ country.s) ; + UK = ss ("U.K.") ; + France = ss ("France") ; + Paris = ss ("Paris") ; + London = ss ("London") ; + Grenoble = ss ("Grenoble") ; + OxfordSt = ss ("Oxford" ++ "Street") ; + ShaftesburyAve = ss ("Shaftesbury" ++ "Avenue") ; + BdRaspail = ss ("boulevard" ++ "Raspail") ; + RueBlondel = ss ("rue" ++ "Blondel") ; + AvAlsaceLorraine = ss ("avenue" ++ "Alsace-Lorraine") ; +``` +Notice that, in ``mkAddress``, we have +reversed the order of the constituents. The type of ``mkAddress`` +in the abstract syntax takes its arguments in a "logical" order, +with increasing precision. (This order is sometimes even used in the +concrete syntax of addresses, e.g. in Russia). + +Both existing and non-existing addresses are recognized by this +grammar. The non-existing ones in the following randomly generated +list have afterwards been marked by *: +``` + > gr -cat=Address -number=7 | l + + * Oxford Street , Paris , France + * Shaftesbury Avenue , Grenoble , U.K. + boulevard Raspail , Paris , France + * rue Blondel , Grenoble , U.K. + * Shaftesbury Avenue , Grenoble , France + * Oxford Street , London , France + * Shaftesbury Avenue , Grenoble , France +``` +Dependent types provide a way to guarantee that addresses are +well-formed. What we do is to include **contexts** in +``cat`` judgements: +``` + cat + Address ; + Country ; + City Country ; + Street (x : Country)(City x) ; +``` +The first two judgements are as before, but the third one makes +``City`` dependent on ``Country``: there are no longer just cities, +but cities of the U.K. and cities of France. 
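+
+With this declaration, ``City UK`` and ``City France`` are two distinct
+types, so a function can demand a city of one particular country; for
+instance, a hypothetical extension (not part of the grammar above) could
+declare
+```
+  fun CapitalOf : (x : Country) -> City x ;
+```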
The fourth judgement
+makes ``Street`` dependent on ``City``; but since
+``City`` is itself dependent on ``Country``, we must
+include them both in the context, moreover guaranteeing that
+the city belongs to the given country. Since the context itself
+is built by using a dependent type, we have to use variables
+to indicate the dependencies. The judgement we used for ``City``
+is actually shorthand for
+```
+  cat City (x : Country)
+```
+which is only possible if the subsequent context does not depend on ``x``.
+
+The ``fun`` judgements of the grammar are modified accordingly:
+```
+  fun
+    mkAddress : (x : Country) -> (y : City x) -> Street x y -> Address ;
+
+    UK : Country ;
+    France : Country ;
+    Paris : City France ;
+    London : City UK ;
+    Grenoble : City France ;
+    OxfordSt : Street UK London ;
+    ShaftesburyAve : Street UK London ;
+    BdRaspail : Street France Paris ;
+    RueBlondel : Street France Paris ;
+    AvAlsaceLorraine : Street France Grenoble ;
+```
+Since the type of ``mkAddress`` now has dependencies among
+its argument types, we have to use variables just like we used in
+the context of ``Street`` above. What we claimed to be the
+"logical" order of the arguments is now forced by the type system
+of GF: a variable must be declared (=bound) before it can be
+referenced (=used).
+
+The effect of dependent types is that the *-marked addresses above are
+no longer well-formed. What the GF parser actually does is that it
+initially accepts them (by using a context-free parsing algorithm)
+and then rejects them (by type checking). The random generator does not produce
+illegal addresses (this could be useful in bulk mailing!).
+The linearization algorithm does not care about type dependencies;
+actually, since the //categories// (ignoring their arguments)
+are the same in both abstract syntaxes,
+we use the same concrete syntax
+for both of them.
+
+**Remark**.
Function types //without//
+variables are actually a shorthand notation: writing
+```
+  fun PredV1 : NP -> V1 -> S
+```
+is shorthand for
+```
+  fun PredV1 : (x : NP) -> (y : V1) -> S
+```
+or any other naming of the variables. Actually the use of variables
+sometimes shortens the code, since we can write e.g.
+```
+  oper triple : (x,y,z : Str) -> Str = ...
+```
+If a bound variable is not used, it can here, as elsewhere in GF, be replaced by
+a wildcard:
+```
+  oper triple : (_,_,_ : Str) -> Str = ...
+```
+
+
+
+===Dependent types in concrete syntax===
+
+The **functional fragment** of GF
+terms and types comprises function types, applications, lambda
+abstracts, constants, and variables. This fragment is similar in
+abstract and concrete syntax. In particular,
+dependent types are also available in concrete syntax.
+We have not made use of them yet,
+but we will now look at one example of how they
+can be used.
+
+Those readers who are familiar with functional programming languages
+like ML and Haskell may already have missed **polymorphic**
+functions. For instance, Haskell programmers have access to
+the functions
+```
+  const :: a -> b -> a
+  const c _ = c
+
+  flip :: (a -> b -> c) -> b -> a -> c
+  flip f y x = f x y
+```
+which can be used for any given types ``a``, ``b``, and ``c``.
+
+The GF counterpart of polymorphic functions is **monomorphic**
+functions with explicit **type variables**. Thus the above
+definitions can be written
+```
+  oper const : (a,b : Type) -> a -> b -> a =
+    \_,_,c,_ -> c ;
+
+  oper flip : (a,b,c : Type) -> (a -> b -> c) -> b -> a -> c =
+    \_,_,_,f,x,y -> f y x ;
+```
+When the operations are used, the type checker requires
+them to be equipped with all their arguments; this may be a nuisance
+for a Haskell or ML programmer.
We first present a mathematical model +of the notion, and then integrate it in a linguistically +motivated syntax. + +In linguistics, a +grammar is usually thought of as being about **syntactic well-formedness** +in a rather liberal sense: an expression can be well-formed without +being meaningful, in other words, without being +**semantically well-formed**. +For instance, the sentence +``` + the number 2 is equilateral +``` +is syntactically well-formed but semantically ill-formed. +It is well-formed because it combines a well-formed +noun phrase ("the number 2") with a well-formed +verb phrase ("is equilateral") and satisfies the agreement +rule saying that the verb phrase is inflected in the +number of the noun phrase: +``` + fun PredVP : NP -> VP -> S ; + lin PredVP np v = {s = np.s ++ vp.s ! np.n} ; +``` +It is ill-formed because the predicate "is equilateral" +is only defined for triangles, not for numbers. + +In a straightforward type-theoretical formalization of +mathematics, domains of mathematical objects +are defined as types. In GF, we could write +``` + cat Nat ; + cat Triangle ; + cat Prop ; +``` +for the types of natural numbers, triangles, and propositions, +respectively. +Noun phrases are typed as objects of basic types other than +``Prop``, whereas verb phrases are functions from basic types +to ``Prop``. For instance, +``` + fun two : Nat ; + fun Even : Nat -> Prop ; + fun Equilateral : Triangle -> Prop ; +``` +With these judgements, and the linearization rules +``` + lin two = ss ["the number 2"] ; + lin Even x = ss (x.s ++ ["is even"]) ; + lin Equilateral x = ss (x.s ++ ["is equilateral"]) ; +``` +we can form the proposition ``Even two`` +``` + the number 2 is even +``` +but no proposition linearized to +``` + the number 2 is equilateral +``` +since ``Equilateral two`` is not a well-formed type-theoretical object. +It is not even accepted by the context-free parser. + +When formalizing mathematics, e.g. 
for the purpose of
+computer-assisted theorem proving, we are certainly interested
+in semantic well-formedness: we want to be sure that a proposition makes
+sense before we make the effort of proving it. The straightforward typing
+of nouns and predicates shown above is the way in which this
+is guaranteed in various proof systems based on type theory.
+(Notice that it is still possible to form //false// propositions,
+e.g. "the number 3 is even".
+False and meaningless are different things.)
+
+As shown by the linearization rules for ``two``, ``Even``,
+etc., it //is// possible to use straightforward mathematical typings
+as the abstract syntax of a grammar. However, this syntax is not very
+expressive linguistically: for instance, there is no distinction between
+adjectives and verbs. It is hard to give rules for structures like
+adjectival modification ("even number") and conjunction of predicates
+("even or odd").
+
+By using dependent types, it is simple to combine a linguistically
+motivated system of categories with mathematically motivated
+type restrictions. What we need is a category of domains of
+individual objects,
+```
+  cat Dom
+```
+and dependencies of other categories on this:
+```
+  cat
+    S ; -- sentence
+    V1 Dom ; -- one-place verb with specific subject type
+    V2 Dom Dom ; -- two-place verb with specific subject and object types
+    A1 Dom ; -- one-place adjective
+    A2 Dom Dom ; -- two-place adjective
+    NP Dom ; -- noun phrase for an object of specific type
+    Conj ; -- conjunction
+    Det ; -- determiner
+```
+Having thus parametrized categories on domains, we have to reformulate
+the rules of predication, etc., accordingly.
This is straightforward:
+```
+  fun
+    PredV1  : (A : Dom) -> NP A -> V1 A -> S ;
+    ComplV2 : (A,B : Dom) -> V2 A B -> NP B -> V1 A ;
+    DetCN   : Det -> (A : Dom) -> NP A ;
+    ModA1   : (A : Dom) -> A1 A -> Dom ;
+    ConjS   : Conj -> S -> S -> S ;
+    ConjV1  : (A : Dom) -> Conj -> V1 A -> V1 A -> V1 A ;
+```
+In linearization rules,
+we use wildcards for those domain arguments that
+do not affect linearization:
+```
+  lin
+    PredV1 _ np vp = ss (np.s ++ vp.s) ;
+    ComplV2 _ _ v2 np = ss (v2.s ++ np.s) ;
+    DetCN det cn = ss (det.s ++ cn.s) ;
+    ModA1 cn a1 = ss (a1.s ++ cn.s) ;
+    ConjS conj s1 s2 = ss (s1.s ++ conj.s ++ s2.s) ;
+    ConjV1 _ conj v1 v2 = ss (v1.s ++ conj.s ++ v2.s) ;
+```
+Such domain arguments are thus suppressed in linearization.
+Parsing initially returns metavariables for them,
+but type checking can usually restore them
+by inference from those arguments that are not suppressed.
+
+One traditional linguistic example of domain restrictions
+(= selectional restrictions) is the contrast between the two sentences
+```
+  John plays golf
+  golf plays John
+```
+To explain the contrast, we introduce the functions
+```
+  human : Dom ;
+  game : Dom ;
+  play : V2 human game ;
+  John : NP human ;
+  Golf : NP game ;
+```
+Both sentences still pass the context-free parser,
+returning trees with lots of metavariables of type ``Dom``:
+```
+  PredV1 ?0 John (ComplV2 ?1 ?2 play Golf)
+  PredV1 ?0 Golf (ComplV2 ?1 ?2 play John)
+```
+But only the former sentence passes the type checker, which moreover
+infers the domain arguments:
+```
+  PredV1 human John (ComplV2 human game play Golf)
+```
+To try this out in GF, use ``pt = put_term`` with the **tree transformation**
+that solves the metavariables by type checking:
+```
+  > p -tr "John plays golf" | pt -transform=solve
+  > p -tr "golf plays John" | pt -transform=solve
+```
+In the latter case, no solutions are found.
+
+A known problem with selectional restrictions is that they can be more
+or less liberal. 
For instance,
+```
+  John loves Mary
+  John loves golf
+```
+should both make sense, even though ``Mary`` and ``golf``
+are of different types. A natural solution in this case is to
+formalize ``love`` as a **polymorphic** verb, which takes
+a human as its first argument but an object of any type as its second
+argument:
+```
+  fun love : (A : Dom) -> V2 human A ;
+  lin love _ = ss "loves" ;
+```
+In other words, it is possible for a human to love anything.
+
+A problem related to polymorphism is **subtyping**: what
+is meaningful for a ``human`` is also meaningful for
+a ``man`` and a ``woman``, but not the other way round.
+One solution to this is **coercions**: functions that
+"lift" objects from subtypes to supertypes.
+
+
+===Case study: selectional restrictions and statistical language models TODO===
+
+
+===Proof objects===
+
+Perhaps the most well-known idea in constructive type theory is
+the **Curry-Howard isomorphism**, also known as the
+**propositions as types principle**. Its earliest formulations
+were attempts to give semantics to the logical systems of
+propositional and predicate calculus. In this section, we will consider
+a more elementary example, showing how the notion of proof is useful
+outside mathematics as well.
+
+We first define the category of unary (also known as Peano-style)
+natural numbers:
+```
+  cat Nat ;
+  fun Zero : Nat ;
+  fun Succ : Nat -> Nat ;
+```
+The **successor function** ``Succ`` generates an infinite
+sequence of natural numbers, beginning from ``Zero``.
+
+We then define what it means for a number //x// to be //less than//
+a number //y//. Our definition is based on two axioms:
+- ``Zero`` is less than ``Succ`` //y// for any //y//.
+- If //x// is less than //y//, then ``Succ`` //x// is less than ``Succ`` //y//. 
+
+
+The most straightforward way of expressing these axioms in type theory
+is as typing judgements that introduce objects of a type ``Less x y``:
+```
+  cat Less Nat Nat ;
+  fun lessZ : (y : Nat) -> Less Zero (Succ y) ;
+  fun lessS : (x,y : Nat) -> Less x y -> Less (Succ x) (Succ y) ;
+```
+Objects formed by ``lessZ`` and ``lessS`` are
+called **proof objects**: they establish the truth of certain
+mathematical propositions.
+For instance, the fact that 2 is less than
+4 has the proof object
+```
+  lessS (Succ Zero) (Succ (Succ (Succ Zero)))
+    (lessS Zero (Succ (Succ Zero)) (lessZ (Succ Zero)))
+```
+whose type is
+```
+  Less (Succ (Succ Zero)) (Succ (Succ (Succ (Succ Zero))))
+```
+which is the formalization of the proposition that 2 is less than 4.
+
+GF grammars can be used to provide **semantic control** of the
+well-formedness of expressions. We have already seen examples of this:
+the grammar of well-formed addresses and the grammar with
+selectional restrictions above. By introducing proof objects
+we have now added a very powerful technique for expressing semantic conditions.
+
+A simple example of the use of proof objects is the definition of
+well-formed //time spans//: a time span is expected to go from an earlier to
+a later time:
+```
+  from 3 to 8
+```
+is thus well-formed, whereas
+```
+  from 8 to 3
+```
+is not. The following rules for spans impose this condition
+by using the ``Less`` predicate:
+```
+  cat Span ;
+  fun span : (m,n : Nat) -> Less m n -> Span ;
+```
+A possible practical application of this idea is **proof-carrying documents**:
+to be semantically well-formed, the abstract syntax of a document must contain a proof
+of some property, although the proof is not shown in the concrete document. 
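+In concrete syntax, the proof argument can simply be suppressed with a
+wildcard. As a minimal sketch (assuming ``Nat`` is linearized to a record
+with a field ``s : Str``), the linearization rule for ``span`` could be
+```
+  lin span m n _ = ss ("from" ++ m.s ++ "to" ++ n.s) ;
+```
+so that the ``Less`` proof never surfaces in the concrete document.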
+Think, for instance, of small documents describing flight connections:
+
+//To fly from Gothenburg to Prague, first take LH3043 to Frankfurt, then OK0537 to Prague.//
+
+The well-formedness of this text is partly expressible by dependent typing:
+```
+  cat
+    City ;
+    Flight City City ;
+  fun
+    Gothenburg, Frankfurt, Prague : City ;
+    LH3043 : Flight Gothenburg Frankfurt ;
+    OK0537 : Flight Frankfurt Prague ;
+```
+This rules out texts saying //take OK0537 from Gothenburg to Prague//. However, there is a
+further condition saying that it must be possible to change from LH3043 to OK0537 in Frankfurt.
+This can be modelled as a proof object of a suitable type, which is required by the constructor
+that connects flights.
+```
+  cat
+    IsPossible (x,y,z : City) (Flight x y) (Flight y z) ;
+  fun
+    Connect : (x,y,z : City) ->
+      (u : Flight x y) -> (v : Flight y z) ->
+      IsPossible x y z u v -> Flight x z ;
+```
+
+
+
+===Variable bindings===
+
+Mathematical notation and programming languages have lots of
+expressions that **bind** variables. For instance,
+a universally quantified proposition
+```
+  (All x)B(x)
+```
+consists of the **binding** ``(All x)`` of the variable ``x``,
+and the **body** ``B(x)``, where the variable ``x`` can have
+**bound occurrences**.
+
+Variable bindings appear in informal mathematical language as well, for
+instance,
+```
+  for all x, x is equal to x
+
+  the function that for any numbers x and y returns the maximum of x+y
+  and x*y
+```
+In type theory, variable-binding expression forms can be formalized
+as functions that take functions as arguments. The universal
+quantifier is defined
+```
+  fun All : (Ind -> Prop) -> Prop
+```
+where ``Ind`` is the type of individuals and ``Prop``,
+the type of propositions. If we have, for instance, the equality predicate
+```
+  fun Eq : Ind -> Ind -> Prop
+```
+we may form the tree
+```
+  All (\x -> Eq x x)
+```
+which corresponds to the ordinary notation
+```
+  (All x)(x = x). 
+```
+
+
+An abstract syntax where trees have functions as arguments, as in
+the two examples above, has turned out to be precisely the right
+thing for the semantics and computer implementation of
+variable-binding expressions. The advantage lies in the fact that
+only one variable-binding expression form is needed, the lambda abstract
+``\x -> b``, and all other bindings can be reduced to it.
+This makes it easier to implement mathematical theories and reason
+about them, since variable binding is tricky to implement and
+to reason about. The idea of using functions as arguments of
+syntactic constructors is known as **higher-order abstract syntax**.
+
+The question now arises: how to define linearization rules
+for variable-binding expressions?
+Let us first consider universal quantification,
+```
+  fun All : (Ind -> Prop) -> Prop
+```
+We write
+```
+  lin All B = {s = "(" ++ "All" ++ B.$0 ++ ")" ++ B.s}
+```
+to obtain the form shown above.
+This linearization rule brings in a new GF concept - the ``$0``
+field of ``B``, containing a bound variable symbol.
+The general rule is that, if an argument type of a function is
+itself a function type ``A -> C``, the linearization type of
+this argument is the linearization type of ``C``
+together with a new field ``$0 : Str``. In the linearization rule
+for ``All``, the argument ``B`` thus has the linearization
+type
+```
+  {$0 : Str ; s : Str}
+```
+since the linearization type of ``Prop`` is
+```
+  {s : Str}
+```
+In other words, the linearization of a function
+consists of a linearization of the body together with a
+field for a linearization of the bound variable.
+Those familiar with type theory or lambda calculus
+should notice that GF requires trees to be in
+**eta-expanded** form in order to be linearizable:
+any function of type
+```
+  A -> B
+```
+always has a syntax tree of the form
+```
+  \x -> b
+```
+where ``b : B`` under the assumption ``x : A``. 
+It is in this form that an expression can be analysed
+as having a bound variable and a body.
+
+
+Given the linearization rule
+```
+  lin Eq a b = {s = "(" ++ a.s ++ "=" ++ b.s ++ ")"}
+```
+the linearization of
+```
+  \x -> Eq x x
+```
+is the record
+```
+  {$0 = "x" ; s = ["( x = x )"]}
+```
+Thus we can compute the linearization of the formula:
+```
+  All (\x -> Eq x x) --> {s = ["( All x ) ( x = x )"]}
+```
+
+How did we get the //linearization// of the variable ``x``
+into the string ``"x"``? GF grammars have no rules for
+this: it is just hard-wired in GF that variable symbols are
+linearized into the same strings that represent them in
+the print-out of the abstract syntax.
+
+
+To be able to //parse// variable symbols, however, GF needs to know what
+to look for (instead of e.g. trying to parse //any//
+string as a variable). What strings are parsed as variable symbols
+is defined in the lexical analysis part of GF parsing:
+```
+  > p -cat=Prop -lexer=codevars "(All x)(x = x)"
+  All (\x -> Eq x x)
+```
+(see more details on lexers below). If several variables are bound in the
+same argument, the labels are ``$0, $1, $2``, etc.
+
+
+
+===Semantic definitions===
+
+We have seen that,
+just like functional programming languages, GF has declarations
+of functions, telling what the type of a function is.
+But we have not yet shown how to **compute**
+these functions: all we can do is provide them with arguments
+and linearize the resulting terms.
+Since our main interest is the well-formedness of expressions,
+this has not yet bothered
+us very much. As we will see, however, computation does play a role
+even in the well-formedness of expressions when dependent types are
+present.
+
+GF has a form of judgement for **semantic definitions**,
+recognized by the key word ``def``. At its simplest, it is just
+the definition of one constant, e.g. 
+```
+  def one = Succ Zero ;
+```
+We can also define a function with arguments,
+```
+  def Neg A = Impl A Abs ;
+```
+which is still a special case of the most general notion of
+definition, that of a group of **pattern equations**:
+```
+  def
+    sum x Zero = x ;
+    sum x (Succ y) = Succ (sum x y) ;
+```
+To compute a term is, as in functional programming languages,
+simply to follow a chain of reductions until no definition
+can be applied. For instance, we compute
+```
+  sum one one -->
+  sum (Succ Zero) (Succ Zero) -->
+  Succ (sum (Succ Zero) Zero) -->
+  Succ (Succ Zero)
+```
+Computation in GF is performed with the ``pt`` command and the
+``compute`` transformation, e.g.
+```
+  > p -tr "1 + 1" | pt -transform=compute -tr | l
+  sum one one
+  Succ (Succ Zero)
+  s(s(0))
+```
+
+The ``def`` definitions of a grammar induce a notion of
+**definitional equality** among trees: two trees are
+definitionally equal if they compute into the same tree.
+Thus, trivially, all trees in a chain of computation
+(such as the one above)
+are definitionally equal to each other. So are the trees
+```
+  sum Zero (Succ one)
+  Succ one
+  sum (sum Zero Zero) (sum (Succ Zero) one)
+```
+and infinitely many other trees.
+
+A fact that has to be emphasized about ``def`` definitions is that
+they are //not// performed as a first step of linearization.
+We say that **linearization is intensional**, which means that
+the definitional equality of two trees does not imply that
+they have the same linearizations. For instance, each of the seven terms
+shown above has a different linearization in arithmetic notation:
+```
+  1 + 1
+  s(0) + s(0)
+  s(s(0) + 0)
+  s(s(0))
+  0 + s(1)
+  s(1)
+  0 + 0 + s(0) + 1
+```
+This notion of intensionality is
+no more exotic than the intensionality of any **pretty-printing**
+function of a programming language (a function that shows
+the expressions of the language as strings). 
It is vital for
+pretty-printing to be intensional in this sense - if we want,
+for instance, to trace a chain of computation by pretty-printing each
+intermediate step, what we want to see is a sequence of different
+expressions, which are definitionally equal.
+
+What is more exotic is that GF has two ways of referring to the
+abstract syntax objects. In the concrete syntax, the reference is intensional.
+In the abstract syntax, the reference is extensional, since
+**type checking is extensional**. The reason is that, in
+type theory with dependent types, types may depend on terms.
+Two types depending on terms that are definitionally equal are
+equal types. For instance,
+```
+  Proof (Odd one)
+  Proof (Odd (Succ Zero))
+```
+are equal types. Hence, any tree that type checks as a proof that
+1 is odd also type checks as a proof that the successor of 0 is odd.
+(Recall, in this connection, that the
+arguments a category depends on never play any role
+in the linearization of trees of that category,
+nor in the definition of the linearization type.)
+
+In addition to computation, definitions impose a
+**paraphrase** relation on expressions:
+two strings are paraphrases if they
+are linearizations of trees that are
+definitionally equal.
+Paraphrases are sometimes interesting for
+translation: the **direct translation**
+of a string, which is the linearization of the same tree
+in the target language, may be inadequate because it is e.g.
+unidiomatic or ambiguous. In such a case,
+the translation algorithm may be made to consider
+translation by a paraphrase.
+
+To express the distinction between
+**constructors** (= **canonical** functions)
+and other functions, GF has a judgement form
+``data`` to tell that certain functions are canonical, e.g.
+```
+  data Nat = Succ | Zero ;
+```
+Unlike in Haskell, but similarly to ALF (where constructor functions
+are marked with a flag ``C``),
+new constructors can be added to
+a type with new ``data`` judgements. 
The type signatures of constructors +are given separately, in ordinary ``fun`` judgements. +One can also write directly +``` + data Succ : Nat -> Nat ; +``` +which is equivalent to the two judgements +``` + fun Succ : Nat -> Nat ; + data Nat = Succ ; +``` + + +===Case study: representing anaphoric reference TODO=== + + +==Transfer modules TODO== Transfer means noncompositional tree-transforming operations. The command ``apply_transfer = at`` is typically used in a pipe: @@ -2880,7 +2853,7 @@ See the for more information. -==Practical issues== +==Practical issues TODO== ===Lexers and unlexers=== @@ -2913,7 +2886,6 @@ Given by ``help -lexer``, ``help -unlexer``: -unlexer=codelit like code, but remove string literal quotes -unlexer=concat remove all spaces -unlexer=bind like identity, but bind at "&+" - ``` @@ -2946,8 +2918,6 @@ Both Flite and ATK are freely available through the links above, but they are not distributed together with GF. - - ===Multilingual syntax editor=== The @@ -2956,7 +2926,7 @@ describes the use of the editor, which works for any multilingual GF grammar. Here is a snapshot of the editor: -[../quick-editor.gif] +[../quick-editor.png] The grammars of the snapshot are from the [Letter grammar package http://www.cs.chalmers.se/~aarne/GF/examples/letter]. @@ -2999,7 +2969,7 @@ A summary is given in the following chart of GF grammar compiler phases: [../gf-compiler.png] -==Case studies== +==Larger case studies TODO== ===Interfacing formal and natural languages=== @@ -3011,3 +2981,8 @@ English and German. A simpler example will be explained here. 
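+
+The general pattern can be given as a minimal sketch (our own
+illustration, not the example referred to above): one abstract syntax
+with two concrete syntaxes, a symbolic one and an English one.
+```
+  abstract Logic = {
+    cat Ind ; Prop ;
+    fun Eq : Ind -> Ind -> Prop ;
+  }
+
+  concrete LogicSym of Logic = {
+    lincat Ind, Prop = {s : Str} ;
+    lin Eq a b = {s = a.s ++ "=" ++ b.s} ;
+  }
+
+  concrete LogicEng of Logic = {
+    lincat Ind, Prop = {s : Str} ;
+    lin Eq a b = {s = a.s ++ ["is equal to"] ++ b.s} ;
+  }
+```
+Parsing with ``LogicSym`` and linearizing with ``LogicEng`` then
+translates formulas to English, and vice versa.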
+ +===A multimodal dialogue system=== + +See TALK project deliverables, [TALK homepage http://www.talk-project.org] + diff --git a/lib/resource-1.0/finnish/CatFin.gf b/lib/resource-1.0/finnish/CatFin.gf index c00d6589f..bdded05b5 100644 --- a/lib/resource-1.0/finnish/CatFin.gf +++ b/lib/resource-1.0/finnish/CatFin.gf @@ -1,4 +1,4 @@ -concrete CatFin of Cat = CommonX ** open ResFin, Prelude in { +concrete CatFin of Cat = CommonX - [Adv] ** open ResFin, Prelude in { flags optimize=all_subs ; @@ -70,24 +70,28 @@ concrete CatFin of Cat = CommonX ** open ResFin, Prelude in { Conj = {s : Str ; n : Number} ; DConj = {s1,s2 : Str ; n : Number} ; Subj = {s : Str} ; - Prep = Compl ; -- Open lexical classes, e.g. Lexicon - V, VS, VQ = Verb1 ; -- = {s : VForm => Str ; sc : Case} ; - V2, VA = Verb1 ** {c2 : Compl} ; - V2A = Verb1 ** {c2, c3 : Compl} ; - VV = Verb1 ; ---- infinitive form - V3 = Verb1 ** {c2, c3 : Compl} ; + V = ResFin.V ; + V2 = ResFin.V2 ; + VA = ResFin.VA ; + VS = ResFin.VS ; + VQ = ResFin.VQ ; + V2A = ResFin.V2A ; + VV = ResFin.VV ; + V3 = ResFin.V3 ; - A = {s : Degree => AForm => Str} ; - A2 = {s : Degree => AForm => Str ; c2 : Compl} ; + A = ResFin.A ; + A2 = ResFin.A2 ; - N = {s : NForm => Str} ; - N2 = {s : NForm => Str} ** {c2 : Compl} ; - N3 = {s : NForm => Str} ** {c2,c3 : Compl} ; - PN = {s : Case => Str} ; + N = ResFin.N ; + N2 = ResFin.N2 ; + N3 = ResFin.N3 ; + PN = ResFin.PN ; + + Adv = ResFin.Adv ; + Prep = ResFin.Prep ; -oper Verb1 = {s : VForm => Str ; sc : NPForm} ; } diff --git a/lib/resource-1.0/finnish/GrammarFin.gf b/lib/resource-1.0/finnish/GrammarFin.gf index 6ae2ee9ea..9ed42e3b5 100644 --- a/lib/resource-1.0/finnish/GrammarFin.gf +++ b/lib/resource-1.0/finnish/GrammarFin.gf @@ -11,7 +11,7 @@ concrete GrammarFin of Grammar = RelativeFin, ConjunctionFin, PhraseFin, - TextX, + TextX - [Adv], IdiomFin, StructuralFin ** { diff --git a/lib/resource-1.0/finnish/ParadigmsFin.gf b/lib/resource-1.0/finnish/ParadigmsFin.gf index 
df83ed1f9..55331f940 100644 --- a/lib/resource-1.0/finnish/ParadigmsFin.gf +++ b/lib/resource-1.0/finnish/ParadigmsFin.gf @@ -25,8 +25,7 @@ resource ParadigmsFin = open (Predef=Predef), Prelude, - MorphoFin, - CatFin + MorphoFin in { flags optimize=noexpand ; diff --git a/lib/resource-1.0/finnish/ResFin.gf b/lib/resource-1.0/finnish/ResFin.gf index 03d92dbe7..a9f81764d 100644 --- a/lib/resource-1.0/finnish/ResFin.gf +++ b/lib/resource-1.0/finnish/ResFin.gf @@ -569,4 +569,26 @@ oper a = agrP3 Sg ; -- does not matter (--- at least in Slash) isPron = False -- has no special accusative } ; + +-- To export + + N : Type = {s : NForm => Str} ; + N2 = {s : NForm => Str} ** {c2 : Compl} ; + N3 = {s : NForm => Str} ** {c2,c3 : Compl} ; + PN = {s : Case => Str} ; + + A = {s : Degree => AForm => Str} ; + A2 = {s : Degree => AForm => Str ; c2 : Compl} ; + + V, VS, VQ = Verb1 ; -- = {s : VForm => Str ; sc : Case} ; + V2, VA = Verb1 ** {c2 : Compl} ; + V2A = Verb1 ** {c2, c3 : Compl} ; + VV = Verb1 ; ---- infinitive form + V3 = Verb1 ** {c2, c3 : Compl} ; + + Verb1 = {s : VForm => Str ; sc : NPForm} ; + + Prep = Compl ; + Adv = {s : Str} ; + }