diff --git a/doc/gf-logo.png b/doc/gf-logo.png new file mode 100644 index 000000000..4d29b7d8e Binary files /dev/null and b/doc/gf-logo.png differ diff --git a/doc/quick-editor.png b/doc/quick-editor.png new file mode 100644 index 000000000..c840a8108 Binary files /dev/null and b/doc/quick-editor.png differ diff --git a/doc/tutorial/Makefile b/doc/tutorial/Makefile new file mode 100644 index 000000000..d226e7348 --- /dev/null +++ b/doc/tutorial/Makefile @@ -0,0 +1,6 @@ +html: + txt2tags -thtml --toc gf-tutorial2.txt +tex: + txt2tags -ttex --toc gf-tutorial2.txt + pdflatex gf-tutorial2.tex + pdflatex gf-tutorial2.tex diff --git a/doc/tutorial/gf-tutorial2.html b/doc/tutorial/gf-tutorial2.html index 804ed1969..5576428b5 100644 --- a/doc/tutorial/gf-tutorial2.html +++ b/doc/tutorial/gf-tutorial2.html @@ -2,12 +2,13 @@
+
-
+
- this Italian cheese is delicious + this Italian cheese is delicious
in English and Italian. @@ -274,7 +274,7 @@ language, proper translation usually involves more. For instance, the order of words may have to be changed:
- Italian cheese ===> formaggio italiano + Italian cheese ===> formaggio italiano
The full GF grammar format is designed to support such @@ -299,7 +299,7 @@ forms of its words. While the complete description of morphology belongs to resource grammars, this tutorial will explain the programming concepts involved in morphology. This will moreover make it possible to grow the fragment covered by the food example. -The tutorial will in fact build a toy resource grammar in order +The tutorial will in fact build a miniature resource grammar in order to illustrate the module structure of library-based application grammar writing.
@@ -318,14 +318,14 @@ quiz systems, can be built simply by writing scripts for the system. More complicated applications, such as natural-language interfaces and dialogue systems, also require programming in some general-purpose language. We will briefly explain how -GF grammars are used as components of Haskell, Java, and -Prolog grammars. The tutorial concludes with a couple of +GF grammars are used as components of Haskell, Java, Javascript, +and Prolog grammars. The tutorial concludes with a couple of case studies showing how such complete systems can be built.
-The program is open-source free software, which you can download via the
+The GF program is open-source free software, which you can download via the
GF Homepage:
http://www.cs.chalmers.se/~aarne/GF
Now you are ready to try out your first grammar. -We start with one that is not written in GF language, but -in the ubiquitous BNF notation (Backus Naur Form), which GF can also -understand. Type (or copy) the following lines in a file named +We start with one that is not written in the GF language, but +in the much more common BNF notation (Backus Naur Form). The GF +program understands a variant of this notation and translates it +internally to GF's own representation. +
+
+To get started, type (or copy) the following lines into a file named
food.cf:
- S ::= Item "is" Quality ; - Item ::= "this" Kind | "that" Kind ; - Kind ::= Quality Kind ; - Kind ::= "wine" | "cheese" | "fish" ; - Quality ::= "very" Quality ; - Quality ::= "fresh" | "warm" | "Italian" | "expensive" | "delicious" | "boring" ; + Is. S ::= Item "is" Quality ; + That. Item ::= "that" Kind ; + This. Item ::= "this" Kind ; + QKind. Kind ::= Quality Kind ; + Cheese. Kind ::= "cheese" ; + Fish. Kind ::= "fish" ; + Wine. Kind ::= "wine" ; + Italian. Quality ::= "Italian" ; + Boring. Quality ::= "boring" ; + Delicious. Quality ::= "delicious" ; + Expensive. Quality ::= "expensive" ; + Fresh. Quality ::= "fresh" ; + Very. Quality ::= "very" Quality ; + Warm. Quality ::= "warm" ;
-This grammar defines a set of phrases usable to speak about food.
-It builds sentences (S) by assigning Qualities to
-Items. The grammar shows a typical character of GF grammars:
-they are small grammars describing some more or less well-defined
-domain, such as in this case food.
+For those who know ordinary BNF, the
+notation we use includes one extra element: a label appearing
+as the first element of each rule and terminated by a full stop.
+The grammar we wrote defines a set of phrases usable for speaking about food.
+It builds sentences (S) by assigning Qualitys to
+Items. Items are built from Kinds by prepending the
+word "this" or "that". Kinds are either atomic, such as
+"cheese" and "wine", or formed by prepending a Quality to a
+Kind. A Quality is either atomic, such as "Italian" and "boring",
+or built from another Quality by prepending "very". Those familiar with
+the context-free grammar notation will notice that, for instance, the
+following sentence can be built using this grammar:
+
+ this delicious Italian wine is very very expensive ++
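The derivations licensed by this grammar can be sketched in Python (a hypothetical illustration, not part of GF): a nonterminal expands by choosing one of its rules, and a terminal stands for itself.

```python
import random

# A sketch of the food.cf grammar: each nonterminal maps to its
# alternative right-hand sides; terminals are plain tokens.
GRAMMAR = {
    "S":       [["Item", "is", "Quality"]],
    "Item":    [["this", "Kind"], ["that", "Kind"]],
    "Kind":    [["Quality", "Kind"], ["cheese"], ["fish"], ["wine"]],
    "Quality": [["very", "Quality"], ["fresh"], ["warm"], ["Italian"],
                ["expensive"], ["delicious"], ["boring"]],
}

def generate(symbol="S"):
    # expand nonterminals recursively; terminals are returned as-is
    if symbol not in GRAMMAR:
        return symbol
    return " ".join(generate(s) for s in random.choice(GRAMMAR[symbol]))
```

Calling generate() repeatedly mimics GF's random generation command, which is introduced below.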
-The first GF command when using a grammar is to import it.
+The first GF command needed when using a grammar is to import it.
The command has a long name, import, and a short name, i.
You can type either
-```> import food.cf -
++ > import food.cf +
or
--```> i food.cf -
++ > i food.cf +
to get the same effect. The effect is that the GF program compiles your grammar into an internal @@ -421,18 +446,18 @@ You can now use GF for parsing:
> parse "this cheese is delicious"
- S_Item_is_Quality (Item_this_Kind Kind_cheese) Quality_delicious
+ Is (This Cheese) Delicious
> p "that wine is very very Italian"
- S_Item_is_Quality (Item_that_Kind Kind_wine)
- (Quality_very_Quality (Quality_very_Quality Quality_Italian))
+ Is (That Wine) (Very (Very Italian))
The parse (= p) command takes a string
(in double quotes) and returns an abstract syntax tree - the thing
-beginning with S_Item_Is_Quality. We will see soon how to make sense
-of the abstract syntax trees - now you should just notice that the tree
-is different for the two strings.
+beginning with Is. Trees are built from the rule labels given in the
+grammar, and record the ways in which the rules are used to produce the
+strings. A tree is, in general, easier for a machine
+to understand and process further than a string.
Strings that return a tree when parsed do so in virtue of the grammar @@ -452,7 +477,7 @@ You can also use GF for linearizing parsing, taking trees into strings:
- > linearize S_Item_is_Quality (Item_that_Kind Kind_wine) Quality_warm
+ > linearize Is (That Wine) Warm
that wine is warm
@@ -463,40 +488,42 @@ you can obtain a tree from somewhere else. One way to do so is
> generate_random
- S_Item_is_Quality (Item_this_Kind Kind_wine) Quality_delicious
+ Is (This (QKind Italian Fish)) Fresh
Now you can copy the tree and paste it to the linearize command.
-Or, more efficiently, feed random generation into linearization by using
+Or, more conveniently, feed random generation into linearization by using
a pipe.
> gr | l
- this fresh cheese is delicious
+ this Italian fish is fresh
The gibberish code with parentheses returned by the parser does not
-look like trees. Why is it called so? Trees are a data structure that
-represent nesting: trees are branching entities, and the branches
+look like trees. Why is it called a tree, then? From the abstract mathematical
+point of view, trees are a data structure that
+represents nesting: trees are branching entities, and the branches
are themselves trees. Parentheses give a linear representation of trees,
useful for the computer. But the human eye may prefer to see a visualization;
for this purpose, GF provides the command visualize_tree = vt, to which
parsing (and any other tree-producing command) can be piped:
- parse "this delicious cheese is very Italian" | vt + parse "this delicious cheese is very Italian" | vt
-
+
-Random generation can be quite amusing. So you may want to +Random generation is a good way to test a grammar; it can also +be quite amusing. So you may want to generate ten strings with one and the same command:
@@ -559,9 +586,9 @@ want to see:> gr -tr | l -tr | p - S_Item_is_Quality (Item_this_Kind Kind_cheese) Quality_boring + Is (This Cheese) Boring this cheese is boring - S_Item_is_Quality (Item_this_Kind Kind_cheese) Quality_boring + Is (This Cheese) BoringThis facility is good for test purposes: for instance, you @@ -592,91 +619,11 @@ not recognize the string in the file, because it is not a sentence but a sequence of ten sentences.
-Labelled context-free grammars
--The syntax trees returned by GF's parser in the previous examples -are not so nice to look at. The identifiers that form the tree -are labels of the BNF rules. To see which label corresponds to -which rule, you can use the
-print_grammar = pgcommand -with theprinterflag set tocf(which means context-free): -- > print_grammar -printer=cf - - S_Item_is_Quality. S ::= Item "is" Quality ; - Quality_Italian. Quality ::= "Italian" ; - Quality_boring. Quality ::= "boring" ; - Quality_delicious. Quality ::= "delicious" ; - Quality_expensive. Quality ::= "expensive" ; - Quality_fresh. Quality ::= "fresh" ; - Quality_very_Quality. Quality ::= "very" Quality ; - Quality_warm. Quality ::= "warm" ; - Kind_Quality_Kind. Kind ::= Quality Kind ; - Kind_cheese. Kind ::= "cheese" ; - Kind_fish. Kind ::= "fish" ; - Kind_wine. Kind ::= "wine" ; - Item_that_Kind. Item ::= "that" Kind ; - Item_this_Kind. Item ::= "this" Kind ; ---A syntax tree such as -
-- S_Item_is_Quality (Item_this_Kind Kind_wine) Quality_delicious ---encodes the sequence of grammar rules used for building the -tree. If you look at this tree, you will notice that
- -Item_this_Kind-is the label of the rule prefixingthisto aKind, -thereby forming anItem. -Kind_wineis the label of the kind"wine", -and so on. These labels are formed automatically when the grammar -is compiled by GF, in a way that guarantees that different rules -get different labels. -The labelled context-free format
--The labelled context-free grammar format permits user-defined -labels to each rule. -In files with the suffix
-.cf, you can prefix rules with -labels that you provide yourself - these may be more useful -than the automatically generated ones. The following is a possible -labelling offood.cfwith nicer-looking labels. -- Is. S ::= Item "is" Quality ; - That. Item ::= "that" Kind ; - This. Item ::= "this" Kind ; - QKind. Kind ::= Quality Kind ; - Cheese. Kind ::= "cheese" ; - Fish. Kind ::= "fish" ; - Wine. Kind ::= "wine" ; - Italian. Quality ::= "Italian" ; - Boring. Quality ::= "boring" ; - Delicious. Quality ::= "delicious" ; - Expensive. Quality ::= "expensive" ; - Fresh. Quality ::= "fresh" ; - Very. Quality ::= "very" Quality ; - Warm. Quality ::= "warm" ; ---With this grammar, the trees look as follows: -
-- > parse -tr "this delicious cheese is very Italian" | vt - Is (This (QKind Delicious Cheese)) (Very Italian) -- --
--
The .gf grammar format
-To see what there is in GF's shell state when a grammar -has been imported, you can give the plain command -
print_grammar = pg. +To see GF's internal representation of a grammar +that you have imported, you can give the command +print_grammar = pg,> print_grammar @@ -691,12 +638,12 @@ However, we will now start the demonstration how GF's own notation gives you much more expressive power than the.cfformat. We will introduce the.gfformat by presenting -one more way of defining the same grammar as in +another way of defining the same grammar as infood.cf. Then we will show how the full GF grammar format enables you -to do things that are not possible in the weaker formats. +to do things that are not possible in the context-free format. - +Abstract and concrete syntax
A GF grammar consists of two main parts: @@ -707,14 +654,14 @@ A GF grammar consists of two main parts:
-The CF format fuses these two things together, but it is possible -to take them apart. For instance, the sentence formation rule +The context-free format fuses these two things together, but it is always +possible to take them apart. For instance, the sentence formation rule
Is. S ::= Item "is" Quality ;
-is interpreted as the following pair of rules: +is interpreted as the following pair of GF rules:
fun Is : Item -> Quality -> S ;
@@ -731,7 +678,7 @@ The latter rule, with the keyword lin, belongs to the concrete synt
It defines the linearization function for
syntax trees of form (Is item quality).
-
+
Judgement forms
Rules in a GF grammar are called judgements, and the keywords
@@ -759,7 +706,6 @@ judgement forms:
-
We return to the precise meanings of these judgement forms later. First we will look at how judgements are grouped into modules, and show how the food grammar is expressed by using modules and judgements.
- +A GF grammar consists of modules, @@ -801,8 +746,8 @@ module forms are abstract syntax A, with judgements in the module body M.
The linearization type of a category is a record type, with zero of more fields of different types. The simplest record @@ -861,7 +806,7 @@ can be used for lists of tokens. The expression
denotes the empty token list.
- +
To express the abstract syntax of food.cf in
@@ -874,7 +819,7 @@ a file Food.gf, we write two kinds of judgements:
- abstract Food = {
+ abstract Food = {
cat
S ; Item ; Kind ; Quality ;
@@ -886,14 +831,27 @@ a file Food.gf, we write two kinds of judgements:
Wine, Cheese, Fish : Kind ;
Very : Quality -> Quality ;
Fresh, Warm, Italian, Expensive, Delicious, Boring : Quality ;
- }
+ }
Notice the use of shorthands permitting the sharing of
-the keyword in subsequent judgements, and of the type
-in subsequent fun judgements.
+the keyword in subsequent judgements,
+ cat S ; Item ; === cat S ; cat Item ; ++
+and of the type in subsequent fun judgements,
+
+ fun Wine, Fish : Kind ; === + fun Wine : Kind ; Fish : Kind ; === + fun Wine : Kind ; fun Fish : Kind ; ++
+The order of judgements in a module is free. +
+
Each category introduced in Food.gf is
@@ -902,7 +860,7 @@ function is given a lin rule. Similar shorthands
apply as in abstract modules.
- concrete FoodEng of Food = {
+ concrete FoodEng of Food = {
lincat
S, Item, Kind, Quality = {s : Str} ;
@@ -922,16 +880,16 @@ apply as in abstract modules.
Expensive = {s = "expensive"} ;
Delicious = {s = "delicious"} ;
Boring = {s = "boring"} ;
- }
+ }
-
+
-Module name + .gf = file name
+Source files: Module name + .gf = file name
-Each module is compiled into a .gfc file.
+Target files: each module is compiled into a .gfc file.
Import FoodEng.gf and see what happens
@@ -952,7 +910,7 @@ GF source files. When reading a module, GF decides whether
to use an existing .gfc file or to generate
a new one, by looking at modification times.
The main advantage of separating abstract from concrete syntax is that
@@ -965,7 +923,7 @@ translation. Let us build an Italian concrete syntax for
Food and then test the resulting
multilingual grammar.
concrete FoodIta of Food = {
@@ -993,7 +951,7 @@ multilingual grammar.
-
+
Import the two grammars in the same GF session. @@ -1032,7 +990,7 @@ To see what grammars are in scope and which is the main one, use the command actual concretes : FoodIta FoodEng
- +
If translation is what you want to do with a set of grammars, a convenient
@@ -1055,7 +1013,7 @@ A dot . terminates the translation session.
>
This is a simple language exercise that can be automatically
@@ -1095,9 +1053,9 @@ file for later use, by the command translation_list = tl
The number flag gives the number of sentences generated.
The module system of GF makes it possible to extend a @@ -1132,7 +1090,7 @@ be built for concrete syntaxes: The effect of extension is that all of the contents of the extended and extending module are put together.
- +
Specialized vocabularies can be represented as small grammars that
@@ -1167,7 +1125,7 @@ At this point, you would perhaps like to go back to
Food and take apart Wine to build a special
Drink module.
When you have created all the abstract syntaxes and @@ -1195,8 +1153,8 @@ The graph uses
To document your grammar, you may want to print the
graph into a file, e.g. a .png file that
@@ -1223,9 +1181,9 @@ are available:
> help -printer
In comparison to the .cf format, the .gf format looks rather
@@ -1247,7 +1205,7 @@ changing parts, parameters. In functional programming languages, such as
Haskell, it is possible to share much more than in
languages such as C and Java.
GF is a functional programming language, not only in the sense that
@@ -1277,7 +1235,7 @@ its type, and an expression defining it. As for the syntax of the defining
expression, notice the lambda abstraction form \x -> t of
the function.
Operator definitions can be included in a concrete syntax. @@ -1305,7 +1263,7 @@ Resource modules can extend other resource modules, in the same way as modules of other types can extend modules of the same type. Thus it is possible to build resource hierarchies.
- +
Any number of resource modules can be
@@ -1340,22 +1298,22 @@ opened in a new version of FoodEng.
}
-The same string operations could be use to write FoodIta
+The same string operations could be used to write FoodIta
more concisely.
Using operations defined in resource modules is a way to avoid repetitive code. In addition, it enables a new kind of modularity and division of labour in grammar writing: grammarians familiar with -the linguistic details of a language can put this knowledge +the linguistic details of a language can make this knowledge available through resource grammar modules, whose users only need to pick the right operations and not to know their implementation details.
- +Suppose we want to say, with the vocabulary included in @@ -1373,9 +1331,9 @@ singular forms. The introduction of plural forms requires two things:
@@ -1390,7 +1348,7 @@ and many new expression forms. We also need to generalize linearization types from strings to more complex types.
- +We define the parameter type of number in English by @@ -1422,6 +1380,10 @@ example shows such a table: } ;
+The table consists of branches, where a pattern on the
+left of the arrow => is assigned a value on the right.
+
The application of a table to a parameter is done by the selection
operator !. For instance,
!. For instance,
table {Sg => "cheese" ; Pl => "cheeses"} ! Pl
-is a selection, whose value is "cheeses".
+is a selection that computes to the value "cheeses".
+This computation is performed by pattern matching: return
+the value from the first branch whose pattern matches the
+selection argument.
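For tables whose patterns are plain constants, the selection mechanism can be mimicked in Python (a hypothetical sketch; GF tables are not Python dicts, but the analogy holds for constant patterns):

```python
# A GF table is a finite function from parameter values to strings.
# With only constant patterns, a Python dict gives the same behaviour,
# and selection with ! becomes indexing.
cheese = {"Sg": "cheese", "Pl": "cheeses"}

def select(table, param):
    # table ! param, reduced to exact lookup
    return table[param]
```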
All English common nouns are inflected in number, most of them in the -same way: the plural form is formed from the singular form by adding the +same way: the plural form is obtained from the singular by adding the ending s. This rule is an example of a paradigm - a formula telling how the inflection forms of a word are formed.
-From GF point of view, a paradigm is a function that takes a lemma -
+From the GF point of view, a paradigm is a function that takes a lemma -
also known as a dictionary form - and returns an inflection
table of desired type. Paradigms are not functions in the sense of the
fun judgements of abstract syntax (which operate on trees and not
@@ -1465,7 +1430,7 @@ are written together to form one token. Thus, for instance,
(regNoun "cheese").s ! Pl ---> "cheese" + "s" ---> "cheeses"
Some English nouns, such as mouse, are so irregular that
@@ -1506,7 +1471,7 @@ interface (i.e. the system of type signatures) that makes it
correct to use these functions in concrete modules. In programming
terms, Noun is then treated as an abstract datatype.
In addition to the completely regular noun paradigm regNoun,
@@ -1534,11 +1499,11 @@ all characters but the last) of a string:
yNoun : Str -> Noun = \fly -> mkNoun fly (init fly + "ies") ;
-The operator init belongs to a set of operations in the
+The operation init belongs to a set of operations in the
resource module Prelude, which therefore has to be
opened so that init can be used.
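The idea of a paradigm as a function from a lemma to an inflection table can be sketched in Python (names mk_noun, reg_noun, y_noun mirror the GF operations but are illustrative, not GF API):

```python
# An inflection table: number -> form
def mk_noun(sg, pl):
    return {"Sg": sg, "Pl": pl}

# The regular paradigm: the plural just adds "s" (cheese -> cheeses)
def reg_noun(w):
    return mk_noun(w, w + "s")

# The "y" paradigm: drop the final y, add "ies" (fly -> flies);
# w[:-1] plays the role of Prelude's init (all but the last character)
def y_noun(w):
    return mk_noun(w, w[:-1] + "ies")
```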
It may be hard for the user of a resource morphology to pick the right
@@ -1568,15 +1533,13 @@ this, either use mkNoun or modify
regNoun so that the "y" case does not
apply if the second-last character is a vowel.
-Expressions of the table form are built from lists of
-argument-value pairs. These pairs are called the branches
-of the table. In addition to constants introduced in
-param definitions, the left-hand side of a branch can more
-generally be a pattern, and the computation of selection is
-then performed by pattern matching:
+We have so far built all expressions of the table form
+from branches whose patterns are constants introduced in
+param definitions, as well as constant strings.
+But there are more expressive patterns. Here is a summary of the possible forms:
A common idiom is to
gather the oper and param definitions
@@ -1655,19 +1618,18 @@ module depends on. The directory prelude is a subdirectory of
set the environment variable GF_LIB_PATH to point to this
directory.
-To test a resource module independently, you can import it
-with a flag that tells GF to retain the oper definitions
+To test a resource module independently, you must import it
+with the flag -retain, which tells GF to retain oper definitions
in the memory; the usual behaviour is that oper definitions
are just applied to compile linearization rules
(this is called inlining) and then thrown away.
- > i -retain MorphoEng.gf + > i -retain MorphoEng.gf-
The command compute_concrete = cc computes any expression
formed by operations and other GF constructs. For example,
@@ -1698,8 +1660,8 @@ Why does the command also show the operations that form
Verb is first computed, and its value happens to be
the same as the value of Noun.
We can now enrich the concrete syntax definitions to comprise morphology. This will involve a more radical @@ -1709,7 +1671,7 @@ parameters and linearization types are different in different languages - but this does not prevent the use of a common abstract syntax.
- +
The rule of subject-verb agreement in English says that the verb
@@ -1731,7 +1693,7 @@ whereas the number of NP is a variable feature (or a
The agreement rule itself is expressed in the linearization rule of -the predication structure: +the predication function:
lin PredVP np vp = {s = np.s ++ vp.s ! np.n} ;
@@ -1744,7 +1706,7 @@ plural determiners These and Those.
The reader is invited to inspect the way in which agreement works in
the formation of sentences.
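The agreement mechanism can be rendered in Python as a hypothetical sketch (the dictionaries stand in for GF records and tables; the names are illustrative): an NP carries its string and its inherent number, a VP is a table from number to string, and predication selects the verb form with the subject's number.

```python
# lin PredVP np vp = {s = np.s ++ vp.s ! np.n}
def pred_vp(np, vp):
    return {"s": np["s"] + " " + vp["s"][np["n"]]}

this_cheese   = {"s": "this cheese",   "n": "Sg"}   # inherent number Sg
these_cheeses = {"s": "these cheeses", "n": "Pl"}   # inherent number Pl
is_warm       = {"s": {"Sg": "is warm", "Pl": "are warm"}}  # variable number
```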
-
+
English concrete syntax with parameters
The grammar uses both
@@ -1791,7 +1753,7 @@ and parametrized modules.
}
-
+
The reader familiar with a functional programming language such as @@ -1844,7 +1806,7 @@ can be defined }
- +
Even though morphology is in GF
@@ -1876,7 +1838,7 @@ the category is set to be something else than S. For instance,
Finally, a list of morphological exercises can be generated
-off-line saved in a
+off-line and saved in a
file for later use, by the command morpho_list = ml
@@ -1885,7 +1847,7 @@ file for later use, by the commandmorpho_list = mlThe
- +numberflag gives the number of exercises generated.Discontinuous constituents
A linearization type may contain more strings than one. @@ -1926,31 +1888,7 @@ valued field labelled
- -s. Therefore, discontinuous constituents are not a good idea in top-level categories accessed by the users of a grammar application.More constructs for concrete syntax
- -Local definitions
--Local definitions ("
-letexpressions") are used in functional -programming for two reasons: to structure the code into smaller -expressions, and to avoid repeated computation of one and -the same expression. Here is an example, from -``MorphoIta: -- oper regNoun : Str -> Noun = \vino -> - let - vin = init vino ; - o = last vino - in - case o of { - "a" => mkNoun Fem vino (vin + "e") ; - "o" | "e" => mkNoun Masc vino (vin + "i") ; - _ => mkNoun Masc vino vino - } ; -- - +Free variation
Sometimes there are many alternative ways to define a concrete syntax. @@ -1975,27 +1913,259 @@ can be used e.g. if a word lacks a certain form. In general,
+ +variantsshould be used cautiously. It is not recommended for modules aimed to be libraries, because the user of the library has no way to choose among the variants. -Moreover,variantsis only defined for basic types (Str-and parameter types). The grammar compiler will admit -variantsfor any types, but it will push it to the -level of basic types in a way that may be unwanted. -For instance, German has two words meaning "car", -Wagen, which is Masculine, and Auto, which is Neuter. -However, if one writes +Overloading of operations
+
+Large libraries, such as the GF Resource Grammar Library, may define
+hundreds of names, which can be impractical
+for both the library writer and the user. The writer has to invent longer
+and longer names that are not always intuitive,
+and the user has to learn or at least be able to find all these names.
+A solution to this problem, adopted by languages such as C++, is overloading:
+the same name can be used for several functions. When such a name is used, the
+compiler performs overload resolution to find out which of the possible functions
+is meant. The resolution is based on the types of the functions: all functions that
+have the same name must have different types.
+
++In C++, functions with the same name can be scattered everywhere in the program. +In GF, they must be grouped together in
overloadgroups. Here is an example +of an overload group, defining four ways to define nouns in Italian:- variants {{s = "Wagen" ; g = Masc} ; {s = "Auto" ; g = Neutr}} + oper mkN = overload { + mkN : Str -> N = -- regular nouns + mkN : Str -> Gender -> N = -- regular nouns with unexpected gender + mkN : Str -> Str -> N = -- irregular nouns + mkN : Str -> Str -> Gender -> N = -- irregular nouns with unexpected gender + }-this will compute to +All of the following uses of
mkNare easy to resolve:- {s = variants {"Wagen" ; "Auto"} ; g = variants {Masc ; Neutr}} + lin Pizza = mkN "pizza" ; -- Str -> N + lin Hand = mkN "mano" Fem ; -- Str -> Gender -> N + lin Man = mkN "uomo" "uomini" ; -- Str -> Str -> N ++ + +Using the resource grammar library TODO
++A resource grammar is a grammar built on linguistic grounds, +to describe a language rather than a domain. +The GF resource grammar library, which contains resource grammars for +10 languages, is described more closely in the following +documents: +
+
+The simplest way is to open a top-level Lang module
+and a Paradigms module:
+
+ abstract Foo = ... + + concrete FooEng = open LangEng, ParadigmsEng in ... + concrete FooSwe = open LangSwe, ParadigmsSwe in ...
-which will also accept erroneous combinations of strings and genders. +Here is an example.
+
+ abstract Arithm = {
+ cat
+ Prop ;
+ Nat ;
+ fun
+ Zero : Nat ;
+ Succ : Nat -> Nat ;
+ Even : Nat -> Prop ;
+ And : Prop -> Prop -> Prop ;
+ }
+
+ --# -path=.:alltenses:prelude
+
+ concrete ArithmEng of Arithm = open LangEng, ParadigmsEng in {
+ lincat
+ Prop = S ;
+ Nat = NP ;
+ lin
+ Zero =
+ UsePN (regPN "zero" nonhuman) ;
+ Succ n =
+ DetCN (DetSg (SgQuant DefArt) NoOrd) (ComplN2 (regN2 "successor") n) ;
+ Even n =
+ UseCl TPres ASimul PPos
+ (PredVP n (UseComp (CompAP (PositA (regA "even"))))) ;
+ And x y =
+ ConjS and_Conj (BaseS x y) ;
+
+ }
+
+ --# -path=.:alltenses:prelude
+
+ concrete ArithmSwe of Arithm = open LangSwe, ParadigmsSwe in {
+ lincat
+ Prop = S ;
+ Nat = NP ;
+ lin
+ Zero =
+ UsePN (regPN "noll" neutrum) ;
+ Succ n =
+ DetCN (DetSg (SgQuant DefArt) NoOrd)
+ (ComplN2 (mkN2 (mk2N "efterföljare" "efterföljare")
+ (mkPreposition "till")) n) ;
+ Even n =
+ UseCl TPres ASimul PPos
+ (PredVP n (UseComp (CompAP (PositA (regA "jämn"))))) ;
+ And x y =
+ ConjS and_Conj (BaseS x y) ;
+ }
+
+
++The definitions in this example were found by parsing: +
++ > i LangEng.gf + + -- for Successor: + > p -cat=NP -mcfg -parser=topdown "the mother of Paris" + + -- for Even: + > p -cat=S -mcfg -parser=topdown "Paris is old" + + -- for And: + > p -cat=S -mcfg -parser=topdown "Paris is old and I am old" ++
+The use of parsing can be systematized by example-based grammar writing, +to which we will return later. +
+ +
+The interesting thing now is that the
+code in ArithmSwe is similar to the code in ArithmEng, except for
+some lexical items ("noll" vs. "zero", "efterföljare" vs. "successor",
+"jämn" vs. "even"). How can we exploit the similarities and
+actually share code between the languages?
+
+The solution is to use a functor: an incomplete module that opens
+an abstract as an interface, and then instantiate it to different
+languages that implement the interface. The structure is as follows:
+
+ abstract Foo ... + + incomplete concrete FooI = open Lang, Lex in ... + + concrete FooEng of Foo = FooI with (Lang=LangEng), (Lex=LexEng) ; + concrete FooSwe of Foo = FooI with (Lang=LangSwe), (Lex=LexSwe) ; ++
+where Lex is an abstract lexicon that includes the vocabulary
+specific to this application:
+
+ abstract Lex = Cat ** ... + + concrete LexEng of Lex = CatEng ** open ParadigmsEng in ... + concrete LexSwe of Lex = CatSwe ** open ParadigmsSwe in ... ++
+Here, again, a complete example (abstract Arithm is as above):
+
+ incomplete concrete ArithmI of Arithm = open Lang, Lex in {
+ lincat
+ Prop = S ;
+ Nat = NP ;
+ lin
+ Zero =
+ UsePN zero_PN ;
+ Succ n =
+ DetCN (DetSg (SgQuant DefArt) NoOrd) (ComplN2 successor_N2 n) ;
+ Even n =
+ UseCl TPres ASimul PPos
+ (PredVP n (UseComp (CompAP (PositA even_A)))) ;
+ And x y =
+ ConjS and_Conj (BaseS x y) ;
+ }
+
+ --# -path=.:alltenses:prelude
+ concrete ArithmEng of Arithm = ArithmI with
+ (Lang = LangEng),
+ (Lex = LexEng) ;
+
+ --# -path=.:alltenses:prelude
+ concrete ArithmSwe of Arithm = ArithmI with
+ (Lang = LangSwe),
+ (Lex = LexSwe) ;
+
+ abstract Lex = Cat ** {
+ fun
+ zero_PN : PN ;
+ successor_N2 : N2 ;
+ even_A : A ;
+ }
+
+ concrete LexSwe of Lex = CatSwe ** open ParadigmsSwe in {
+ lin
+ zero_PN = regPN "noll" neutrum ;
+ successor_N2 =
+ mkN2 (mk2N "efterföljare" "efterföljare") (mkPreposition "till") ;
+ even_A = regA "jämn" ;
+ }
+
+
+
++In this chapter, we go through constructs that are not necessary in simple grammars +or when the concrete syntax relies on libraries, but very useful when writing advanced +concrete syntax implementations, such as resource grammar libraries. +
+ +
+Local definitions ("let expressions") are used in functional
+programming for two reasons: to structure the code into smaller
+expressions, and to avoid repeated computation of one and
+the same expression. Here is an example, from
+MorphoIta:
+
+ oper regNoun : Str -> Noun = \vino ->
+ let
+ vin = init vino ;
+ o = last vino
+ in
+ case o of {
+ "a" => mkNoun Fem vino (vin + "e") ;
+ "o" | "e" => mkNoun Masc vino (vin + "i") ;
+ _ => mkNoun Masc vino vino
+ } ;
+
+
+
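The same Italian paradigm can be sketched in Python, where local variables play the role of the let-bound vin and o (a hypothetical illustration, not the GF library API):

```python
def mk_noun(gender, sg, pl):
    return {"g": gender, "sg": sg, "pl": pl}

def reg_noun(vino):
    # let vin = init vino ; o = last vino
    vin, o = vino[:-1], vino[-1]
    if o == "a":                    # pizza -> pizze, feminine
        return mk_noun("Fem", vino, vin + "e")
    if o in ("o", "e"):             # vino -> vini, masculine
        return mk_noun("Masc", vino, vin + "i")
    return mk_noun("Masc", vino, vino)  # invariable (e.g. loanwords)
```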
Record types and records can be extended with new fields. For instance, @@ -2025,7 +2195,7 @@ be used whenever a verb is required. Contravariance means that a function taking an R as argument can also be applied to any object of a subtype T.
- +Product types and tuples are syntactic sugar for record types and records: @@ -2035,9 +2205,9 @@ Product types and tuples are syntactic sugar for record types and records: <t1, ..., tn> === {p1 = T1 ; ... ; pn = Tn}
-Thus the labels p1, p2,...` are hard-coded.
+Thus the labels p1, p2,... are hard-coded.
Record types of parameter types are also parameter types. @@ -2048,7 +2218,7 @@ A typical example is a record of agreement features, e.g. French
Notice the term PType rather than just Type referring to
-parameter types. Every PType is also a Type.
+parameter types. Every PType is also a Type, but not vice-versa.
Pattern matching is done in the expected way, but it can moreover @@ -2075,7 +2245,7 @@ possible to write, slightly surprisingly, }
- +To define string operations computed at compile time, such @@ -2092,8 +2262,24 @@ as in morphology, it is handy to use regular expression patterns:
The last three apply to all types of patterns, the first two only to token strings. -Example: plural formation in Swedish 2nd declension -(pojke-pojkar, nyckel-nycklar, seger-segrar, bil-bilar): +As an example, we give a rule for the formation of English word forms +ending with an s and used in the formation of both plural nouns and +third-person present-tense verbs. +
+
+ add_s : Str -> Str = \w -> case w of {
+ _ + "oo" => w + "s" ; -- bamboo
+ _ + ("s" | "z" | "x" | "sh" | "o") => w + "es" ; -- bus, hero
+ _ + ("a" | "o" | "u" | "e") + "y" => w + "s" ; -- boy
+ x + "y" => x + "ies" ; -- fly
+ _ => w + "s" -- car
+ } ;
+
+
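Tracing the branches of add_s on example words (the cases are tried in order, so the catch-all w + "s" applies only when no earlier pattern matches):

```
add_s "hero" ---> "heroes"   -- second branch: ends in "o"
add_s "boy"  ---> "boys"     -- third branch: vowel before "y"
add_s "fly"  ---> "flies"    -- fourth branch: consonant before "y"
add_s "car"  ---> "cars"     -- catch-all
```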
+Here is another example, the plural formation in Swedish 2nd declension.
+The second branch uses a variable binding with @ to cover the cases where an
+unstressed pre-final vowel e disappears in the plural
+(nyckel-nycklar, seger-segrar; regular bil-bilar is covered by the last branch):
plural2 : Str -> Str = \w -> case w of {
@@ -2102,17 +2288,7 @@ Example: plural formation in Swedish 2nd declension
bil => bil + "ar"
} ;
--Another example: English noun plural formation. -
-
- plural : Str -> Str = \w -> case w of {
- _ + ("s" | "z" | "x" | "sh") => w + "es" ;
- _ + ("a" | "o" | "u" | "e") + "y" => w + "s" ;
- x + "y" => x + "ies" ;
- _ => w + "s"
- } ;
-
+
Semantics: variables are always bound to the first match, which is the first
in the sequence of binding lists Match p v defined as follows. In the definition,
@@ -2137,7 +2313,7 @@ Examples:
x + "er"* matches "burgerer" with x = "burg"
Sometimes a token has different forms depending on the token @@ -2156,7 +2332,7 @@ Thus
artIndef ++ "cheese" ---> "a" ++ "cheese"
- artIndef ++ "apple" ---> "an" ++ "cheese"
+ artIndef ++ "apple" ---> "an" ++ "apple"
This very example does not work in all situations: the prefix @@ -2171,7 +2347,7 @@ This very example does not work in all situations: the prefix } ;
- +GF has the following predefined categories in abstract syntax: @@ -2194,11 +2370,17 @@ they can be used as arguments. For example: -- e.g. (StreetAddress 10 "Downing Street") : Address
-The linearization type is {s : Str} for all these categories.
+FIXME: The linearization type is {s : Str} for all these categories.
+This section is about the use of the type theory part of GF for +including more semantics in grammars. Some of the subsections present +ideas that have not yet been used in real-world applications, and whose +tool support outside the GF program is not complete. +
+
In this section, we will show how
@@ -2217,8 +2399,8 @@ of such a theory, represented as an abstract module in GF.
abstract Arithm = {
cat
- Prop ; -- proposition
- Nat ; -- natural number
+ Prop ; -- proposition
+ Nat ; -- natural number
fun
Zero : Nat ; -- 0
Succ : Nat -> Nat ; -- successor of x
@@ -2230,7 +2412,7 @@ of such a theory, represented as an abstract module in GF.
A concrete syntax is given below, as an example of using the resource grammar
library.
-
+
Dependent types
Dependent types are a characteristic feature of GF,
@@ -2266,12 +2448,10 @@ a street, a city, and a country.
}
-The linearization rules -are straightforward, +The linearization rules are straightforward,
lin
-
mkAddress country city street =
ss (street.s ++ "," ++ city.s ++ "," ++ country.s) ;
UK = ss ("U.K.") ;
@@ -2286,11 +2466,11 @@ are straightforward,
AvAlsaceLorraine = ss ("avenue" ++ "Alsace-Lorraine") ;
-with the exception of mkAddress, where we have
+Notice that, in mkAddress, we have
reversed the order of the constituents. The type of mkAddress
in the abstract syntax takes its arguments in a "logical" order,
-with increasing precision. (This order is sometimes even used in the concrete
-syntax of addresses, e.g. in Russia).
+with increasing precision. (This order is sometimes even used in the
+concrete syntax of addresses, e.g. in Russia).
Both existing and non-existing addresses are recognized by this
@@ -2314,10 +2494,11 @@ well-formed. What we do is to include contexts in
cat judgements:
- cat Address ; - cat Country ; - cat City Country ; - cat Street (x : Country)(y : City x) ; + cat + Address ; + Country ; + City Country ; + Street (x : Country)(City x) ;
The first two judgements are as before, but the third one makes
@@ -2342,19 +2523,18 @@ The fun judgements of the grammar are modified accordingly:
fun
+ mkAddress : (x : Country) -> (y : City x) -> Street x y -> Address ;
- mkAddress : (x : Country) -> (y : City x) -> Street x y -> Address ;
-
- UK : Country ;
- France : Country ;
- Paris : City France ;
- London : City UK ;
- Grenoble : City France ;
- OxfordSt : Street UK London ;
- ShaftesburyAve : Street UK London ;
- BdRaspail : Street France Paris ;
- RueBlondel : Street France Paris ;
- AvAlsaceLorraine : Street France Grenoble ;
+ UK : Country ;
+ France : Country ;
+ Paris : City France ;
+ London : City UK ;
+ Grenoble : City France ;
+ OxfordSt : Street UK London ;
+ ShaftesburyAve : Street UK London ;
+ BdRaspail : Street France Paris ;
+ RueBlondel : Street France Paris ;
+ AvAlsaceLorraine : Street France Grenoble ;
Since the type of mkAddress now has dependencies among
@@ -2394,11 +2574,17 @@ or any other naming of the variables. Actually the use of variables
sometimes shortens the code, since we can write e.g.
- fun ConjNP : Conj -> (x,y : NP) -> NP ; - oper triple : (x,y,z : Str) -> Str = \x,y,z -> x ++ y ++ z ; + oper triple : (x,y,z : Str) -> Str = ... ++
+If a bound variable is not used, it can here, as elsewhere in GF, be replaced by +a wildcard: +
++ oper triple : (_,_,_ : Str) -> Str = ...- +
The functional fragment of GF @@ -2443,7 +2629,7 @@ When the operations are used, the type checker requires them to be equipped with all their arguments; this may be a nuisance for a Haskell or ML programmer.
- +This section introduces a way of using dependent types to @@ -2467,8 +2653,8 @@ For instance, the sentence is syntactically well-formed but semantically ill-formed. It is well-formed because it combines a well-formed noun phrase ("the number 2") with a well-formed -verb phrase ("is equilateral") in accordance with the -rule that the verb phrase is inflected in the +verb phrase ("is equilateral") and satisfies the agreement +rule saying that the verb phrase is inflected in the number of the noun phrase:
@@ -2523,6 +2709,7 @@ but no proposition linearized to
since Equilateral two is not a well-formed type-theoretical object.
+It is not even accepted by the context-free parser.
When formalizing mathematics, e.g. in the purpose of @@ -2559,64 +2746,15 @@ and dependencies of other categories on this:
cat
S ; -- sentence
- V1 Dom ; -- one-place verb
- V2 Dom Dom ; -- two-place verb
+ V1 Dom ; -- one-place verb with specific subject type
+ V2 Dom Dom ; -- two-place verb with specific subject and object types
A1 Dom ; -- one-place adjective
A2 Dom Dom ; -- two-place adjective
- PN Dom ; -- proper name
- NP Dom ; -- noun phrase
+ NP Dom ; -- noun phrase for an object of specific type
Conj ; -- conjunction
Det ; -- determiner
-The number of Dom arguments depends on the semantic type
-corresponding to the category: one-place verbs and adjectives
-correspond to types of the form
-
- A -> Prop --
-whereas two-place verbs and adjectives correspond to types of the form -
-- A -> B -> Prop --
-where the domains A and B can be distinct.
-Proper names correspond to types of the form
-
- A --
-that is, individual objects of the domain A. Noun phrases
-correspond to
-
- (A -> Prop) -> Prop --
-that is, quantifiers over the domain A.
-Sentences, conjunctions, and determiners correspond to
-
- Prop - Prop -> Prop -> Prop - (A : Dom) -> (A -> Prop) -> Prop --
-respectively,
-and are thus independent of domain. As for common nouns CN,
-the simplest semantics is that they correspond to
-
- Dom --
-In this section, we will, in fact, write Dom instead of CN.
-
Having thus parametrized categories on domains, we have to reformulate the rules of predication, etc, accordingly. This is straightforward:
@@ -2624,7 +2762,6 @@ the rules of predication, etc, accordingly. This is straightforward: fun PredV1 : (A : Dom) -> NP A -> V1 A -> S ; ComplV2 : (A,B : Dom) -> V2 A B -> NP B -> V1 A ; - UsePN : (A : Dom) -> PN A -> NP A ; DetCN : Det -> (A : Dom) -> NP A ; ModA1 : (A : Dom) -> A1 A -> Dom ; ConjS : Conj -> S -> S -> S ; @@ -2632,14 +2769,13 @@ the rules of predication, etc, accordingly. This is straightforward:In linearization rules, -we typically use wildcards for the domain arguments, -to get arities right: +we use wildcards for the domain arguments, +because they don't affect linearization:
lin
PredV1 _ np vp = ss (np.s ++ vp.s) ;
ComplV2 _ _ v2 np = ss (v2.s ++ np.s) ;
- UsePN _ pn = pn ;
DetCN det cn = ss (det.s ++ cn.s) ;
ModA1 cn a1 = ss (a1.s ++ cn.s) ;
ConjS conj s1 s2 = ss (s1.s ++ conj.s ++ s2.s) ;
@@ -2666,24 +2802,23 @@ To explain the contrast, we introduce the functions
human : Dom ;
game : Dom ;
play : V2 human game ;
- John : PN human ;
- Golf : PN game ;
+ John : NP human ;
+ Golf : NP game ;
Both sentences still pass the context-free parser,
returning trees with lots of metavariables of type Dom:
- PredV1 ?0 (UsePN ?1 John) (ComplV2 ?2 ?3 play (UsePN ?4 Golf)) - - PredV1 ?0 (UsePN ?1 Golf) (ComplV2 ?2 ?3 play (UsePN ?4 John)) + PredV1 ?0 John (ComplV2 ?1 ?2 play Golf) + PredV1 ?0 Golf (ComplV2 ?1 ?2 play John)
But only the former sentence passes the type checker, which moreover infers the domain arguments:
- PredV1 human (UsePN human John) (ComplV2 human game play (UsePN game Golf)) + PredV1 human John (ComplV2 human game play Golf)
To try this out in GF, use pt = put_term with the tree transformation
@@ -2705,7 +2840,7 @@ or less liberal. For instance,
John loves golf
-both make sense, even though Mary and golf
+should both make sense, even though Mary and golf
are of different types. A natural solution in this case is to
formalize love as a polymorphic verb, which takes
a human as its first argument but an object of any type as its second
@@ -2716,16 +2851,21 @@ argument:
lin love _ = ss "loves" ;
-Problems remain, such as subtyping (e.g. what
-is meaningful for a human is also meaningful for
-a man and a woman, but not the other way round)
-and the extended use of expressions (e.g. a metaphoric use that
-makes sense of "golf plays John").
+In other words, it is possible for a human to love anything.
+A problem related to polymorphism is subtyping: what
+is meaningful for a human is also meaningful for
+a man and a woman, but not the other way round.
+One solution to this is coercions: functions that
+"lift" objects from subtypes to supertypes.
+
-Perhaps the most well-known feature of constructive type theory is
+Perhaps the most well-known idea in constructive type theory is
the Curry-Howard isomorphism, also known as the
propositions as types principle. Its earliest formulations
were attempts to give semantics to the logical systems of
@@ -2747,61 +2887,109 @@ The successor function Succ generates an infinite
sequence of natural numbers, beginning from Zero.
-We then define what it means for a number x to be less than +We then define what it means for a number x to be less than a number y. Our definition is based on two axioms:
Zero is less than Succ y for any y.
-x is less than y, thenSucc x is less than Succ y.
-
+Zero is less than Succ y for any y.
+Succ x is less than Succ y.
+
The most straightforward way of expressing these axioms in type theory
-is as typing judgements that introduce objects of a type Less x y:
+is as typing judgements that introduce objects of a type Less x y:
+
cat Less Nat Nat ;
fun lessZ : (y : Nat) -> Less Zero (Succ y) ;
fun lessS : (x,y : Nat) -> Less x y -> Less (Succ x) (Succ y) ;
+
Objects formed by lessZ and lessS are
called proof objects: they establish the truth of certain
mathematical propositions.
For instance, the fact that 2 is less than
4 has the proof object
+
lessS (Succ Zero) (Succ (Succ (Succ Zero)))
(lessS Zero (Succ (Succ Zero)) (lessZ (Succ Zero)))
+whose type is +
Less (Succ (Succ Zero)) (Succ (Succ (Succ (Succ Zero))))
-which is the same thing as the proposition that 2 is less than 4.
-
++which is the formalization of the proposition that 2 is less than 4. +
+GF grammars can be used to provide a semantic control of well-formedness of expressions. We have already seen examples of this: the grammar of well-formed addresses and the grammar with selectional restrictions above. By introducing proof objects -we have now added a very powerful -technique of expressing semantic conditions. -
+we have now added a very powerful technique of expressing semantic conditions. + +A simple example of the use of proof objects is the definition of well-formed time spans: a time span is expected to be from an earlier to a later time: +
from 3 to 8
+is thus well-formed, whereas +
from 8 to 3
+
is not. The following rules for spans impose this condition
by using the Less predicate:
+
cat Span ;
fun span : (m,n : Nat) -> Less m n -> Span ;
-
-
-
++A possible practical application of this idea is proof-carrying documents: +to be semantically well-formed, the abstract syntax of a document must contain a proof +of some property, although the proof is not shown in the concrete document. +Think, for instance, of small documents describing flight connections: +
++To fly from Gothenburg to Prague, first take LH3043 to Frankfurt, then OK0537 to Prague. +
++The well-formedness of this text is partly expressible by dependent typing: +
++ cat + City ; + Flight City City ; + fun + Gothenburg, Frankfurt, Prague : City ; + LH3043 : Flight Gothenburg Frankfurt ; + OK0537 : Flight Frankfurt Prague ; ++
+This rules out texts saying take OK0537 from Gothenburg to Prague. However, there is a +further condition saying that it must be possible to change from LH3043 to OK0537 in Frankfurt. +This can be modelled as a proof object of a suitable type, which is required by the constructor +that connects flights. +
++ cat + IsPossible (x,y,z : City)(Flight x y)(Flight y z) ; + fun + Connect : (x,y,z : City) -> + (u : Flight x y) -> (v : Flight y z) -> + IsPossible x y z u v -> Flight x z ; ++ +
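The connection text above would then correspond to a tree like the following, where FRA_ok is a hypothetical proof constant witnessing that the change in Frankfurt is possible:

```
fun FRA_ok : IsPossible Gothenburg Frankfurt Prague LH3043 OK0537 ;

-- the whole connection is the well-typed tree
-- Connect Gothenburg Frankfurt Prague LH3043 OK0537 FRA_ok
--   : Flight Gothenburg Prague
```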
Mathematical notation and programming languages have lots of @@ -2813,8 +3001,8 @@ a universally quantifier proposition
consists of the binding (All x) of the variable x,
-and the body B(x), where the variable x is
-said to occur bound.
+and the body B(x), where the variable x can have
+bound occurrences.
Variable bindings appear in informal mathematical language as well, for
@@ -2901,7 +3089,6 @@ since the linearization type of Prop is
{s : Str}
-(we remind that the order of fields in a record does not matter). In other words, the linearization of a function consists of a linearization of the body together with a field for a linearization of the bound variable. @@ -2911,16 +3098,16 @@ should notice that GF requires trees to be in any function of type
- A -> C + A -> B
always has a syntax tree of the form
- \x -> c + \x -> b
-where c : C under the assumption x : A.
+where b : B under the assumption x : A.
It is in this form that an expression can be analysed
as having a bound variable and a body.
-To be able to -parse variable symbols, however, GF needs to know what +To be able to parse variable symbols, however, GF needs to know what to look for (instead of e.g. trying to parse any string as a variable). What strings are parsed as variable symbols is defined in the lexical analysis part of GF parsing @@ -2968,11 +3154,10 @@ is defined in the lexical analysis part of GF parsing All (\x -> Eq x x)
-(see more details on lexers below).
-If several variables are bound in the same argument, the
-labels are $0, $1, $2, etc.
+(see more details on lexers below). If several variables are bound in the
+same argument, the labels are $0, $1, $2, etc.
We have seen that,
@@ -2993,7 +3178,7 @@ recognized by the key word def. At its simplest, it is just
the definition of one constant, e.g.
- def one = succ zero ; + def one = Succ Zero ;
We can also define a function with arguments, @@ -3006,8 +3191,9 @@ which is still a special case of the most general notion of definition, that of a group of pattern equations:
- def sum x zero = x ; - def sum x (succ y) = succ (sum x y) ; + def + sum x Zero = x ; + sum x (Succ y) = Succ (sum x y) ;
To compute a term is, as in functional programming languages, @@ -3015,10 +3201,10 @@ simply to follow a chain of reductions until no definition can be applied. For instance, we compute
- sum one one --> - sum (succ zero) (succ zero) --> - succ (sum (succ zero) zero) --> - succ (succ zero) + sum one one --> + sum (Succ Zero) (Succ Zero) --> + Succ (sum (Succ Zero) Zero) --> + Succ (Succ Zero)
Computation in GF is performed with the pt command and the
@@ -3027,7 +3213,7 @@ Computation in GF is performed with the pt command and the
> p -tr "1 + 1" | pt -transform=compute -tr | l
sum one one
- succ (succ zero)
+ Succ (Succ Zero)
s(s(0))
@@ -3040,9 +3226,9 @@ Thus, trivially, all trees in a chain of computation
are definitionally equal to each other. So are the trees
- sum zero (succ one) - succ one - sum (sum zero zero) (sum (succ zero) one) + sum Zero (Succ one) + Succ one + sum (sum Zero Zero) (sum (Succ Zero) one)
and infinitely many other trees.
@@ -3052,8 +3238,8 @@ A fact that has to be emphasized about def definitions is that
they are not performed as a first step of linearization.
We say that linearization is intensional, which means that
the definitional equality of two trees does not imply that
-they have the same linearizations. For instance, the seven terms
-above all have different linearizations in arithmetic notation:
+they have the same linearizations. For instance, each of the seven terms
+shown above has a different linearization in arithmetic notation:
1 + 1
@@ -3085,7 +3271,7 @@ equal types. For instance,
Proof (Odd one)
- Proof (Odd (succ zero))
+ Proof (Odd (Succ Zero))
are equal types. Hence, any tree that type checks as a proof that
@@ -3116,7 +3302,7 @@ and other functions, GF has a judgement form
data to tell that certain functions are canonical, e.g.
- data Nat = succ | zero ;
+ data Nat = Succ | Zero ;
Unlike in Haskell, but similarly to ALF (where constructor functions
@@ -3127,269 +3313,20 @@ are given separately, in ordinary fun judgements.
One can also write directly
- data succ : Nat -> Nat ;
+ data Succ : Nat -> Nat ;
which is equivalent to the two judgements
- fun succ : Nat -> Nat ;
- data Nat = succ ;
-
-
-
-More features of the module system
-
-Interfaces, instances, and functors
-
-Resource grammars and their reuse
-
-A resource grammar is a grammar built on linguistic grounds,
-to describe a language rather than a domain.
-The GF resource grammar library, which contains resource grammars for
-10 languages, is described more closely in the following
-documents:
-
-
-However, to give a flavour of both using and writing resource grammars,
-we have created a miniature resource, which resides in the
-subdirectory resource. Its API consists of the following
-three modules:
-
-Syntax - syntactic structures, language-independent: -
-- --
-LexEng - lexical paradigms, English: -
-- --
-LexIta - lexical paradigms, Italian: -
-- -- -
-Only these three modules should be opened in applications.
-The implementations of the resource are given in the following four modules:
-
-MorphoEng, -
-- --
-MorphoIta: low-level morphology -
- - -
-An example use of the resource resides in the
-subdirectory applications.
-It implements the abstract syntax
-FoodComments for English and Italian.
-The following diagram shows the module structure, indicating by
-colours which modules are written by the grammarian. The two blue modules
-form the abstract syntax. The three red modules form the concrete syntax.
-The two green modules are trivial instantiations of a functor.
-The rest of the modules (black) come from the resource.
-
-
-
-The example files of this chapter can be found in
-the directory arithm.
-
-The simplest way is to open a top-level Lang module
-and a Paradigms module:
-
- abstract Foo = ... - - concrete FooEng = open LangEng, ParadigmsEng in ... - concrete FooSwe = open LangSwe, ParadigmsSwe in ... --
-Here is an example. -
-
- abstract Arithm = {
- cat
- Prop ;
- Nat ;
- fun
- Zero : Nat ;
- Succ : Nat -> Nat ;
- Even : Nat -> Prop ;
- And : Prop -> Prop -> Prop ;
- }
-
- --# -path=.:alltenses:prelude
-
- concrete ArithmEng of Arithm = open LangEng, ParadigmsEng in {
- lincat
- Prop = S ;
- Nat = NP ;
- lin
- Zero =
- UsePN (regPN "zero" nonhuman) ;
- Succ n =
- DetCN (DetSg (SgQuant DefArt) NoOrd) (ComplN2 (regN2 "successor") n) ;
- Even n =
- UseCl TPres ASimul PPos
- (PredVP n (UseComp (CompAP (PositA (regA "even"))))) ;
- And x y =
- ConjS and_Conj (BaseS x y) ;
-
- }
-
- --# -path=.:alltenses:prelude
-
- concrete ArithmSwe of Arithm = open LangSwe, ParadigmsSwe in {
- lincat
- Prop = S ;
- Nat = NP ;
- lin
- Zero =
- UsePN (regPN "noll" neutrum) ;
- Succ n =
- DetCN (DetSg (SgQuant DefArt) NoOrd)
- (ComplN2 (mkN2 (mk2N "efterföljare" "efterföljare")
- (mkPreposition "till")) n) ;
- Even n =
- UseCl TPres ASimul PPos
- (PredVP n (UseComp (CompAP (PositA (regA "jämn"))))) ;
- And x y =
- ConjS and_Conj (BaseS x y) ;
- }
+ fun Succ : Nat -> Nat ;
+ data Nat = Succ ;
--The definitions in this example were found by parsing: -
-- > i LangEng.gf - - -- for Successor: - > p -cat=NP -mcfg -parser=topdown "the mother of Paris" - - -- for Even: - > p -cat=S -mcfg -parser=topdown "Paris is old" - - -- for And: - > p -cat=S -mcfg -parser=topdown "Paris is old and I am old" --
-The use of parsing can be systematized by example-based grammar writing, -to which we will return later. -
+
-The interesting thing now is that the
-code in ArithmSwe is similar to the code in ArithmEng, except for
-some lexical items ("noll" vs. "zero", "efterföljare" vs. "successor",
-"jämn" vs. "even"). How can we exploit the similarities and
-actually share code between the languages?
-
-The solution is to use a functor: an incomplete module that opens
-an abstract as an interface, and then instantiate it to different
-languages that implement the interface. The structure is as follows:
-
- abstract Foo ... - - incomplete concrete FooI = open Lang, Lex in ... - - concrete FooEng of Foo = FooI with (Lang=LangEng), (Lex=LexEng) ; - concrete FooSwe of Foo = FooI with (Lang=LangSwe), (Lex=LexSwe) ; --
-where Lex is an abstract lexicon that includes the vocabulary
-specific to this application:
-
- abstract Lex = Cat ** ... - - concrete LexEng of Lex = CatEng ** open ParadigmsEng in ... - concrete LexSwe of Lex = CatSwe ** open ParadigmsSwe in ... --
-Here, again, a complete example (abstract Arithm is as above):
-
- incomplete concrete ArithmI of Arithm = open Lang, Lex in {
- lincat
- Prop = S ;
- Nat = NP ;
- lin
- Zero =
- UsePN zero_PN ;
- Succ n =
- DetCN (DetSg (SgQuant DefArt) NoOrd) (ComplN2 successor_N2 n) ;
- Even n =
- UseCl TPres ASimul PPos
- (PredVP n (UseComp (CompAP (PositA even_A)))) ;
- And x y =
- ConjS and_Conj (BaseS x y) ;
- }
-
- --# -path=.:alltenses:prelude
- concrete ArithmEng of Arithm = ArithmI with
- (Lang = LangEng),
- (Lex = LexEng) ;
-
- --# -path=.:alltenses:prelude
- concrete ArithmSwe of Arithm = ArithmI with
- (Lang = LangSwe),
- (Lex = LexSwe) ;
-
- abstract Lex = Cat ** {
- fun
- zero_PN : PN ;
- successor_N2 : N2 ;
- even_A : A ;
- }
-
- concrete LexSwe of Lex = CatSwe ** open ParadigmsSwe in {
- lin
- zero_PN = regPN "noll" neutrum ;
- successor_N2 =
- mkN2 (mk2N "efterföljare" "efterföljare") (mkPreposition "till") ;
- even_A = regA "jämn" ;
- }
-
-
-
-
Transfer means noncompositional tree-transforming operations.
The command apply_transfer = at is typically used in a pipe:
@@ -3407,9 +3344,9 @@ See the
transfer language documentation
for more information.
Lexers and unlexers can be chosen from
@@ -3442,10 +3379,9 @@ Given by help -lexer, help -unlexer:
-unlexer=codelit like code, but remove string literal quotes
-unlexer=concat remove all spaces
-unlexer=bind like identity, but bind at "&+"
-
Issues: @@ -3456,7 +3392,7 @@ Issues:
-fcfg vs. others
-
+
The speak_aloud = sa command sends a string to the speech
@@ -3486,7 +3422,7 @@ The method words only for grammars of English.
Both Flite and ATK are freely available through the links
above, but they are not distributed together with GF.
The @@ -3497,18 +3433,18 @@ describes the use of the editor, which works for any multilingual GF grammar. Here is a snapshot of the editor:
-
+
The grammars of the snapshot are from the Letter grammar package.
- +Forthcoming.
- +Other processes can communicate with the GF command interpreter, @@ -3525,7 +3461,7 @@ Thus the most silent way to invoke GF is - +
GF grammars can be used as parts of programs written in the @@ -3537,15 +3473,15 @@ following languages. The links give more documentation.
A summary is given in the following chart of GF grammar compiler phases:
Formal and Informal Software Specifications, @@ -3557,7 +3493,12 @@ English and German.
A simpler example will be explained here.
+ ++See TALK project deliverables, TALK homepage +
- - + + diff --git a/doc/tutorial/gf-tutorial2.txt b/doc/tutorial/gf-tutorial2.txt index 4b20f38cd..9c3ae71b2 100644 --- a/doc/tutorial/gf-tutorial2.txt +++ b/doc/tutorial/gf-tutorial2.txt @@ -1,5 +1,5 @@ Grammatical Framework Tutorial -Author: Aarne Ranta