=> t
...
}
```
%--!
==Regular expression patterns==
To define string operations computed at compile time, such
as in morphology, it is handy to use regular expression patterns:
- //p// ``+`` //q// : token consisting of //p// followed by //q//
- //p// ``*`` : token //p// repeated 0 or more times
(max the length of the string to be matched)
- ``-`` //p// : matches anything that //p// does not match
- //x// ``@`` //p// : bind to //x// what //p// matches
- //p// ``|`` //q// : matches what either //p// or //q// matches
The last three apply to all types of patterns, the first two only to token strings.
As an example, we give a rule for the formation of English word forms
ending with an //s// and used in the formation of both plural nouns and
third-person present-tense verbs.
```
add_s : Str -> Str = \w -> case w of {
  _ + "oo"                           => w + "s" ;   -- bamboo
  _ + ("s" | "z" | "x" | "sh" | "o") => w + "es" ;  -- bus, hero
  _ + ("a" | "o" | "u" | "e") + "y"  => w + "s" ;   -- boy
  x + "y"                            => x + "ies" ; -- fly
  _                                  => w + "s"     -- car
  } ;
```
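Such operations can be tested interactively in the GF shell with the ``compute_concrete`` (``cc``) command, provided the defining module is imported with the ``-retain`` flag (the file name here is hypothetical):
```
> import -retain StringOps.gf
> cc add_s "fly"
> cc add_s "hero"
```
The first call should yield //flies// (by the fourth branch) and the second //heroes// (by the second branch).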
Here is another example, the plural formation in Swedish 2nd declension.
The second branch uses a variable binding with ``@`` to cover the cases where an
unstressed pre-final vowel //e// disappears in the plural
(//nyckel-nycklar, seger-segrar, bil-bilar//):
```
plural2 : Str -> Str = \w -> case w of {
  pojk + "e"                       => pojk + "ar" ;
  nyck + "e" + l@("l" | "r" | "n") => nyck + l + "ar" ;
  bil                              => bil + "ar"
  } ;
```
Semantics: variables are always bound to the **first match**, which is the first
in the sequence of binding lists ``Match p v`` defined as follows. In the definition,
``p`` is a pattern and ``v`` is a value. The semantics is given in Haskell notation.
```
Match (p1|p2) v = Match p1 v ++ Match p2 v
Match (p1+p2) s = [m1 ++ m2 | i <- [0..length s],
                              let (s1,s2) = splitAt i s,
                              m1 <- Match p1 s1, m2 <- Match p2 s2]
Match (p*)    s = [[]] if Match "" s ++ Match p s ++ Match (p+p) s ++ ... /= []
Match (-p)    v = [[]] if Match p v == []
Match c       v = [[]] if c == v  -- for constant and literal patterns c
Match x       v = [[(x,v)]]       -- for variable patterns x
Match (x@p)   v = [(x,v) : m | m <- Match p v]
Match p       v = []  otherwise   -- failure
```
Examples:
- ``x + "e" + y`` matches ``"peter"`` with ``x = "p", y = "ter"``
- ``x + "er"*`` matches ``"burgerer"`` with ``x = "burg"``
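The negation pattern ``-`` is not used in the examples above. A minimal, contrived illustration is an operation that classifies a single token; the second branch matches any token that is not one of the listed vowels:
```
isVowel : Str -> Str = \c -> case c of {
  ("a" | "e" | "i" | "o" | "u")  => "vowel" ;
  -("a" | "e" | "i" | "o" | "u") => "other"
  } ;
```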
**Exercise**. Implement the German **Umlaut** operation on word stems.
The operation changes the vowel of the stressed stem syllable as follows:
//a// to //ä//, //au// to //äu//, //o// to //ö//, and //u// to //ü//. You
can assume that the operation only takes syllables as arguments. Test the
operation to see whether it correctly changes //Arzt// to //Ärzt//,
//Baum// to //Bäum//, //Topf// to //Töpf//, and //Kuh// to //Küh//.
%--!
==Prefix-dependent choices==
Sometimes a token has different forms depending on the token
that follows. An example is the English indefinite article,
which is //an// if a vowel follows, //a// otherwise.
Which form is chosen can only be decided at run time, i.e.
when a string is actually built. GF has a special construct for
such tokens, the ``pre`` construct exemplified in
```
oper artIndef : Str =
  pre {"a" ; "an" / strs {"a" ; "e" ; "i" ; "o"}} ;
```
Thus
```
artIndef ++ "cheese" ---> "a" ++ "cheese"
artIndef ++ "apple" ---> "an" ++ "apple"
```
This simple definition does not work in all situations: words beginning
with //u// follow no general rule (//a university// but //an umbrella//),
and other problematic words are
//euphemism, one-eyed, n-gram//. It is possible to write
```
oper artIndef : Str =
  pre {"a" ;
       "a"  / strs {"eu" ; "one"} ;
       "an" / strs {"a" ; "e" ; "i" ; "o" ; "n-"}
      } ;
```
==Predefined types==
GF has the following predefined categories in abstract syntax:
```
cat Int ; -- integers, e.g. 0, 5, 743145151019
cat Float ; -- floats, e.g. 0.0, 3.1415926
cat String ; -- strings, e.g. "", "foo", "123"
```
The objects of each of these categories are **literals**
as indicated in the comments above. No ``fun`` definition
can have a predefined category as its value type, but
they can be used as arguments. For example:
```
fun StreetAddress : Int -> String -> Address ;
lin StreetAddress number street = {s = number.s ++ street.s} ;
-- e.g. (StreetAddress 10 "Downing Street") : Address
```
The linearization type is ``{s : Str}`` for all these categories.
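Assuming a grammar containing the ``StreetAddress`` function above has been imported, literals can be written directly in the trees given to the linearizer:
```
> linearize StreetAddress 10 "Downing Street"
10 Downing Street
```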
%--!
=Using the resource grammar library=
In this chapter, we will take a look at the GF resource grammar library.
We will use the library to implement a slightly extended ``Food`` grammar
and port it to some new languages.
==The coverage of the library==
The GF Resource Grammar Library contains grammar rules for
10 languages (in addition, 2 languages are available as incomplete
implementations, and a few more are under construction). Its purpose
is to make these rules available for application programmers,
who can thereby concentrate on the semantic and stylistic
aspects of their grammars, without having to think about
grammaticality. The targeted level of application grammarians
is that of a skilled programmer with
a practical knowledge of the target languages, but without
theoretical knowledge about their grammars.
Such a combination of
skills is typical of programmers who, for instance, want to localize
software to new languages.
The current resource languages are
- ``Ara``bic (incomplete)
- ``Cat``alan (incomplete)
- ``Dan``ish
- ``Eng``lish
- ``Fin``nish
- ``Fre``nch
- ``Ger``man
- ``Ita``lian
- ``Nor``wegian
- ``Rus``sian
- ``Spa``nish
- ``Swe``dish
The first three letters (``Eng`` etc) are used in grammar module names.
The incomplete Arabic and Catalan implementations are
enough to be used in many applications; they both contain, among other
things, complete inflectional morphology.
==The resource API==
The resource library API is divided into language-specific
and language-independent parts. To put it roughly,
- the syntax API is language-independent, i.e. has the same types and functions for all
languages.
Its name is ``Syntax``//L// for each language //L//
- the morphology API is language-specific, i.e. has partly different types and functions
for different languages.
Its name is ``Paradigms``//L// for each language //L//
A full documentation of the API is available on-line in the
[resource synopsis ../../lib/resource-1.0/synopsis.html]. For our
examples, we will only need a fragment of the full API.
In the first examples,
we will make use of the following categories, from the module ``Syntax``.
|| Category | Explanation | Example ||
| ``Utt`` | sentence, question, word... | "be quiet" |
| ``Adv`` | verb-phrase-modifying adverb | "in the house" |
| ``AdA`` | adjective-modifying adverb | "very" |
| ``S`` | declarative sentence | "she lived here" |
| ``Cl`` | declarative clause, with all tenses | "she looks at this" |
| ``AP`` | adjectival phrase | "very warm" |
| ``CN`` | common noun (without determiner) | "red house" |
| ``NP`` | noun phrase (subject or object) | "the red house" |
| ``Det`` | determiner phrase | "those seven" |
| ``Predet`` | predeterminer | "only" |
| ``Quant`` | quantifier with both sg and pl | "this/these" |
| ``Prep`` | preposition, or just case | "in" |
| ``A`` | one-place adjective | "warm" |
| ``N`` | common noun | "house" |
We will need the following syntax rules from ``Syntax``.
|| Function | Type | Example ||
| ``mkUtt`` | ``S -> Utt`` | //John walked// |
| ``mkUtt`` | ``Cl -> Utt`` | //John walks// |
| ``mkCl`` | ``NP -> AP -> Cl`` | //John is very old// |
| ``mkNP`` | ``Det -> CN -> NP`` | //the first old man// |
| ``mkNP`` | ``Predet -> NP -> NP`` | //only John// |
| ``mkDet`` | ``Quant -> Det`` | //this// |
| ``mkCN`` | ``N -> CN`` | //house// |
| ``mkCN`` | ``AP -> CN -> CN`` | //very big blue house// |
| ``mkAP`` | ``A -> AP`` | //old// |
| ``mkAP`` | ``AdA -> AP -> AP`` | //very very old// |
We will also need the following structural words from ``Syntax``.
|| Function | Type | Example ||
| ``all_Predet`` | ``Predet`` | //all// |
| ``defPlDet`` | ``Det`` | //the (houses)// |
| ``this_Quant`` | ``Quant`` | //this// |
| ``very_AdA`` | ``AdA`` | //very// |
For French, we will use the following part of ``ParadigmsFre``.
|| Function | Type | Example ||
| ``Gender`` | ``Type`` | - |
| ``masculine`` | ``Gender`` | - |
| ``feminine`` | ``Gender`` | - |
| ``mkN`` | ``(cheval : Str) -> N`` | - |
| ``mkN`` | ``(foie : Str) -> Gender -> N`` | - |
| ``mkA`` | ``(cher : Str) -> A`` | - |
| ``mkA`` | ``(sec,seche : Str) -> A`` | - |
For German, we will use the following part of ``ParadigmsGer``.
|| Function | Type | Example ||
| ``Gender`` | ``Type`` | - |
| ``masculine`` | ``Gender`` | - |
| ``feminine`` | ``Gender`` | - |
| ``neuter`` | ``Gender`` | - |
| ``mkN`` | ``(Stufe : Str) -> N`` | - |
| ``mkN`` | ``(Bild,Bilder : Str) -> Gender -> N`` | - |
| ``mkA`` | ``Str -> A`` | - |
| ``mkA`` | ``(gut,besser,beste : Str) -> A`` | //gut,besser,beste// |
**Exercise**. Try out the morphological paradigms in different languages. Do it
in this way:
```
> i -path=alltenses:prelude -retain alltenses/ParadigmsGer.gfr
> cc mkN "Farbe"
> cc mkA "gut" "besser" "beste"
```
==Example: French==
We start with an abstract syntax that is like ``Food`` before, but
has a plural determiner (//all wines//) and some new nouns that will
need different genders in most languages.
```
abstract Food = {
  cat
    S ; Item ; Kind ; Quality ;
  fun
    Is : Item -> Quality -> S ;
    This, All : Kind -> Item ;
    QKind : Quality -> Kind -> Kind ;
    Wine, Cheese, Fish, Beer, Pizza : Kind ;
    Very : Quality -> Quality ;
    Fresh, Warm, Italian, Expensive, Delicious, Boring : Quality ;
}
```
The French implementation opens ``SyntaxFre`` and ``ParadigmsFre``
to get access to the resource libraries needed. In order to find
the libraries, a ``path`` directive is prepended; it is interpreted
relative to the environment variable ``GF_LIB_PATH``.
```
--# -path=.:present:prelude
concrete FoodFre of Food = open SyntaxFre, ParadigmsFre in {
  lincat
    S = Utt ;
    Item = NP ;
    Kind = CN ;
    Quality = AP ;
  lin
    Is item quality = mkUtt (mkCl item quality) ;
    This kind = mkNP (mkDet this_Quant) kind ;
    All kind = mkNP all_Predet (mkNP defPlDet kind) ;
    QKind quality kind = mkCN quality kind ;
    Wine = mkCN (mkN "vin") ;
    Beer = mkCN (mkN "bière") ;
    Pizza = mkCN (mkN "pizza" feminine) ;
    Cheese = mkCN (mkN "fromage" masculine) ;
    Fish = mkCN (mkN "poisson") ;
    Very quality = mkAP very_AdA quality ;
    Fresh = mkAP (mkA "frais" "fraîche") ;
    Warm = mkAP (mkA "chaud") ;
    Italian = mkAP (mkA "italien") ;
    Expensive = mkAP (mkA "cher") ;
    Delicious = mkAP (mkA "délicieux") ;
    Boring = mkAP (mkA "ennuyeux") ;
}
```
The ``lincat`` definitions in ``FoodFre`` assign **resource categories**
to **application categories**. In a sense, the application categories
are **semantic**, as they correspond to concepts in the grammar application,
whereas the resource categories are **syntactic**: they give the linguistic
means to express concepts in any application.
The ``lin`` definitions likewise assign resource functions to application
functions. Under the hood, there is a lot of matching with parameters to
take care of word order, inflection, and agreement. But the user of the
library sees nothing of this: the only parameters you need to give are
the genders of some nouns, which cannot be correctly inferred from the word.
In French, for example, the one-argument ``mkN`` assigns the noun the feminine
gender if and only if it ends with an //e//. Therefore the words //fromage// and
//pizza// are given genders. One can of course always give genders manually, to
be on the safe side.
As for inflection, the one-argument adjective pattern ``mkA`` takes care of
completely regular adjectives such as //chaud-chaude//, but also of special
cases such as //italien-italienne//, //cher-chère//, and //délicieux-délicieuse//.
But it cannot form //frais-fraîche// properly. Once again, you can give more
forms to be on the safe side. You can also test the paradigms in the GF
program.
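For instance, the French adjective paradigms can be inspected in the same way as shown for German above (assuming the ``alltenses`` directory is on the path):
```
> i -path=alltenses:prelude -retain alltenses/ParadigmsFre.gfr
> cc mkA "chaud"
> cc mkA "frais" "fraîche"
```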
**Exercise**. Compile the grammar ``FoodFre`` and generate and parse some sentences.
**Exercise**. Write a concrete syntax of ``Food`` for English or some other language
included in the resource library. You can also compare the output with the hand-written
grammars presented earlier in this tutorial.
**Exercise**. In particular, try to write a concrete syntax for Italian, even if
you don't know Italian. What you need to know is that "beer" is //birra// and
"pizza" is //pizza//, and that all the nouns and adjectives in the grammar
are regular.
==Functor implementation of multilingual grammars==
If you did the exercise of writing a concrete syntax of ``Food`` for some other
language, you probably noticed that much of the code looks exactly the same
as for French. The immediate reason for this is that the ``Syntax`` API is the
same for all languages; the deeper reason is that all languages (at least those
in the resource package) implement the same syntactic structures and tend to use them
in similar ways. Thus it is only the lexical parts of a concrete syntax that
you need to write anew for a new language. In brief,
- first copy the concrete syntax for one language
- then change the words (the strings and perhaps some paradigms)
But programming by copy-and-paste is not worthy of a functional programmer.
Can we write a function that takes care of the shared parts of grammar modules?
Yes, we can. It is not a function in the ``fun`` or ``oper`` sense, but
a function operating on modules, called a **functor**. This construct
is familiar from the functional languages ML and OCaml, but it does not
exist in Haskell. It also bears some resemblance to templates in C++.
Functors are also known as **parametrized modules**.
In GF, a functor is a module that ``open``s one or more **interfaces**.
An ``interface`` is a module similar to a ``resource``, but it only
contains the types of ``oper``s, not their definitions. You can think
of an interface as a kind of a record type. Thus a functor is a kind
of a function taking records as arguments and producing a module
as value.
Let us look at a functor implementation of the ``Food`` grammar.
Consider its module header first:
```
incomplete concrete FoodI of Food = open Syntax, LexFood in
```
In the functor-function analogy, ``FoodI`` would be presented as a function
with the following type signature:
```
FoodI : instance of Syntax -> instance of LexFood -> concrete of Food
```
It takes as arguments two interfaces:
- ``Syntax``, the resource grammar interface
- ``LexFood``, the domain-specific lexicon interface
Functors opening ``Syntax`` and a domain lexicon interface are in fact
so typical in GF applications that this structure could be called a **design pattern**
for GF grammars. The idea in this pattern is, again, that
the languages use the same syntactic structures but different words.
Before going to the details of the module bodies, let us look at how functors
are concretely used. An interface has a header such as
```
interface LexFood = open Syntax in
```
To give an ``instance`` of it means that all ``oper``s are given definitions (of
appropriate types). For example,
```
instance LexFoodGer of LexFood = open SyntaxGer, ParadigmsGer in
```
Notice that when an interface opens another interface, such as ``Syntax``, its instance
opens an instance of that interface. But the instance may also open some resources - typically,
a domain lexicon instance opens a ``Paradigms`` module.
In the function-functor analogy, we now have
```
SyntaxGer : instance of Syntax
LexFoodGer : instance of LexFood
```
Thus we can complete the German implementation by "applying" the functor:
```
FoodI SyntaxGer LexFoodGer : concrete of Food
```
The GF syntax for doing so is
```
concrete FoodGer of Food = FoodI with
(Syntax = SyntaxGer),
(LexFood = LexFoodGer) ;
```
Notice that this is the //complete// module, not just a header of it.
The module body is received from ``FoodI``, by instantiating the
interface constants with their definitions given in the German
instances.
A module of this form, characterized by the keyword ``with``, is
called a **functor instantiation**.
Here is the complete code for the functor ``FoodI``:
```
incomplete concrete FoodI of Food = open Syntax, LexFood in {
  lincat
    S = Utt ;
    Item = NP ;
    Kind = CN ;
    Quality = AP ;
  lin
    Is item quality = mkUtt (mkCl item quality) ;
    This kind = mkNP (mkDet this_Quant) kind ;
    All kind = mkNP all_Predet (mkNP defPlDet kind) ;
    QKind quality kind = mkCN quality kind ;
    Wine = mkCN wine_N ;
    Beer = mkCN beer_N ;
    Pizza = mkCN pizza_N ;
    Cheese = mkCN cheese_N ;
    Fish = mkCN fish_N ;
    Very quality = mkAP very_AdA quality ;
    Fresh = mkAP fresh_A ;
    Warm = mkAP warm_A ;
    Italian = mkAP italian_A ;
    Expensive = mkAP expensive_A ;
    Delicious = mkAP delicious_A ;
    Boring = mkAP boring_A ;
}
```
==Interfaces and instances==
Let us now define the ``LexFood`` interface:
```
interface LexFood = open Syntax in {
  oper
    wine_N : N ;
    beer_N : N ;
    pizza_N : N ;
    cheese_N : N ;
    fish_N : N ;
    fresh_A : A ;
    warm_A : A ;
    italian_A : A ;
    expensive_A : A ;
    delicious_A : A ;
    boring_A : A ;
}
```
In this interface, only lexical items are declared. In general, an
interface can declare any functions and also types. The ``Syntax``
interface does so.
Here is the German instance of the interface:
```
instance LexFoodGer of LexFood = open SyntaxGer, ParadigmsGer in {
  oper
    wine_N = mkN "Wein" ;
    beer_N = mkN "Bier" "Biere" neuter ;
    pizza_N = mkN "Pizza" "Pizzen" feminine ;
    cheese_N = mkN "Käse" "Käsen" masculine ;
    fish_N = mkN "Fisch" ;
    fresh_A = mkA "frisch" ;
    warm_A = mkA "warm" "wärmer" "wärmste" ;
    italian_A = mkA "italienisch" ;
    expensive_A = mkA "teuer" ;
    delicious_A = mkA "köstlich" ;
    boring_A = mkA "langweilig" ;
}
```
Just to complete the picture, we repeat the German functor instantiation
for ``FoodI``, this time with a path directive that makes it compilable.
```
--# -path=.:present:prelude
concrete FoodGer of Food = FoodI with
(Syntax = SyntaxGer),
(LexFood = LexFoodGer) ;
```
**Exercise**. Compile and test ``FoodGer``.
**Exercise**. Refactor ``FoodFre`` into a functor instantiation.
==Adding languages to a functor implementation==
Once we have an application grammar defined by using a functor,
adding a new language is simple. Just two modules need to be written:
- a domain lexicon instance
- a functor instantiation
The functor instantiation is completely mechanical to write.
Here is one for Finnish:
```
--# -path=.:present:prelude
concrete FoodFin of Food = FoodI with
(Syntax = SyntaxFin),
(LexFood = LexFoodFin) ;
```
The domain lexicon instance requires some knowledge of the words of the
language: what words are used for which concepts, how the words are
inflected, plus features such as genders. Here is a lexicon instance for
Finnish:
```
instance LexFoodFin of LexFood = open SyntaxFin, ParadigmsFin in {
  oper
    wine_N = mkN "viini" ;
    beer_N = mkN "olut" ;
    pizza_N = mkN "pizza" ;
    cheese_N = mkN "juusto" ;
    fish_N = mkN "kala" ;
    fresh_A = mkA "tuore" ;
    warm_A = mkA "lämmin" ;
    italian_A = mkA "italialainen" ;
    expensive_A = mkA "kallis" ;
    delicious_A = mkA "herkullinen" ;
    boring_A = mkA "tylsä" ;
}
```
**Exercise**. Instantiate the functor ``FoodI`` to some language of
your choice.
==Division of labour revisited==
One stated purpose of the resource grammar library is a division
of labour between linguists and application grammarians. We can now
reflect on what this means more precisely, by asking ourselves what
skills are required of grammarians working on different components.
Building a GF application starts from the abstract syntax. Writing
an abstract syntax requires
- understanding the semantic structure of the application domain
- knowledge of the GF fragment with categories and functions
If the concrete syntax is written by means of a functor, the programmer
has to decide which parts of the implementation go into the interface
and which parts are shared in the functor. This requires
- knowing how the domain concepts are expressed in natural language
- knowledge of the resource grammar library - the categories and combinators
- understanding what parts are likely to be expressed in language-dependent
ways, so that they must belong to the interface and not the functor
- knowledge of the GF fragment with function applications and strings
Instantiating a ready-made functor to a new language is less demanding.
It requires essentially
- knowing how the domain words are expressed in the language
- knowing, roughly, how these words are inflected
- knowledge of the paradigms available in the library
- knowledge of the GF fragment with function applications and strings
Notice that none of these tasks requires the use of GF records, tables,
or parameters. Thus only a small fragment of GF is needed; the rest of
GF is only relevant for those who write the libraries.
Of course, grammar writing is not always straightforward usage of libraries.
For example, GF can be used for other languages than just those in the
libraries - for both natural and formal languages. A knowledge of records
and tables can, unfortunately, also be needed for understanding GF's error
messages.
**Exercise**. Design a small grammar that can be used for controlling
an MP3 player. The grammar should be able to recognize commands such
as //play this song//, with the following variations:
- verbs: //play//, //remove//
- objects: //song//, //artist//
- determiners: //this//, //the previous//
- verbs without arguments: //stop//, //pause//
The implementation goes in the following phases:
+ abstract syntax
+ functor and lexicon interface
+ lexicon instance for the first language
+ functor instantiation for the first language
+ lexicon instance for the second language
+ functor instantiation for the second language
+ ...
==Restricted inheritance==
A functor implementation using the resource ``Syntax`` interface
works as long as all concepts are expressed by using the same structures
in all languages. If this is not the case, the deviant linearization can
be made into a parameter and moved to the domain lexicon interface.
Let us take a slightly contrived example: assume that English has
no word for ``Pizza``, but has to use the paraphrase //Italian pie//.
This paraphrase is no longer a noun ``N``, but a complex phrase
in the category ``CN``. An obvious way to solve this problem is
to change interface ``LexEng`` so that the constant declared for
``Pizza`` gets a new type:
```
oper pizza_CN : CN ;
```
But this solution is unstable: we may end up changing the interface
and the function with each new language, and we must every time also
change the interface instances for the old languages to maintain
type correctness.
A better solution is to use **restricted inheritance**: the English
instantiation inherits the functor implementation except for the
constant ``Pizza``. This is how we write:
```
--# -path=.:present:prelude
concrete FoodEng of Food = FoodI - [Pizza] with
(Syntax = SyntaxEng),
(LexFood = LexFoodEng) **
open SyntaxEng, ParadigmsEng in {
lin Pizza = mkCN (mkA "Italian") (mkN "pie") ;
}
```
Restricted inheritance is available for all inherited modules. One can for
instance exclude some mushrooms and pick up just some fruit in
the ``FoodMarket`` example:
```
abstract Foodmarket = Food, Fruit [Peach], Mushroom - [Agaric]
```
A concrete syntax of ``Foodmarket`` must then indicate the same inheritance
restrictions.
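For instance, assuming concrete modules ``FoodEng``, ``FruitEng``, and ``MushroomEng``, the English concrete syntax would repeat the restrictions in its inheritance list, roughly as follows:
```
concrete FoodmarketEng of Foodmarket = FoodEng, FruitEng [Peach], MushroomEng - [Agaric]
```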
**Exercise**. Change ``FoodGer`` in such a way that it says, instead of
//X is Y//, the equivalent of //X must be Y// (//X muss Y sein//).
You will have to browse the full resource API to find all
the functions needed.
==Browsing the resource with GF commands==
In addition to reading the
[resource synopsis ../../lib/resource-1.0/synopsis.html], you
can find resource function combinations by using the parser. This
is so because the resource library is in the end implemented as
a top-level ``abstract-concrete`` grammar, on which parsing
and linearization work.
Unfortunately, only English and the Scandinavian languages can be
parsed within acceptable computer resource limits when the full
resource is used.
To look for a syntax tree in the overload API by parsing, do as follows:
```
> $GF_LIB_PATH
> i -path=alltenses:prelude alltenses/OverLangEng.gfc
> p -cat=S -overload "this grammar is too big"
mkS (mkCl (mkNP (mkDet this_Quant) grammar_N) (mkAP too_AdA big_A))
```
To view linearizations in all languages by parsing from English:
```
> i alltenses/langs.gfcm
> p -cat=S -lang=LangEng "this grammar is too big" | tb
UseCl TPres ASimul PPos (PredVP (DetCN (DetSg (SgQuant this_Quant)
NoOrd) (UseN grammar_N)) (UseComp (CompAP (AdAP too_AdA (PositA big_A)))))
Den här grammatiken är för stor
Esta gramática es demasiado grande
(Cyrillic: eta grammatika govorit des'at' jazykov)
Denne grammatikken er for stor
Questa grammatica è troppo grande
Diese Grammatik ist zu groß
Cette grammaire est trop grande
Tämä kielioppi on liian suuri
This grammar is too big
Denne grammatik er for stor
```
Unfortunately, the Russian grammar uses at the moment a different
character encoding than the rest and is therefore not displayed correctly
in a terminal window. However, the GF syntax editor does display all
examples correctly:
```
% gfeditor alltenses/langs.gfcm
```
When you have constructed the tree, you will see the following screen:
#BCEN
[../../lib/resource-1.0/doc/10lang-small.png]
#ECEN
**Exercise**. Find the resource grammar translations for the following
English phrases (parse in the category ``Phr``). You can first try to
build the terms manually.
//every man loves a woman//
//this grammar speaks more than ten languages//
//which languages aren't in the grammar//
//which languages did you want to speak//
=More concepts of abstract syntax=
==GF as a logical framework==
In this section, we will show how
to encode advanced semantic concepts in an abstract syntax.
We use concepts inherited from **type theory**. Type theory
is the basis of many systems known as **logical frameworks**, which are
used for representing mathematical theorems and their proofs on a computer.
In fact, GF has a logical framework as its proper part:
this part is the abstract syntax.
In a logical framework, the formalization of a mathematical theory
is a set of type and function declarations. The following is an example
of such a theory, represented as an ``abstract`` module in GF.
```
abstract Arithm = {
  cat
    Prop ;                        -- proposition
    Nat ;                         -- natural number
  fun
    Zero : Nat ;                  -- 0
    Succ : Nat -> Nat ;           -- successor of x
    Even : Nat -> Prop ;          -- x is even
    And  : Prop -> Prop -> Prop ; -- A and B
}
```
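In this theory, for instance, the proposition "2 is even and 0 is even" is represented by the following tree of type ``Prop``:
```
And (Even (Succ (Succ Zero))) (Even Zero)
```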
**Exercise**. Give a concrete syntax of ``Arithm``, either from scratch or
by using the resource library.
==Dependent types==
**Dependent types** are a characteristic feature of GF,
inherited from the **constructive type theory** of Martin-Löf and
distinguishing GF from most other grammar formalisms and
functional programming languages.
Dependent types can be used for stating stronger
**conditions of well-formedness** than ordinary types.
A simple example is a "smart house" system, which
defines voice commands for household appliances. This example
is borrowed from the
[Regulus Book http://cslipublications.stanford.edu/site/1575865262.html]
(Rayner & al. 2006).
One who enters a smart house can use speech to dim lights, switch
on the fan, etc. For each ``Kind`` of a device, there is a set of
``Actions`` that can be performed on it; thus one can dim the lights but
not the fan, for example. These dependencies can be expressed
by making the type ``Action`` dependent on ``Kind``. We express this
as follows in ``cat`` declarations:
```
cat
  Command ;
  Kind ;
  Action Kind ;
  Device Kind ;
```
The crucial use of the dependencies is made in the rule for forming commands:
```
fun CAction : (k : Kind) -> Action k -> Device k -> Command ;
```
In other words: an action and a device can be combined into a command only
if they are of the same ``Kind`` ``k``. If we have the functions
```
DKindOne : (k : Kind) -> Device k ; -- the light
light, fan : Kind ;
dim : Action light ;
```
we can form the syntax tree
```
CAction light dim (DKindOne light)
```
but we cannot form the trees
```
CAction light dim (DKindOne fan)
CAction fan dim (DKindOne light)
CAction fan dim (DKindOne fan)
```
Linearization rules are written as usual: the concrete syntax does not
know if a category is a dependent type. In English, you can write as follows:
```
lincat Action = {s : Str} ;
lin CAction kind act dev = {s = act.s ++ dev.s} ;
```
Notice that the argument ``kind`` does not appear in the linearization.
The type checker will be able to reconstruct it from the ``dev`` argument.
Parsing with dependent types is performed in two phases:
+ context-free parsing
+ filtering through type checker
If you just parse in the usual way, you don't enter the second phase, and
the ``kind`` argument is not found:
```
> parse "dim the light"
CAction ? dim (DKindOne light)
```
Moreover, type-incorrect commands are not rejected:
```
> parse "dim the fan"
CAction ? dim (DKindOne fan)
```
The question mark ``?`` is a **metavariable**, and is returned by the parser
for any subtree that is suppressed by a linearization rule.
To get rid of metavariables, you must feed the parse result into the
second phase of **solving** them. The ``solve`` process uses the dependent
type checker to restore the values of the metavariables. It is invoked by
the command ``put_tree = pt`` with the flag ``-transform=solve``:
```
> parse "dim the light" | put_tree -transform=solve
CAction light dim (DKindOne light)
```
The ``solve`` process may fail, in which case no tree is returned:
```
> parse "dim the fan" | put_tree -transform=solve
no tree found
```
**Exercise**. Write an abstract syntax module with the above contents
and an appropriate English concrete syntax. Try to parse the commands
//dim the light// and //dim the fan//, with and without ``solve`` filtering.
**Exercise**. Perform random and exhaustive generation, with and without
``solve`` filtering.
**Exercise**. Add some device kinds and actions to the grammar.
==Polymorphism==
Sometimes an action can be performed on all kinds of devices. It would be
possible to introduce separate ``fun`` constants for each kind-action pair,
but this would be tedious. Instead, one can use **polymorphic** actions,
i.e. actions that take a ``Kind`` as an argument and produce an ``Action``
for that ``Kind``:
```
fun switchOn, switchOff : (k : Kind) -> Action k ;
```
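Using the functions introduced earlier, a well-typed command can now be formed for any kind, since the action is applied to the same ``Kind`` as the device:
```
CAction fan (switchOn fan) (DKindOne fan)
```
which could be linearized as //switch on the fan//.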
Functions that are not polymorphic are **monomorphic**. However, the
dichotomy into monomorphism and full polymorphism is not always sufficient
for good semantic modelling: very typically, some actions are defined
for a proper subset of devices, but not just one. For instance, both doors and
windows can be opened, whereas lights cannot.
We will return to this problem by introducing the
concept of **restricted polymorphism** later,
after a chapter on proof objects.
==Dependent types and spoken language models==
We have used dependent types to control semantic well-formedness
in grammars. This is important in traditional type theory
applications such as proof assistants, where only mathematically
meaningful formulas should be constructed. But semantic filtering has
also proved important in speech recognition, because it reduces the
ambiguity of the results.
===Grammar-based language models===
The standard way of using GF in speech recognition is by building
**grammar-based language models**. To this end, GF comes with compilers
into several formats that are used in speech recognition systems.
One such format is GSL, used in the [Nuance speech recognizer www.nuance.com].
It is produced from GF simply by printing a grammar with the flag
``-printer=gsl``.
```
> import -conversion=finite SmartEng.gf
> print_grammar -printer=gsl
;GSL2.0
; Nuance speech recognition grammar for SmartEng
; Generated by GF
.MAIN SmartEng_2
SmartEng_0 [("switch" "off") ("switch" "on")]
SmartEng_1 ["dim" ("switch" "off")
("switch" "on")]
SmartEng_2 [(SmartEng_0 SmartEng_3)
(SmartEng_1 SmartEng_4)]
SmartEng_3 ("the" SmartEng_5)
SmartEng_4 ("the" SmartEng_6)
SmartEng_5 "fan"
SmartEng_6 "light"
```
Now, GSL is a context-free format, so how does it cope with dependent types?
In general, dependent types can give rise to infinitely many basic types
(exercise!), whereas a context-free grammar can by definition only have
finitely many nonterminals.
This is where the flag ``-conversion=finite`` is needed in the ``import``
command. Its effect is to convert a GF grammar with dependent types to
one without, so that each instance of a dependent type is replaced by
an atomic type. This can then be used as a nonterminal in a context-free
grammar. The ``finite`` conversion presupposes that every
dependent type has only finitely many instances, which is in fact
the case in the ``Smart`` grammar.
**Exercise**. If you have access to the Nuance speech recognizer,
test it with GF-generated language models for ``SmartEng``. Do this
both with and without ``-conversion=finite``.
**Exercise**. Construct an abstract syntax with infinitely many instances
of dependent types.
===Statistical language models===
An alternative to grammar-based language models is the
**statistical language model** (**SLM**). An SLM is
built from a **corpus**, i.e. a set of utterances. It specifies the
probability of each **n-gram**, i.e. sequence of //n// words. The
typical value of //n// is 2 (bigrams) or 3 (trigrams).
One advantage of SLMs over grammar-based models is that they are
**robust**, i.e. they can be used to recognize sequences that would
be out of the grammar or the corpus. Another advantage is that
an SLM can be built "for free" if a corpus is available.
However, collecting a corpus can require a lot of work, and writing
a grammar can be less demanding, especially with tools such as GF or
Regulus. This advantage of grammars can be combined with robustness
by creating a back-up SLM from a **synthesized corpus**. This means
simply that the grammar is used for generating such a corpus.
In GF, this can be done with the ``generate_trees`` command.
As with grammar-based models, the quality of the SLM is better
if meaningless utterances are excluded from the corpus. Thus
a good way to generate an SLM from a GF grammar is by using
dependent types and filtering the results through the type checker:
```
> generate_trees | put_trees -transform=solve | linearize
```
**Exercise**. Measure the size of the corpus generated from
``SmartEng``, with and without type checker filtering.
==Digression: dependent types in concrete syntax==
===Variables in function types===
A dependent function type needs to introduce a variable for
its argument type, as in
```
switchOff : (k : Kind) -> Action k
```
Function types //without//
variables are actually a shorthand notation: writing
```
fun PredVP : NP -> VP -> S
```
is shorthand for
```
fun PredVP : (x : NP) -> (y : VP) -> S
```
or any other naming of the variables. Actually the use of variables
sometimes shortens the code, since they can share a type:
```
octuple : (x,y,z,u,v,w,s,t : Str) -> Str
```
If a bound variable is not used, it can here, as elsewhere in GF, be replaced by
a wildcard:
```
octuple : (_,_,_,_,_,_,_,_ : Str) -> Str
```
A good practice for functions with many arguments of the same type
is to indicate the number of arguments:
```
octuple : (x1,_,_,_,_,_,_,x8 : Str) -> Str
```
One can also use the variables to document what each argument is expected
to provide, as is done in inflection paradigms in the resource grammar.
```
mkV : (drink,drank,drunk : Str) -> V
```
===Polymorphism in concrete syntax===
The **functional fragment** of GF
terms and types comprises function types, applications, lambda
abstracts, constants, and variables. This fragment is similar in
abstract and concrete syntax. In particular,
dependent types are also available in concrete syntax.
We have not made use of them yet,
but we will now look at one example of how they
can be used.
Readers familiar with functional programming languages
like ML and Haskell may already have missed **polymorphic**
functions. For instance, Haskell programmers have access to
the functions
```
const :: a -> b -> a
const c _ = c
flip :: (a -> b -> c) -> b -> a -> c
flip f y x = f x y
```
which can be used for any given types ``a``,``b``, and ``c``.
The GF counterpart of polymorphic functions is **monomorphic**
functions with explicit **type variables**. Thus the above
definitions can be written
```
oper const : (a,b : Type) -> a -> b -> a =
  \_,_,c,_ -> c ;
oper flip : (a,b,c : Type) -> (a -> b -> c) -> b -> a -> c =
  \_,_,_,f,x,y -> f y x ;
```
When the operations are used, the type checker requires
them to be equipped with all their arguments; this may be a nuisance
for a Haskell or ML programmer.
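When the type arguments are spelled out, the operations specialize to
concrete types. For instance (a sketch; the operation names are invented
for illustration):
```
oper
  -- keep the first of two strings
  firstStr : Str -> Str -> Str = const Str Str ;
  -- concatenation with the argument order flipped
  flipConcat : Str -> Str -> Str =
    flip Str Str Str (\x,y -> x ++ y) ;
```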
==Proof objects==
Perhaps the most well-known idea in constructive type theory is
the **Curry-Howard isomorphism**, also known as the
**propositions as types principle**. Its earliest formulations
were attempts to give semantics to the logical systems of
propositional and predicate calculus. In this section, we will consider
a more elementary example, showing how the notion of proof is useful
outside mathematics, as well.
We first define the category of unary (also known as Peano-style)
natural numbers:
```
cat Nat ;
fun Zero : Nat ;
fun Succ : Nat -> Nat ;
```
The **successor function** ``Succ`` generates an infinite
sequence of natural numbers, beginning from ``Zero``.
We then define what it means for a number //x// to be //less than//
a number //y//. Our definition is based on two axioms:
- ``Zero`` is less than ``Succ`` //y// for any //y//.
- If //x// is less than //y//, then ``Succ`` //x// is less than ``Succ`` //y//.
The most straightforward way of expressing these axioms in type theory
is as typing judgements that introduce objects of a type ``Less`` //x y//:
```
cat Less Nat Nat ;
fun lessZ : (y : Nat) -> Less Zero (Succ y) ;
fun lessS : (x,y : Nat) -> Less x y -> Less (Succ x) (Succ y) ;
```
Objects formed by ``lessZ`` and ``lessS`` are
called **proof objects**: they establish the truth of certain
mathematical propositions.
For instance, the fact that 2 is less than
4 has the proof object
```
lessS (Succ Zero) (Succ (Succ (Succ Zero)))
(lessS Zero (Succ (Succ Zero)) (lessZ (Succ Zero)))
```
whose type is
```
Less (Succ (Succ Zero)) (Succ (Succ (Succ (Succ Zero))))
```
which is the formalization of the proposition that 2 is less than 4.
GF grammars can be used to provide **semantic control** of
the well-formedness of expressions. We have already seen an example of this:
the grammar of well-formed actions on household devices. By introducing proof objects,
we have now added a very powerful technique for expressing semantic conditions.
A simple example of the use of proof objects is the definition of
well-formed //time spans//: a time span is expected to be from an earlier to
a later time:
```
from 3 to 8
```
is thus well-formed, whereas
```
from 8 to 3
```
is not. The following rules for spans impose this condition
by using the ``Less`` predicate:
```
cat Span ;
fun span : (m,n : Nat) -> Less m n -> Span ;
```
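As a small example, the span //from 1 to 2// can be built with a proof
object formed from the axioms above (1 and 2 written as unary numerals):
```
-- lessZ Zero : Less Zero (Succ Zero), i.e. 0 is less than 1
-- lessS lifts this to: 1 is less than 2
span (Succ Zero) (Succ (Succ Zero))
     (lessS Zero (Succ Zero) (lessZ Zero))
```
This tree has type ``Span``. For //from 8 to 3//, by contrast, no proof
object showing that 8 is less than 3 can be formed, so no tree exists.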
**Exercise**. Write an abstract and concrete syntax with the
concepts of this section, and experiment with it in GF.
**Exercise**. Define the notions of "even" and "odd" in terms
of proof objects. **Hint**. You need one function for proving
that 0 is even, and two other functions for propagating the
properties.
===Proof-carrying documents===
Another possible application of proof objects is **proof-carrying documents**:
to be semantically well-formed, the abstract syntax of a document must contain a proof
of some property, although the proof is not shown in the concrete document.
Think, for instance, of small documents describing flight connections:
//To fly from Gothenburg to Prague, first take LH3043 to Frankfurt, then OK0537 to Prague.//
The well-formedness of this text is partly expressible by dependent typing:
```
cat
City ;
Flight City City ;
fun
Gothenburg, Frankfurt, Prague : City ;
LH3043 : Flight Gothenburg Frankfurt ;
OK0537 : Flight Frankfurt Prague ;
```
This rules out texts saying //take OK0537 from Gothenburg to Prague//.
However, there is a
further condition saying that it must be possible to
change from LH3043 to OK0537 in Frankfurt.
This can be modelled as a proof object of a suitable type,
which is required by the constructor
that connects flights.
```
cat
IsPossible (x,y,z : City) (Flight x y) (Flight y z) ;
fun
Connect : (x,y,z : City) ->
(u : Flight x y) -> (v : Flight y z) ->
IsPossible x y z u v -> Flight x z ;
```
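To complete the example, a proof constant can be postulated for each
feasible connection (the constant name ``changeFRA`` below is invented
for illustration):
```
fun
  -- hypothetical axiom: changing planes in Frankfurt is possible
  changeFRA : IsPossible Gothenburg Frankfurt Prague LH3043 OK0537 ;
```
The connecting flight is then represented by the tree
``Connect Gothenburg Frankfurt Prague LH3043 OK0537 changeFRA``,
which has type ``Flight Gothenburg Prague``.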
==Restricted polymorphism==
In the first version of the smart house grammar ``Smart``,
all Actions were either
- **monomorphic**: defined for one Kind, or
- **polymorphic**: defined for all Kinds.
To make this scale up for new Kinds, we can refine the dichotomy with a third option:
- **restricted polymorphism**: defined for Kinds of a certain **class**.
The notion of class can be expressed in abstract syntax
by using the Curry-Howard isomorphism as follows:
- a class is a **predicate** of Kinds, i.e. a type depending on Kinds
- a Kind is in a class if there is a proof object of this type
Here is an example with switching and dimming. The classes are called
``switchable`` and ``dimmable``.
```
cat
Switchable Kind ;
Dimmable Kind ;
fun
switchable_light : Switchable light ;
switchable_fan : Switchable fan ;
dimmable_light : Dimmable light ;
switchOn : (k : Kind) -> Switchable k -> Action k ;
dim : (k : Kind) -> Dimmable k -> Action k ;
```
One advantage of this formalization is that classes for new
actions can be added incrementally.
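With these rules, type checking admits exactly those actions for which
a class membership proof exists; sketched as comments:
```
-- well-typed trees:
--   switchOn fan switchable_fan : Action fan
--   dim light dimmable_light    : Action light
-- ill-typed: dim fan p would require p : Dimmable fan,
-- but no proof object of that type can be constructed
```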
**Exercise**. Write a new version of the ``Smart`` grammar with
classes, and test it in GF.
**Exercise**. Add some actions, kinds, and classes to the grammar.
Try to port the grammar to a new language. You will probably find
out that restricted polymorphism works differently in different languages.
For instance, in Finnish not only doors but also TVs and radios
can be "opened", which means switching them on.
==Variable bindings==
Mathematical notation and programming languages have
expressions that **bind** variables. For instance,
a universally quantified proposition
```
(All x)B(x)
```
consists of the **binding** ``(All x)`` of the variable ``x``,
and the **body** ``B(x)``, where the variable ``x`` can have
**bound occurrences**.
Variable bindings appear in informal mathematical language as well, for
instance,
```
for all x, x is equal to x
the function that for any numbers x and y returns the maximum of x+y
and x*y
Let x be a natural number. Assume that x is even. Then x + 3 is odd.
```
In type theory, variable-binding expression forms can be formalized
as functions that take functions as arguments. The universal
quantifier is defined
```
fun All : (Ind -> Prop) -> Prop
```
where ``Ind`` is the type of individuals and ``Prop``,
the type of propositions. If we have, for instance, the equality predicate
```
fun Eq : Ind -> Ind -> Prop
```
we may form the tree
```
All (\x -> Eq x x)
```
which corresponds to the ordinary notation
```
(All x)(x = x).
```
An abstract syntax where trees have functions as arguments, as in
the two examples above, has turned out to be precisely the right
thing for the semantics and computer implementation of
variable-binding expressions. The advantage lies in the fact that
only one variable-binding expression form is needed, the lambda abstract
``\x -> b``, and all other bindings can be reduced to it.
This makes it easier to implement mathematical theories and reason
about them, since variable binding is tricky to implement and
to reason about. The idea of using functions as arguments of
syntactic constructors is known as **higher-order abstract syntax**.
The question now arises: how to define linearization rules
for variable-binding expressions?
Let us first consider universal quantification,
```
fun All : (Ind -> Prop) -> Prop
```
We write
```
lin All B = {s = "(" ++ "All" ++ B.$0 ++ ")" ++ B.s}
```
to obtain the form shown above.
This linearization rule brings in a new GF concept - the ``$0``
field of ``B`` containing a bound variable symbol.
The general rule is that, if an argument type of a function is
itself a function type ``A -> C``, the linearization type of
this argument is the linearization type of ``C``
together with a new field ``$0 : Str``. In the linearization rule
for ``All``, the argument ``B`` thus has the linearization
type
```
{$0 : Str ; s : Str},
```
since the linearization type of ``Prop`` is
```
{s : Str}
```
In other words, the linearization of a function
consists of a linearization of the body together with a
field for a linearization of the bound variable.
Those familiar with type theory or lambda calculus
should notice that GF requires trees to be in
**eta-expanded** form in order to be linearizable:
any function of type
```
A -> B
```
always has a syntax tree of the form
```
\x -> b
```
where ``b : B`` under the assumption ``x : A``.
It is in this form that an expression can be analysed
as having a bound variable and a body.
Given the linearization rule
```
lin Eq a b = {s = "(" ++ a.s ++ "=" ++ b.s ++ ")"}
```
the linearization of
```
\x -> Eq x x
```
is the record
```
{$0 = "x" ; s = ["( x = x )"]}
```
Thus we can compute the linearization of the formula,
```
All (\x -> Eq x x) --> {s = "[( All x ) ( x = x )]"}.
```
How did the variable ``x`` get //linearized// into
the string ``"x"``? GF grammars have no rules for
this: it is just hard-wired in GF that variable symbols are
linearized into the same strings that represent them in
the print-out of the abstract syntax.
To be able to //parse// variable symbols, however, GF needs to know what
to look for (instead of e.g. trying to parse //any//
string as a variable). What strings are parsed as variable symbols
is defined in the lexical analysis part of GF parsing
```
> p -cat=Prop -lexer=codevars "(All x)(x = x)"
All (\x -> Eq x x)
```
(see more details on lexers below). If several variables are bound in the
same argument, the labels are ``$0, $1, $2``, etc.
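As a sketch of the multiple-variable case, a hypothetical binder for
two individuals would use both labels:
```
fun All2 : (Ind -> Ind -> Prop) -> Prop ;

lin All2 B = {s = "(" ++ "All" ++ B.$0 ++ "," ++ B.$1 ++ ")" ++ B.s} ;
```
Then ``All2 (\x,y -> Eq x y)`` linearizes to ``( All x , y ) ( x = y )``.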
**Exercise**. Write an abstract syntax of the whole
**predicate calculus**, with the
**connectives** "and", "or", "implies", and "not", and the
**quantifiers** "exists" and "for all". Use higher-order functions
to guarantee that unbound variables do not occur.
**Exercise**. Write a concrete syntax for your favourite
notation of predicate calculus. Use Latex as target language
if you want nice output. You can also try producing Haskell boolean
expressions. Use as many parentheses as you need to
guarantee non-ambiguity.
==Semantic definitions==
We have seen that,
just like functional programming languages, GF has declarations
of functions, telling what the type of a function is.
But we have not yet shown how to **compute**
these functions: all we can do is provide them with arguments
and linearize the resulting terms.
Since our main interest is the well-formedness of expressions,
this has not yet bothered
us very much. As we will see, however, computation does play a role
even in the well-formedness of expressions when dependent types are
present.
GF has a form of judgement for **semantic definitions**,
recognized by the key word ``def``. At its simplest, it is just
the definition of one constant, e.g.
```
def one = Succ Zero ;
```
We can also define a function with arguments,
```
def Neg A = Impl A Abs ;
```
which is still a special case of the most general notion of
definition, that of a group of **pattern equations**:
```
def
sum x Zero = x ;
sum x (Succ y) = Succ (sum x y) ;
```
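In the same style, multiplication can be defined on top of ``sum``
(a sketch, using the ``Nat`` constructors from above):
```
def
  prod x Zero = Zero ;
  prod x (Succ y) = sum x (prod x y) ;
```
With these equations, ``prod one one`` computes to ``Succ Zero``.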
To compute a term is, as in functional programming languages,
simply to follow a chain of reductions until no definition
can be applied. For instance, we compute
```
sum one one -->
sum (Succ Zero) (Succ Zero) -->
Succ (sum (Succ Zero) Zero) -->
Succ (Succ Zero)
```
Computation in GF is performed with the ``pt`` command and the
``compute`` transformation, e.g.
```
> p -tr "1 + 1" | pt -transform=compute -tr | l
sum one one
Succ (Succ Zero)
s(s(0))
```
The ``def`` definitions of a grammar induce a notion of
**definitional equality** among trees: two trees are
definitionally equal if they compute into the same tree.
Thus, trivially, all trees in a chain of computation
(such as the one above)
are definitionally equal to each other. So are the trees
```
sum Zero (Succ one)
Succ one
sum (sum Zero Zero) (sum (Succ Zero) one)
```
and infinitely many other trees.
A fact that has to be emphasized about ``def`` definitions is that
they are //not// performed as a first step of linearization.
We say that **linearization is intensional**, which means that
the definitional equality of two trees does not imply that
they have the same linearizations. For instance, each of the seven terms
shown above has a different linearization in arithmetic notation:
```
1 + 1
s(0) + s(0)
s(s(0) + 0)
s(s(0))
0 + s(0)
s(1)
0 + 0 + s(0) + 1
```
This notion of intensionality is
no more exotic than the intensionality of any **pretty-printing**
function of a programming language (function that shows
the expressions of the language as strings). It is vital for
pretty-printing to be intensional in this sense - if we want,
for instance, to trace a chain of computation by pretty-printing each
intermediate step, what we want to see is a sequence of different
expressions, which are definitionally equal.
What is more exotic is that GF has two ways of referring to the
abstract syntax objects. In the concrete syntax, the reference is intensional.
In the abstract syntax, the reference is extensional, since
**type checking is extensional**. The reason is that,
in the type theory with dependent types, types may depend on terms.
Two types depending on terms that are definitionally equal are
equal types. For instance,
```
Proof (Odd one)
Proof (Odd (Succ Zero))
```
are equal types. Hence, any tree that type checks as a proof that
1 is odd also type checks as a proof that the successor of 0 is odd.
(Recall, in this connection, that the
arguments a category depends on never play any role
in the linearization of trees of that category,
nor in the definition of the linearization type.)
In addition to computation, definitions impose a
**paraphrase** relation on expressions:
two strings are paraphrases if they
are linearizations of trees that are
definitionally equal.
Paraphrases are sometimes interesting for
translation: the **direct translation**
of a string, which is the linearization of the same tree
in the target language, may be inadequate because it is e.g.
unidiomatic or ambiguous. In such a case,
the translation algorithm may be made to consider
translation by a paraphrase.
To express the distinction between
**constructors** (=**canonical** functions)
and other functions, GF has a judgement form
``data`` to tell that certain functions are canonical, e.g.
```
data Nat = Succ | Zero ;
```
Unlike in Haskell, but similarly to ALF (where constructor functions
are marked with a flag ``C``),
new constructors can be added to
a type with new ``data`` judgements. The type signatures of constructors
are given separately, in ordinary ``fun`` judgements.
One can also write directly
```
data Succ : Nat -> Nat ;
```
which is equivalent to the two judgements
```
fun Succ : Nat -> Nat ;
data Nat = Succ ;
```
**Exercise**. Implement an interpreter of a small functional programming
language with natural numbers, lists, pairs, lambdas, etc. Use higher-order
abstract syntax with semantic definitions. As target language, use
your favourite programming language.
**Exercise**. To make your interpreted language look nice, use
**precedences** instead of putting parentheses everywhere.
You can use the [precedence library ../../lib/prelude/Precedence.gf]
of GF to facilitate this.
=Practical issues=
==Lexers and unlexers==
Lexers and unlexers can be chosen from
a list of predefined ones, using the flags ``-lexer`` and ``-unlexer`` either
in the grammar file or on the GF command line. Here are some often-used lexers
and unlexers:
```
The default is words.
-lexer=words tokens are separated by spaces or newlines
-lexer=literals like words, but GF integer and string literals recognized
-lexer=vars like words, but "x","x_...","$...$" as vars, "?..." as meta
-lexer=chars each character is a token
-lexer=code use Haskell's lex
-lexer=codevars like code, but treat unknown words as variables, ?? as meta
-lexer=text with conventions on punctuation and capital letters
-lexer=codelit like code, but treat unknown words as string literals
-lexer=textlit like text, but treat unknown words as string literals
The default is unwords.
-unlexer=unwords space-separated token list (like unwords)
-unlexer=text format as text: punctuation, capitals, paragraph
-unlexer=code format as code (spacing, indentation)
-unlexer=textlit like text, but remove string literal quotes
-unlexer=codelit like code, but remove string literal quotes
-unlexer=concat remove all spaces
```
More options can be found by ``help -lexer`` and ``help -unlexer``.
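For instance, the ``text`` conventions can be combined at both ends of
a pipe (a sketch; the sentence is assumed to be covered by the grammar):
```
> parse -lexer=text "Zero is even." | linearize -unlexer=text
```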
==Speech input and output==
The ``speak_aloud = sa`` command sends a string to the speech
synthesizer
[Flite http://www.speech.cs.cmu.edu/flite/doc/].
It is typically used via a pipe:
``` generate_random | linearize | speak_aloud
The result is only satisfactory for English.
The ``speech_input = si`` command receives a string from a
speech recognizer that requires the installation of
[ATK http://mi.eng.cam.ac.uk/~sjy/software.htm].
It is typically used to pipe input to a parser:
``` speech_input -tr | parse
The method works only for grammars of English.
Both Flite and ATK are freely available through the links
above, but they are not distributed together with GF.
==Multilingual syntax editor==
The
[Editor User Manual http://www.cs.chalmers.se/~aarne/GF2.0/doc/javaGUImanual/javaGUImanual.htm]
describes the use of the editor, which works for any multilingual GF grammar.
Here is a snapshot of the editor:
#BCEN
#EDITORPNG
#ECEN
The grammars of the snapshot are from the
[Letter grammar package http://www.cs.chalmers.se/~aarne/GF/examples/letter].
==Communicating with GF==
Other processes can communicate with the GF command interpreter,
and also with the GF syntax editor. Useful flags when invoking GF are
- ``-batch`` suppresses the prompts and structures the communication with XML tags.
- ``-s`` suppresses non-output non-error messages and XML tags.
- ``-nocpu`` suppresses CPU time indication.
Thus the most silent way to invoke GF is
```
gf -batch -s -nocpu
```
=Embedded grammars in Haskell and Java=
GF grammars can be used as components of programs written in
other programming languages. We will go through a skeleton application in
Haskell, while the next chapter will show how to build an
application in Java.
We will show how to build a minimal resource grammar
application whose architecture scales up to much
larger applications. The application is run from the
shell by the command
```
math
```
whereafter it reads user input in English and French.
To each input line, it answers by the truth value of
the sentence.
```
./math
zéro est pair
True
zero is odd
False
zero is even and zero is odd
False
```
The source of the application consists of the following
files:
```
LexEng.gf -- English instance of Lex
LexFre.gf -- French instance of Lex
Lex.gf -- lexicon interface
Makefile -- a makefile
MathEng.gf -- English instantiation of MathI
MathFre.gf -- French instantiation of MathI
Math.gf -- abstract syntax
MathI.gf -- concrete syntax functor for Math
Run.hs -- Haskell Main module
```
The system was built in 22 steps explained below.
==Writing GF grammars==
===Creating the first grammar===
1. Write ``Math.gf``, which defines what you want to say.
```
abstract Math = {
cat Prop ; Elem ;
fun
And : Prop -> Prop -> Prop ;
Even : Elem -> Prop ;
Zero : Elem ;
}
```
2. Write ``Lex.gf``, which defines which language-dependent
parts are needed in the concrete syntax. These are mostly
words (lexicon), but can in fact be any operations. The definitions
only use resource abstract syntax, which is opened.
```
interface Lex = open Syntax in {
oper
even_A : A ;
zero_PN : PN ;
}
```
3. Write ``LexEng.gf``, the English implementation of ``Lex.gf``.
This module uses English resource libraries.
```
instance LexEng of Lex = open SyntaxEng, ParadigmsEng in {
oper
even_A = mkA "even" ;
zero_PN = mkPN "zero" ;
}
```
4. Write ``MathI.gf``, a language-independent concrete syntax of
``Math.gf``. It opens interfaces,
which makes it an incomplete module, also known as a parametrized module
or functor.
```
incomplete concrete MathI of Math =
open Syntax, Lex in {
flags startcat = Prop ;
lincat
Prop = S ;
Elem = NP ;
lin
And x y = mkS and_Conj x y ;
Even x = mkS (mkCl x even_A) ;
Zero = mkNP zero_PN ;
}
```
5. Write ``MathEng.gf``, which is just an instantiation of ``MathI.gf``,
replacing the interfaces by their English instances. This is the module
that will be used as a top module in GF, so it contains a path to
the libraries.
```
--# -path=.:present:prelude
concrete MathEng of Math = MathI with
(Syntax = SyntaxEng),
(Lex = LexEng) ;
```
===Testing===
6. Test the grammar in GF by random generation and parsing.
```
$ gf
> i MathEng.gf
> gr -tr | l -tr | p
And (Even Zero) (Even Zero)
zero is even and zero is even
And (Even Zero) (Even Zero)
```
Importing the grammar will fail if you haven't
- correctly defined your ``GF_LIB_PATH`` as ``GF/lib``
- installed the resource package or
compiled the resource from source by ``make`` in ``GF/lib/resource-1.0``
===Adding a new language===
7. Now it is time to add a new language. Write a French lexicon ``LexFre.gf``:
```
instance LexFre of Lex = open SyntaxFre, ParadigmsFre in {
oper
even_A = mkA "pair" ;
zero_PN = mkPN "zéro" ;
}
```
8. You also need a French concrete syntax, ``MathFre.gf``:
```
--# -path=.:present:prelude
concrete MathFre of Math = MathI with
(Syntax = SyntaxFre),
(Lex = LexFre) ;
```
9. This time, you can test multilingual generation:
```
> i MathFre.gf
> gr | tb
Even Zero
zéro est pair
zero is even
```
===Extending the language===
10. You want to add a predicate saying that a number is odd.
It is first added to ``Math.gf``:
```
fun Odd : Elem -> Prop ;
```
11. You need a new word in ``Lex.gf``.
```
oper odd_A : A ;
```
12. Then you can give a language-independent concrete syntax in
``MathI.gf``:
```
lin Odd x = mkS (mkCl x odd_A) ;
```
13. The new word is implemented in ``LexEng.gf``.
```
oper odd_A = mkA "odd" ;
```
14. The new word is implemented in ``LexFre.gf``.
```
oper odd_A = mkA "impair" ;
```
15. Now you can test with the extended lexicon. First empty
the environment to get rid of the old abstract syntax, then
import the new versions of the grammars.
```
> e
> i MathEng.gf
> i MathFre.gf
> gr | tb
And (Odd Zero) (Even Zero)
zéro est impair et zéro est pair
zero is odd and zero is even
```
==Building a user program==
===Producing a compiled grammar package===
16. Your grammar is going to be used by persons who do not need
to compile it again. They may not have access to the resource library,
either. Therefore it is advisable to produce a multilingual grammar
package in a single file. We call this package ``math.gfcm`` and
produce it, when we have ``MathEng.gf`` and
``MathFre.gf`` in the GF state, by the command
```
> pm | wf math.gfcm
```
===Writing the Haskell application===
17. Write the Haskell main file ``Run.hs``. It uses the ``EmbedAPI``
module, which defines some basic functionalities such as parsing.
The answer is produced by an interpreter of trees returned by the parser.
```
module Main where

import GSyntax
import GF.Embed.EmbedAPI

main :: IO ()
main = do
  gr <- file2grammar "math.gfcm"
  loop gr

loop :: MultiGrammar -> IO ()
loop gr = do
  s <- getLine
  interpret gr s
  loop gr

interpret :: MultiGrammar -> String -> IO ()
interpret gr s = do
  let tss = parseAll gr "Prop" s
  case concat tss of
    []  -> putStrLn "no parse"
    t:_ -> print $ answer $ fg t

answer :: GProp -> Bool
answer p = case p of
  GOdd  x1    -> odd  (value x1)
  GEven x1    -> even (value x1)
  GAnd  x1 x2 -> answer x1 && answer x2

value :: GElem -> Int
value e = case e of
  GZero -> 0
```
18. The syntax trees manipulated by the interpreter are not raw
GF trees, but objects of the Haskell datatype ``GProp``.
From any GF grammar, a file ``GSyntax.hs`` with
datatypes corresponding to its abstract
syntax can be produced by the command
```
> pg -printer=haskell | wf GSyntax.hs
```
The module also defines the overloaded functions
``gf`` and ``fg`` for translating from these types to
raw trees and back.
===Compiling the Haskell grammar===
19. Before compiling ``Run.hs``, you must check that the
embedded GF modules are found. The easiest way to do this
is by two symbolic links to your GF source directories:
```
$ ln -s /home/aarne/GF/src/GF
$ ln -s /home/aarne/GF/src/Transfer/
```
20. Now you can run the GHC Haskell compiler to produce the program.
```
$ ghc --make -o math Run.hs
```
The program can be tested with the command ``./math``.
===Building a distribution===
21. For a stand-alone binary-only distribution, only
the two files ``math`` and ``math.gfcm`` are needed.
For a source distribution, the files mentioned in
the beginning of this document are needed.
===Using a Makefile===
22. As a part of the source distribution, a ``Makefile`` is
essential. The ``Makefile`` is also useful when developing the
application. It should always be possible to build an executable
from source by typing ``make``. Here is a minimal such ``Makefile``:
```
all:
echo "pm | wf math.gfcm" | gf MathEng.gf MathFre.gf
echo "pg -printer=haskell | wf GSyntax.hs" | gf math.gfcm
ghc --make -o math Run.hs
```
=Embedded grammars in Java=
Forthcoming; at the moment, the document
[``http://www.cs.chalmers.se/~bringert/gf/gf-java.html`` http://www.cs.chalmers.se/~bringert/gf/gf-java.html]
by Björn Bringert gives more information on Java.
=Further reading=
Syntax Editor User Manual:
[``http://www.cs.chalmers.se/~aarne/GF2.0/doc/javaGUImanual/javaGUImanual.htm`` http://www.cs.chalmers.se/~aarne/GF2.0/doc/javaGUImanual/javaGUImanual.htm]
Resource Grammar Synopsis (on using resource grammars):
[``http://www.cs.chalmers.se/~aarne/GF/lib/resource-1.0/synopsis.html`` ../../lib/resource-1.0/synopsis.html]
Resource Grammar HOWTO (on writing resource grammars):
[``http://www.cs.chalmers.se/~aarne/GF/lib/resource-1.0/synopsis.html`` ../../lib/resource-1.0/doc/Resource-HOWTO.html]
GF Homepage:
[``http://www.cs.chalmers.se/~aarne/GF/doc`` ../..]