From 3e6a60df4155a2e25e96e85bf4162e11fc82eb06 Mon Sep 17 00:00:00 2001
From: aarne
-Grammatical Framework Tutorial
Aarne Ranta
-Draft, November 2007
+Version 3, February 2008
-
@@ -227,120 +268,104 @@ Draft, November 2007
+
+
+
-
-
-
-
-
-
+
+
+
-
-
-
-
Overview
-
-This tutorial gives a hands-on introduction to grammar writing in GF. -It has been written for all programmers -who want to learn to write grammars in GF. -It will go through the programming concepts of GF, and also -explain, without presupposing them, the main ingredients of GF: -linguistics, functional programming, and type theory. -This knowledge will be introduced as a part of grammar writing -practice. -Thus the tutorial should be accessible to anyone who has some -previous experience from any programming language; the basics -of using computers are also presupposed, e.g. the use of -text editors and the management of files. -
--We start in the second chapter -by building a "Hello World" grammar, which covers greetings -in three languages: English (hello world), -Finnish (terve maailma), and Italian (ciao mondo). -This multilingual grammar is based on the most central idea of GF: -the distinction between abstract syntax -(the logical structure) and concrete syntax (the -sequence of words). -
--From the "Hello World" example, we proceed -in the third chapter -to a larger grammar for the domain of food. -In this grammar, you can say things like -
-formaggio italiano -
-vino delizioso, vini deliziosi, pizza deliziosa, pizze deliziose -
-The complete description of morphology and syntax in natural -languages is in GF preferably left to the resource grammar library. -Its use is therefore an important part of GF programming, and -it is covered in the fifth chapter. How to contribute to resource -grammars as an author will only be covered in Part III; -however, the tutorial does go through all the -programming concepts of GF, including those involved in -resource grammars. -
--In addition to multilinguality, semantics is an important aspect of GF -grammars. The "purely linguistic" aspects (morphology and syntax) belong to -the concrete syntax part of GF, whereas semantics is expressed in the abstract -syntax. After the presentation of concrete syntax constructs, we proceed -in the sixth chapter to the enrichment of abstract syntax with dependent types, -variable bindings, and semantic definitions. -the seventh chapter concludes the tutorial by technical tips for implementing formal -languages. It will also illustrate the close relation between GF grammars -and compilers by actually implementing a small compiler from C-like statements -and expressions to machine code similar to Java Virtual Machine. -
--English and Italian are used as example languages in many grammars. -Of course, we will not presuppose that the reader knows any Italian. -We have chosen Italian because it has a rich structure -that illustrates very well the capacities of GF. -Moreover, even those readers who don't know Italian, will find many of -its words familiar, due to the Latin heritage. -The exercises will encourage the reader to -port the examples to other languages as well; in particular, -it should be instructive for the reader to look at her -own native language from the point of view of writing a grammar -implementation. -
--To learn how to write GF grammars is not the only goal of -this tutorial. We will also explain the most important -commands of the GF system, mostly in passing. With these commands, -simple application programs such as translation and -quiz systems, can be built simply by writing scripts for the -GF system. More complicated applications, such as natural-language -interfaces and dialogue systems, moreover require programming in -some general-purpose language; such applications are covered in the eighth chapter. +
-+Hands-on introduction to grammar writing in GF. +
++Main ingredients of GF: +
++Prerequisites: +
++ +
+ ++Lesson 1: a multilingual "Hello World" grammar. English, Finnish, Italian. +
++Lesson 2: a larger grammar for the domain of food. English and Italian. +
++Lesson 3: parameters - morphology and agreement. +
++Lesson 4: using the resource grammar library. +
++Lesson 5: semantics - dependent types, variable bindings, +and semantic definitions. +
++Lesson 6: implementing formal languages. +
++Lesson 7: embedded grammar applications. +
++ +
+ ++You can chop this tutorial into a set of slides by the command +
++ htmls gf-tutorial.html ++
+where the program htmls is distributed with GF (see below).
+
+The slides will appear as a set of files beginning with 01-gf-tutorial.html.
+
+Internal links will not work in the slide format, except for those in the +upper left corner of each slide, and the links behind the "Contents" link. +
++ +
+ +-In this chapter, we will introduce the GF system and write the first GF grammar, -a "Hello World" grammar. While extremely small, this grammar already illustrates -how GF can be used for the tasks of translation and multilingual -generation. +Goals:
- ++ +
+We use the term GF for three different things: @@ -352,18 +377,33 @@ We use the term GF for three different things:
-The relation between these things is obvious: the GF system is an implementation +The GF system is an implementation of the GF programming language, which in turn is built on the ideas of the -GF theory. The main focus of this book is on the GF programming language. -We learn how grammars are written in this language. At the same time, we learn -the way of thinking in the GF theory. To make this all useful and fun, and -to encourage experimenting, we make the grammars run on a computer by +GF theory. +
++The main focus of this tutorial is on using the GF programming language. +
++At the same time, we learn the way of thinking in the GF theory. +
++We make the grammars run on a computer by using the GF system.
-A GF program is called a grammar. A grammar is, traditionally, a -definition of a language. From this definition, different language -processing components can be derived: + +
+ ++A GF program is called a grammar. +
++A grammar defines a language. +
++From this definition, processing components can be derived:
-A GF grammar is thus a declarative program from which these -procedures can be automatically derived. In general, a GF grammar -is multilingual: it defines many languages and translations between them. +In general, a GF grammar is multilingual:
- ++ +
+-The GF system is open-source free software, which can be downloaded via the -GF Homepage: -
gf.digitalgrammars.com
-+There you find
-In particular, many of the examples in this book are included in the
-subdirectory examples/tutorial of the source distribution package.
-This directory is also available
+Many examples in this tutorial are
online.
-If you want to compile GF from source, you need a Haskell compiler. -To compile the interactive editor, you also need a Java compilers. -But normally you don't have to compile anything yourself, and you definitely -don't need to know Haskell or Java to use GF. +Normally you don't have to compile GF yourself.
-We are assuming the availability of a Unix shell. Linux and Mac OS X users -have it automatically, the latter under the name "terminal". -Windows users are recommended to install Cywgin, the free Unix shell for Windows. +But, if you do want to compile GF from source, you need the Haskell compiler +GHC.
- ++To compile the interactive editor, you also need a Java compiler. +
++We assume a Unix shell: Bash in Linux, "terminal" in Mac OS X, or +Cygwin in Windows. +
++ +
+
-To start the GF system, assuming you have installed it, just type
-gf in the Unix (or Cygwin) shell:
+Type gf in the Unix (or Cygwin) shell:
% gf
@@ -429,7 +479,7 @@ The command
will give you a list of available commands.
-As a common convention in this book, we will use
+As a common convention, we will use
% as a prompt that marks system commands
@@ -440,13 +490,17 @@ As a common convention in this book, we will use
Thus you should not type these prompts, but only the characters that
follow them.
-
++ +
+-The tradition in programming language tutorials is to start with a -program that prints "Hello World" on the terminal. GF should be no -exception. But our program has features that distinguish it from -most "Hello World" programs: +Like most programming language tutorials, we start with a +program that prints "Hello World" on the terminal. +
++Extra features:
+ +
+A GF program, in general, is a multilingual grammar. Its main parts @@ -466,10 +523,18 @@ are
-The abstract syntax defines, in a language-independent way, what meanings -can be expressed in the grammar. In the "Hello World" grammar we want -to express Greetings, where we greet a Recipient, which can be -World or Mum or Friends. Here is the entire +The abstract syntax defines what meanings +can be expressed in the grammar +
++ +
+GF code for the abstract syntax:
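The grammar body itself is elided between hunks here. For the reader's convenience, a sketch of the abstract syntax, reconstructed from the names mentioned above (Greeting, Recipient, Hello, World, Mum, Friends); an approximation, not the exact tutorial listing:

```
abstract Hello = {

  flags startcat = Greeting ;

  cat Greeting ; Recipient ;

  fun
    Hello : Recipient -> Greeting ;    -- construct a greeting from a recipient
    World, Mum, Friends : Recipient ;  -- the three possible recipients
}
```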
@@ -495,19 +560,17 @@ The code has the following parts:
Greeting is the
- main category, i.e. the one in which parsing and generation are
- performed by default
- Greeting and Recipient
- are categories, i.e. types of meanings
- Hello constructing a greeting from a recipient,
- as well as the three possible recipients
+ default start category for parsing and generation
+ -A concrete syntax defines a mapping from the abstract meanings to their -expressions in a language. We first give an English concrete syntax: + +
++English concrete syntax (mapping from meanings to strings):
concrete HelloEng of Hello = {
@@ -532,16 +595,18 @@ The major parts of this code are:
Greeting and Recipient are records with a string s
Hello greeting
- has a function telling that the word hello is prefixed to the string
- s contained in the record recip
+ each of the meanings defined in the abstract syntax
-To make the grammar truly multilingual, we add a Finnish and an Italian concrete
-syntax:
+Notice the concatenation ++ and the record projection ..
+
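The English listing is also elided between hunks. A sketch of HelloEng, consistent with the fragments above (records with a string s; hello prefixed to the string in recip); treat the exact layout as an assumption:

```
concrete HelloEng of Hello = {

  lincat Greeting, Recipient = {s : Str} ;

  lin
    Hello recip = {s = "hello" ++ recip.s} ;  -- ++ concatenates, . projects
    World   = {s = "world"} ;
    Mum     = {s = "mum"} ;
    Friends = {s = "friends"} ;
}
```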
+ +
++Finnish and an Italian concrete syntaxes:
concrete HelloFin of Hello = {
@@ -562,42 +627,35 @@ syntax:
Friends = {s = "amici"} ;
}
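The Finnish and Italian modules follow the same pattern. A sketch based on the words quoted in the overview (terve maailma, ciao mondo, ciao mamma) and the amici line above; the remaining word choices are assumed translations:

```
concrete HelloFin of Hello = {
  lincat Greeting, Recipient = {s : Str} ;
  lin
    Hello recip = {s = "terve" ++ recip.s} ;
    World   = {s = "maailma"} ;
    Mum     = {s = "äiti"} ;      -- assumed translation
    Friends = {s = "ystävät"} ;   -- assumed translation
}

concrete HelloIta of Hello = {
  lincat Greeting, Recipient = {s : Str} ;
  lin
    Hello recip = {s = "ciao" ++ recip.s} ;
    World   = {s = "mondo"} ;
    Mum     = {s = "mamma"} ;
    Friends = {s = "amici"} ;
}
```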
+
-Now we have a trilingual grammar usable for translation and -many other tasks, which we will now start experimenting with. +
- -
-In order to compile the grammar in GF, each of the four modules
-has to be put into a file named Modulename.gf:
+In order to compile the grammar in GF, each of the four modules goes into its own file.
+We create four files, named Modulename.gf:
Hello.gf HelloEng.gf HelloFin.gf HelloIta.gf
-The first GF command needed when using a grammar is to import it.
-The command has a long name, import, and a short name, i.
-When you have started GF (by the shell command gf), you can thus type either
+The first GF command: import a grammar.
> import HelloEng.gf
-or +All commands also have short names; here:
> i HelloEng.gf
-to get the same effect. In general, all GF commands have a long and a short name; -short names are convenient when typing commands by hand, whereas long command -names are more readable in scripts, i.e. files that include sequences of commands. -
-
-The effect of import is that the GF system compiles your grammar
-into an internal representation, and shows a new prompt when it is ready.
-It will also show how much CPU time was consumed:
+The GF system will compile your grammar
+into an internal representation and show the CPU time consumed, followed
+by a new prompt:
> i HelloEng.gf
@@ -607,33 +665,31 @@ It will also show how much CPU time was consumed:
12 msec
>
+
-You can now use GF for parsing: + +
+
+You can use GF for parsing (parse = p):
> parse "hello world"
Hello World
-The parse (= p) command takes a string
-(in double quotes) and returns an abstract syntax tree --- the meaning
-of the string as defined in the abstract syntax.
-A tree is, in general, something easier than a string
-for a machine to understand and to process further, although this
-is not so obvious in this simple grammar. The syntax for trees is that
-of function application, which in GF is written
+Parsing takes a string into an abstract syntax tree.
+
+The notation for trees is that of function application:
function argument1 ... argumentn
-Parentheses are only needed for grouping. For instance, f (a b) is
-f applied to the application of a to b. This is different
-from f a b, which is f applied to a and b.
+Parentheses are only needed for grouping.
-Strings that return a tree when parsed do so in virtue of the grammar -you imported. Try to parse something that is not in grammar, and you will fail +Parsing something that is not in the grammar will fail:
> parse "hello dad"
@@ -642,47 +698,33 @@ you imported. Try to parse something that is not in grammar, and you will fail
> parse "world hello"
no tree found
+
-In the first example, the failure is caused by an unknown word. -In the second example, the combination of words is ungrammatical. +
-In addition to parsing, you can also use GF for linearization
-(linearize = l). This is the inverse of
-parsing, taking trees into strings:
+You can also use GF for linearization (linearize = l).
+It takes trees into strings:
> linearize Hello World
hello world
-What is the use of this? Typically not that you type in a tree at -the GF prompt. The utility of linearization comes from the fact that -you can obtain a tree from somewhere else --- for instance, from -a parser. A prime example of this is translation: you parse -with one concrete syntax and linearize with another. Let us -now do this by first importing the Italian grammar: +Translation: pipe parsing to linearization:
+ > import HelloEng.gf
> import HelloIta.gf
-
-
-We can now parse with HelloEng and pipe the result
-into linearizing with HelloIta:
-
+
> parse -lang=HelloEng "hello mum" | linearize -lang=HelloIta
ciao mamma
-Notice that, since there are now two concrete syntaxes read into the
-system, the commands use a language flag to indicate
-which concrete syntax is used in each operation. If no language flag is
-given, the last-imported language is applied.
+Default of the language flag (-lang): the last-imported concrete syntax.
-To conclude the translation exercise, we import the Finnish grammar -and pipe English parsing into multilingual generation: +Multilingual generation:
> parse -lang=HelloEng "hello friends" | linearize -multi
@@ -692,51 +734,53 @@ and pipe English parsing into multilingual generation:
-Exercise. Test the parsing and translation examples shown above, as well as -some other examples, in different combinations of languages. +
-
-Exercise. Extend the grammar Hello.gf and some of the
+
+
Hello.gf and some of the
concrete syntaxes by five new recipients and one new greeting
form.
-
--Exercise. Add a concrete syntax for some other +
+-Exercise. Add a pair of greetings that are expressed in one and the same way in -one language and in two different ways in another. For instance, good morning -and good afternoon in English are both expressed as buongiorno in Italian. +
+
-Exercise. Inject errors in the Hello grammars, for example, leave out
-some line, omit a variable in a lin rule, or change the name in one occurrence
+
Hello grammars, for example, leave out
+some line, omit a variable in a lin rule, or change the name
+in one occurrence
of a variable. Inspect the error messages generated by GF.
++
- +
-A normal "hello world" program written in C is executable from the
-Unix shell and print its output on the terminal. This is possible in GF
-as well, by using the gf program in a Unix pipe. Invoking gf
-can be made with grammar names as arguments,
-
- % gf HelloEng.gf HelloFin.gf HelloIta.gf --
-which has the same effect as opening gf and then importing the
-grammars. A command can be send to this gf state by piping it from
-Unix's echo command:
+You can use the gf program in a Unix pipe.
% echo "l -multi Hello World" | gf HelloEng.gf HelloFin.gf HelloIta.gf
-which will execute the command and then quit. Alternatively, one can write -a script, a file containing the lines +You can also write a script, a file containing the lines
import HelloEng.gf
@@ -744,6 +788,12 @@ a script, a file containing the lines
import HelloIta.gf
linearize -multi Hello World
+
++ +
+ +
If we name this script hello.gfs, we can do
The options -batch and -s ("silent") remove prompts, CPU time,
-and other messages. Writing GF scripts and Unix shell scripts that call
-GF is the simplest way to build application programs that use GF grammars.
-In the eighth chapter, we will see how to build stand-alone programs that don't need
-the GF system to run.
+and other messages.
+
+See Lesson 7, for stand-alone programs that don't need the GF system to run.
Exercise. (For Unix hackers.) Write a GF application that reads an English string from the standard input and writes an Italian translation to the output.
- ++ +
+-Now we have built our first multilingual grammar and seen the basic -functionalities of GF: parsing and linearization. We have tested -these functionalities inside the GF program. In the forthcoming -chapters, we will build larger grammars and can then get more out of -these functionalities. But we will also introduce new ones: +Some more functions that will be covered:
-The usefulness of GF would be quite limited if grammars were -usable only inside the GF system. In the eighth chapter, -we will see other ways of using grammars: + +
+ ++Application programs, using techniques from Lesson 7:
-All GF functionalities, both those inside the GF program and those
-ported to other environments,
-are of course already applicable to the simplest of grammars,
-such as the Hello grammars presented above. But the main focus
-of this tutorial will be on grammar writing. Thus we will show
-how larger and more expressive grammars can be built by using
-the constructs of the GF programming language, before entering the
-applications.
+
-As the last section of each chapter, we will give a summary of the GF language -features covered in the chapter. The presentation is rather technical and intended -as a reference for later use, rather than to be read at once. The summaries -may cover some new features, which complement the discussion in the main chapter. -
- --A GF grammar consists of modules, -into which judgements are grouped. The most important -module forms are -
-abstract A = {...} , abstract syntax A with judgements in
- the module body {...}.
-concrete C of A = {...}, concrete syntax C of the
- abstract syntax A, with judgements in the module body {...}.
-
-Each module is written in a file named Modulename.gf.
-
-Rules in a module body are called judgements. Keywords such as
-fun and lin are used for distinguishing between
-judgement forms. Here is a summary of the most important
-judgement forms, which we have considered by now:
-
| form | -reading | -module type | -|
|---|---|---|---|
cat C |
-C is a category | -abstract | -|
fun f : A |
-f is a function of type A | -abstract | -|
lincat C = T |
-category C has linearization type T | -concrete | -|
lin f x1 ... xn = t |
-function f has linearization t | -concrete | -|
flags p = v |
-flag p has value v | -any | -|
-Both abstract and concrete modules may moreover contain comments of the forms -
--- anything until a newline
-{- anything except hyphen followed by closing brace -}
--Judgements are terminated by semicolons. Shorthands permit the sharing of -the keyword in subsequent judgements, -
-- cat C ; D ; === cat C ; cat D ; --
-and of the right-hand-side in subsequent judgements of the same form -
-- fun f, g : A ; === fun f : A ; g : A ; --
-We will use the symbol === to indicate syntactic sugar when
-speaking about GF. Thus it is not a symbol of the GF language.
-
-Each judgement declares a name, which is an identifier.
-An identifier is a letter followed by a sequence of letters, digits, and
-characters ' or _. Each identifier can only be
-defined once in the same module (that is, as next to the judgement keyword;
-local variables such as those in lin judgemenrs can be
-reused in other judgements).
-
-Names are in scope in the rest of the module, i.e. usable in the other -judgements of the module (subject to type restrictions, of course). Also -the name of the module is an identifier in scope. -
--The order of judgements in a module is free. In particular, an identifier -need not be declared before it is used. -
- -
-A type in an abstract syntax are either a basic type,
-i.e. one introduced in a cat judgement, or a
-function type of the form
-
- A1 -> ... -> An -> A --
-where each of A1, ..., An, A is a basic type.
-The last type in the arrow-separated sequence
-is the value type of the function type, and the earlier types are
-its argument types.
-
-In a concrete syntax, the available types include -
-Str
-{ r1 : T1 ; ... ; rn : Tn }
--Token lists are often briefly called strings. -
--Each semi-colon separated part in a record type is called a -field. The identifier introduced by the left-hand-side of a field -is called a label. -
--A term in abstract syntax is a function application of form -
-- f a1 ... an --
-where f is a function declared in a fun judgement and a1 ... an
-are terms. These terms are also called abstract syntax trees, or just
-trees.
-The tree above is well-typed and has the type A, if
-
- f : A1 -> ... -> An -> A --
-and each ai has type an.
-
-A term used in concrete syntax has one the forms -
-"foo", of type Str
-"foo" ++ "bar",
-{ r1 = t1 ; ... ; rn = Tn },
- of type { r1 : R1 ; ... ; rn : Rn }
-t.r of a term t that has a record type,
- with the record label r; the projection has the corresponding record
- field type
-x bound by the left-hand-side of a lin rule,
- of the corresponding linearization type
-
-Each quoted string is treated as one token, and strings concatenated by
-++ are treated as separate tokens. Tokens are, by default, written with
-a space in between. This behaviour can be changed by lexer and unlexer
-flags, as will be explained later "Rseclexing. Therefore it is usually
-not correct to have a space in a token. Writing
-
- "hello world" --
-in a grammar would give the parser the task to find a token with a space
-in it, rather than two tokens "hello" and "world". If the latter is
-what is meant, it is possible to use the shorthand
-
- ["hello world"] === "hello" ++ "world" --
-The empty string is denoted by [] or, equivalently, `` or ``[].
-
-An important functionality of the GF system is static type checking. -This means that the grammars are controlled to be well-formed, so that all -run-time errors are eliminated. The main type checking principles are the -following: -
-lincat of each cat and a lin
- for each fun in the abstract syntax that it is "of"
-lin rules are type checked with respect to the lincat and fun
- rules
-
-In this chapter, we will write a grammar that has much more structure than
-the Hello grammar. We will look at how the abstract syntax
-is divided into suitable categories, and how infinitely many
-phrases can be generated by using recursive rules. We will also
-introduce modularity by showing how a grammar can be
-divided into modules, and how functional programming
-can be used to share code in and among modules.
+Goals:
+ +
+-We will write a grammar that -defines a set of phrases usable for speaking about food: +Phrases usable for speaking about food:
Phrase
Phrase can be built by assigning a Quality to an Item
(e.g. this cheese is Italian)
-Item are build from a Kind by prefixing this or that
+Item is built from a Kind by prefixing this or that
(e.g. this wine)
Kind is either atomic (e.g. cheese), or formed by
qualifying a given Kind with a Quality (e.g. Italian cheese)
@@ -1062,7 +895,7 @@ defines a set of phrases usable for speaking about food:
-These verbal descriptions can be expressed as the following abstract syntax: +Abstract syntax:
abstract Food = {
@@ -1082,23 +915,18 @@ These verbal descriptions can be expressed as the following abstract syntax:
}
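The body of Food is elided between hunks. A sketch reconstructed from the verbal description and the rule names appearing in the example trees; an approximation of the tutorial code:

```
abstract Food = {

  flags startcat = Phrase ;

  cat Phrase ; Item ; Kind ; Quality ;

  fun
    Is    : Item -> Quality -> Phrase ;  -- this wine is expensive
    This, That : Kind -> Item ;          -- this wine, that cheese
    QKind : Quality -> Kind -> Kind ;    -- Italian cheese (recursive)
    Very  : Quality -> Quality ;         -- very very expensive (recursive)
    Wine, Cheese, Fish : Kind ;
    Fresh, Warm, Italian, Expensive, Delicious, Boring : Quality ;
}
```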
-In this abstract syntax, we can build Phrases such as
+An example Phrase:
Is (This (QKind Delicious (QKind Italian Wine))) (Very (Very Expensive))
-
--In the English concrete syntax, we will want to linearize this into -
-
this delicious Italian wine is very very expensive
-
--The English concrete syntax gives no surprises: +
+ +
concrete FoodEng of Food = {
@@ -1122,8 +950,12 @@ The English concrete syntax gives no surprises:
Boring = {s = "boring"} ;
}
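Again, the elided English concrete syntax can be sketched from the fragment shown (Boring = {s = "boring"}) and the linearizations in the parsing examples; word order in the rules is inferred, not copied:

```
concrete FoodEng of Food = {

  lincat Phrase, Item, Kind, Quality = {s : Str} ;

  lin
    Is item quality    = {s = item.s ++ "is" ++ quality.s} ;
    This kind          = {s = "this" ++ kind.s} ;
    That kind          = {s = "that" ++ kind.s} ;
    QKind quality kind = {s = quality.s ++ kind.s} ;  -- English: quality first
    Very quality       = {s = "very" ++ quality.s} ;
    Wine      = {s = "wine"} ;      Cheese    = {s = "cheese"} ;
    Fish      = {s = "fish"} ;      Fresh     = {s = "fresh"} ;
    Warm      = {s = "warm"} ;      Italian   = {s = "Italian"} ;
    Expensive = {s = "expensive"} ; Delicious = {s = "delicious"} ;
    Boring    = {s = "boring"} ;
}
```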
+
-Let us test how the grammar works in parsing: + +
++Test the grammar for parsing:
> import FoodEng.gf
@@ -1131,8 +963,7 @@ Let us test how the grammar works in parsing:
Is (This (QKind Delicious Wine)) (Very (Very Italian))
-We can also try parsing in other categories than the startcat,
-by setting the command-line cat flag:
+Parse in other categories by setting the cat flag:
p -cat=Kind "very Italian wine"
@@ -1140,32 +971,32 @@ by setting the command-line cat flag:
-Exercise. Extend the Food grammar by ten new food kinds and
+
+
Food grammar by ten new food kinds and
qualities, and run the parser with new kinds of examples.
-
--Exercise. Add a rule that enables question phrases of the form +
+-Exercise. Enable the optional prefixing of +
++
- +-When we have a grammar above a trivial size, especially a recursive -one, we need more efficient ways of testing it than just by parsing -sentences that happen to come to our minds. One way to do this is -based on automatic generation, which can be either -random generation or exhaustive generation. -
-
-Random generation (generate_random = gr) is an operation that
-builds a random tree in accordance with an abstract syntax:
+Random generation (generate_random = gr):
+build a random tree in accordance with an abstract syntax:
> generate_random
@@ -1179,55 +1010,41 @@ By using a pipe, random generation can be fed into linearization:
this Italian fish is fresh
-Random generation is a good way to test a grammar. It can also give results -that are surprising, which shows how fast we lose intuition -when we write complex grammars. -
-
-By using the number flag, several trees can be generated
-in one command:
+Use the number flag to generate several trees:
- > gr -number=10 | l
+ > gr -number=4 | l
that wine is boring
that fresh cheese is fresh
that cheese is very boring
this cheese is Italian
- that expensive cheese is expensive
- that fish is fresh
- that wine is very Italian
- this wine is Italian
- this cheese is boring
- this fish is boring
+
++ +
To generate all phrases that a grammar can produce,
-GF provides the command generate_trees = gt.
+use generate_trees = gt.
> generate_trees | l
that cheese is very Italian
that cheese is very boring
that cheese is very delicious
- that cheese is very expensive
- that cheese is very fresh
...
- this wine is expensive
this wine is fresh
this wine is warm
-
-We get quite a few trees but not all of them: only up to a given
-depth of trees. The default depth is 3; the depth can be
+The default depth is 3; the depth can be
set by using the depth flag:
> generate_trees -depth=5 | l
-Other options to the generation commands (like all commands) can be seen
-by GF's help = h command:
+The options of any command can be seen with the help = h command:
> help gr
@@ -1235,24 +1052,26 @@ by GF's help = h command:
-Exercise. If the command gt generated all
-trees in your grammar, it would never terminate. Why?
+
-Exercise. Measure how many trees the grammar gives with depths 4 and 5, + +
+Exercise. If the command gt generated all
+trees in your grammar, it would never terminate. Why?
+
+Exercise. Measure how many trees the grammar gives with depths 4 and 5; use wc to count lines.
-
-
--A pipe of GF commands can have any length, but the "output type" -(either string or tree) of one command must always match the "input type" -of the next command, in order for the result to make sense. +
+ +
-The intermediate results in a pipe can be observed by putting the
-tracing option -tr to each command whose output you
+Put the tracing option -tr to each command whose output you
want to see:
@@ -1263,7 +1082,7 @@ want to see:
Is (This Cheese) Boring
-This facility is useful for test purposes: the pipe above can show +Useful for test purposes: the pipe above can show if a grammar is ambiguous, i.e. contains strings that can be parsed in more than one way.
@@ -1271,45 +1090,43 @@ contains strings that can be parsed in more than one way. Exercise. Extend theFood grammar so that it produces ambiguous
strings, and try out the ambiguity test.
-
++ +
+
-To save the outputs of GF commands into a file, you can
-pipe it to the write_file = wf command,
+To save the outputs into a file, pipe them to the write_file = wf command,
> gr -number=10 | linearize | write_file exx.tmp
-You can read the file back to GF with the
-read_file = rf command,
+To read a file to GF, use the read_file = rf command,
> read_file exx.tmp | parse -lines
-Notice the flag -lines given to the parsing
-command. This flag tells GF to parse each line of
-the file separately. Without the flag, the grammar could
-not recognize the string in the file, because it is not
-a sentence but a sequence of ten sentences.
+The flag -lines tells GF to parse each line of
+the file separately.
Files with examples can be used for regression testing -of grammars. The most systematic way to do this is by -generating treebanks; see here. +of grammars; the most systematic way to do this is by +generating treebanks; see here.
- ++ +
+
-The gibberish code with parentheses returned by the parser does not
-look like trees. Why is it called so? From the abstract mathematical
-point of view, trees are a data structure that
-represents nesting: trees are branching entities, and the branches
-are themselves trees. Parentheses give a linear representation of trees,
-useful for the computer. But the human eye may prefer to see a visualization;
-for this purpose, GF provides the command visualize_tree = vt, to which
-parsing (and any other tree-producing command) can be piped:
+Parentheses give a linear representation of trees,
+useful for the computer.
+
+The human eye may prefer a visualization: visualize_tree = vt:
> parse "this delicious cheese is very Italian" | visualize_tree
@@ -1323,53 +1140,54 @@ This command uses the programs Graphviz and Ghostview, which you
might not have, but which are freely available on the web.
-Alternatively, you can print the tree into a file
-e.g. a .png file that
-can be be viewed with e.g. an HTML browser and also included in an
-HTML document. You can do this
-by saving the file grphtmp.dot, which the command vt
-produces. Then you can process this file with the dot
+You can save the temporary file grphtmp.dot,
+which the command vt produces.
+
+
+Then you can process this file with the dot
program (from the Graphviz package).
% dot -Tpng grphtmp.dot > mytree.png
-
+
+
+
+
System commands
-If you don't have Ghostview, or want to view graphs in some other way,
-you can call dot and a suitable
-viewer (e.g. open in Mac) without leaving GF, by using
-a system command: ! followed by a Unix command,
+You can give a system command without leaving GF:
+! followed by a Unix command,
> ! dot -Tpng grphtmp.dot > mytree.png
> ! open mytree.png
-Another form of system commands are those that receive arguments from
-GF pipes. The escape symbol
-is then ?.
+Another form of system command receives arguments from
+GF pipes; the escape symbol is then ?.
> generate_trees | ? wc
-Exercise. (Exercise drom 3.3.1 revisited.)
+Exercise.
Measure how many trees the grammar FoodEng gives with depths 4 and 5,
respectively. Use the Unix word count command wc to count lines, and
a pipe from a GF command into a Unix command.
-
+
+
+
+
An Italian concrete syntax
-We write the Italian grammar in a straightforward way, by replacing
-English words with their dictionary equivalents:
+Just (?) replace English words with their dictionary equivalents:
concrete FoodIta of Food = {
@@ -1394,41 +1212,52 @@ English words with their dictionary equivalents:
Boring = {s = "noioso"} ;
}
+
-An alert reader, or one who already knows Italian, may notice one point in
-which the change is more substantial than just replacement of words: the order of
-a quality and the kind it modifies in
+
+
+
+Not just replacing words:
+
+
+The order of a quality and the kind it modifies is changed in
QKind quality kind = {s = kind.s ++ quality.s} ;
-Thus Italian says vino italiano for Italian wine. (Some Italian adjectives
-are put before the noun. This distinction can be controlled by parameters, which
-are introduced in the fourth chapter.)
+Thus Italian says vino italiano for Italian wine.
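For comparison, the corresponding English rule in FoodEng puts the quality first; the same abstract function QKind thus gets two different concrete orders:

  -- English (FoodEng): quality before kind, as in "Italian wine"
  QKind quality kind = {s = quality.s ++ kind.s} ;
  -- Italian (FoodIta): kind before quality, as in "vino italiano"
  QKind quality kind = {s = kind.s ++ quality.s} ;

The abstract syntax is shared; only the concrete syntaxes decide the word order.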
-Exercise. Write a concrete syntax of Food for some other language.
-You will probably end up with grammatically incorrect linearizations --- but don't
+(Some Italian adjectives
+are put before the noun. This distinction can be controlled by parameters,
+which are introduced in Lesson 3.)
+
+
+
+
+
+Exercises on multilinguality
+
+- Write a concrete syntax of
Food for some other language.
+You will probably end up with grammatically incorrect
+linearizations --- but don't
worry about this yet.
-
-
-Exercise. If you have written Food for German, Swedish, or some
+
+ - If you have written
Food for German, Swedish, or some
other language, test with random or exhaustive generation what constructs
come out incorrect, and prepare a list of those ones that cannot be helped
with the currently available fragment of GF. You can return to your list
-after having worked out the fourth chapter.
+after having worked out Lesson 3.
+
+
+
+
-
+
Free variation
-Sometimes there are alternative ways to define a concrete syntax.
-For instance, if we use the Food grammars in a restaurant phrase
-book, we may want to accept different words for expressing the quality
-"delicious" ---- and different languages can differ in how many
-such words they have. Then we don't want to put the distinctions into
-the abstract syntax, but into concrete syntaxes. Such semantically
-neutral distinctions are known as free variation in linguistics.
+Free variation: semantically indistinguishable ways of expressing the same thing.
The variants construct of GF expresses free variation. For example,
@@ -1437,11 +1266,9 @@ The variants construct of GF expresses free variation. For example,
lin Delicious = {s = variants {"delicious" ; "exquisit" ; "tasty"}} ;
-says that Delicious can be linearized to any of delicious,
-exquisit, and tasty. As a consequence, both these words result in the
-tree Delicious when parsed. By default, the linearize command
+By default, the linearize command
shows only the first variant from each variants list; to see them
-all, the option -all can be used:
+all, use the option -all:
> p "this exquisit wine is delicious" | l -all
@@ -1450,34 +1277,7 @@ all, the option -all can be used:
...
-In linguistics, it is well known that free variation is almost
-non-existing, if all aspects of expressions are taken into account, including style.
-Therefore, free variation should not be used in grammars that are meant as
-libraries for other grammars, as in the fifth chapter. However, in a specific
-application, free variation is an excellent way to scale up the parser to
-variations in user input that make no difference in the semantic
-treatment.
-
-
-An example that clearly illustrates these points is the
-English negation. If we added to the Food grammar the negation
-of a quality, we could accept both contracted and uncontracted not:
-
- fun IsNot : Item -> Quality -> Phrase ;
- lin IsNot item qual =
- {s = item.s ++ variants {"isn't" ; ["is not"]} ++ qual.s} ;
-
-
-Both forms are likely to occur in user input. Since there is no
-corresponding contrast in Italian, we do not want to put the distinction
-in the abstract syntax. Yet there is a stylistic difference between
-these two forms. In particular, if we are doing generation rather
-than parsing, we will want to choose the one or
-the other depending on the kind of language we want to generate.
-
-
-A limiting case of free variation is an empty variant list
+Limiting case: an empty variant list
variants {}
@@ -1490,23 +1290,18 @@ Free variation works for all types in concrete syntax; all terms in
a variants list must be of the same type.
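For example (a sketch, not code from the tutorial), an empty variant list can make a constructor unsayable in one concrete syntax while it remains in the abstract syntax:

  lin Boring = {s = variants {}} ;

A tree containing Boring then gets no string in this language, and no input string parses to it, while other languages can still linearize it.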
-Exercise. Modify FoodIta in such a way that a quality can
-be assigned to an item by using two different word orders, exemplified
-by questo vino è delizioso and è delizioso questo vino
-(a real variation in Italian),
-and that it is impossible to say that something is boring
-(a rather contrived example).
+
-
+
More application of multilingual grammars
-
+
Multilingual treebanks
-A multilingual treebank is a set of trees with their
-translations in different languages:
+Multilingual treebank: a set of trees with their
+linearizations in different languages:
> gr -number=2 | tree_bank
@@ -1523,12 +1318,16 @@ translations in different languages:
There is also an XML format for treebanks and a set of commands
suitable for regression testing; see help tb for more details.
-
+
+
+
+
Translation session
-If translation is what you want to do with a set of grammars, a convenient
-way to do it is to open a translation_session = ts. In this session,
+translation_session = ts:
you can translate between all the languages that are in scope.
+
+
A dot . terminates the translation session.
@@ -1546,14 +1345,15 @@ A dot . terminates the translation session.
>
-
+
+
+
+
Translation quiz
-This is a simple language exercise that can be automatically
-generated from a multilingual grammar. The system generates a set of
-random sentences, displays them in one language, and checks the user's
-answer given in another language. The command translation_quiz = tq
-makes this in a subshell of GF.
+translation_quiz = tq:
+generate random sentences, display them in one language, and check the user's
+answer given in another language.
> translation_quiz FoodEng FoodIta
@@ -1577,25 +1377,23 @@ makes this in a subshell of GF.
this fish is expensive
-You can also generate a list of translation exercises and save it in a
-file for later use, by the command translation_list = tl
+Off-line list of translation exercises: translation_list = tl
> translation_list -number=25 FoodEng FoodIta | write_file transl.txt
+
-The number flag gives the number of sentences generated.
+
-
+
Multilingual syntax editing
-Any multilingual grammar can be used in the graphical syntax editor, which is
-opened by the shell
-command gfeditor followed by the names of the grammar files.
-Thus
+Any multilingual grammar can be used in the graphical syntax editor, opened
+from Unix shell:
% gfeditor FoodEng.gf FoodIta.gf
@@ -1604,34 +1402,26 @@ Thus
opens the editor for the two Food grammars.
-The editor supports commands for manipulating an abstract syntax tree.
-The process is started by choosing a category from the "New" menu.
-Choosing Phrase creates a new tree of type Phrase. A new tree
-is in general completely unknown: it consists of a metavariable
-?1. However, since the category Phrase in Food has
-only one possible constructor, Is, the tree is readily
-given the form Is ?1 ?2. Here is what the editor looks like at
-this stage:
+First choose a category from the "New" menu, e.g. Phrase:
-
+
-Editing goes on by refinements, i.e. choices of constructors from
-the menu, until no metavariables remain. Here is a tree resulting from the
-current editing session:
+Then make refinements: choose constructors from
+the menu, until no metavariables (question marks) remain:
-
+
-Editing can be continued even when the tree is finished. The user can shift
-the focus to some of the subtrees by clicking at it or the corresponding
-part of a linearization. In the picture, the focus is on "fish".
-Since there are no metavariables,
-the menu shows no refinements, but some other possible actions:
+
+
+
+Editing can be continued even when the tree is finished. The user can
+shift the focus to a subtree by clicking it or the corresponding
+part of the linearization.
-In addition to menu-based editing, the tool supports refinement by parsing,
-which is accessible by middle-clicking in the tree or in the linearization field.
+Also: refinement by parsing: middle-click
+in the tree or in the linearization field.
Exercise. Construct the sentence
this very expensive cheese is very very delicious
and its Italian translation by using gfeditor.
-Readers not familar with context-free grammars, also known as BNF grammars, can
-skip this section. Those that are familar with them will find here the exact
-relation between GF and context-free grammars. We will moreover show how
-the BNF format can be used as input to the GF program; it is often more
-concise than GF proper, but also more restricted in expressive power.
+
- + +
The grammar FoodEng could be written in a BNF format as follows:
@@ -1678,51 +1464,22 @@ The grammar FoodEng could be written in a BNF format as follows:
Warm. Quality ::= "warm" ;
-In this format, each rule is prefixed by a label that gives
-the constructor function GF gives in its fun rules. In fact,
-each context-free rule is a fusion of a fun and a lin rule:
-it states simultaneously that
-
- fun Is : Item -> Quality -> Phrase --
- lin Is item quality = {s = item.s ++ "is" ++ quality.s}
-
-
-The translation from BNF to GF described above is in fact used in
-the GF system to convert BNF grammars into GF. BNF files are recognized
-by the file name suffix .cf; thus the grammar above can be
-put into a file named food.cf and read into GF by
+The GF system can convert BNF grammars into GF. BNF files are recognized
+by the file name suffix .cf:
> import food.cf
-
-
-
-Even though we managed to write FoodEng in the context-free format,
-we cannot do this for GF grammars in general. It is enough to try this
-with FoodIta at the same time as FoodEng,
-we lose an important aspect of multilinguality:
-that the order of constituents is defined only in concrete syntax.
-Thus we could not use context-free FoodEng and FoodIta in a multilingual
-grammar that supports translation via common abstract syntax: the
-qualification function QKind has different types in the two
-grammars.
+It creates separate abstract and concrete modules.
-In general terms, the separation of concrete and abstract syntax allows
+
+Separating concrete and abstract syntax allows
three deviations from context-free grammar:
-The third property is the one that definitely shows that GF is
-stronger than context-free: GF can define the copy language
-{x x | x <- (a|b)*}, which is known not to be context-free.
-The other properties have more to do with the kind of trees that
-the grammar can associate with strings: permutation is important
-in multilingual grammars, and suppression is exploited in grammars
-where trees carry some hidden semantic information (see the sixth chapter
-below).
+Exercise. Define the non-context-free
+copy language {x x | x <- (a|b)*} in GF.
-Of course, context-free grammars are also restricted from the
-grammar engineering point of view. They give no support to
-modules, functions, and parameters, which are so central
-for the productivity of GF. Despite the initial conciseness
-of context-free grammars, GF can easily produce grammars where
-30 lines of GF code would need hundreds of lines of
-context-free grammar code to produce; see exercises
-here and here.
+
-
-Exercise. GF can also interpret unlabelled BNF grammars, by
-creating labels automatically. The right-hand sides of BNF rules
-can moreover be disjunctions, e.g.
-
-  Quality ::= "fresh" | "Italian" | "very" Quality ;
-
-Experiment with this format in GF, possibly with a grammar that
-you import from some other source, such as a programming language
-document.
-
-
-Exercise. Define the copy language {x x | x <- (a|b)*} in GF.
-
-GF uses suffixes to recognize different file formats. The most
-important ones are:
+GF uses suffixes to recognize different file formats:
.gf
@@ -1779,8 +1506,7 @@ important ones are:
-When you import FoodEng.gf, you see the target files being
-generated:
+Importing generates target from source:
> i FoodEng.gf
@@ -1788,15 +1514,10 @@ generated:
- compiling FoodEng.gf... wrote file FoodEng.gfc 20 msec
-You also see that the GF program does not only read the file
-FoodEng.gf, but also all other files that it
-depends on --- in this case, Food.gf.
+The GFC format (="GF Canonical") is the "machine code" of GF.
-For each file that is compiled, a .gfc file
-is generated. The GFC format (="GF Canonical") is the
-"machine code" of GF, which is faster to process than
-GF source files. When reading a module, GF decides whether
+When reading a module, GF decides whether
to use an existing .gfc file or to generate
a new one, by looking at modification times.
gfo, "GF object".
+ +
+
Exercise. What happens when you import FoodEng.gf for
a second time? Try this in different situations:
Food.gf.
+ +
+
-When writing a grammar, you have to type lots of
-characters. You have probably
-done this by the copy-and-paste method, which is a universally
-available way to avoid repeating work.
-
-However, there is a more elegant way to avoid repeating work than
-the copy-and-paste
-method. The golden rule of functional programming says that
-
-A function separates the shared parts of different computations from the
-changing parts, its arguments, or parameters.
-In functional programming languages, such as
-Haskell, it is possible to share much more
-code with functions than in languages such as C and Java, because
-of higher-order functions (functions that takes functions as arguments).
-
- +
-GF is a functional programming language, not only in the sense that
-the abstract syntax is a system of functions (fun), but also because
-functional programming can be used when defining concrete syntax. This is
-done by using a new form of judgement, with the keyword oper (for
+The golden rule of functional programming:
+
+Whenever you find yourself programming by copy-and-paste, write a function instead.
+
+Functions in concrete syntax are defined using the keyword oper (for
operation), distinct from fun for the sake of clarity.
-Here is a simple example of an operation:
+
+Example:
oper ss : Str -> {s : Str} = \x -> {s = x} ;
The operation can be applied to an argument, and GF will
-compute the application into a value. For instance,
+compute the value:
ss "boy" ===> {s = "boy"}
-We use the symbol === to indicate how an expression is
-computed into a value; this symbol is not a part of GF.
+The symbol ===> will be used for computation.
-Thus an oper judgement includes the name of the defined operation,
-its type, and an expression defining it. As for the syntax of the defining
-expression, notice the lambda abstraction form \x -> t of
-the function. It reads: function with variable x and function body
-t. Any occurrence of x in t is said to be bound in t.
+
+Notice the lambda abstraction form
+\x -> t
+
+This is read: function with variable x and body t.
+For lambda abstraction with multiple arguments, we have the shorthand
@@ -1883,28 +1598,21 @@ For lambda abstraction with multiple arguments, we have the shorthand
  \x,y -> t === \x -> \y -> t
-The notation we have used for linearization rules, where
-variables are bound on the left-hand side, is actually syntactic
+Linearization rules actually use syntactic
sugar for abstraction:
lin f x = t === lin f = \x -> t
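For example, with the QKind rule of FoodEng, the two notations are interchangeable:

  lin QKind quality kind = {s = quality.s ++ kind.s} ;
  lin QKind = \quality,kind -> {s = quality.s ++ kind.s} ;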
-
++ +
+-Operator definitions can be included in a concrete syntax. -But they are usually not really tied to a particular -set of linearization rules. -They should rather be seen as resources -usable in many concrete syntaxes. -
-
The resource module type is used to package
-oper definitions into reusable resources. Here is
-an example, with a handful of operations to manipulate
-strings and records.
+oper definitions into reusable resources.
resource StringOper = {
@@ -1916,15 +1624,14 @@ strings and records.
}
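The elided body plausibly contains operations like these (a sketch: ss and prefix are names used elsewhere in the text, but SS and the exact definitions are assumptions):

  oper
    SS : Type = {s : Str} ;
    ss : Str -> SS = \x -> {s = x} ;
    prefix : Str -> SS -> SS = \p,x -> ss (p ++ x.s) ;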
-
++ +
+
Any number of resource modules can be
-opened in a concrete syntax, which
-makes definitions contained
-in the resource usable in the concrete syntax. Here is
-an example, where the resource StringOper is
-opened in a new version of FoodEng.
+opened in a concrete syntax.
concrete FoodEng of Food = open StringOper in {
@@ -1951,52 +1658,34 @@ opened in a new version of FoodEng.
-Exercise. Use the same string operations to write FoodIta
-more concisely.
+
-GF, like Haskell, permits partial application of
-functions. An example of this is the rule
+The rule
lin This k = prefix "this" k ;
-which can be written more concisely +can be written more concisely
lin This = prefix "this" ;
-The first form is perhaps more intuitive to write
-but, once you get used to partial application, you will appreciate its
-conciseness and elegance. The logic of partial application
-is known as currying, with a reference to Haskell B. Curry.
-The idea is that any n-place function can be seen as a 1-place
-function whose value is an n-1 place function. Thus
-
-  oper prefix : Str -> SS -> SS ;
-
-can be used as a 1-place function that takes a Str into a
-function SS -> SS. The expected linearization of This is exactly
-a function of such a type, operating on an argument of type Kind
-whose linearization is of type SS. Thus we can define the
-linearization directly as prefix "this".
+Part of the art in functional programming:
+decide the order of arguments in a function,
+so that partial application can be used as much as possible.
-An important part of the art of functional programming is to decide the order
-of arguments in a function, so that partial application can be used as much
-as possible. For instance, of the operation prefix we know that it
-will be typically applied to linearization variables with constant strings.
-This is the reason to put the Str argument before the SS argument --- not
-the prefixity. A postfix function would have exactly the same order of arguments.
+For instance, prefix is typically applied to
+linearization variables with constant strings. Hence we
+put the Str argument before the SS argument.
Exercise. Define an operation infix analogous to prefix,
@@ -2006,21 +1695,19 @@ such that it allows you to write
lin Is = infix "is" ;
+ +
+
-To test a resource module independently, you must import it
-with the flag -retain, which tells GF to retain oper definitions
-in the memory; the usual behaviour is that oper definitions
-are just applied to compile linearization rules
-(this is called inlining) and then thrown away.
+Import with the flag -retain,
> import -retain StringOper.gf
-The command compute_concrete = cc computes any expression
-formed by operations and other GF constructs. For example,
+Compute the value with compute_concrete = cc,
> compute_concrete prefix "in" (ss "addition")
@@ -2029,18 +1716,18 @@ formed by operations and other GF constructs. For example,
}
-
++ +
+
-The module system of GF makes it possible to write a new module that extends
-an old one. The syntax of extension is
-shown by the following example. We extend Food into MoreFood by
-adding a category of questions and two new functions.
+A new module can extend an old one:
abstract Morefood = Food ** {
@@ -2066,15 +1753,17 @@ be built for concrete syntaxes:
}
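A plausible sketch of the elided body, which adds a category of questions and two new functions (the names here are assumptions):

  abstract Morefood = Food ** {
    cat Question ;
    fun
      QIs : Item -> Quality -> Question ;
      Pleasant : Quality ;
  }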
-The effect of extension is that all of the contents of the extended
-and extending module are put together. We also say that the new
-module inherits the contents of the old module.
+The effect of extension: all of the contents of the extended
+and extending modules are put together.
-At the same time as extending a module of the same type, a concrete
-syntax module may open resources. Since open takes effect in
-the module body and not in the extended module, its logical place
-in the module header is after the extend part:
+In other words: the new module inherits the contents of the old module.
+
+ +
+
+Simultaneous extension and opening:
concrete MorefoodIta of Morefood = FoodIta ** open StringOper in {
@@ -2086,16 +1775,26 @@ in the module header is after the extend part:
}
-Resource modules can extend other resource modules, in the
-same way as modules of other types can extend modules of the
-same type. Thus it is possible to build resource hierarchies.
+Resource modules can extend other resource modules; thus it is
+possible to build resource hierarchies.
- ++ +
+
-Specialized vocabularies can be represented as small grammars that
-only do "one thing" each. For instance, the following are grammars
-for fruit and mushrooms
+Extend several grammars at the same time:
+
+
+ abstract Foodmarket = Food, Fruit, Mushroom ** {
+ fun
+ FruitKind : Fruit -> Kind ;
+ MushroomKind : Mushroom -> Kind ;
+ }
+
+
+where
abstract Fruit = {
@@ -2108,41 +1807,17 @@ for fruit and mushrooms
fun Cep, Agaric : Mushroom ;
}
-
-They can afterwards be combined into bigger grammars by using
-multiple inheritance, i.e. extension of several grammars at the
-same time:
-
-
- abstract Foodmarket = Food, Fruit, Mushroom ** {
- fun
- FruitKind : Fruit -> Kind ;
- MushroomKind : Mushroom -> Kind ;
- }
-
-
-The main advantages with splitting a grammar to modules are
-reusability, separate compilation, and division of labour.
-Reusability means
-that one and the same module can be put into different uses; for instance,
-a module with mushroom names might be used in a mycological information system
-as well as in a restaurant phrasebook. Separate compilation means that a module
-once compiled into .gfc need not be compiled again unless changes have
-taken place.
-Division of labour means simply that programmers that are experts in
-special areas can work on modules belonging to those areas.
-
Exercise. Refactor Food by taking apart Wine into a special
Drink module.
+ +
+
-When you have created all the abstract syntaxes and
-one set of concrete syntaxes needed for Foodmarket,
-your grammar consists of eight GF modules. To see how their
-dependences look like, you can use the command
visualize_graph = vg,
@@ -2165,8 +1840,7 @@ The graph uses
-Just as the visualize_tree = vt command, the freely available tools
-Ghostview and Graphviz are needed. As an alternative, you can again print
+You can also print
the graph into a .dot file by using the command print_multi = pm:
@@ -2174,194 +1848,46 @@ the graph into a .dot file by using the command print_multi =
 > ! dot -Tpng Foodmarket.dot > Foodmarket.png

Summary of GF language features
visualize_tree = vtcommand, the freely available tools -Ghostview and Graphviz are needed. As an alternative, you can again print +You can also print the graph into a.dotfile by using the commandprint_multi = pm:@@ -2174,194 +1848,46 @@ the graph into a- -.dotfile by using the commandprint_multi = > ! dot -Tpng Foodmarket.dot > Foodmarket.pngSummary of GF language features
- -Modules
-The general form of a module is -
- Moduletype M Of -where Moduletype is one of=(Extends**)? (openOpensin)? Body -abstract,concrete, andresource. - --If Moduletype is
-concrete, the Of-part has the formofA, -where A is the name of an abstract module. Otherwise it is empty. --The name of the module is given by the identifier M. -
--The optional Extends part is a comma-separated -list of module names, which have to be modules of -the same Moduletype. The contents of these modules are inherited by -M. This means that they are both usable in Body and exported by M, -i.e. inherited when M is inherited and available when M is opened. -(Exception:
-operandparamjudgements are not inherited from -concretemodules.) --The optional Opens part is a comma-separated -list of resource module names. The contents of these -modules are usable in the Body, but they are not exported. -
--Opening can be qualified, e.g. -
-- concrete C of A = open (P = Prelude) in ... ---This means that the names from
- -Preludeare only available in the form -P.name. This form of qualifying a name is always possible, and it can -be used to resolve name conflicts, which result when the same name is -declared in more than one module that is in scope. -Judgements
--The Body part consists of judgements. The judgement form table #secjment -is extended with the following forms: -
-
| form | -reading | -module type | -|
|---|---|---|---|
oper h : T = t |
-operation h of type T is defined as t | -resource, concrete | -|
param P = C1 | ... | Cn |
-parameter type P has constructors C1...Cn | -resource, concrete | -|
-The param judgement will be explained in the next chapter.
-
-The type part of an oper judgement can be omitted, if the type can be inferred
-by the GF compiler.
-
- oper hello = "hello" ++ "world" ; --
-As a rule, type inference works for all terms except lambda abstracts. -
-
-Lambda abstracts are expressions of the form \x -> t,
-where x is a variable bound in the expression t, which is the
-body of the lambda abstract. The type of the lambda abstract is
-A ->B, where A is the type of the variable x and
-B the type of the body t.
-
-For multiple lambda abstractions, there is a shorthand -
-- \x,y -> t === \x -> \y -> t --
-For lin judgements, there is the shorthand
-
- lin f x = t === lin f = \x -> t -- - -
-The variants construct of GF can be used to give a list of
-concrete syntax terms, of the same type, in free variation. For example,
-
- variants {["does not"] ; "doesn't"}
-
-
-A limiting case is the empty variant list variants {}.
+
-The .cf file format is used for context-free grammars, which are
-always interpretable as GF grammars. Files of this format consist of
-rules of the form
-
.)? Cat ::= RHS ;
-
-The Extended BNF format (EBNF) can also be used, in files suffixed .ebnf.
-This format does not allow user-written labels. The right-hand-side of a rule
-can contain everything that is possible in the .cf format, but also
-optional parts (p ?), sequences (p *) and non-empty sequences (p +).
-For example, the phrases in FoodEng could be recognized with the following
-EBNF grammar:
-
- Phrase ::=
- ("this" | "that") Quality* ("wine" | "cheese" | "fish") "is" Quality ;
- Quality ::=
- ("very"* ("fresh" | "warm" | "boring" | "Italian" | "expensive")) ;
-
-
-
-
-The default encoding is iso-latin-1. UTF-8 can be set by the flag coding=utf8
-in the grammar. The resource grammar libraries are in iso-latin-1, except Russian
-and Arabic, which are in UTF-8. The resources may be changed to UTF-8 in future.
-Letters in identifiers must currently be iso-latin-1.
-
-In this chapter, we will introduce the techniques needed for
-describing the inflection of words, as well as the rules by
-which correct word forms are selected in syntactic combinations.
-These techniques are already needed in a very slight extension
-of the Food grammar of the previous chapter. While explaining
-how the linguistic problems are solved for English and Italian,
-we also cover all the language constructs GF has for
-defining concrete syntax.
+Goals:
+
+It is possible to skip this chapter and go directly
+to the next, since the use of the GF Resource Grammar library
+makes it unnecessary to use parameters: they
+could be left to library implementors.
-It is in principle possible to skip this chapter and go directly
-to the next, since the use of the GF Resource Grammar library
-makes it unnecessary to use any more constructs of GF than we
-have already covered: parameters could be left to library implementors.
+
- +
-Suppose we want to say, with the vocabulary included in
-Food.gf, things like
+Plural forms are needed in things like
-The introduction of plural forms requires two things:
+This requires two things:
Different languages have different types of inflection and agreement.
-For instance, Italian has also agreement in gender (masculine vs. feminine).
-In a multilingual grammar,
-we want to express such differences between languages in the
-concrete syntax while ignoring them in the abstract syntax.
+
-To be able to do all this, we need one new judgement form
-and some new expression forms.
-We also need to generalize linearization types
-from strings to more complex types.
+In a multilingual grammar,
+we want to ignore such distinctions in abstract syntax.
Exercise. Make a list of the possible forms that nouns, adjectives, and verbs can have in some languages that you know.
- ++ +
+We define the parameter type of number in English by -using a new form of judgement: +a new form of judgement:
param Number = Sg | Pl ;
This judgement defines the parameter type Number by listing
-its two constructors, Sg and Pl (common shorthands for
-singular and plural).
+its two constructors, Sg and Pl
+(singular and plural).
-To state that Kind expressions in English have a linearization
-depending on number, we replace the linearization type {s : Str}
-with a type where the s field is a table depending on number:
+We give Kind a linearization type that has a table depending on number:
lincat Kind = {s : Number => Str} ;
-The table type Number => Str is in many respects similar to
-a function type (Number -> Str). The main difference is that the
-argument type of a table type must always be a parameter type. This means
-that the argument-value pairs can be listed in a finite table. The following
-example shows such a table:
+The table type Number => Str is similar to a function type
+(Number -> Str).
+
+Difference: the argument must be a parameter type. Then
+the argument-value pairs can be listed in a finite table.
++ +
+
+Here is a table:
lin Cheese = {
@@ -2424,42 +1956,49 @@ example shows such a table:
} ;
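The elided table body is presumably the following, consistent with the selection example shown further down:

  lin Cheese = {
    s = table {
      Sg => "cheese" ;
      Pl => "cheeses"
      }
    } ;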
-The table consists of branches, where a pattern on the
-left of the arrow => is assigned a value on the right.
+The table has branches, with a pattern on the
+left of the arrow => and a value on the right.
-The application of a table to a parameter is done by the selection
-operator !, which is computed by pattern matching: it returns
+The application of a table is done by the selection operator !.
+
+It is computed by pattern matching: return
the value from the first branch whose pattern matches the
-selection argument. For instance,
+argument. For instance,
table {Sg => "cheese" ; Pl => "cheeses"} ! Pl
===> "cheeses"
+
-As syntactic sugar for table selections, we can define the
-case expressions, which are common in functional programming and also
-handy to use in GF.
+
+Case expressions are syntactic sugar:
case e of {...} === table {...} ! e
-A parameter type can have any number of constructors, and these can
-also take arguments from other parameter types. For instance, an accurate
-type system for English verbs (except be) is
+
+Constructors can take arguments from other parameter types.
+
+Example: forms of English verbs (except be):
param VerbForm = VPresent Number | VPast | VPastPart | VPresPart ;
-This system expresses accurately the fact that only the present tense has
-number variation. (Agreement also requires variation in person, but
-this can be defined in syntax rules, by picking the singular form for third person
-singular subjects and the plural forms for all others). As an example of
-a table, here are the forms of the verb drink:
+Fact expressed: only present tense has number variation.
+
+Example table: the forms of the verb drink:
table {
@@ -2479,21 +2018,21 @@ you know. Now take some of the results and implement them by
using parameter type definitions and tables. Write them into a resource
module, which you can test by using the command compute_concrete.
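Such a resource module could look like this (a sketch: the module name MorphoTest and the layout are assumptions; the forms of drink are the standard English ones):

  resource MorphoTest = {
    param
      Number = Sg | Pl ;
      VerbForm = VPresent Number | VPast | VPastPart | VPresPart ;
    oper
      drink : VerbForm => Str = table {
        VPresent Sg => "drinks" ;
        VPresent Pl => "drink" ;
        VPast       => "drank" ;
        VPastPart   => "drunk" ;
        VPresPart   => "drinking"
        } ;
  }

After import -retain MorphoTest.gf, compute_concrete should then evaluate drink ! VPast to "drank".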
-
+
+
+
+
Inflection tables and paradigms
-All English common nouns are inflected for number, most of them in the
-same way: the plural form is obtained from the singular by adding the
-ending s. This rule is an example of
-a paradigm --- a formula telling how a class of words is inflected.
+A morphological paradigm is a formula telling how a class of
+words is inflected.
From the GF point of view, a paradigm is a function that takes
-a lemma --- also known as a dictionary form or a citation form --- and
-returns an inflection
-table of desired type. Paradigms are not functions in the sense of the
-fun judgements of abstract syntax (which operate on trees and not
-on strings), but operations defined in oper judgements.
+a lemma (dictionary form, citation form) and
+returns an inflection table.
+
+
The following operation defines the regular noun paradigm of English:
@@ -2505,15 +2044,17 @@ The following operation defines the regular noun paradigm of English:
} ;
-The gluing operator + tells that
-the string held in the variable dog and the ending "s"
-are written together to form one token. Thus, for instance,
+The gluing operator + glues strings to one token:
(regNoun "cheese").s ! Pl ===> "cheese" + "s" ===> "cheeses"
+
-A more complex example are regular verbs:
+
+
+
+A more complex example: regular verbs,
oper regVerb : Str -> {s : VerbForm => Str} = \talk -> {
@@ -2526,70 +2067,68 @@ A more complex example are regular verbs:
} ;
-Notice how a catch-all case for the past tense and the past participle
-is expressed by using a wild card pattern _. Here again, pattern matching
-tries all patterns in order until it finds a matching pattern;
-and it is the wild card that is the first match for both VPast and
-VPastPart.
+The catch-all case for the past tense and the past participle
+uses a wild card pattern _.
-Exercise. Identify cases in which the regNoun paradigm does not
+
+
+
+Exercises on morphology
+
+- Identify cases in which the
regNoun paradigm does not
apply in English, and implement some alternative paradigms.
-
-
-Exercise. Implement some regular paradigms for other languages you have
+
+ - Implement some regular paradigms for other languages you have
considered in earlier exercises.
+
+
+
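One alternative paradigm of the kind asked for above can be sketched as follows (a sketch, not the tutorial's own code; it assumes the Number type defined earlier and the Prelude operation init, which drops the last character of a token):

```gf
oper nounY : Str -> {s : Number => Str} = \fly -> {
  s = table {
    Sg => fly ;              -- "fly"
    Pl => init fly + "ies"   -- "flies"
  }
} ;
```

Applied as (nounY "fly").s ! Pl, this should compute to "flies".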
+
-
+
Using parameters in concrete syntax
-We can now enrich the concrete syntax definitions to
-comprise morphology. This will permit a more radical
-variation between languages (e.g. English and Italian)
-than just the use of different words. In general,
-parameters and linearization types are different in
-different languages --- but this does not prevent using a
-the common abstract syntax.
+Purpose: a more radical
+variation between languages
+than just the use of different words and word orders.
-We consider a grammar Foods, which is similar to
-Food, with the addition two rules for forming plural items:
+We add to the grammar Food two rules for forming plural items:
fun These, Those : Kind -> Item ;
-We also add a noun which in Italian has the feminine case; all nouns in
-Food were carefully chosen to be masculine!
+We also add a noun which in Italian has the feminine gender:
fun Pizza : Kind ;
-This noun will force us to deal with gender in the Italian grammar,
-which is what we need for the grammar to scale up for larger applications.
+This will force us to deal with gender in the Italian grammar.
-
+
+
+
+
Agreement
-In the English Foods grammar, we need just one type of parameters:
-Number as defined above. The phrase-forming rule
+In English, the phrase-forming rule
fun Is : Item -> Quality -> Phrase ;
-is affected by the number because of subject-verb agreement.
-In English, agreement says that the verb of a sentence
-must be inflected in the number of the subject. Thus we will linearize
+is affected by the number because of subject-verb agreement:
+the verb of a sentence must be inflected in the number of the subject,
Is (This Pizza) Warm ===> "this pizza is warm"
Is (These Pizza) Warm ===> "these pizzas are warm"
-Here it is the copula, i.e. the verb be that is affected. We define
-the copula as the operation
+It is the copula (the verb be) that is affected:
oper copula : Number -> Str = \n ->
@@ -2599,50 +2138,42 @@ the copula as the operation
} ;
-We don't need to inflect the copula for person and tense in this grammar.
-
-
-The form of the copula in a sentence depends on the
-subject of the sentence, i.e. the item
-that is qualified. This means that an Item must have such a number to provide.
-The obvious way to guarantee this is by including a number field in
-the linearization type:
+The subject Item must have such a number to provide to the copula:
lincat Item = {s : Str ; n : Number} ;
-Now we can write precisely the Is rule that expresses agreement:
+Now we can write
lin Is item qual = {s = item.s ++ copula item.n ++ qual.s} ;
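Tracing the rule on an example shows how the number flows from the subject to the copula (a sketch, assuming the lexical linearizations one would expect from the grammar so far):

```gf
-- Is (These Pizza) Warm
-- item.s = "these pizzas"   item.n = Pl   qual.s = "warm"
-- s = "these pizzas" ++ copula Pl ++ "warm"
--   = "these pizzas" ++ "are" ++ "warm"
```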
+
-The copula receives the number that it needs from the subject item.
+
-
+
Determiners
-Let us turn to Item subjects and see how they receive their
-numbers. The two rules
+How does an Item subject receive its number? The rules
fun This, These : Kind -> Item ;
-form Items from Kinds by adding determiners, either
-this or these. The determiners
-require different numbers of their Kind arguments: This
-requires the singular (this pizza) and These the plural
-(these pizzas). The Kind is the same in both cases: Pizza.
-Thus a Kind must have both singular and plural forms.
-The obvious way to express this is by using a table:
+add determiners, either this or these, which
+require different numbers of their Kind arguments: this pizza vs.
+these pizzas.
+
+
+Thus Kind must have both singular and plural forms:
lincat Kind = {s : Number => Str} ;
-The linearization rules for This and These can now be written
+We can write
lin This kind = {
@@ -2655,14 +2186,12 @@ The linearization rules for This and These can now be
n = Pl
} ;
+
-The grammatical relation between the determiner and the noun is similar to
-agreement, but due to some differences into which we don't go here
-it is often called government.
+
-Since the same pattern for determination is used four times in
-the FoodsEng grammar, we codify it as an operation,
+To avoid copy-and-paste, we can factor out the pattern of determination,
oper det :
@@ -2673,19 +2202,14 @@ the FoodsEng grammar, we codify it as an operation,
} ;
-Now we can write, for instance,
+Now we can write
lin This = det Sg "this" ;
lin These = det Pl "these" ;
-Notice the order of arguments that permits partial
-application (here).
-
-
-In a more lexicalized grammar, determiners would be made into a
-category of their own and given an inherent number:
+In a more lexicalized grammar, determiners would be a category:
lincat Det = {s : Str ; n : Number} ;
@@ -2695,47 +2219,38 @@ category of their own and given an inherent number:
n = det.n
} ;
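Under this treatment, the determiner words become lexical entries with an inherent number; a sketch with hypothetical constant names (Det entries are not part of the Foods abstract syntax shown here):

```gf
lin This_Det  = {s = "this"  ; n = Sg} ;
lin These_Det = {s = "these" ; n = Pl} ;
```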
+
-Linguistically motivated grammars, such as the GF resource grammars,
-usually favour lexicalized treatments of words; see here below.
-Notice that the fields of the record in Det are precisely the two
-arguments needed in the det operation.
+
-
+
Parametric vs. inherent features
-Kinds, as in general common nouns in English, have both singular
-and plural forms; what form is chosen is determined by the construction
-in which the noun is used. We say that the number is a
-parametric feature of nouns. In GF, parametric features
-appear as argument types of tables in linearization types.
+Kinds have number as a parametric feature: both singular and plural
+can be formed,
lincat Kind = {s : Number => Str} ;
-Items, as in general noun phrases in English, don't
-have variation in number. The number is instead an inherent feature,
-which the noun phrase passes to the verb. In GF, inherent features
-appear as record fields in linearization types.
+Items have number as an inherent feature: they are inherently either
+singular or plural,
lincat Item = {s : Str ; n : Number} ;
-A category can have both parametric and inherent features. As we will see
-in the Italian Foods grammar, nouns have parametric number and
-inherent gender:
+Italian Kind will have parametric number and inherent gender:
lincat Kind = {s : Number => Str ; g : Gender} ;
+
-Nothing prevents the same parameter type from appearing both
-as parametric and inherent feature, or the appearance of several inherent
-features of the same type, etc. Determining the linearization types
-of categories is one of the most crucial steps in the design of a GF
-grammar. These two conditions must be in balance:
+
+
+
+Questions to ask when designing parameters:
-Grammar books and dictionaries give good advice on existence; for instance,
-an Italian dictionary has entries such as
+Dictionaries give good advice:
-The distinction between parametric and inherent features can be stated in
-object-oriented programming terms: a linearization type is like a class,
-which has a method for linearization and also some attributes.
-In this class, the parametric features appear as arguments to the
-linearization method, whereas the inherent features appear as attributes.
+For words, inherent features are usually given as lexical information.
-For words, inherent features are usually given ad hoc as lexical information.
-For combinations, they are typically inherited from some part of the construction.
-For instance, qualified noun constructs in Italian inherit their gender from noun part
-(called the head of the construction in linguistics):
+For combinations, they are inherited from some part of the construction
+(typically the one called the head). Italian modification:
lin QKind qual kind =
@@ -2775,69 +2281,23 @@ For instance, qualified noun constructs in Italian inherit their gender from nou
} ;
-This rule uses a local definition (also known as a let expression) to
-avoid computing kind.g twice, and also to express the linguistic
-generalization that it is the same gender that is both passed to
-the adjective and inherited by the construct.
-The parametric number feature is in this rule passed to both the noun and
-the adjective. In the table, a variable pattern is used to match
-any possible number. Variables introduced in patterns are in scope in
-the right-hand sides of corresponding branches. Again, it is good to
-use a variable to express the linguistic generalization that the number
-is passed to the parts, rather than expand the table into Sg and Pl
-branches.
+Notice the local definition (let expression), which avoids
+computing kind.g twice, and the variable pattern
+n, which matches any number.
+
-Sometimes the puzzle of making agreement and government work in a grammar has
-several solutions. For instance, precedence in programming languages can
-be equivalently described by a parametric or an inherent feature
-(see here below).
+
-
-In natural language applications that use the resource grammar library,
-all parameters are hidden from the user, who thereby does not need to bother
-about them. The only thing that she has to think about is what linguistic
-categories are given as linearization types to each semantic category.
-
-
-For instance, the GF resource grammar library has a category NP of
-noun phrases, AP of adjectival phrases, and Cl of sentence-like clauses.
-In the implementation of Foods here, we will define
-
- lincat Phrase = Cl ; Item = NP ; Quality = AP ;
-
-To express that an item has a quality, we will use a resource function
-
- mkCl : NP -> AP -> Cl ;
-
-in the linearization rule:
-
- lin Is = mkCl ;
-
-In this way, we have no need to think about parameters and agreement.
-the fifth chapter will show a complete implementation of Foods by the
-resource grammar, port it to many new languages, and extend it with
-many new constructs.
-
-We repeat some of the rules above by showing the entire
-module FoodsEng, equipped with parameters. The parameters and
-operations are, for the sake of brevity, included in the same module
-and not in a separate resource. However, some string operations
-from the library Prelude are used.
+We use some string operations from the library Prelude.
- --# -path=.:prelude
-
- concrete FoodsEng of Foods = open Prelude in {
+ concrete FoodsEng of Foods = open Prelude in {
lincat
S, Quality = SS ;
@@ -2862,7 +2322,12 @@ from the library Prelude are used.
Expensive = ss "expensive" ;
Delicious = ss "delicious" ;
Boring = ss "boring" ;
-
+
+
++ +
+
param
Number = Sg | Pl ;
@@ -2887,95 +2352,59 @@ from the library Prelude are used.
} ;
}
-
-To find the Prelude library --- or in general,
-GF files located in other directories, a path directive is needed
-either on the command line or as the first line of
-the topmost file compiled.
-The paths in the path list are separated by colons (:), and every item
-is interpreted primarily relative to the current directory and, secondarily,
-to the value of GF_LIB_PATH (GF library path). Hence it is a
-good idea to make GF_LIB_PATH to point into your GF/lib/ whenever
-you start working in GF. For instance, in the Bash shell this is done by
-
- % export GF_LIB_PATH=<the location of GF/lib in your file system>
-
+
+ +
-Let us try to extend the English noun paradigms so that we can
+Let us extend the English noun paradigms so that we can
deal with all nouns, not just the regular ones. The goal is to
-provide a morphology module that is maximally easy to use when
-words are added to the lexicon. In fact, we can think of a
-division of labour where a linguistically trained grammarian
-writes a morphology and hands it over to the lexicon writer
-who knows much less about the rules of inflection.
+provide a morphology module that makes it easy to
+add words to a lexicon.
-In passing, we will introduce some new GF constructs: local definitions,
-regular expression patterns, and operation overloading.
+
-To start with, it is useful to perform data abstraction from the type
-of nouns by writing a constructor operation, a worst-case function:
+We perform data abstraction from the type
+of nouns by writing a worst-case function:
+ oper Noun : Type = {s : Number => Str} ;
+
oper mkNoun : Str -> Str -> Noun = \x,y -> {
s = table {
Sg => x ;
Pl => y
}
} ;
-
-
-This presupposes that we have defined
-
-
- oper Noun : Type = {s : Number => Str} ;
-
-
-Using mkNoun, we can define
-
- lin Mouse = mkNoun "mouse" "mice" ;
-
-and
-
-
+
oper regNoun : Str -> Noun = \x -> mkNoun x (x + "s") ;
-instead of writing the inflection tables explicitly.
+Then we can define
+
+ lincat N = Noun ;
+ lin Mouse = mkNoun "mouse" "mice" ;
+ lin House = regNoun "house" ;
+
+where the underlying types are not seen.
-Nouns like mouse-mice, are so irregular that
-it hardly makes sense to see them as instances of a
-paradigm that forms the plural from the singular form.
-But in general, as we will see, there can be different
-regular patterns in a language.
+
-The grammar engineering advantage of worst-case functions is that
-the author of the resource module may change the definitions of
-Noun and mkNoun, and still retain the
-interface (i.e. the system of type signatures) that makes it
-correct to use these functions in concrete modules. In programming
-terms, Noun is then treated as an abstract datatype:
-its definition is not available, but only an indirect way of constructing
-its objects.
-
-A case where a change of the Noun type could
-actually happen is if we introduces case (nominative or
-genitive) in the noun inflection:
+We are free to change the underlying definitions, e.g.
+add case (nominative or genitive) to noun inflection:
param Case = Nom | Gen ;
@@ -3008,29 +2437,33 @@ But up from this level, we can retain the old definitions
lin Mouse = mkNoun "mouse" "mice" ;
oper regNoun : Str -> Noun = \x -> mkNoun x (x + "s") ;
+
-which will just compute to different values now.
+
In the last definition of mkNoun, we used a case expression
-on the last character of the plural form to decide if the genitive
-should be formed with an ' (as in dogs-dogs') or with
-'s (as in mice-mice's). The expression last y
-uses the Prelude operation
+on the last character of the plural, as well as the Prelude
+operation
last : Str -> Str ;
+returning the string consisting of the last character.
+
+The case expression uses pattern matching over strings, which is supported in GF, alongside with pattern matching over parameters.
- ++ +
-Between the completely regular dog-dogs and the completely
-irregular mouse-mice, there are some
+The regular dog-dogs paradigm has predictable variations:
-One way to deal with them would be to provide alternative paradigms:
+We could provide alternative paradigms:
noun_y : Str -> Noun = \fly -> mkNoun fly (init fly + "ies") ;
noun_s : Str -> Noun = \bus -> mkNoun bus (bus + "es") ;
-The Prelude function init drops the last character of a token.
-But this solution has some drawbacks:
+(The Prelude function init drops the last character of a token.)
+
+Drawbacks:
-To help the lexicon builder in this task, the morphology programmer
-can put some intelligence in the regular noun paradigm. The easiest
-way to express this in GF is by the use of regular expression patterns:
+
+Better solution: a smart paradigm:
regNoun : Str -> Noun = \w ->
@@ -3076,9 +2512,7 @@ way to express this in GF is by the use of regular expression patterns:
mkNoun w ws
-In this definition, we have used a local definition just in order to
-structure the code, even though there is no multiple evaluation to eliminate.
-In the case expression itself, we have used
+GF has regular expression patterns:
| Q
@@ -3091,125 +2525,123 @@ the suffix "oo" prevents bamboo from matching the suffix
"o".
-Exercise. The same rules that form plural nouns in English also
+
+- The same rules that form plural nouns in English also
+apply to verbs. Refactor
+regNoun so that the analysis needed to build s-forms
is factored out as a separate oper, which is shared with
regVerb.
-
-
-Exercise. Extend the verb paradigms to cover all verb forms
+
+- Extend the verb paradigms to cover all verb forms.
+
-Exercise. Implement the German Umlaut operation on word stems.
+
+- Implement the German Umlaut operation on word stems.
+
-In the sixth chapter, we will introduce dependent function types, where -the value type depends on the argument. For this end, we need a notation +In Lesson 5, dependent function types need a notation that binds a variable to the argument type, as in
switchOff : (k : Kind) -> Action k
-Function types without
-variables are actually a shorthand notation: writing
+Function types without variables are actually a shorthand:
PredVP : NP -> VP -> S
-is shorthand for
+means
PredVP : (x : NP) -> (y : VP) -> S
-or any other naming of the variables. Actually the use of variables
-sometimes shortens the code, since they can share a type:
+or any other naming of the variables.
+
+
+Sometimes variables shorten the code, since they can share a type:
octuple : (x,y,z,u,v,w,s,t : Str) -> Str
-If a bound variable is not used, it can here, as elsewhere in GF, be replaced by -a wildcard: +If a bound variable is not used, it can be replaced by a wildcard:
octuple : (_,_,_,_,_,_,_,_ : Str) -> Str
-A good practice for functions with many arguments of the same type -is to indicate the number of arguments: +A good practice is to indicate the number of arguments:
octuple : (x1,_,_,_,_,_,_,x8 : Str) -> Str
-One can also use heuristic variable names to document what -information each argument is expected to provide. -This is very handy in the types of inflection paradigms: +For inflection paradigms, it is handy to use heuristic variable names, +looking like the expected forms:
mkNoun : (mouse,mice : Str) -> Noun
++ +
-In grammars intended as libraries, it is useful to separate oparation
-definitions from their type signatures. The user is only interested
-in the type, whereas the definition is kept for the implementor and
-the maintainer. This is possible by using separate oper fragments
-for the two parts:
+In libraries, it is useful to group type signatures separately from
+definitions. It is possible to divide an oper judgement,
oper regNoun : Str -> Noun ;
oper regNoun s = mkNoun s (s + "s") ;
-The type checker combines the two into one oper judgement to see
-if the definition matches the type. Notice that, in this syntax, it
-is moreover possible to bind the argument variables on the left hand side
-instead of using lambda abstration.
+and put the parts in different places.
-In the library module, the type signatures are typically placed in
-the beginning and the definitions in the end. A more radical separation
-can be achieved by using the interface and instance module types
-(see here): the type signatures are placed in the interface
-and the definitions in the instance.
+With the interface and instance module types
+(see here): the parts can even be put to different files.
+
+
-Large libraries, such as the GF Resource Grammar Library, may define
-hundreds of names. This can be unpractical
-for both the library author and the user: the author has to invent longer
-and longer names which are not always intuitive,
-and the author has to learn or at least be able to find all these names.
-A solution to this problem, adopted by languages such as C++,
-is overloading: one and the same name can be used for several functions.
-When such a name is used, the
-compiler performs overload resolution to find out which of
-the possible functions is meant. Overload resolution is based on
-the types of the functions: all functions that
-have the same name must have different types.
+Overloading: different functions can be given the same name, as e.g. in C++.
-In C++, functions with the same name can be scattered everywhere in the program.
-In GF, they must be grouped together in overload groups. Here is an example
-of an overload group, giving the different ways to define nouns in English:
+The compiler performs overload resolution, which works as long as the
+functions have different types.
+
+In GF, the functions must be grouped together in overload groups.
+
+Example: different ways to define nouns in English:
oper mkN : overload {
@@ -3218,16 +2650,12 @@ of an overload group, giving the different ways to define nouns in English:
}
-Intuitively, the function comes very close to the way in which
-regular and irregular words are given in most dictionaries. If the
+Cf. dictionaries: if the
word is regular, just one form is needed. If it is irregular,
-more forms are given. There is no need to use explicit paradigm
-names.
+more forms are given.
-The mkN example gives only the possible types of the overloaded
-operation. Their definitions can be given separately, possibly in another module.
-Here is a definition of the above overload group:
+The definition can be given separately, or at the same time, as the types:
oper mkN = overload {
@@ -3236,32 +2664,24 @@ Here is a definition of the above overload group:
}
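A usage sketch (the lexical constants are illustrative; overload resolution picks the branch by the number of string arguments):

```gf
lin Cheese = mkN "cheese" ;        -- one form: regular noun
lin Mouse  = mkN "mouse" "mice" ;  -- two forms: irregular noun
```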
-Notice that the types of the branches must be repeated so that they can be
-associated with proper definitions; the order of the branches has no
-significance.
-
-Exercise. Design a system of English verb paradigms presented by an overload group.
++ +
-Even though morphology is in GF
-mostly used as an auxiliary for syntax, it
-can also be useful on its own right. The command morpho_analyse = ma
-can be used to read a text and return for each word the analyses that
-it has in the current concrete syntax.
+The command morpho_analyse = ma
+can be used to read a text and return for each word its analyses
+(in the current grammar):
> read_file bible.txt | morpho_analyse
-In the same way as translation exercises, morphological exercises can
-be generated, by the command morpho_quiz = mq. Usually,
-the category is then set to some lexical category. For instance,
-French irregular verbs in the resource grammar library can be trained as
-follows:
+The command morpho_quiz = mq generates inflection exercises.
% gf -path=alltenses:prelude $GF_LIB_PATH/alltenses/IrregFre.gfc
@@ -3278,15 +2698,14 @@ follows:
Score 0/1
-Just like translation exercises, a list of morphological exercises can be generated
-off-line and saved in a
-file for later use, by the command morpho_list = ml
+To create a list for later use, use the command morpho_list = ml
> morpho_list -number=25 -cat=V | write_file exx.txt
+
-The number flag gives the number of exercises generated.
+
+The number flag gives the number of exercises generated.
-We conclude the parametrization of the Food grammar by presenting an
-Italian variant, now complete with parameters, inflection, and
-agreement.
-
-
-The header part is similar to English:
-
-
- --# -path=.:prelude
-
- concrete FoodsIta of Foods = open Prelude in {
-
-Parameters include not only number but also gender.
+ concrete FoodsIta of Foods = open Prelude in {
+
param
Number = Sg | Pl ;
Gender = Masc | Fem ;
Qualities are inflected for gender and number, whereas kinds
-have a parametric number (as in English) and an inherent gender.
-Items have an inherent number (as in English) but also gender.
+have a parametric number and an inherent gender.
+Items have an inherent number and gender.
lincat
@@ -3326,9 +2734,12 @@ Items have an inherent number (as in English) but also gender.
Kind = {s : Number => Str ; g : Gender} ;
Item = {s : Str ; g : Gender ; n : Number} ;
+
-A Quality is expressed by an adjective, which in Italian has one form for each
-gender-number combination.
+
+A Quality is an adjective, with one form for each gender-number combination.
oper
@@ -3347,8 +2758,7 @@ gender-number combination.
} ;
-The very common case of regular adjectives works by adding
-endings to the stem.
+Regular adjectives work by adding endings to the stem.
regAdj : Str -> {s : Gender => Number => Str} = \nero ->
@@ -3357,10 +2767,11 @@ endings to the stem.
-For noun inflection, there are several paradigms; since only two forms
-are ever needed, we will just give them explicitly (the resource grammar
-library also has a paradigm that takes the singular form and infers the
-plural and the gender from it).
+
+For noun inflection, we are happy to give the two forms and the gender
+explicitly:
noun : Str -> Str -> Gender -> {s : Number => Str ; g : Gender} =
@@ -3373,7 +2784,7 @@ plural and the gender from it).
} ;
-As in FoodEng, we need only number variation for the copula.
+We need only number variation for the copula.
copula : Number -> Str =
@@ -3382,10 +2793,12 @@ As in FoodEng, we need only number variation for the copula.
Pl => "sono"
} ;
+
++ +
Determination is more complex than in English, because of gender:
-it uses separate determiner forms for the two genders, and selects
-one of them as function of the noun determined.
det : Number -> Str -> Str -> {s : Number => Str ; g : Gender} ->
@@ -3396,8 +2809,12 @@ one of them as function of the noun determined.
n = n
} ;
+
-Here is, finally, the complete set of linearization rules.
+
+The complete set of linearization rules:
lin
@@ -3426,77 +2843,70 @@ Here is, finally, the complete set of linearization rules.
-Exercise. Experiment with multilingual generation and translation in the
-Foods grammars.
-
-Exercise. Add items, qualities, and determiners to the grammar, and try to get -their inflection and inherent features right. -
-
-Exercise. Write a concrete syntax of Food for a language of your choice,
-now aiming for complete grammatical correctness by the use of parameters.
-
-Exercise. Measure the size of the context-free grammar corresponding to
-FoodsIta. You can do this by printing the grammar in the context-free format
-(print_grammar -printer=cfg) and counting the lines.
+
+- Experiment with multilingual generation and translation in the
+Foods grammars.
+
+- Add items, qualities, and determiners to the grammar, and try to get
+their inflection and inherent features right.
+
+- Write a concrete syntax of Food for a language of your choice,
+now aiming for complete grammatical correctness by the use of parameters.
+
+- Measure the size of the context-free grammar corresponding to
+FoodsIta. You can do this by printing the grammar in the context-free format
+(print_grammar -printer=cfg) and counting the lines.
+
+
-A linearization type may contain more strings than one.
-An example of where this is useful are English particle
-verbs, such as switch off. The linearization of
-a sentence may place the object between the verb and the particle:
-he switched it off.
+A linearization record may contain more than one string, and those
+strings can be placed apart in linearization.
-The following judgement defines transitive verbs as
-discontinuous constituents, i.e. as having a linearization
-type with two strings and not just one.
+Example: English particle
+verbs (switch off). The object can appear between them:
+
+he switched it off
+
+The verb switch off is called a
+discontinuous constituent.
+
+We can define transitive verbs and their combinations as follows:
lincat TV = {s : Number => Str ; part : Str} ;
-
-
-In the abstract syntax, we can now have a rule that combines a subject and an
-object item with a transitive verb to form a sentence:
-
-
+
fun AppTV : Item -> TV -> Item -> Phrase ;
-
-
-The linearization rule places the object between the two parts of the verb:
-
-
+
lin AppTV subj tv obj =
{s = subj.s ++ tv.s ! subj.n ++ obj.s ++ tv.part} ;
-
-There is no restriction in the number of discontinuous constituents
-(or other fields) a lincat may contain. The only condition is that
-the fields must be built from records, tables,
-parameters, and Str, but not functions.
-
-Notice that the parsing and linearization commands only give accurate
-results for categories whose linearization type has a unique Str
-valued field labelled s. Therefore, discontinuous constituents
-are not a good idea in top-level categories accessed by the users
-of a grammar application.
-
Exercise. Define the language a^n b^n c^n in GF, i.e.
any number of a's followed by the same number of b's and
the same number of c's. This language is not context-free,
but can be defined in GF by using discontinuous constituents.
+ +
+
-A common difficulty in GF are the conditions under which tokens
-can be created. Tokens are created in the following ways:
+Tokens are created in the following ways:
"foo"
@@ -3506,10 +2916,9 @@ can be created. Tokens are created in the following ways:
-The general principle is that -tokens must be known at compile time. This means that the above operations -may not have run-time variables in their arguments. Run-time variables, in -turn, are the variables that stand for function arguments in linearization rules. +Since tokens must be known at compile time, +the above operations may not be applied to run-time variables +(i.e. variables that stand for function arguments in linearization rules).
Hence it is not legal to write
@@ -3530,200 +2939,39 @@ is incorrect with regNoun as defined here<
variable is eventually sent to string pattern matching and gluing.
-Writing tokens together without a space is an often-wanted behaviour, for instance,
-with punctuation marks. Thus one might try to write
+
+How to write tokens together without a space? Trying
lin Question p = {s = p + "?"} ;
-which is incorrect. The way to go is to use an unlexer that creates correct spacing
-after linearization. Correspondingly, a lexer that e.g. analyses "warm?" into
-to tokens is needed before parsing. This can be done by using flags:
+is incorrect.
+
+The way to go is to use an unlexer that creates correct spacing
+after linearization.
+
+
+Correspondingly, a lexer that e.g. analyses "warm?" into
+two tokens is needed before parsing. Both can be given in a grammar
+by using flags:
flags lexer=text ; unlexer=text ;
-works in the desired way for English text. More on lexers and unlexers will be
-told here.
+More on lexers and unlexers will be told here.
+
++
-
-
-A judgement of the form
-
param P = C1 X1 | ... | Cn Xn
-
-In addition to types defined in param judgements, also
-records of parameter types are parameter types. Their values are records
-of corresponding field values.
-
-Moreover, the type Ints n is a parameter type for any positive
-integer n, and its values are 0, ..., n-1.
-
-A table type P => T must have a parameter type P as
-its argument type. The normal form of an object of this type is a table
-
table { V1 => t1 ; ... ; Vm => tm }
-
-Tables with only one branch are a common special case.
-GF provides syntactic sugar for writing one-branch tables concisely:
-
-
- \\P,...,Q => t === table {P => ... table {Q => t} ...}
-
-
-
-
-We will list all forms of patterns that can be used in table branches.
-the following are available for any parameter types, as well
-as for the types Int and Str
-
{ r1 = P1 ; ... ; r1 = P1 }
- matches any record that has values of the corresponding fields.
- and binds the union of all variables bound in the subpatterns Pi
-_ matches any value
-| Q matches anything that
- either P or Q matches; bindings must be the same in both
--P matches anything that P does not match;
- no bindings are returned
-@ P matches whatever value P matches and
- binds x to this value; also the bindings in P are returned
-
-The following patterns are only available for the type Str:
-
"s", matches the same string
-+ Q matches any string that consists
- of a prefix matching P and a suffix matching Q;
- the union of bindings is returned
-* matches any string that can be decomposed
- into strings that match P; no bindings are returned
-
-The following pattern is only available for the types Int and Ints n:
-
214, matches the same integer
-
-Pattern matching is performed in the order in which the branches
-appear in the table: the branch of the first matching pattern is followed.
-The type checker reject sets of patterns that are not exhaustive, and
-warns for completely overshadowed patterns.
-To guarantee exhaustivity when the infinite types Int and Str are
-used as argument types, the last pattern must be a "catch-all" variable
-or wild card.
-
-It follows from the definition of record pattern matching
-that it can utilize partial records: the branch
-
-
- {g = Fem} => t
-
-
-in a table of type {g : Gender ; n : Number} => T means the same as
-
- {g = Fem ; n = _} => t
-
--Variables in regular expression patterns -are always bound to the first match, which is the first -in the sequence of binding lists. For example: -
-x + "e" + y matches "peter" with x = "p", y = "ter"
x + "er"* matches "burgerer" with x = "burg"
-
-Judgements of the oper form can introduce overloaded functions.
-The syntax is record-like, but all fields must have the same
-name and different types.
-
- oper mkN = overload {
- mkN : (dog : Str) -> Noun = regNoun ;
- mkN : (mouse,mice : Str) -> Noun = mkNoun ;
- }
-
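Applications of an overloaded name are then resolved from the types of the arguments; a sketch using the definitions above:

```gf
oper dog_N   : Noun = mkN "dog" ;           -- one Str argument: resolves to regNoun
oper mouse_N : Noun = mkN "mouse" "mice" ;  -- two Str arguments: resolves to mkNoun
```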
--To give just the type of an overloaded operation, the record type -syntax is used. -
-
- oper mkN : overload {
- mkN : (dog : Str) -> Noun ; -- regular nouns
- mkN : (mouse,mice : Str) -> Noun ; -- irregular nouns
- }
-
--Overloading is not possible in other forms of judgement. -
- -
-Local definitions ("let expressions") can appear in groups:
-
-  oper regNoun : Str -> Noun = \vino ->
-    let
-      vin : Str = init vino ;
-      o = last vino
-    in
-    ...
-
-The type can be omitted if it can be inferred. Later definitions may -refer to earlier ones. -
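A completed version of the sketch above, assuming a hypothetical one-field Noun type and the Prelude operations init and last:

```gf
param Number = Sg | Pl ;

oper Noun : Type = {s : Number => Str} ;

oper regNoun : Str -> Noun = \vino ->
  let
    vin : Str = init vino ;  -- "vin"
    o = last vino            -- the type Str is inferred
  in
  {s = table {Sg => vin + o ; Pl => vin + "i"}} ;
```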
- --The rest of the GF language constructs are presented for the sake -of completeness. They will not be used in the rest of this tutorial. -
+
-Record types and records can be extended with new fields. For instance,
-in German it is natural to see transitive verbs as verbs with a case, which
-is usually accusative or dative, and is passed to the object of the verb.
The symbol ** is used for both record types and record objects.
@@ -3732,27 +2980,23 @@ The symbol ** is used for both record types and record objects.
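On the type level, the extension can be sketched as follows (TV is a hypothetical lincat for transitive verbs; Verb and Case are assumed from the German resource):

```gf
lincat TV = Verb ** {c : Case} ;  -- a Verb record type extended with a case field
```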
lin Follow = regVerb "folgen" ** {c = Dative} ;
-To extend a record type or a record with a field whose label it -already has is a type error. It is also an error to extend a type or -object that is not a record. -
-
-A record type T is a subtype of another one R, if T has
-all the fields of R and possibly other fields. For instance,
-an extension of a record type is always a subtype of it.
-If T is a subtype of R, then R is a supertype of T.
+TV becomes a subtype of Verb.
If T is a subtype of R, an object of T can be used whenever an object of R is required. -For instance, a transitive verb can be used whenever a verb is required.
-Covariance means that a function returning a record T as value can +Covariance: a function returning a record T as value can also be used to return a value of a supertype R. -Contravariance means that a function taking an R as argument +
++Contravariance: a function taking an R as argument can also be applied to any object of a subtype T.
++ +
Product types and tuples are syntactic sugar for record types and records: @@ -3763,25 +3007,13 @@ Product types and tuples are syntactic sugar for record types and records:
Thus the labels p1, p2,... are hard-coded.
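For example (Gender is assumed from earlier; the tuple is mere sugar for a record):

```gf
oper noun : Str * Gender = <"casa", Fem> ;  -- same as {p1 = "casa" ; p2 = Fem}
oper stem : Str = noun.p1 ;                 -- projection uses the hard-coded label
```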
-As patterns, tuples are translated to record patterns in the
-same way as tuples to records; partial patterns make it
-possible to write, slightly surprisingly,
- case <g,n,p> of {
- <Fem> => t
- ...
- }
-
-
++ +
-Sometimes a token has different forms depending on the token
-that follows. An example is the English indefinite article,
-which is an if a vowel follows, a otherwise.
-Which form is chosen can only be decided at run time, i.e.
-when a string is actually built. GF has a special construct for
-such tokens, the pre construct exemplified in
+English indefinite article:
oper artIndef : Str =
@@ -3794,65 +3026,34 @@ Thus
artIndef ++ "cheese" ---> "a" ++ "cheese"
artIndef ++ "apple" ---> "an" ++ "apple"
--This very example does not work in all situations: the prefix -u has no general rules, and some problematic words are -euphemism, one-eyed, n-gram. Since the branches are matched in -order, it is possible to write -
-
- oper artIndef : Str =
- pre {"a" ;
- "a" / strs {"eu" ; "one"} ;
- "an" / strs {"a" ; "e" ; "i" ; "o" ; "n-"}
- } ;
-
--Somewhat illogically, the default value is given as the first element in the list. -
+Prefix-dependent choice may be deprecated in GF version 3.
- -+ +
+ +
-In this chapter, we will take a look at the GF resource grammar library.
-We will use the library to implement the Foods grammar of the
-previous chapter
-and port it to some new languages. Some new concepts of GF's module system
-are also introduced, most notably the technique of parametrized modules,
-which has become an important "design pattern" for multilingual grammars.
+Goals:
+ +
+-The GF Resource Grammar Library contains grammar rules for -10 languages. In addition, 2 languages are available as yet incomplete -implementations, and a few more are under construction. The purpose -of the library is to define the low-level morphological and syntactic -rules of languages, and thereby enable application programmers -to concentrate on the semantic and stylistic -aspects of their grammars. The guiding principle is that -
-The intended level of application grammarians -is that of a skilled programmer with -a practical knowledge of the target languages, but without -theoretical knowledge about their grammars. -Such a combination of -skills is typical of programmers who, for instance, want to localize -language software to new languages. -
--The current resource languages are +The current 12 resource languages are
Arabic (incomplete)
@@ -3870,43 +3071,46 @@ The current resource languages are
-The first three letters (Eng etc) are used in grammar module names.
-We use the three-letter codes for languages from the ISO 639 standard.
+The first three letters (Eng etc) are used in grammar module names
+(ISO 639 standard).
-The incomplete Arabic and Catalan implementations are
-sufficient for use in some applications; they both contain, among other
-things, complete inflectional morphology.
+
-
+
-So far we have looked at grammars from a semantic point of view:
-a grammar defines a system of meanings (specified in the abstract syntax) and
-tells how they are expressed in some language (as specified in the concrete syntax).
-In resource grammars, as often in the linguistic tradition, the goal is more modest:
-to specify the grammatically correct combinations of words, whatever their
-meanings are. With this more modest goal, it is possible to achieve a much
+Semantic grammars (up to now in this tutorial):
+a grammar defines a system of meanings (abstract syntax) and
+tells how they are expressed (concrete syntax).
+
++Resource grammars (as usual in linguistic tradition): +a grammar specifies the grammatically correct combinations of words, +whatever their meanings are. +
++With resource grammars, we can achieve a wider coverage than with semantic grammars.
-Given the focus on words and their combinations, -the resource grammar has two kinds of categories and two kinds of rules: + +
+ ++A resource grammar has two kinds of categories and two kinds of rules:
-Some grammar formalisms make a formal distinction between
-the lexical and syntactic
-components; sometimes it is necessary to use separate formalisms for these
-two kinds of rules. GF has no such restrictions.
-Nevertheless, it has turned out
-to be a good discipline to maintain a distinction between
-the lexical and syntactic components in the resource grammar. This fits
-also well with what is needed in applications: while syntactic structures
-are more or less the same across applications, vocabularies can be
-very different.
+GF makes no formal distinction between these two kinds.
- ++But it is a good discipline to follow. +
++ +
+
-Within lexical categories, there is a further classification
-into closed and open categories. The defining property
-of closed categories is that the
-words in them can easily be enumerated; it is very seldom that any
-new words are introduced in them. In general, closed categories
-contain structural words, also known as function words.
-Examples of closed categories are
+Two kinds of lexical categories:
+
-  QuantSg ; -- singular quantifier  e.g. "this"
-  QuantPl ; -- plural quantifier    e.g. "those"
-  AdA ;     -- adadjective          e.g. "very"
+  Conj ;    -- conjunction          e.g. "and"
+  QuantSg ; -- singular quantifier  e.g. "this"
+  QuantPl ; -- plural quantifier    e.g. "these"
-
-We have already used words of all these categories in the Food
-examples; they have just not been assigned a category, but
-treated as syncategorematic. In GF, a syncategorematic
-word is one that is introduced in a linearization rule of
-some construction alongside with some other expressions that
-are combined; there is no abstract syntax tree for that word
-alone. Thus in the rules
-
- fun That : Kind -> Item ;
-  lin That k = {s = "that" ++ k.s} ;
+ N ; -- noun e.g. "pizza"
+ A ; -- adjective e.g. "good"
+ V ; -- verb e.g. "sleep"
+
-the word that is syncategorematic. In linguistically motivated
-grammars, syncategorematic words are avoided, whereas in
-semantically motivated grammars, structural words are typically treated
-as syncategorematic. This is partly so because the function expressed
-by a structural word in one language is often expressed by some other
-means than an individual word in another. For instance, the definite
-article the is a determiner word in English, whereas Swedish expresses
-determination by inflecting the determined noun: the wine is vinet
-in Swedish.
+
--As for open categories, we will start with these two: -
-- N ; -- noun e.g. "pizza" - A ; -- adjective e.g. "good" --
-Later in this chapter we will also need verbs of different kinds. -
--Note. Having adadjectives as a closed category is not quite right, because -one can form adadjectives from adjectives: incredibly warm. -
- +
-The words of closed categories can be listed once and for all in a
-library. In the first example, the Foods grammar of the previous section,
-we will use the following structural words from the Syntax module:
+Closed classes: module Syntax. In the Foods grammar, we need
this_QuantSg, that_QuantSg : QuantSg ;
@@ -3993,50 +3171,63 @@ we will use the following structural words from the Syntax module:
very_AdA : AdA ;
-The naming convention for lexical rules is that we use a word followed by -the category. In this way we can for instance distinguish the quantifier -that from the conjunction that. +Naming convention: word followed by the category (so we can +distinguish the quantifier that from the conjunction that).
-Open lexical categories have no objects in Syntax. Such objects
-will be built as they are needed in applications. The abstract
-syntax of words in applications is already familiar, e.g.
+Open classes have no objects in Syntax. Words are
+built as they are needed in applications: if we have
fun Wine : Kind ;
-The concrete syntax can be given directly, e.g. +we will define
lin Wine = mkN "wine" ;
-by using the morphological paradigm library ParadigmsEng.
-However, there are some advantages in giving the concrete syntax
-indirectly, via the creation of a resource lexicon. In this lexicon,
-there will be entries such as
+where we use mkN from ParadigmsEng:
+
+ +
+ ++Alternative concrete syntax for +
++ fun Wine : Kind ; ++
+is to provide a resource lexicon, which contains definitions such as
oper wine_N : N = mkN "wine" ;
-which can then be used in the linearization rules, +so that we can write
lin Wine = wine_N ;
-One advantage of this indirect method is that each new word gives -an addition to a reusable resource lexicon, instead of just doing -the job of implementing the application. Another advantage will -be shown here: the possibility to write functors over -lexicon interfaces. +Advantages:
- ++ +
+
-There are just four phrasal categories needed in the first application:
+In Foods, we need just four phrasal categories:
Cl ; -- clause e.g. "this pizza is good"
@@ -4045,17 +3236,19 @@ There are just four phrasal categories needed in the first application:
AP ; -- adjectival phrase e.g. "very warm"
-Clauses are, roughly, the same as declarative sentences; we will
-define in here a sentence S as a clause that has a fixed tense.
-The distinction between common nouns and noun phrases is that common nouns
-cannot generally be used alone as subjects (?dog sleeps),
-whereas noun phrases can (the dog sleeps).
-Noun phrases can be built from common nouns by adding determiners,
-such as quantifiers; but there are also other kinds of noun phrases, e.g.
-pronouns.
+Clauses are similar to sentences (S), but without a
+fixed tense and mood.
-The syntactic combinations we need are the following: +Common nouns are made into noun phrases by adding determiners. +
++ +
+ ++We need the following combinations:
mkCl : NP -> AP -> Cl ; -- e.g. "this pizza is very warm"
@@ -4065,21 +3258,26 @@ The syntactic combinations we need are the following:
mkAP : AdA -> AP -> AP ; -- e.g. "very warm"
-To start building phrases, we need rules of lexical insertion, which -form phrases from single words: +We also need lexical insertion, to form phrases from single words:
  mkCN : N -> CN ;
mkAP : A -> AP ;
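Combining lexical insertion with the rules above, a clause can be sketched as follows (wine_N and warm_A are hypothetical lexicon entries; mkNP combining a quantifier with a CN is assumed from the Syntax API):

```gf
mkCl (mkNP this_QuantSg (mkCN wine_N))   -- "this wine"
     (mkAP very_AdA (mkAP warm_A))       -- "... is very warm"
```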
-Notice that all (or, as many as possible) operations in the resource library
-have the name mkC, where C is the value category of the operation.
-This means of course heavy overloading. For instance, the current library
-(version 1.2) has no less than 23 operations named mkNP!
+Naming convention: to construct a C, use a function mkC.
-Now the sentence
+Heavy overloading: the current library
+(version 1.2) has 23 operations named mkNP!
+
+ +
+ ++The sentence
-The task we are facing now is to define the concrete syntax of Foods so that
+The task now: to define the concrete syntax of Foods so that
this syntactic tree gives the value of linearizing the semantic tree
Is (These (QKind (Very Warm) Pizza)) Italian
-
++ +
+-The resource library API is divided into language-specific -and language-independent parts. To put it roughly, +Language-specific and language-independent parts - roughly,
SyntaxL for each language L
-SyntaxL has the same types and
+ functions for all languages L
+ParadigmsL has partly
different types and functions
- for different languages.
- Its name is ParadigmsL for each language L
+ for different languages L
-A full documentation of the API is available on-line in the
-resource synopsis.
-For the examples of this chapter, we will only need a
-fragment of the full API. The fragment needed for Foods has
-already been introduced, but let us summarize the descriptions
-by giving tables of the same form as used in the resource synopsis.
+Full API documentation on-line: the resource synopsis,
-Thus we will make use of the following categories from the module Syntax.
+digitalgrammars.com/gf/lib/resource/doc/synopsis.html
+ +
+ +| Category | @@ -4165,7 +3363,7 @@ Thus we will make use of the following categories from the module|||||
|---|---|---|---|---|---|
QuantPl |
plural quantifier | -these | +this | ||
A |
@@ -4179,10 +3377,11 @@ Thus we will make use of the following categories from the module
-We will use the following syntax rules from Syntax.
+
| Function | @@ -4226,10 +3425,11 @@ We will use the following syntax rules from
|---|
-We will use the following structural words from Syntax.
+
| Function | @@ -4263,9 +3463,13 @@ We will use the following structural words from
|---|
-For English, we will use the following part of ParadigmsEng.
+
+
+From ParadigmsEng:
-For Italian, we need just the following part of ParadigmsIta
-(Exercise). The "smart" paradigms will take care of variations
-such as formaggio-formaggi, and also infer the genders
-correctly.
+From ParadigmsIta:
-For German, we will use the following part of ParadigmsGer.
+
+
+From ParadigmsGer:
-For Finnish, we only need the smart regular paradigms:
+From ParadigmsFin:
-Exercise. Try out the morphological paradigms in different languages. Do + +
+ ++1. Try out the morphological paradigms in different languages. Do as follows:
@@ -4381,33 +3588,47 @@ as follows:
> cc mkA "gut" "besser" "beste"
-
++ +
+
-We work with the abstract syntax Foods from the fourth chapter, and
-build first an English implementation. Now we can do it without
-thinking about inflection and agreement, by just picking appropriate
+We assume the abstract syntax Foods from Lesson 3.
+
+We don't need to think about inflection and agreement, but just pick functions from the resource grammar library.
-The concrete syntax opens SyntaxEng and ParadigmsEng
-to get access to the resource libraries needed. In order to find
-the libraries, a path directive is prepended. It contains
-two resource subdirectories --- present and prelude ---
-which are found relative to the environment variable GF_LIB_PATH.
-It also contains the current directory . and the directory ../foods,
-in which Foods.gf resides.
+We need a path with
+
.
+../foods, in which Foods.gf resides.
+present, which is relative to the
+ environment variable GF_LIB_PATH
++Thus the beginning of the module is
--# -path=.:../foods:present:prelude
concrete FoodsEng of Foods = open SyntaxEng,ParadigmsEng in {
+
-As linearization types, we will use clauses for Phrase, noun phrases
+
+
+As linearization types, we use clauses for Phrase, noun phrases
for Item, common nouns for Kind, and adjectival phrases for Quality.
@@ -4418,9 +3639,7 @@ forItem, common nouns forKind, and adjectival phrase Quality = AP ;
-These types fit perfectly with the way we have used the categories -in the application; hence -the combination rules we need almost write themselves automatically: +Now the combination rules we need almost write themselves automatically:
lin
@@ -4432,11 +3651,18 @@ the combination rules we need almost write themselves automatically:
QKind quality kind = mkCN quality kind ;
Very quality = mkAP very_AdA quality ;
+
-We write the lexical part of the grammar by using resource paradigms directly.
-Notice that we have to apply the lexical insertion rules to get type-correct
-linearizations. Notice also that we need to use the two-place noun paradigm for
-fish, but everything else is regular.
+
+ ++We use resource paradigms and lexical insertion rules. +
+The two-place noun paradigm is needed only once, for
+fish - everything else is regular.
Wine = mkCN (mkN "wine") ;
@@ -4453,31 +3679,47 @@ linearizations. Notice also that we need to use the two-place noun paradigm for
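The irregular case mentioned above uses the two-argument paradigm; a sketch in the same style as the Wine rule:

```gf
Fish = mkCN (mkN "fish" "fish") ;  -- singular and plural forms given explicitly
```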
-Exercise. Compile the grammar FoodsEng and generate
+
+
+1. Compile the grammar FoodsEng and generate
and parse some sentences.
-Exercise. Write a concrete syntax of Foods for Italian
+2. Write a concrete syntax of Foods for Italian
or some other language included in the resource library. You can
compare the results with the hand-written
grammars presented earlier in this tutorial.
+ +
+
-If you did the exercise of writing a concrete syntax of Foods for some other
-language, you probably noticed that much of the code looks exactly the same
-as for English. The reason for this is that the Syntax API is the
-same for all languages. This is in turn possible because
-all languages (at least those in the resource package)
-implement the same syntactic structures. Moreover, languages tend to use the
-syntactic structures in similar ways, even though this is not exceptionless.
-But usually, it is only the lexical parts of a concrete syntax that
-we need to write anew for a new language. Thus, to port a grammar to
-a new language, you
+If you write a concrete syntax of Foods for some other
+language, much of the code will look exactly the same
+as for English. This is because
+
Syntax API is the same for all languages (because
+ all languages in the resource package do implement the same
+ syntactic structures)
++But lexical rules are more language-dependent. +
++Thus, to port a grammar to a new language, you
-Now, programming by copy-and-paste is not worthy of a functional programmer!
-So, can we write a function that takes care of the shared parts of grammar modules?
-Yes, we can. It is not a function in the fun or oper sense, but
-a function operating on modules, called a functor. This construct
-is familiar from the functional programming
-languages ML and OCaml, but it does not
-exist in Haskell. It also bears some resemblance to templates in C++.
-Functors are also known as parametrized modules.
+Can we avoid this programming by copy-and-paste?
+
+ +
+
+
+Functors, also known as parametrized modules, are familiar
+from the functional programming languages ML and OCaml.
In GF, a functor is a module that opens one or more interfaces.
-An interface is a module similar to a resource, but it only
-contains the types of opers, not their definitions. You can think
-of an interface as a kind of a record type. The oper names are the
-labels of this record type. The corresponding record is called an
-instance of the interface.
-Thus a functor is a module-level function taking instances as
-arguments and producing modules as values.
-Let us now write a functor implementation of the Food grammar.
-Consider its module header first:
+An interface is a module similar to a resource, but it only
+contains the types of opers, not (necessarily) their definitions.
+
+Syntax for functors: add the keyword incomplete. We will use the header
incomplete concrete FoodsI of Foods = open Syntax, LexFoods in
-A functor is distinguished from an ordinary module by the leading
-keyword incomplete.
-
-In the functor-function analogy, FoodsI would be presented as a function
-with the following type signature:
+where
-  FoodsI :
-    instance of Syntax -> instance of LexFoods -> concrete of Foods
+  interface Syntax   -- the resource grammar interface
+  interface LexFoods -- the domain lexicon interface
-It takes as arguments instances of two interfaces: -
-Syntax, the resource grammar interface
-LexFoods, the domain-specific lexicon interface
-
-Functors opening Syntax and a domain lexicon interface are in fact
-so typical in GF applications, that this structure could be called
-a design pattern
-for GF grammars. What makes this pattern so useful is, again, that
-languages tend to use the same syntactic structures and only differ in words.
-
-We will show the exact syntax of interfaces and instances in next Section. -Here it is enough to know that we have -
-SyntaxGer, an instance of Syntax
-LexFoodsGer, an instance of LexFoods
--Then we can complete the German implementation by "applying" the functor: +When we moreover have
-  FoodsI SyntaxGer LexFoodsGer : concrete of Foods
+  instance SyntaxGer of Syntax     -- the German resource grammar
+  instance LexFoodsGer of LexFoods -- the German domain lexicon
-The GF syntax for doing so is +we can write a functor instantiation,
concrete FoodsGer of Foods = FoodsI with
(Syntax = SyntaxGer),
(LexFoods = LexFoodsGer) ;
+
-Notice that this is the whole module, not just a header of it.
-The module body is received from FoodsI, by instantiating the
-interface constants with their definitions given in the German
-instances. A module of this form, characterized by the keyword with, is
-called a functor instantiation.
-
-Here is the complete code for the functor FoodsI:
+
- --# -path=.:../foods:present:prelude
+ --# -path=.:../foods:present
incomplete concrete FoodsI of Foods = open Syntax, LexFoods in {
lincat
@@ -4602,14 +3810,14 @@ Here is the complete code for the functor FoodsI:
}
-
-+ +
+ +
-Let us now define the LexFoods interface:
-
interface LexFoods = open Syntax in {
oper
@@ -4625,14 +3833,12 @@ Let us now define the LexFoods interface:
boring_A : A ;
}
+
-In this interface, only lexical items are declared. In general, an
-interface can declare any functions and also types. The Syntax
-interface does so.
-
-Here is a German instance of the interface. +
+ +
instance LexFoodsGer of LexFoods = open SyntaxGer, ParadigmsGer in {
oper
@@ -4648,16 +3854,12 @@ Here is a German instance of the interface.
boring_A = mkA "langweilig" ;
}
+
-Notice that when an interface opens an interface, such as Syntax,
-here, then its instance has to open an instance of it. But the instance
-may also open some other resources --- very typically, like here,
-a domain lexicon instance opens a Paradigms module.
-
-Just to complete the picture, we repeat the German functor instantiation
-for FoodsI, this time with a path directive that makes it compilable.
+
--# -path=.:../foods:present:prelude
@@ -4667,16 +3869,12 @@ for FoodsI, this time with a path directive that makes it compilabl
-Exercise. Compile and test FoodsGer.
+
-Exercise. Refactor FoodsEng into a functor instantiation.
-
-Once we have an application grammar defined by using a functor, -adding a new language is simple. Just two modules need to be written: +Just two modules are needed:
The functor instantiation is completely mechanical to write. -Here is one for Finnish:
-- --# -path=.:../foods:present:prelude - - concrete FoodsFin of Foods = FoodsI with - (Syntax = SyntaxFin), - (LexFoods = LexFoodsFin) ; -
The domain lexicon instance requires some knowledge of the words of the -language: what words are used for which concepts, how the words are -inflected, plus features such as genders. Here is a lexicon instance for -Finnish: +language: +
++ +
+ ++Lexicon instance
instance LexFoodsFin of LexFoods = open SyntaxFin, ParadigmsFin in {
@@ -4715,67 +3917,57 @@ Finnish:
boring_A = mkA "tylsä" ;
}
++Functor instantiation +
++ --# -path=.:../foods:present:prelude + + concrete FoodsFin of Foods = FoodsI with + (Syntax = SyntaxFin), + (LexFoods = LexFoodsFin) ; +
-Exercise. Instantiate the functor FoodsI to some language of
+
+
+This can be seen as a design pattern for multilingual grammars: +
+              concrete DomainL*
+
+   instance LexDomainL        instance SyntaxL*
+
+          incomplete concrete DomainI
+             /           |            \
+ interface LexDomain  abstract Domain  interface Syntax*
+Modules marked with * are either given in the library, or trivial.
+
+Of the hand-written modules, only LexDomainL is language-dependent.
+
+ +
+ +
+1. Compile and test FoodsGer.
+
+2. Refactor FoodsEng into a functor instantiation.
+
+3. Instantiate the functor FoodsI to some language of
your choice.
-One purpose with the resource grammars was stated to be a division -of labour between linguists and application grammarians. We can now -reflect on what this means more precisely, by asking ourselves what -skills are required of grammarians working on different components. -
--Building a GF application starts from the abstract syntax. Writing -an abstract syntax requires -
--If the concrete syntax is written by using a functor, the programmer -has to decide what parts of the implementation are put to the interface -and what parts are shared in the functor. This requires -
--Instantiating a ready-made functor to a new language is less demanding. -It requires essentially -
--Notice that none of these tasks requires the use of GF records, tables, -or parameters. Thus only a small fragment of GF is needed; the rest of -GF is only relevant for those who write the libraries. Essentially, -all the machinery introduced in the fourth chapter is unnecessary! -
--Of course, grammar writing is not always just straightforward usage of libraries. -For example, GF can be used for other languages than just those in the -libraries --- for both natural and formal languages. A knowledge of records -and tables can, unfortunately, also be needed for understanding GF's error -messages. -
--Exercise. Design a small grammar that can be used for controlling +4. Design a small grammar that can be used for controlling an MP3 player. The grammar should be able to recognize commands such as play this song, with the following variations:
@@ -4791,7 +3983,8 @@ The implementation goes in the following phases:
-A functor implementation using the resource Syntax interface
-works well as long as all concepts are expressed by using the same structures
-in all languages. If this is not the case, the deviant linearization can
-be made into a parameter and moved to the domain lexicon interface.
+
+
+Problem: a functor only works when all languages use the resource Syntax
+in the same way.
-The Foods grammar works so well that we have to
-take a contrived example: assume that English has
+Example (contrived): assume that English has
no word for Pizza, but has to use the paraphrase Italian pie.
-This paraphrase is no longer a noun N, but a complex phrase
-in the category CN. An obvious way to solve this problem is
-to change interface LexFoods so that the constant declared for
-Pizza gets a new type:
+This is no longer a noun N, but a complex phrase
+in the category CN.
+
+Possible solution: change the interface LexFoods with
oper pizza_CN : CN ;
-But this solution is unstable: we may end up changing the interface -and the function with each new language, and we must every time also -change the interface instances for the old languages to maintain -type correctness. +Problem with this solution: +
++ +
+ ++A module may inherit just a selection of names.
-A better solution is to use restricted inheritance: the English
-instantiation inherits the functor implementation except for the
-constant Pizza. This is how we write:
+Example: the Foodmarket grammar from an earlier chapter:
+
+ abstract Foodmarket = Food, Fruit [Peach], Mushroom - [Agaric] ++
+Here, from Fruit we include Peach only, and from Mushroom
+we exclude Agaric.
+
+A concrete syntax of Foodmarket must make the analogous restrictions.
+
+ +
+ +
+The English instantiation inherits the functor
+implementation except for the constant Pizza. This constant
+is defined in the body instead:
--# -path=.:../foods:present:prelude
@@ -4841,26 +4066,18 @@ constant Pizza. This is how we write:
lin Pizza = mkCN (mkA "Italian") (mkN "pie") ;
}
+
-Restricted inheritance is available for all inherited modules. One can for
-instance exclude some mushrooms and pick up just some fruit in
-the FoodMarket example "Rsecarchitecture:
+
- abstract Foodmarket = Food, Fruit [Peach], Mushroom - [Agaric] --
-A concrete syntax of Foodmarket must then have the same inheritance
-restrictions, in order to be well-typed with respect to the abstract syntax.
-
-The alert reader has certainly noticed an analogy between abstract
-and concrete, on the one hand, and interface and instance,
-on the other. Why are these two pairs of module types kept separate
-at all? There is, in fact, a very close correspondence between
-judgements in the two kinds of modules:
+Abstract syntax modules can be used as interfaces,
+and concrete syntaxes as their instances.
+
+The following correspondences then apply:
cat C <---> oper C : Type
@@ -4871,156 +4088,63 @@ judgements in the two kinds of modules:
lin f = t <---> oper f : A = t
+
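For instance, applying the first two correspondences to the Foods judgements (written in the same arrow notation as the table above):

```gf
cat Kind ;          <--->   oper Kind : Type ;
fun Wine : Kind ;   <--->   oper Wine : Kind ;
```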
-But there are also some differences: +
-abstract and concrete modules define top-level grammars, i.e.
- grammars that can be used for parsing and linearization; this is because
-concrete modules are restricted to a subset
- of those available in interface, instance, and resource
-param judgements have no counterparts in top-level grammars
--The term that can be used for interfaces, instances, and resources is -resource-level grammars. -From these explanations and the above translations it follows that top-level -grammars are, in a sense, a special case of resource-level grammars. -
--Thus, indeed, abstract syntax modules can be used like interfaces, and concrete syntaxes -as their instances. The use of top-level grammars as resources -is called grammar reuse. Whether a library module is a top-level or a -resource-level module is mostly invisible to application programmers -(see the Summary here -for an exception to this). The GF resource grammar -library itself is in fact built in two layers: -
--Both the ground -resource and the surface resource can be used by application programmers, -but it is the surface resource that we use in this book. Because of overloading, -it has much fewer function names and also flatter trees. For instance, the clause -
- mkCl - (mkNP these_QuantPl - (mkCN (mkAP very_AdA (mkAP warm_A)) (mkCN pizza_CN))) - (mkAP italian_AP) --
-has in the ground resource the much more complex tree -
-
-  PredVP
-    (DetCN (DetPl (PlQuant this_Quant) NoNum NoOrd)
-      (AdjCN (AdAP very_AdA (PositA warm_A)) (UseN pizza_N)))
-    (UseComp (CompAP (PositA italian_A)))
-
-The main advantage of using the ground resource is that the trees can then be found
-by using the parser, as shown in the next section. Otherwise, the overloaded surface
-resource constants are much easier to use.
-
-
-Needless to say, once a library has been defined in some way, it is easy to
-build layers of derived libraries on top of it, by using grammar reuse
-and, in the case of multilingual libraries, functors. This is indeed how
-the surface resource has been implemented: as a functor parametrized on
-the abstract syntax of the ground resource.
-
- +
-In addition to reading the
-resource synopsis, you
-can find resource function combinations by using the parser. This
-is so because the resource library is in the end implemented as
-a top-level abstract-concrete grammar, on which parsing
-and linearization work.
-
-
-Unfortunately, currently (GF 2.8)
-only English and the Scandinavian languages can be
-parsed within acceptable computer resource limits when the full
-resource is used.
-
-
-To look for a syntax tree in the overload API by parsing, do like this:
+To look for a syntax tree in the overload API by parsing:
- % gf -path=alltenses:prelude $GF_LIB_PATH/alltenses/OverLangEng.gfc
+ % gf $GF_LIB_PATH/alltenses/OverLangEng.gfc
> p -cat=S -overload "this grammar is too big"
mkS (mkCl (mkNP this_QuantSg grammar_N) (mkAP too_AdA big_A))
-The -overload option given to the parser is a directive to find the
+The -overload option finds the
shallowest overloaded term that matches the parse tree.
-To view linearizations in all languages by parsing from English:
+
-
-  % gf $GF_LIB_PATH/alltenses/langs.gfcm
-
-  > p -cat=S -lang=LangEng "this grammar is too big" | tb
-  UseCl TPres ASimul PPos (PredVP (DetCN (DetSg (SgQuant this_Quant)
-    NoOrd) (UseN grammar_N)) (UseComp (CompAP (AdAP too_AdA (PositA big_A)))))
-  Den här grammatiken är för stor
-  Esta gramática es demasiado grande
-  (Cyrillic: eta grammatika govorit des'at' jazykov)
-  Denne grammatikken er for stor
-  Questa grammatica è troppo grande
-  Diese Grammatik ist zu groß
-  Cette grammaire est trop grande
-  Tämä kielioppi on liian suuri
-  This grammar is too big
-  Denne grammatik er for stor
-
+
-
-This method shows the unambiguous ground resource functions and not
-the overloaded ones. It uses a precompiled grammar package of the GFCM or GFCC
-format; see the eighth chapter for more information on this.
-
-
-Unfortunately, the Russian grammar uses at the moment a different
-character encoding than the rest and is therefore not displayed correctly
-in a terminal window. However, the GF syntax editor does display all
-examples correctly --- again, using the ground resource:
+Open the editor with a precompiled resource package:
% gfeditor $GF_LIB_PATH/alltenses/langs.gfcm
-When you have constructed the tree, you will see the following screen:
+Constructing the tree results in the following screen:
-
+
-Exercise. Find the resource grammar translations for the following
-English phrases (parse in the category Phr). You can first try to
+
+
+1. Find the resource grammar terms for the following
+English phrases (in the category Phr). You can first try to
build the terms manually.
@@ -5035,316 +4159,36 @@ build the terms manually.
which languages did you want to speak
- -
-Now that we know how to find information in the resource grammar,
-we can easily extend the Foods fragment considerably. We shall enable
-the following new expressions:
-
-Since we don't want to change the already existing Foods module,
-we build an extension of it, ExtFoods:
-
- abstract ExtFoods = Foods ** {
-
- flags startcat=Move ;
-
- cat
- Move ; -- dialogue move: declarative, question, or imperative
- Verb ; -- transitive verb
- Guest ; -- guest in restaurant
- GuestKind ; -- type of guest
-
- fun
- MAssert : Phrase -> Move ; -- This pizza is warm.
- MDeny : Phrase -> Move ; -- This pizza isn't warm.
- MAsk : Phrase -> Move ; -- Is this pizza warm?
-
- PVerb : Guest -> Verb -> Item -> Phrase ; -- we eat this pizza
- PVerbWant : Guest -> Verb -> Item -> Phrase ; -- we want to eat this pizza
-
- WhichVerb :
- Kind -> Guest -> Verb -> Move ; -- Which pizza do you eat?
- WhichVerbWant :
- Kind -> Guest -> Verb -> Move ; -- Which pizza do you want to eat?
- WhichIs : Kind -> Quality -> Move ; -- Which wine is Italian?
-
- Do : Verb -> Item -> Move ; -- Pay this wine!
- DoPlease : Verb -> Item -> Move ; -- Pay this wine please!
-
- I, You, We : Guest ;
-
- GThis, GThat, GThese, GThose : GuestKind -> Guest ;
-
- Eat, Drink, Pay : Verb ;
-
- Lady, Gentleman : GuestKind ;
- }
-
-
-The concrete syntax is implemented by a functor that extends the
-already defined functor FoodsI.
-
- incomplete concrete ExtFoodsI of ExtFoods =
- FoodsI ** open Syntax, LexFoods in {
-
- flags lexer=text ; unlexer=text ;
-
-
-The flags set up a lexer and unlexer that can deal with sentence-initial
-capital letters and proper spacing with punctuation (see here
-for more information on lexers and unlexers).
-
-
-If we look at the resource documentation, we find several categories
-that are above the clause level and can thus host different kinds
-of dialogue moves:
-
-| Category | Explanation                      | Example             |
-|----------|----------------------------------|---------------------|
-| Text     | text consisting of phrases       | He is here. Why?    |
-| Phr      | phrase in a text                 | but be quiet please |
-| Utt      | sentence, question, word...      | be quiet            |
-| S        | declarative sentence             | she lived here      |
-| QS       | question                         | where did she live  |
-| Imp      | imperative                       | look at this        |
-| QCl      | question clause, with all tenses | why does she walk   |
-
-We also find that only the category Text contains punctuation marks.
-So we choose this as the linearization type of Move. The other types
-are quite obvious.
-
-
-  lincat
-    Move = Text ;
-    Verb = V2 ;
-    Guest = NP ;
-    GuestKind = CN ;
-
-The category V2 of two-place verbs includes both
-transitive verbs that take direct objects (e.g. we watch him)
-and verbs that take other kinds of complements, often with
-prepositions (we look at him). In a multilingual grammar, it is
-not guaranteed that transitive verbs are transitive in all languages,
-so the more general notion of two-place verb is more appropriate.
-
-Now we need to find constructors that combine the new categories in
-appropriate ways. To form a text from a clause, we first make it into
-a sentence with mkS, and then apply mkText:
-
-
-  lin MAssert p = mkText (mkS p) ;
-
-The function mkS has in the resource synopsis been given the type
-
-  mkS : (Tense) -> (Ant) -> (Pol) -> Cl -> S
-
-Parentheses around type names do not make any difference for the GF compiler,
-but in the synopsis notation they indicate optionality: any of the
-optional arguments can be omitted, and there is an instance of mkS
-available. For each optional type, it uses the default value for that
-type, which for the polarity Pol is positive i.e. unnegated.
-To build a negative sentence, we use an explicit polarity constructor:
-
-
-  MDeny p = mkText (mkS negativePol p) ;
-
-Of course, we could have used positivePol in the first rule, instead of
-relying on the default. (The types Tense and Ant will be explained
-here.)
-
-
-Phrases can be made into question sentences, which in turn can be
-made into texts in a similar way as sentences; the default
-punctuation mark is not the full stop but the question mark.
-
-
-  MAsk p = mkText (mkQS p) ;
-
-There is an mkCl instance that directly builds a clause from a noun phrase,
-a two-place verb, and another noun phrase.
-
-
-  PVerb = mkCl ;
-
-The auxiliary verb want requires a verb phrase (VP) as its complement. It
-can be built from a two-place verb and its noun phrase complement.
-
-
-  PVerbWant guest verb item = mkCl guest want_VV (mkVP verb item) ;
-
-The interrogative determiner (IDet) can be combined with
-a common noun to form an interrogative phrase (IP). This IP can then
-be used as a subject in a question clause (QCl), which in turn is
-made into a QS and finally to a Text.
-
-
-  WhichIs kind quality =
-    mkText (mkQS (mkQCl (mkIP whichSg_IDet kind) (mkVP quality))) ;
-
-When interrogative phrases are used as objects, the resource library
-uses a category named Slash of
-objectless sentences. The name comes from the slash categories of the
-GPSG grammar formalism
-(Gazdar & al. 1985). Slashes can be formed from subjects and two-place verbs,
-also with an intervening auxiliary verb.
-
-
-  WhichVerb kind guest verb =
-    mkText (mkQS (mkQCl (mkIP whichSg_IDet kind)
-      (mkSlash guest verb))) ;
-  WhichVerbWant kind guest verb =
-    mkText (mkQS (mkQCl (mkIP whichSg_IDet kind)
-      (mkSlash guest want_VV verb))) ;
-
-Finally, we form the imperative (Imp) of a transitive verb
-and its object. We make it into a polite form utterance, and finally
-into a Text with an exclamation mark.
-
-
-  Do verb item =
-    mkText
-      (mkPhr (mkUtt politeImpForm (mkImp verb item))) exclMarkPunct ;
-  DoPlease verb item =
-    mkText
-      (mkPhr (mkUtt politeImpForm (mkImp verb item)) please_Voc)
-      exclMarkPunct ;
-
-The rest of the concrete syntax is straightforward use of structural words,
-
-  I = mkNP i_Pron ;
-  You = mkNP youPol_Pron ;
-  We = mkNP we_Pron ;
-  GThis = mkNP this_QuantSg ;
-  GThat = mkNP that_QuantSg ;
-  GThese = mkNP these_QuantPl ;
-  GThose = mkNP those_QuantPl ;
-
-and of the food lexicon,
-
-  Eat = eat_V2 ;
-  Drink = drink_V2 ;
-  Pay = pay_V2 ;
-  Lady = lady_N ;
-  Gentleman = gentleman_N ;
-  }
-
-Notice that we have no reason to build an extension of LexFoods, but we just
-add words to the old one. Since LexFoods instances are resource modules,
-the superfluous definitions that they contain have no effect on the
-modules that just open them, and thus the smaller Foods grammars
-don't suffer from the additions we make.
-
-Exercise. Port the ExtFoods grammars to some new languages, building
-on the Foods implementations from previous sections, and using the functor
-defined in this section.
-
-When compiling the ExtFoods grammars, we have used the path
+In the Foods grammars, we have used the path
-
-  --# -path=.:../foods:present:prelude
+
+  --# -path=.:../foods:present
-where the library subdirectory present refers to a restricted version
-of the resource that covers only the present tense of verbs and sentences.
-Having this version available is motivated by efficiency reasons: tenses
-produce in many languages a manifold of forms and combinations, which
-multiply the size of the grammar; at the same time, many applications,
-both technical ones and spoken dialogues, only need the present tense.
+The library subdirectory present is a restricted version
+of the resource, with only the present tense of verbs and sentences.
-But it is easy to change the grammars so that they admit the full set
-of tenses. It is enough to change the path to
+By just changing the path, we get all tenses:
-
-  --# -path=.:../foods:alltenses:prelude
+
+  --# -path=.:../foods:alltenses
-and recompile the grammars from source (flag -src); the libraries are
-not recompiled, because their sources cannot be found on the path list.
-Then it is possible to see all the tenses of
-phrases, by using the -all flag in linearization:
+Now we can see all the tenses of phrases, by using the -all flag
+in linearization:
- > gr -cat=Phrase | l -all
+ > gr | l -all
This wine is delicious
Is this wine delicious
This wine isn't delicious
@@ -5395,251 +4239,90 @@ phrases, by using the -all flag in linearization:
Would this wine not have been delicious
-
-In addition to tenses, the linearization writes all parametric
-variations --- polarity and word order (direct vs. inverted) --- as
-well as the variation between contracted and full negation words.
-Of course, the list is even longer in languages that have more
+We also see
+
+The list is even longer in languages that have more
-In the ExtFoods grammar, tenses never find their way to the
-top level of Moves. Therefore it is useless to carry around
-the clause and verb tenses given in the alltenses set of libraries.
-But with the library, it is easy to add tenses to Moves. For
-instance, one can add the rules
+
-
-  fun MAssertFut      : Phrase -> Move ;  -- I will pay this wine
-  fun MAssertPastPerf : Phrase -> Move ;  -- I had paid that wine
-  lin MAssertFut p = mkText (mkS futureTense p) ;
-  lin MAssertPastPerf p = mkText (mkS pastTense anteriorAnt p) ;
-
-Comparison with MAssert above shows that the absence of the tense
-and anteriority features defaults to present simultaneous tenses.
-
-Exercise. Measure the size of the context-free grammar corresponding to
-some concrete syntax of ExtFoods with all tenses.
-You can do this by printing the grammar in the context-free format
-(print_grammar -printer=cfg) and counting the lines.
-
-An interface module (interface I) is like a resource module,
-the difference being that it does not need to give definitions in
-its oper and param judgements. Definitions are, however,
-allowed, and they may use constants that appear undefined in the
-module. For example, here is an interface for predication, which
-is parametrized on NP case and agreement features, and on the constituent
-order:
-
- interface Predication = {
- param
- Case ;
- Agreement ;
- oper
- subject : Case ;
- object : Case ;
- order : (verb,subj,obj : String) -> String ;
-
- NP : Type = {s : Case => Str ; a : Agreement} ;
- TV : Type = {s : Agreement => Str} ;
-
- sentence : TV -> NP -> NP -> {s : Str} = \verb,subj,obj -> {
- s = order (verb ! subj.a) (subj ! subject) (obj ! object) ;
- }
-
-
-An instance module (instance J of I) is also like a
-resource, but it is compiled in union with the interface that it
-is an instance of. This means that the definitions given in the
-instance are type-checked with respect to the types given in the
-interface. Moreover, overwriting types or definitions given in the interface
-is not allowed. But it is legal for an instance to contain definitions
-not included in the corresponding interface. Here is an instance of
-Predication, suitable for languages like English.
-
- instance PredicationSimpleSVO of Predication = {
- param
- Case = Nom | Acc | Gen ;
- Agreement = Agr Number Person ;
-
- -- two new types
- Number = Sg | Pl ;
- Person = P1 | P2 | P3 ;
-
- oper
- subject = Nom ;
- object = Acc ;
- order = \verb,subj,obj -> subj ++ verb ++ obj ;
-
- -- the rest of the definitions don't need repetition
- }
-
-
-
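As a rough analogy (not a GF mechanism, and not from the tutorial), an interface can be thought of as a record of operations and an instance as a concrete value of that record, with functor-like code written once against the record. A minimal Python sketch, where all names and the toy lexicon are illustrative:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Predication:
    # the undefined opers of the interface become record fields
    subject: str                            # a Case value, e.g. "Nom"
    object: str
    order: Callable[[str, str, str], str]   # verb, subj, obj -> sentence

def sentence(p: Predication, verb, subj, obj) -> str:
    # defined once against the interface, like oper sentence
    return p.order(verb(subj["agr"]), subj["s"][p.subject], obj["s"][p.object])

# an instance in the style of PredicationSimpleSVO
simple_svo = Predication(
    subject="Nom",
    object="Acc",
    order=lambda verb, subj, obj: f"{subj} {verb} {obj}",
)
```

Swapping in another Predication value (say, with SOV order) changes the output language without touching sentence, which is the point of the interface/instance split.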
-
-Abstract syntax modules can be used like interfaces, and concrete syntaxes
-as their instances. The following translations then take place:
-
-
-  cat C          --->  oper C : Type
-
-  fun f : A      --->  oper f : A*
-
-  lincat C = T   --->  oper C : Type = T'
-
-  lin f = t      --->  oper f : A* = t'
-
-
-This translation is called grammar reuse. It uses a homomorphism
-from abstract types and terms to the concrete types and terms. For the
-sake of more type safety, the types are not exactly the same. Currently
-(GF 2.8), the type T' formed from the linearization type T of
-a category C is T extended with a dummy lock field. Thus
-
-
- lincat C = T ---> oper C = T ** {lock_C : {}}
-
-
-and the linearization terms are lifted correspondingly. The user of
-a GF library should never see any lock fields; when they appear in
-the compiler's warnings, they indicate that some library category is
-constructed improperly by a user program.
-
- -
-A parametrized module, aka. an incomplete module, or a
-functor, is any module that opens an interface (or
-an abstract). Several interfaces may be opened by one
-functor. The module header must be prefixed by the word incomplete.
-Here is a typical example, using the resource Syntax and
-a domain specific lexicon:
-
- incomplete concrete DomainI of Domain = open Syntax, Lex in {...} ;
-
-
-A functor instantiation is a module that inherits a functor and
-provides an instance to each of its open interfaces. Here is an example:
-
-
-  concrete DomainSwe of Domain = DomainI with
-    (Syntax = SyntaxSwe),
-    (Lex = LexSwe) ;
-
-
-A module of any type can make restricted inheritance, which is
-either exclusion or inclusion:
-
-
-  module M = A[f,g], B-[k] ** ...
-
-A concrete syntax given to an abstract syntax that uses restricted inheritance
-must make the corresponding restrictions. In addition, the concrete syntax can
-make its own restrictions in order to redefine inherited linearization types and
-rules.
-
-
-Overriding old definitions without explicit restrictions is not allowed.
-
-
-While the concrete syntax constructs of GF have already been
-covered, there is much more that can be done in the abstract
-syntax. The techniques of dependent types and
-higher order abstract syntax are introduced in this chapter,
-which thereby concludes the presentation of the GF language.
+Goals:
+
-Many of the examples in this chapter are somewhat less close to
-applications than the ones shown before. Moreover, the tools for
-embedded grammars in the eighth chapter do not yet fully support dependent
-types and higher order abstract syntax.
+
-
-In this chapter, we will show how
-to encode advanced semantic concepts in an abstract syntax.
-We use concepts inherited from type theory. Type theory
-is the basis of many systems known as logical frameworks, which are
-used for representing mathematical theorems and their proofs on a computer.
-In fact, GF has a logical framework as its proper part:
-this part is the abstract syntax.
-
-
-In a logical framework, the formalization of a mathematical theory
-is a set of type and function declarations. The following is an example
-of such a theory, represented as an abstract module in GF.
-
- abstract Arithm = {
- cat
- Prop ; -- proposition
- Nat ; -- natural number
- fun
- Zero : Nat ; -- 0
- Succ : Nat -> Nat ; -- the successor of x
- Even : Nat -> Prop ; -- x is even
- And : Prop -> Prop -> Prop ; -- A and B
- }
-
-
-This example does not show any new type-theoretical constructs yet, but
-it could nevertheless be used as a part of a proof system for arithmetic.
-
-
-Exercise. Give a concrete syntax of Arithm, preferably
-by using the resource library.
-
-Dependent types are a characteristic feature of GF,
-inherited from the constructive type theory of Martin-Löf and
-distinguishing GF from most other grammar formalisms and
-functional programming languages.
+Problem: to express conditions of semantic well-formedness.
+
+Example: a voice command system for a "smart house" wants to
+eliminate meaningless commands.
+
+Thus we want to restrict particular actions to
+particular devices - we can dim a light, but we cannot
+dim a fan.
+
+The following example is borrowed from the
+Regulus Book (Rayner & al. 2006).
-Dependent types can be used for stating stronger
-conditions of well-formedness than ordinary types. A simple example
-is a "smart house" system, which
-defines voice commands for household appliances. This example
-is borrowed from the
-Regulus Book
-(Rayner & al. 2006).
+defines voice commands for household appliances.
-One who enters a smart house can use a spoken Command to dim lights, switch
-on the fan, etc. For Devices of each Kind, there is a set of
-Actions that can be performed on them; thus one can dim the lights but
- not the fan, for example. These dependencies can be expressed
-by making the type Action dependent on Kind. We express these
-dependencies in cat declarations by attaching argument types to
-categories:
+
+
+Ontology:
+
++Abstract syntax formalizing this:
cat
@@ -5647,25 +4330,32 @@ categories:
Kind ;
Device Kind ; -- argument type Kind
Action Kind ;
+ fun
+ CAction : (k : Kind) -> Action k -> Device k -> Command ;
-The crucial use of the dependencies is made in the rule for forming commands:
+Device and Action are both dependent types.
-
-  fun CAction : (k : Kind) -> Action k -> Device k -> Command ;
-
-In other words: an action and a device can be combined into a command only
-if they are of the same Kind k. If we have the functions
+
+
+Assume the kinds light and fan,
- DKindOne : (k : Kind) -> Device k ; -- the light
-
light, fan : Kind ;
dim : Action light ;
-we can form the syntax tree
+Given a kind k, we can form the device the k:
+
+  DKindOne : (k : Kind) -> Device k ;  -- the light
+
+Now we can form the syntax tree
CAction light dim (DKindOne light)
@@ -5678,21 +4368,21 @@ but we cannot form the trees
CAction fan dim (DKindOne light)
CAction fan dim (DKindOne fan)
+
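The same well-formedness condition can be sketched at run time in Python (GF's dependent types enforce it statically at compile time; the kind strings and the raised TypeError below are illustrative choices, not GF behaviour):

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str   # the Kind this action is defined for
    s: str

@dataclass
class Device:
    kind: str
    s: str

def DKindOne(kind: str) -> Device:
    # "the light", "the fan", ...
    return Device(kind, f"the {kind}")

dim = Action("light", "dim")   # dim : Action light

def CAction(kind: str, act: Action, dev: Device) -> str:
    # runtime counterpart of CAction : (k : Kind) -> Action k -> Device k -> Command
    if not (act.kind == kind == dev.kind):
        raise TypeError("action and device must be of the same kind")
    return f"{act.s} {dev.s}"
```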
-Linearization rules are written as usual: the concrete syntax does not
-know if a category is a dependent type. In English, one could write as follows:
+
+Concrete syntax does not know if a category is a dependent type.
lincat Action = {s : Str} ;
lin CAction _ act dev = {s = act.s ++ dev.s} ;
-Notice that the argument for Kind does not appear in the linearization;
-therefore it is good practice to make this clear by
-using a wild card for it, rather than a real
-variable.
-As we will show,
-the type checker can reconstruct the kind from the dev argument.
+Notice that the Kind argument is suppressed in linearization.
Parsing with dependent types is performed in two phases: @@ -5703,8 +4393,7 @@ Parsing with dependent types is performed in two phases:
-If you just parse in the usual way, you don't enter the second phase, and
-the kind argument is not found:
+By just doing the first phase, the kind argument is not found:
> parse "dim the light"
@@ -5718,16 +4407,18 @@ Moreover, type-incorrect commands are not rejected:
CAction ? dim (DKindOne fan)
-The question mark ? is a metavariable, and is returned by the parser
+The term ? is a metavariable, returned by the parser
for any subtree that is suppressed by a linearization rule.
-These are exactly the same kind of metavariables as were used here
+These are the same kind of metavariables as were used here
to mark incomplete parts of trees in the syntax editor.
-To get rid of metavariables, we must feed the parse result into the
-second phase of solving them. The solve process uses the dependent
-type checker to restore the values of the metavariables. It is invoked by
-the command put_tree = pt with the flag -transform=solve:
+
+
+Use the command put_tree = pt with the flag -transform=solve:
> parse "dim the light" | put_tree -transform=solve
@@ -5742,87 +4433,29 @@ The solve process may fail, in which case no tree is returned:
-Exercise. Write an abstract syntax module with above contents
-and an appropriate English concrete syntax. Try to parse the commands
-dim the light and dim the fan, with and without solve filtering.
+
-Exercise. Perform random and exhaustive generation, with and without
-solve filtering.
-
-Exercise. Add some device kinds and actions to the grammar. -
- +
-Sometimes an action can be performed on all kinds of devices. It would be
-possible to introduce separate fun constants for each kind-action pair,
-but this would be tedious. Instead, one can use polymorphic actions,
-i.e. actions that take a Kind as an argument and produce an Action
-for that Kind:
+Sometimes an action can be performed on all kinds of devices.
+
+This is represented as a function that takes a Kind as an argument
+and produces an Action for that Kind:
fun switchOn, switchOff : (k : Kind) -> Action k ;
-Functions that are not polymorphic are monomorphic. However, the
-dichotomy into monomorphism and full polymorphism is not always sufficient
-for good semantic modelling: very typically, some actions are defined
-for a proper subset of devices, but not just one. For instance, both doors and
-windows can be opened, whereas lights cannot.
-We will return to this problem by introducing the
-concept of restricted polymorphism later,
-after a section on proof objects.
+Functions of this kind are called polymorphic.
-Exercise. The grammar ExtFoods here permits the
-formation of phrases such as we drink this fish and we eat this wine.
-A way to prevent them is to distinguish between eatable and drinkable food items.
-Another, related problem is that there is some duplicated code
-due to a category distinction between guests and food items, for instance,
-two constructors for the determiner this. This problem can also
-be solved by dependent types. Rewrite the abstract syntax in Foods and
-ExtFoods by using such a type system, and also update the concrete syntaxes.
-If you do this right, you only have to change the functor modules
-FoodsI and ExtFoodsI in the concrete syntax.
-
-
-The functional fragment of GF
-terms and types comprises function types, applications, lambda
-abstracts, constants, and variables. This fragment is the same in
-abstract and concrete syntax. In particular,
-dependent types are also available in concrete syntax.
-We have not made use of them yet,
-but we will now look at one example of how they
-can be used.
-
-
-Those readers who are familiar with functional programming languages
-like ML and Haskell, may already have missed polymorphic
-functions. For instance, Haskell programmers have access to
-the functions
-
-
-  const :: a -> b -> a
-  const c _ = c
-
-  flip :: (a -> b -> c) -> b -> a -> c
-  flip f y x = f x y
-
-which can be used for any given types a,b, and c.
-
-
-The GF counterpart of polymorphic functions is monomorphic
-functions with explicit type variables --- a technique that we already
-used in abstract syntax for modelling actions that can be performed
-on all kinds of devices. Thus the above definitions can be written
+We can use this kind of polymorphism in concrete syntax as well,
+to express Haskell-type library functions:
oper const :(a,b : Type) -> a -> b -> a =
@@ -5831,26 +4464,35 @@ on all kinds of devices. Thus the above definitions can be written
oper flip : (a,b,c : Type) -> (a -> b ->c) -> b -> a -> c =
\_,_,_,f,x,y -> f y x ;
+
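For comparison, the same combinators as ordinary higher-order functions in Python, where, as in Haskell, no explicit type arguments are needed (names mirror the GF opers):

```python
def const(c, _):
    # keep the first argument, discard the second
    return c

def flip(f):
    # turn a function f(x, y) into one expecting its arguments as (y, x)
    return lambda y, x: f(x, y)
```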
-When the operations are used, the type checker requires
-them to be equipped with all their arguments; this may be a nuisance
-for a Haskell or ML programmer. They have not been used very much,
-except in the Coordination module of the resource library.
+
+1. Write an abstract syntax module with above contents
+and an appropriate English concrete syntax. Try to parse the commands
+dim the light and dim the fan, with and without solve filtering.
+
+2. Perform random and exhaustive generation, with and without
+solve filtering.
+
+3. Add some device kinds and actions to the grammar.
+
+
-Perhaps the most well-known idea in constructive type theory is
-the Curry-Howard isomorphism, also known as the
-propositions as types principle. Its earliest formulations
-were attempts to give semantics to the logical systems of
-propositional and predicate calculus. In this section, we will consider
-a more elementary example, showing how the notion of proof is useful
-outside mathematics, as well.
+Curry-Howard isomorphism = propositions as types principle:
+a proposition is a type of proofs (= proof objects).
-We use the already shown category of unary (also known as Peano-style)
-natural numbers:
+Example: define the less than proposition for natural numbers,
cat Nat ;
@@ -5858,12 +4500,8 @@ natural numbers:
fun Succ : Nat -> Nat ;
-The successor function Succ generates an infinite
-sequence of natural numbers, beginning from Zero.
-
-We then define what it means for a number x to be less than
-a number y. Our definition is based on two axioms:
+Define inductively what it means for a number x to be less than
+a number y:
Zero is less than Succ y for any y.
@@ -5871,8 +4509,8 @@ a number y. Our definition is based on two axioms:
-The most straightforward way of expressing these axioms in type theory
-is with a dependent type Less x y, and two functions constructing
+We express these axioms in type theory
+with a dependent type Less x y and two functions constructing
its objects:
@@ -5881,71 +4519,26 @@ its objects:
fun lessS : (x,y : Nat) -> Less x y -> Less (Succ x) (Succ y) ;
-Objects formed by lessZ and lessS are
-called proof objects: they establish the truth of certain
-mathematical propositions.
-For instance, the fact that 2 is less that
-4 has the proof object
+Example: the fact that 2 is less than 4 has the proof object
lessS (Succ Zero) (Succ (Succ (Succ Zero)))
(lessS Zero (Succ (Succ Zero)) (lessZ (Succ Zero)))
-
-
-whose type is
-
-
-  Less (Succ (Succ Zero)) (Succ (Succ (Succ (Succ Zero))))
-
-which is the formalization of the proposition that 2 is less than 4.
-
-
-GF grammars can be used to provide a semantic control of
-well-formedness of expressions. We have already seen examples of this:
-the grammar of well-formed actions on household devices. By introducing proof objects
-we have now added an even more powerful technique of expressing semantic conditions.
-
-
-A simple example of the use of proof objects is the definition of
-well-formed time spans: a time span is expected to be from an earlier to
-a later time:
-
-
-  from 3 to 8
-
-is thus well-formed, whereas
-
-
-  from 8 to 3
-
-is not. The following rules for spans impose this condition
-by using the Less predicate:
-
-
-  cat Span ;
-  fun span : (m,n : Nat) -> Less m n -> Span ;
-
+    : Less (Succ (Succ Zero)) (Succ (Succ (Succ (Succ Zero))))
-Exercise. Write an abstract and concrete syntax with the
-concepts of this section, and experiment with it in GF.
+
-
-Exercise. Define the notions of "even" and "odd" in terms
-of proof objects. Hint. You need one function for proving
-that 0 is even, and two other functions for propagating the
-properties.
-
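The Less proof objects can also be sketched at run time in Python, with plain integers standing in for Peano numerals (GF checks the same invariant statically; only lessZ and lessS can build a Less value, so every Less(x, y) that exists really has x less than y):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Less:
    # the "type" of the proof is the pair (x, y)
    x: int
    y: int

def lessZ(y: int) -> Less:
    return Less(0, y + 1)           # axiom: Zero is less than Succ y

def lessS(p: Less) -> Less:
    return Less(p.x + 1, p.y + 1)   # axiom: from x < y, Succ x < Succ y

# the proof that 2 is less than 4, mirroring the GF term above
two_less_four = lessS(lessS(lessZ(1)))
```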
-
-Another possible application of proof objects is proof-carrying documents:
-to be semantically well-formed, the abstract syntax of a document must contain a proof
-of some property, although the proof is not shown in the concrete document.
-Think, for instance, of small documents describing flight connections:
+Idea: to be semantically well-formed, the abstract syntax of a document
+must contain a proof of some property,
+although the proof is not shown in the concrete document.
+
+Example: documents describing flight connections:
To fly from Gothenburg to Prague, first take LH3043 to Frankfurt, then OK0537 to Prague. @@ -5963,28 +4556,28 @@ The well-formedness of this text is partly expressible by dependent typing: OK0537 : Flight Frankfurt Prague ;
-
-This rules out texts saying take OK0537 from Gothenburg to Prague.
-However, there is a
-further condition saying that it must be possible to
-change from LH3043 to OK0537 in Frankfurt.
-This can be modelled as a proof object of a suitable type,
-which is required by the constructor
-that connects flights.
+To extend the conditions to flight connections, we introduce a category
+of proofs that a change is possible:
-
-  cat
-    IsPossible (x,y,z : City)(Flight x y)(Flight y z) ;
-  fun
-    Connect : (x,y,z : City) ->
-      (u : Flight x y) -> (v : Flight y z) ->
-      IsPossible x y z u v -> Flight x z ;
+  cat IsPossible (x,y,z : City)(Flight x y)(Flight y z) ;
+
+A legal connection is formed by the function
+
+  fun Connect : (x,y,z : City) ->
+    (u : Flight x y) -> (v : Flight y z) ->
+    IsPossible x y z u v -> Flight x z ;
+ +
+
-In the first version of the smart house grammar Smart,
-all Actions were either of
+Above, all Actions were either of
-The notion of class can be expressed in abstract syntax
-by using the Curry-Howard isomorphism as follows:
+The notion of class uses the Curry-Howard isomorphism as follows:
-Here is an example with switching and dimming. The classes are called
-switchable and dimmable.
+
+
+We modify the smart house grammar:
cat
@@ -6021,41 +4617,32 @@ Here is an example with switching and dimming. The classes are called
dim : (k : Kind) -> Dimmable k -> Action k ;
-One advantage of this formalization is that classes for new
-actions can be added incrementally.
+Classes for new actions can be added incrementally.
-Exercise. Write a new version of the Smart grammar with
-classes, and test it in GF.
+
-
-Exercise. Add some actions, kinds, and classes to the grammar.
-Try to port the grammar to a new language. You will probably find
-out that restricted polymorphism works differently in different languages.
-For instance, in Finnish not only doors but also TVs and radios
-can be "opened", which means switching them on.
-
- +Mathematical notation and programming languages have -expressions that bind variables. For instance, -a universally quantifier proposition +expressions that bind variables. +
++Example: universal quantifier formula
(All x)B(x)
-consists of the binding (All x) of the variable x,
-and the body B(x), where the variable x can have
-bound occurrences.
+The variable x has a binding (All x), and
+occurs bound in the body B(x).
-Variable bindings appear in informal mathematical language as well, for -instance, +Examples from informal mathematical language:
for all x, x is equal to x
@@ -6065,87 +4652,87 @@ instance,
Let x be a natural number. Assume that x is even. Then x + 3 is odd.
+
-In type theory, variable-binding expression forms can be formalized -as functions that take functions as arguments. The universal -quantifier is defined + +
+ ++Abstract syntax can use functions as arguments:
+ cat Ind ; Prop ;
fun All : (Ind -> Prop) -> Prop
where Ind is the type of individuals and Prop,
-the type of propositions. If we have, for instance, the equality predicate
+the type of propositions.
+
+Let us add an equality predicate
fun Eq : Ind -> Ind -> Prop
-we may form the tree +Now we can form the tree
All (\x -> Eq x x)
-which corresponds to the ordinary notation +which we want to relate to the ordinary notation
- (All x)(x = x). + (All x)(x = x)
-An abstract syntax where trees have functions as arguments, as in
-the two examples above, has turned out to be precisely the right
-thing for the semantics and computer implementation of
-variable-binding expressions. The advantage lies in the fact that
-only one variable-binding expression form is needed, the lambda abstract
-\x -> b, and all other bindings can be reduced to it.
-This makes it easier to implement mathematical theories and reason
-about them, since variable binding is tricky to implement and
-to reason about. The idea of using functions as arguments of
-syntactic constructors is known as higher-order abstract syntax.
+In higher-order abstract syntax (HOAS), all variable bindings are
+expressed using higher-order syntactic constructors.
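+For instance, an existential quantifier can reuse the same lambda binding as All; the constant Exist is a sketched addition, not part of the grammar above:

```
fun Exist : (Ind -> Prop) -> Prop ;
-- (Exist x)B(x) is represented by the tree: Exist (\x -> B x)
-- no new binding construct is needed: the lambda \x -> ... does all the work
```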
-The question now arises: how to define linearization rules -for variable-binding expressions? -Let us first consider universal quantification, + +
+ ++HOAS has proved to be useful in the semantics and computer implementation of +variable-binding expressions. +
++How do we relate HOAS to the concrete syntax?
-- fun All : (Ind -> Prop) -> Prop -
In GF, we write
+ fun All : (Ind -> Prop) -> Prop
lin All B = {s = "(" ++ "All" ++ B.$0 ++ ")" ++ B.s}
-to obtain the form shown above.
-This linearization rule brings in a new GF concept --- the $0
-field of B containing a bound variable symbol.
-The general rule is that, if an argument type of a function is
-itself a function type A -> C, the linearization type of
+General rule: if an argument type of a fun function is
+a function type A -> C, the linearization type of
this argument is the linearization type of C
-together with a new field $0 : Str. In the linearization rule
-for All, the argument B thus has the linearization
-type
+together with a new field $0 : Str.
+
+The argument B thus has the linearization type
- {$0 : Str ; s : Str},
+ {s : Str ; $0 : Str}.
-since the linearization type of Prop is
+If there are more bindings, we add $1, $2, etc.
- {s : Str}
-
-In other words, the linearization of a function -consists of a linearization of the body together with a -field for a linearization of the bound variable. -Those familiar with type theory or lambda calculus -should notice that GF requires trees to be in -eta-expanded form in order for this to make sense: -for any function of type + +
+ ++To make sense of linearization, syntax trees must be +eta-expanded: for any function of type
A -> B
@@ -6158,9 +4745,6 @@ an eta-expanded syntax tree has the form
where b : B under the assumption x : A.
-It is in this form that an expression can be analysed
-as having a bound variable and a body, which can be put into
-a linearization record.
Given the linearization rule @@ -6169,7 +4753,7 @@ Given the linearization rule lin Eq a b = {s = "(" ++ a.s ++ "=" ++ b.s ++ ")"}
-the linearization of +the linearization of the tree
\x -> Eq x x
@@ -6181,358 +4765,216 @@ is the record
{$0 = "x" ; s = ["( x = x )"]}
-Thus we can compute the linearization of the formula, +Then we can compute the linearization of the formula,
All (\x -> Eq x x) --> {s = "[( All x ) ( x = x )]"}.
-But how did we get the linearization of the variable x
-into the string "x"? GF grammars have no rules for
-this: it is just hard-wired in GF that variable symbols are
-linearized into the same strings that represent them in
-the print-out of the abstract syntax.
+The linearization of the variable x is,
+"automagically", the string "x".
-To be able to parse variable symbols, however, GF needs to know what -to look for (instead of e.g. trying to parse any -string as a variable). What strings are parsed as variable symbols -is defined in the lexical analysis part of GF parsing + +
+ ++GF needs to know what strings are parsed as variable symbols. +
++This is defined in a special lexer,
> p -cat=Prop -lexer=codevars "(All x)(x = x)"
All (\x -> Eq x x)
-(see more details on lexers here). If several variables are bound in the
-same argument, the labels are $0, $1, $2, etc.
+More details on lexers here.
-Exercise. Write an abstract syntax of the whole + +
+1. Write an abstract syntax of the whole predicate calculus, with the connectives "and", "or", "implies", and "not", and the quantifiers "exists" and "for all". Use higher-order functions to guarantee that unbound variables do not occur.
-Exercise. Write a concrete syntax for your favourite +2. Write a concrete syntax for your favourite notation of predicate calculus. Use LaTeX as target language if you want nice output. You can also try producing boolean expressions of some programming language. Use as many parentheses as you need to guarantee non-ambiguity.
- ++ +
+
-Just like any functional programming language, abstract syntax in
-GF has declarations of functions, telling what the type of a function is.
-But we have not yet shown how to compute
-these functions: all we can do is provide them with arguments
-and linearize the resulting terms.
-Since our main interest is the well-formedness of expressions,
-this has not yet bothered
-us very much. As we will see, however, computation does play a role
-even in the well-formedness of expressions when dependent types are
-present.
+The fun judgements of GF are declarations of functions, giving their types.
-GF has a form of judgement for semantic definitions,
-marked by the key word def. At its simplest, it is just
-the definition of one constant, e.g.
+Can we compute fun functions?
+
+Mostly we are not interested, since functions are seen as constructors, +i.e. data forms, as usual with
++ fun Zero : Nat ; + fun Succ : Nat -> Nat ; ++
+But it is also possible to give semantic definitions to functions.
+The key word is def:
fun one : Nat ;
def one = Succ Zero ;
-
-
-Notice a def definition can only be given to names declared by
-fun judgements in the same module; it is not possible to define
-an inherited name.
-
-We can also define a function with arguments, -
-
+
fun twice : Nat -> Nat ;
def twice x = plus x x ;
-
--which is still a special case of the most general notion of -definition, that of a group of pattern equations: -
-
+
fun plus : Nat -> Nat -> Nat ;
def
plus x Zero = x ;
    plus x (Succ y) = Succ (plus x y) ;
+
-To compute a term is, as in functional programming languages, -simply to follow a chain of reductions until no definition -can be applied. For instance, we compute + +
+ ++Computation: follow a chain of definition until no definition +can be applied,
- Sum one one -->
- Sum (Succ Zero) (Succ Zero) -->
- Succ (sum (Succ Zero) Zero) -->
+ plus one one -->
+ plus (Succ Zero) (Succ Zero) -->
+ Succ (plus (Succ Zero) Zero) -->
Succ (Succ Zero)
-Computation in GF is performed with the pt command and the
+Computation in GF is performed with the put_term command and the
compute transformation, e.g.
- > p -tr "1 + 1" | pt -transform=compute -tr | l
- sum one one
+ > parse -tr "1 + 1" | put_term -transform=compute -tr | l
+ plus one one
Succ (Succ Zero)
s(s(0))
-The def definitions of a grammar induce a notion of
-definitional equality among trees: two trees are
-definitionally equal if they compute into the same tree.
-Thus, trivially, all trees in a chain of computation
-(such as the one above) are definitionally equal to each other.
-In general, there can be infinitely many definitionally equal trees.
+
+
+Two trees are definitionally equal if they compute into the same tree.
-An important property of definitional equality is that it is
-extensional, i.e. has to do with the sameness of semantic value.
-Linearization, on the other hand, is an intensional operation,
-i.e. has to do with the sameness of expression. This means that
-def definitions are not evaluated as linearization steps.
-Intensionality is a crucial property of linearization, since we want
-to use it for things like tracing a chain of evaluation.
-For instance, each of the steps of the computation above
-has a different linearization into standard arithmetic notation:
+Definitional equality does not guarantee sameness of linearization:
- 1 + 1 - s(0) + s(0) - s(s(0) + 0) - s(s(0)) + plus one one ===> 1 + 1 + Succ (Succ Zero) ===> s(s(0))
-In most programming languages, the operations that can be performed on -expressions are extensional, i.e. give equal values to equal arguments. -But GF has both extensional and intensional operations. -Type checking is extensional: -in the type theory with dependent types, types may depend on terms, -and types depending on definitionally equal terms are -equal types. For instance, +The main use of this concept is in type checking: sameness of types. +
++Thus e.g. the following types are equal
Less Zero one
  Less Zero (Succ Zero)
-are equal types. Hence, any tree that type checks as a proof that -1 is odd also type checks as a proof that the successor of 0 is odd. -(Recall, in this connection, that the -arguments a category depends on never play any role -in the linearization of trees of that category, -nor in the definition of the linearization type.) +so that an object of one also is an object of the other.
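+A sketch of what this means in practice (the category Less and the axiom lessS are hypothetical):

```
cat Less Nat Nat ;                           -- proofs that m is less than n
fun lessS : (n : Nat) -> Less n (Succ n) ;   -- hypothetical axiom
-- lessS Zero : Less Zero (Succ Zero)
-- since one = Succ Zero by def, the same tree also type checks as
-- lessS Zero : Less Zero one
```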
-When pattern matching is performed with def equations, it is
-crucial to distinguish between constructors and other functions
-(cf. here on pattern matching in concrete syntax).
-GF has a judgement form data to tell that a category has
+
+
+The judgement form data tells that a category has
certain functions as constructors:
data Nat = Succ | Zero ;
-Unlike in Haskell and ML, new constructors can be added to
-a type with new data judgements. The type signatures of constructors
-are given separately, in ordinary fun judgements.
-One can also write directly
-
- data Succ : Nat -> Nat ; --
-which is syntactic sugar for the pair of judgements +The type signatures of constructors are given separately,
+ fun Zero : Nat ;
fun Succ : Nat -> Nat ;
- data Nat = Succ ;
-If we did not mark Zero as data, the definition
+There is also a shorthand:
- fun isZero : Nat -> Bool ; - def isZero Zero = True ; - def isZero _ = False ; + data Succ : Nat -> Nat ; === fun Succ : Nat -> Nat ; + data Nat = Succ ;
-would return True for all arguments, because the pattern Zero
-would be treated as a variable and it would hence match all values!
-This is a common pitfall in GF.
+Notice: in def definitions, identifier patterns not
+marked as data will be treated as variables.
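+For instance, a sketch assuming a category Bool with constructors True and False:

```
fun isZero : Nat -> Bool ;
def
  isZero Zero = True ;
  isZero _    = False ;
-- if Zero is not declared as data, the pattern Zero is read as a
-- variable matching any value, so isZero would return True
-- for all arguments
```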
-Exercise. Implement an interpreter of a small functional programming + +
+ ++1. Implement an interpreter of a small functional programming language with natural numbers, lists, pairs, lambdas, etc. Use higher-order -abstract syntax with semantic definitions. As onject language, use +abstract syntax with semantic definitions. As concrete syntax, use your favourite programming language.
- -
-We have generalized the cat judgement form and introduced two new forms
-for abstract syntax:
-
| form | -reading | -|
|---|---|---|
cat C G |
-C is a category in context G | -|
def f P1 ... Pn = t |
-function f applied to P1...Pn has value t | -|
data C = C1 | ... | Cn |
-category C has constructors C1...Cn | -|
-The context in the cat judgement has the form
-
- (x1 : T1) ... (xn : Tn) --
-where the types T1 ... Tn may be increasingly dependent. To form a -type, C must be equipped with arguments of each type in the -context, satisfying the dependencies. As syntactic sugar, we have -
-- T G === (x : T) G --
-if x does not occur in G. The linearization type definition of a
-category does not mention the context.
+2. There is no termination checking for def definitions.
+Construct an example that makes type checking loop.
+Type checking can be invoked with put_term -transform=solve.
-In def judgements, the arguments P1...Pn can be constructor and
-variable patterns as well as wild cards, and the binding and
-evaluation rules are the same as here.
+
-A data judgement states that the names on the right-hand side are constructors
-of the category on the left-hand side. The precise types of the constructors are
-given in the fun judgements introducing them; the value type of a constructor
-of C must be of the form C a1 ... am. As syntactic sugar,
-
- data f : A1 ... An -> C a1 ... am === - fun f : A1 ... An -> C a1 ... am ; data C = f ; -- - -
-A dependent function type has the form -
-- (x : A) -> B --
-where B depends on a variable x of type A. We have the -following syntactic sugar: -
-- (x,y : A) -> B === (x : A) -> (y : A) -> B - - (_ : A) -> B === (x : A) -> B if B does not depend on x - - A -> B === (_ : A) -> B --
-A fun function in abstract syntax may have function types as
-argument types. This is called higher-order abstract syntax.
-The linearization of an argument
-
- \z0, ..., zn -> b : (x0 : A1) -> ... -> (xn : An) -> B --
-if formed from the linearization of b* of b by adding -fields that hold the variable symbols: -
-
- b* ** {$0 = "z0" ; ... ; $n = "zn"}
-
--If an argument function is itself a higher-order function, its -bound variables cannot be reached in linearization. Thus, in a sense, -the higher-order abstract syntax of GF is just second-orde abstract syntax. -
--A syntax tree is a well-typed term in beta-eta normal form, which -means that -
-
-Terms that are not in this form may occur as arguments of dependent types
-and in def judgements, but they cannot be linearized.
-
-In this chapter, we will write a grammar for arithmetic expressions as known -from school mathematics and many programming languages. We will see how to -define precedences in GF, how to include built-in integers in grammars, and -how to deal with spaces between tokens in desired ways. As an alternative concrete -syntax, we will generate code for a JVM-like stack machine. We will conclude -by extending the language with variable declarations and assignments, which -are handled in a type-safe way by using higher-order abstract syntax. +Goals:
+-To write grammars for formal languages is usually less challenging than for -natural languages. There are standard tools for this task, such as the YACC -family of parser generators. Using GF would be overkill for many projects, -and come with a penalty in efficiency. However, it is still worth while to -look at this task. A typical application of GF are natural-language interfaces -to formal systems: in such applications, the translation between natural and -formal language can be defined as a multilingual grammar. The use of higher-order -abstract syntax, together with dependent types, provides a way to define a -complete compiler in GF. +
- -
-We want to write a grammar for what is usually called expressions
-in programming languages. The expressions are built from integers by
-the binary operations of addition, subtraction, multiplication, and
-division. The abstract syntax is easy to write. We call it Calculator,
-since it can be used as the basis of a calculator.
+We construct a calculator with addition, subtraction, multiplication, and
+division of integers.
abstract Calculator = {
@@ -6545,9 +4987,9 @@ since it can be used as the basis of a calculator.
}
-Notice the use of the category Int. It is a built-in category of
-integers. Its syntax trees are denoted by integer literals, which are
-sequences of digits. For instance,
+The category Int is a built-in category of
+integers. Its syntax trees are integer literals, i.e.
+sequences of digits:
5457455814608954681 : Int
@@ -6556,35 +4998,15 @@ sequences of digits. For instance,
These are the only objects of type Int:
grammars are not allowed to declare functions with Int as value type.
-
+
+
+
+
Concrete syntax: a simple approach
-Arithmetic expressions should be unambiguous. If we write
-
-
- 2 + 3 * 4
-
-
-it should be parsed as one, but not both, of
-
-
- EPlus (EInt 2) (ETimes (EInt 3) (EInt 4))
- ETimes (EPlus (EInt 2) (EInt 3)) (EInt 4)
-
-
-Under normal conventions, the former is chosen, because
-multiplication has higher precedence than addition.
-If we want to express the latter tree, we have to use
-parentheses:
-
-
- (2 + 3) * 4
-
-
-However, it is not completely trivial to decide when to use
-parentheses and when not. We will therefore begin with a
+We begin with a
concrete syntax that always uses parentheses around binary
-operator applications.
+operator applications:
concrete CalculatorP of Calculator = {
@@ -6604,533 +5026,74 @@ operator applications.
}
-Now we will obtain
+Now we have
> linearize EPlus (EInt 2) (ETimes (EInt 3) (EInt 4))
( 2 + ( 3 * 4 ) )
-The first problem, even more urgent than superfluous parentheses, is
-to get rid of superfluous spaces and to recognize integer literals
-in the parser.
+First problems:
-
++ +
+The input of parsing in GF is not just a string, but a list of -tokens. By default, a list of tokens is obtained from a string -by analysing it into words, which means chunks separated by -spaces. Thus for instance +tokens, returned by a lexer. +
++The default lexer in GF returns chunks separated by spaces:
- "(12 + (3 * 4))" + "(12 + (3 * 4))" ===> "(12", "+", "(3". "*". "4))"
-is split into the tokens -
-- "(12", "+", "(3". "*". "4))" --
-The parser then tries to find each of these tokens among the terminals
-of the grammar, i.e. among the strings that can appear in linearizations.
-In our example, only the tokens "+" and "*" can be found, and
-parsing therefore fails.
-
-The proper way to split the above string into tokens would be +The proper way would be
"(", "12", "+", "(", "3", "*", "4", ")", ")"
-Moreover, the tokens "12", "3", and "4" should not be sought
-among the terminals in the grammar, but treated as integer tokens, which
-are defined outside the grammar. Since GF aims to be fully general, such
-conventions are not built in: it must be possible for a grammar to have
-tokens such as "12" and "12)". Therefore, GF has a way to select
-a lexer, a function that splits strings into tokens and classifies
-them into terminals, literalts, etc.
+Moreover, the tokens "12", "3", and "4" should be recognized as
+integer literals - they cannot be found in the grammar.
-A lexer can be given as a flag to the parsing command: +We choose a proper lexer with a flag:
> parse -cat=Exp -lexer=codelit "(2 + (3 * 4))"
EPlus (EInt 2) (ETimes (EInt 3) (EInt 4))
-Since the lexer is usually a part of the language specification, it -makes sense to put it in the concrete syntax by using the judgement +We could also put the flag into the grammar (concrete syntax):
flags lexer = codelit ;
-The problem of getting correct spacing after linearization is likewise solved -by an unlexer: +In linearization, we use a corresponding unlexer:
> l -unlexer=code EPlus (EInt 2) (ETimes (EInt 3) (EInt 4))
(2 + (3 * 4))
--Also this flag is usually put into the concrete syntax file. -
--The lexers and unlexers that are available in the GF system can be -seen by -
-- > help -lexer - > help -unlexer --
-A table of the most common lexers and unlexers is given in the Summary -section 7.8. -
- --Here is a summary of the usual -precedence rules in mathematics and programming languages: -
-1 + 2 + 3 means the same as (1 + 2) + 3.
--One way of dealing with precedences in compiler books is by dividing expressions -into three categories: -
--The context-free grammar, also taking care of associativity, is the following: -
-
- Exp ::= Exp "+" Term | Exp "-" Term | Term ;
- Term ::= Term "*" Fact | Term "/" Fact | Fact ;
- Fact ::= Int | "(" Exp ")" ;
-
--A compiler, however, does not want to make a semantic distinction between the -three categories. Nor does it want to build syntax trees with the -coercions that enable the use of a higher level expressions on a lower, and -encode the use of parentheses. In compiler tools such as YACC, building abstract -syntax trees is performed as a semantic action. For instance, if the parser -recognizes an expression in parentheses, the action is to return only the -expression, without encoding the parentheses. -
-
-In GF, semantic actions could be encoded by using def definitions introduced
-here. But there is a more straightforward way of thinking about
-precedences: we introduce a parameter for precedence, and treat it as
-an inherent feature of expressions:
-
- oper
- param Prec = Ints 2 ;
- TermPrec : Type = {s : Str ; p : Prec} ;
-
- mkPrec : Prec -> Str -> TermPrec = \p,s -> {s = s ; p = p} ;
-
- lincat
- Exp = TermPrec ;
-
-
-This example shows another way to use built-in integers in GF:
-the type Ints 2 is a parameter type, whose values are the integers
-0,1,2. These are the three precedence levels we need. The main idea
-is to compare the inherent precedence of an expression with the context
-in which it is used. If the precedence is higher than or equal to
-the expected, then
-no parentheses are needed. Otherwise they are. We encode this rule in
-the operation
-
- oper usePrec : TermPrec -> Prec -> Str = \x,p ->
- case lessPrec x.p p of {
- True => "(" x.s ")" ;
- False => x.s
- } ;
-
--With this operation, we can build another one, that can be used for -defining left-associative infix expressions: -
-- infixl : Prec -> Str -> (_,_ : TermPrec) -> TermPrec = \p,f,x,y -> - mkPrec p (usePrec x p ++ f ++ usePrec y (nextPrec p)) ; --
-Constant-like expressions (the highest level) can be built simply by -
-- constant : Str -> TermPrec = mkPrec 2 ; --
-All these operations can be found in the library module lib/prelude/Formal,
-so we don't have to define them in our own code. Also the auxiliary operations
-nextPrec and lessPrec used in their definitions are defined there.
-The library has 5 levels instead of 3.
-
-Now we can express the whole concrete syntax of Calculator compactly:
-
- concrete CalculatorC of Calculator = open Formal, Prelude in {
-
- flags lexer = codelit ; unlexer = code ; startcat = Exp ;
-
- lincat Exp = TermPrec ;
-
- lin
- EPlus = infixl 0 "+" ;
- EMinus = infixl 0 "-" ;
- ETimes = infixl 1 "*" ;
- EDiv = infixl 1 "/" ;
- EInt i = constant i.s ;
- }
-
-
-Let us just take one more look at the operation usePrec, which decides whether
-to put parentheses around a term or not. The case where parentheses are not needed
-around a string was defined as the string itself.
-However, this would imply that superfluous parentheses
-are never correct. A more liberal grammar is obtained by using the operation
-
- parenthOpt : Str -> Str = \s -> variants {s ; "(" ++ s ++ ")"} ;
-
-
-which is actually used in the Formal library.
-But even in this way, we can only allow one pair of superfluous parentheses.
-Thus the parameter-based grammar has not quite reached the goal
-of implementing the same language as the expression-term-factor grammar.
-But it has the advantage of eliminating precedence distinctions from the
-abstract syntax.
-
-Exercise. Define non-associative and right-associative infix operations
-analogous to infixl.
-
-Exercise. Add a constructor that puts parentheses around expressions
-to raise their precedence, but that is eliminated by a def definition.
-Test parsing with and without a pipe to pt -transform=compute.
-
-The classical use of grammars of programming languages is in compilers, -which translate one language into another. Typically the source language of -a compiler is a high-level language and the target language is a machine -language. The hub of a compiler is abstract syntax: the front end of -the compiler parses source language strings into abstract syntax trees, and -the back end linearizes these trees into the target language. This processing -model is of course what GF uses for natural language translation; the main -difference is that, in GF, the compiler could run in the opposite direction as -well, that is, function as a decompiler. (In full-size compilers, the -abstract syntax is also transformed by several layers of semantic analysis -and optimizations, before the target code is generated; this can destroy -reversibility and hence decompilation.) -
-
-More for the sake of illustration
-than as a serious compiler, let us write a concrete
-syntax of Calculator that generates machine code similar to JVM (Java Virtual
-Machine). JVM is a so-called stack machine, whose code follows the
-postfix notation, also known as reverse Polish notation. Thus the
-expression
-
- 2 + 3 * 4 --
-is translated to -
-- iconst 2 : iconst 3 ; iconst 4 ; imul ; iadd --
-The linearization rules are not difficult to give: -
-
- lin
- EPlus = postfix "iadd" ;
- EMinus = postfix "isub" ;
- ETimes = postfix "imul" ;
- EDiv = postfix "idiv" ;
- EInt i = ss ("iconst" ++ i.s) ;
- oper
- postfix : Str -> SS -> SS -> SS = \op,x,y ->
- ss (x.s ++ ";" ++ y.s ++ ";" ++ op) ;
-
-
--Natural languages have sometimes difficulties in expressing mathematical -formulas unambiguously, because they have no universal device of parentheses. -For arithmetic formulas, a solution exists: -
-- 2 + 3 * 4 --
-can be expressed -
-- the sum of 2 and the product of 3 and 4 --
-However, this format is very verbose and unnatural, and becomes -impossible to understand when the complexity of expressions grows. -Fortunately, spoken language -has a very nice way of using pauses for disambiguation. This device was -introduced by Torbjörn Lager (personal communication, 2003) -as an input mechanism to a calculator dialogue -system; it seems to correspond very closely to how we actually speak when we -want to communicate arithmetic expressions. Another application would be as -a part of a programming assistant that reads aloud code. -
-
-The idea is that, after every completed operation, there is a pause. Try this
-by speaking aloud the following lines, making a pause instead of pronouncing the
-word PAUSE:
-
- 2 plus 3 times 4 PAUSE - 2 plus 3 PAUSE times 4 PAUSE --
-A grammar implementing this convention is again simple to write: -
-- lin - EPlus = infix "plus" ; - EMinus = infix "minus" ; - ETimes = infix "times" ; - EDiv = infix ["divided by"] ; - EInt i = i ; - oper - infix : Str -> SS -> SS -> SS = \op,x,y -> - ss (x.s ++ op ++ y.s ++ "PAUSE") ; --
-Intuitively, a pause is taken to give the hearer time to compute an -intermediate result. -
--Exercise. Is the pause-based grammar unambiguous? Test with random examples! -
- -
-A useful extension of arithmetic expressions is a straight code programming
-language. The programs of this language are assignments of the form x = exp,
-which assign expressions to variables. Expressions can moreover contain variables
-that have been given values in previous assignments.
-
-In this language, we use two new categories: programs and variables. -A program is a sequence of assignments, where a variable is given a value. -Logically, we want to distinguish initializations from other assignments: -these are the assignments where a variable is given a value for the first time. -Just like in C-like languages, -we prefix an initializing assignment with the type of the variable. -Here is an example of a piece of code written in the language: -
-- int x = 2 + 3 ; - int y = x + 1 ; - x = x + 9 * y ; --
-We define programs by the following constructors: -
-- fun - PEmpty : Prog ; - PInit : Exp -> (Var -> Prog) -> Prog ; - PAss : Var -> Exp -> Prog -> Prog ; --
-The interesting constructor is PInit, which uses
-higher-order abstract syntax for making the initialized variable available in
-the continuation of the program. The abstract syntax tree for the above code
-is
-
- PInit (EPlus (EInt 2) (EInt 3)) (\x -> - PInit (EPlus (EVar x) (EInt 1)) (\y -> - PAss x (EPlus (EVar x) (ETimes (EInt 9) (EVar y))) - PEmpty)) --
-Since we want to prevent the use of uninitialized variables in programs, we
-don't give any constructors for Var! We just have a rule for using variables
-as expressions:
-
- fun EVar : Var -> Exp ; --
-The rest of the grammar is just the same as for arithmetic expressions
-here. The best way to implement it is perhaps by writing a
-module that extends the expression module. The most natural start category
-of the extension is Prog.
-
-Exercise. Extend the straight-code language to expressions of type float.
-To guarantee type safety, you can define a category Typ of types, and
-make Exp and Var dependent on Typ. Basic floating point expressions
-can be formed from literal of the built-in GF type Float. The arithmetic
-operations should be made polymorphic (as here).
-
-We can define a C-like concrete syntax by using GF's $ variables, as explained
-here.
-
-In a JVM-like syntax, we need two more instructions: iload x, which
-loads (pushes on the stack) the value of the variable x, and istore x,
-which stores the value of the currently topmost expression in the variable x.
-Thus the code for the example in the previous section is
-
- iconst 2 ; iconst 3 ; iadd ; istore x ; - iload x ; iconst 1 ; iadd ; istore y ; - iload x ; iconst 9 ; iload y ; imul ; iadd ; istore x ; --
-Those familiar with JVM will notice that we are using symbolic addresses, i.e. -variable names instead of integer offsets in the memory. Neither real JVM nor -our variant makes any distinction between the initialization and reassignment -of a variable. -
--Exercise. Finish the implementation of the -C-to-JVM compiler by extending the expression modules -to straight code programs. -
-
-Exercise. If you made the exercise of adding floating point numbers to
-the language, you can now cash out the main advantage of type checking
-for code generation: selecting type-correct JVM instructions. The floating
-point instructions are precisely the same as the integer one, except that
-the prefix is f instead of i, and that fconst takes floating
-point literals as arguments.
-
-In many applications, the task of GF is just linearization and parsing; -keeping track of bound variables and other semantic constraints is -the task of other parts of the program. For instance, if we want to -write a natural language interface that reads aloud C code, we can -quite as well use a context-free grammar of C, and leave it to the C -compiler to check that variables make sense. In such a program, we may -want to treat variables as Strings, i.e. to have a constructor -
-- fun VString : String -> Var ; --
-The built-in category String has as its values string literals,
-which are strings in double quotes. The lexer and unlexer codelit
-restore and remove the quotes; when the lexer finds a token that is
-neither a terminal in the grammar nor an integer literal, it sends
-it to the parser as a string literal.
-
-Exercise. Write a grammar for straight code without higher-order -abstract syntax. -
-
-Exercise. Extend the liberal straight code grammar to while loops and
-some other program constructs, and investigate if you can build a reasonable spoken
-language generator for this fragment.
-
Since formal languages are syntactically simpler than natural languages, it is no wonder that their grammars can be defined in GF. Some thought is needed for dealing with precedences and spacing, but much of this is encoded in GF's libraries and built-in lexers and unlexers. If the sole purpose of a grammar is to implement a programming language, then the BNF Converter tool (BNFC) is more appropriate than GF:

  www.cs.chalmers.se/~markus/BNFC/

The most common applications of GF grammars of formal languages are natural-language interfaces of various kinds. These systems don't usually need semantic control in the GF abstract syntax. However, the situation can be different if the interface also comprises an interactive syntax editor, as in the GF-Key system (Beckert & al. 2006, Burke & Johannisson 2005). In that system, the editor is used for guiding programmers to write only type-correct code.
The technique of continuations in modelling programming languages has recently been applied to natural language, for processing anaphoric reference, e.g. pronouns. It may be good to know that GF has the machinery available; for the time being, however (GF 2.8), dependent types and higher-order abstract syntax are not supported by the embedded GF implementations in Haskell and Java.
Exercise. The book C programming language by Kernighan and Ritchie (p. 123, 2nd edition, 1988) describes an English-like syntax for pointer and array declarations, and a C program for translating between English and C. The following example pair shows all the expression forms needed:

  char (*(*x[3])())[5]

  x: array[3] of pointer to function returning
     pointer to array[5] of char

Implement these translations by a GF grammar.
Exercise. Design a natural-language interface to Unix command lines. It should be able to express verbally commands such as cat, cd, grep, ls, mv, rm, and wc, and also pipes built from them.
Lexers are set by the flag lexer, and unlexers by the flag unlexer.
There are three built-in types. Their syntax trees are literals of the corresponding kinds:

- Int, with nonnegative integer literals, e.g. 987031434
- Float, with nonnegative floating-point literals, e.g. 907.219807
- String, with string literals, e.g. "foo"

Their linearization type is uniformly {s : Str}.

Arithmetic expressions should be unambiguous. If we write

  2 + 3 * 4

it should be parsed as one, but not both, of

  EPlus (EInt 2) (ETimes (EInt 3) (EInt 4))
  ETimes (EPlus (EInt 2) (EInt 3)) (EInt 4)

We choose the former tree, because multiplication has higher precedence than addition. To express the latter tree, we have to use parentheses:

  (2 + 3) * 4

The usual precedence rules also resolve repeated operators: 1 + 2 + 3 means the same as (1 + 2) + 3, i.e. addition is left-associative.
Precedence can be made into an inherent feature of expressions:

  oper
    Prec : PType = Ints 2 ;
    TermPrec : Type = {s : Str ; p : Prec} ;

    mkPrec : Prec -> Str -> TermPrec = \p,s -> {s = s ; p = p} ;

  lincat
    Exp = TermPrec ;

Notice Ints 2: a parameter type, whose values are the integers 0, 1, 2.
To use precedence levels, we compare the inherent precedence of an expression with the precedence expected by the context. This idea is encoded in the operation

  oper usePrec : TermPrec -> Prec -> Str = \x,p ->
    case lessPrec x.p p of {
      True  => "(" ++ x.s ++ ")" ;
      False => x.s
    } ;

(We use lessPrec from lib/prelude/Formal.)
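The same mechanism can be sketched in plain Haskell (illustrative names, not part of any GF library): an expression carries an inherent precedence, and it is wrapped in parentheses exactly when that precedence is lower than the one expected by the context.

```haskell
-- A plain Haskell analogue of TermPrec/usePrec (illustrative names,
-- not part of any GF library).
type Prec = Int

data TermPrec = TermPrec { str :: String, prec :: Prec }

mkPrec :: Prec -> String -> TermPrec
mkPrec p s = TermPrec s p

-- Parenthesize when the inherent precedence is lower than expected.
usePrec :: TermPrec -> Prec -> String
usePrec x p
  | prec x < p = "(" ++ str x ++ ")"
  | otherwise  = str x

-- Left-associative infix: the right operand expects one level higher
-- (the role of nextPrec), so a same-level right operand gets parentheses.
-- (Named infixl' because infixl is a Haskell keyword.)
infixl' :: Prec -> String -> TermPrec -> TermPrec -> TermPrec
infixl' p f x y =
  mkPrec p (usePrec x p ++ " " ++ f ++ " " ++ usePrec y (p + 1))

constant :: String -> TermPrec
constant = mkPrec 2

main :: IO ()
main = do
  putStrLn (str (infixl' 0 "+" (constant "2")
                   (infixl' 1 "*" (constant "3") (constant "4"))))
  putStrLn (str (infixl' 1 "*"
                   (infixl' 0 "+" (constant "2") (constant "3"))
                   (constant "4")))
```

The first expression prints as 2 + 3 * 4, without parentheses; the second prints as (2 + 3) * 4, because the lower-precedence sum occurs under multiplication.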
We can define left-associative infix expressions:

  infixl : Prec -> Str -> (_,_ : TermPrec) -> TermPrec = \p,f,x,y ->
    mkPrec p (usePrec x p ++ f ++ usePrec y (nextPrec p)) ;

Constant-like expressions occupy the highest level:

  constant : Str -> TermPrec = mkPrec 2 ;

All these operations can be found in lib/prelude/Formal, which has 5 levels.
Now we can write the whole concrete syntax of Calculator compactly:

  concrete CalculatorC of Calculator = open Formal, Prelude in {

    flags lexer = codelit ; unlexer = code ; startcat = Exp ;

    lincat Exp = TermPrec ;

    lin
      EPlus  = infixl 0 "+" ;
      EMinus = infixl 0 "-" ;
      ETimes = infixl 1 "*" ;
      EDiv   = infixl 1 "/" ;
      EInt i = constant i.s ;
  }
Exercises

1. Define non-associative and right-associative infix operations analogous to infixl.

2. Add a constructor that puts parentheses around expressions to raise their precedence, but that is eliminated by a def definition. Test parsing with and without a pipe to pt -transform=compute.
Translating arithmetic (infix) to JVM (postfix):

  2 + 3 * 4

  ===>

  iconst 2 ; iconst 3 ; iconst 4 ; imul ; iadd
Just give linearization rules for JVM:

  lin
    EPlus  = postfix "iadd" ;
    EMinus = postfix "isub" ;
    ETimes = postfix "imul" ;
    EDiv   = postfix "idiv" ;
    EInt i = ss ("iconst" ++ i.s) ;
  oper
    postfix : Str -> SS -> SS -> SS = \op,x,y ->
      ss (x.s ++ ";" ++ y.s ++ ";" ++ op) ;
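This is ordinary postfix (reverse Polish) code generation: code for the operands first, then the operator. A self-contained Haskell sketch of the same scheme (illustrative datatype mirroring the abstract syntax, not GF's own trees):

```haskell
import Data.List (intercalate)

-- Postfix code generation for the arithmetic abstract syntax:
-- compile the operands, then emit the operator, mirroring the
-- GF postfix oper above.
data Exp = EInt Int
         | EPlus Exp Exp | EMinus Exp Exp
         | ETimes Exp Exp | EDiv Exp Exp

compile :: Exp -> [String]
compile (EInt i)     = ["iconst " ++ show i]
compile (EPlus  x y) = post "iadd" x y
compile (EMinus x y) = post "isub" x y
compile (ETimes x y) = post "imul" x y
compile (EDiv   x y) = post "idiv" x y

post :: String -> Exp -> Exp -> [String]
post op x y = compile x ++ compile y ++ [op]

main :: IO ()
main = putStrLn (intercalate " ; " (compile e))
  where e = EPlus (EInt 2) (ETimes (EInt 3) (EInt 4))
```

Running it prints iconst 2 ; iconst 3 ; iconst 4 ; imul ; iadd, the same code as the GF linearization produces for 2 + 3 * 4.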
A straight-code programming language has initializations and assignments:

  int x = 2 + 3 ;
  int y = x + 1 ;
  x = x + 9 * y ;

We define programs by the following constructors:

  fun
    PEmpty : Prog ;
    PInit  : Exp -> (Var -> Prog) -> Prog ;
    PAss   : Var -> Exp -> Prog -> Prog ;

PInit uses higher-order abstract syntax to make the initialized variable available in the continuation of the program. The abstract syntax tree for the above code is

  PInit (EPlus (EInt 2) (EInt 3)) (\x ->
    PInit (EPlus (EVar x) (EInt 1)) (\y ->
      PAss x (EPlus (EVar x) (ETimes (EInt 9) (EVar y)))
        PEmpty))

No uninitialized variables are allowed: there are no constructors for Var!
But we do have the rule

  fun EVar : Var -> Exp ;

The rest of the grammar is just the same as for arithmetic expressions here. The best way to implement it is perhaps by writing a module that extends the expression module. The most natural start category of the extension is Prog.
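To see what the higher-order constructors mean operationally, here is a plain Haskell model of the straight-code language together with an evaluator (all names illustrative; GF itself does not evaluate programs). A bound variable is represented by its index in the store, so PInit passes a fresh index to its continuation, just as the HOAS binder does.

```haskell
-- A Haskell model of the straight-code abstract syntax (a sketch).
type Var = Int

data Exp  = EInt Int | EVar Var | EPlus Exp Exp | ETimes Exp Exp
data Prog = PEmpty
          | PInit Exp (Var -> Prog)    -- initialize a fresh variable
          | PAss Var Exp Prog          -- assign to an existing one

evalE :: [Int] -> Exp -> Int
evalE env e = case e of
  EInt n     -> n
  EVar v     -> env !! v
  EPlus a b  -> evalE env a + evalE env b
  ETimes a b -> evalE env a * evalE env b

-- Run a program, returning the final store of variable values.
run :: [Int] -> Prog -> [Int]
run env PEmpty      = env
run env (PInit e k) = run (env ++ [evalE env e]) (k (length env))
run env (PAss v e p) =
  let (pre, _:post) = splitAt v env
  in  run (pre ++ [evalE env e] ++ post) p

-- The example program: int x = 2+3; int y = x+1; x = x + 9*y;
example :: Prog
example =
  PInit (EPlus (EInt 2) (EInt 3)) (\x ->
  PInit (EPlus (EVar x) (EInt 1)) (\y ->
  PAss x (EPlus (EVar x) (ETimes (EInt 9) (EVar y)))
  PEmpty))

main :: IO ()
main = print (run [] example)
```

Running the program yields the final store [59,6]: x is initialized to 5, y to 6, and x is then reassigned to 5 + 9 * 6 = 59.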
Exercises

1. Define a C-like concrete syntax of the straight-code language.

2. Extend the straight-code language to expressions of type float. To guarantee type safety, you can define a category Typ of types, and make Exp and Var dependent on Typ. Basic floating-point expressions can be formed from literals of the built-in GF type Float. The arithmetic operations should be made polymorphic (as here).
3. Extend JVM generation to the straight-code language, using two more instructions:

- iload x, which loads the value of the variable x
- istore x, which stores a value into the variable x

Thus the code for the example in the previous section is

  iconst 2 ; iconst 3 ; iadd ; istore x ;
  iload x ; iconst 1 ; iadd ; istore y ;
  iload x ; iconst 9 ; iload y ; imul ; iadd ; istore x ;
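As a plain Haskell sketch of this compilation scheme (illustrative datatypes mirroring the abstract syntax; fresh variable names x0, x1, ... are generated in binding order, corresponding to x and y in the example):

```haskell
import Data.List (intercalate)

-- Sketch: compiling the straight-code abstract syntax to JVM-like
-- instructions. A bound variable is a number, printed as x0, x1, ...
type Var = Int

data Exp  = EInt Int | EVar Var | EPlus Exp Exp | ETimes Exp Exp
data Prog = PEmpty
          | PInit Exp (Var -> Prog)
          | PAss Var Exp Prog

compileE :: Exp -> [String]
compileE (EInt n)     = ["iconst " ++ show n]
compileE (EVar v)     = ["iload x" ++ show v]
compileE (EPlus a b)  = compileE a ++ compileE b ++ ["iadd"]
compileE (ETimes a b) = compileE a ++ compileE b ++ ["imul"]

-- The Int argument is the next fresh variable index.
compileP :: Int -> Prog -> [String]
compileP _ PEmpty       = []
compileP n (PInit e k)  =
  compileE e ++ ["istore x" ++ show n] ++ compileP (n + 1) (k n)
compileP n (PAss v e p) =
  compileE e ++ ["istore x" ++ show v] ++ compileP n p

-- int x = 2+3; int y = x+1; x = x + 9*y;
example :: Prog
example =
  PInit (EPlus (EInt 2) (EInt 3)) (\x ->
  PInit (EPlus (EVar x) (EInt 1)) (\y ->
  PAss x (EPlus (EVar x) (ETimes (EInt 9) (EVar y)))
  PEmpty))

main :: IO ()
main = putStrLn (intercalate " ; " (compileP 0 example))
```

The output matches the instruction sequence shown above, with x0 and x1 in place of x and y.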
4. If you made the exercise of adding floating-point numbers to the language, you can now cash out the main advantage of type checking for code generation: selecting type-correct JVM instructions. The floating-point instructions are precisely the same as the integer ones, except that the prefix is f instead of i, and that fconst takes floating-point literals as arguments.
GF grammars can be used as parts of programs written in other programming languages, such as Haskell and Java. In this chapter, we will show the basic ways of producing such embedded grammars and using them in Haskell, Java, and JavaScript programs, and we will build a simple example application in each language. Moreover, we will show how grammar applications can be extended to spoken language by generating language models for speech recognition in various standard formats.
The portable format is called GFCC, "GF Canonical Compiled". A GFCC file can be produced in GF by the command

  > print_multi -printer=gfcc | write_file FILE.gfcc

Files written in this format can also be imported in the GF system, which recognizes the suffix .gfcc and builds the multilingual grammar in memory.
This applies to GF version 3 and upwards; older GF used a format suffixed .gfcm, and at the moment of writing, the Java interpreter still uses the GFCM format.

GFCC is the recommended format in which final grammar products are distributed, because they are stripped of superfluous information and can be started and applied faster than sets of separate modules. Application programmers never have any need to read or modify GFCC files; GFCC thus plays the same role as machine code in general-purpose programming (or bytecode in Java).

The interpreter is a kind of miniature GF system, which can parse and linearize with grammars, but can only perform a subset of the commands of the GF system. For instance, it cannot compile source grammars into the GFCC format: the compiler is the most heavy-weight component of the GF system, and should not be carried around in end-user applications. Since GFCC is much simpler than source GF, building an interpreter is relatively easy. Full-scale interpreters currently exist in Haskell and Java, and partial ones in C++, JavaScript, and Prolog. In this chapter we focus on Haskell, Java, and JavaScript. Application programmers never need to read or modify the interpreter either; they only need to access it via its API.
Readers unfamiliar with Haskell, or who just want to program in Java, can safely skip this section; everything will be repeated in the corresponding Java section. However, seeing the Haskell code may still be helpful, because Haskell is in many ways closer to GF than Java is. In particular, recursive types of syntax trees and pattern matching over them are very similar in Haskell and GF, but require a complex encoding with classes and visitors in Java.

The Haskell API contains (among other things) types and functions for reading grammars, parsing, and linearization; file2grammar, parseAllLang, linearize, and startCat are the ones used below. The API is the only module that needs to be imported in the Haskell application. It is available as part of the GF distribution, in the file src/GF/GFCC/API.hs.
Let us first build a stand-alone translator, which can translate in any multilingual grammar between any languages in the grammar. The whole code for this translator is in one module:

  module Main where
  ...

To run the translator, first compile it by

  % ghc --make -o trans Translator.hs

Producing GFCC for the translator

Then produce a GFCC file. For instance, the Food grammar set can be compiled as follows:

  % echo "pm -printer=gfcc | wf Food.gfcc" | gf FoodEng.gf FoodIta.gf

This produces the file Food.gfcc (its name comes from the abstract syntax).
Equivalently, the grammars could be read into the GF shell and the pm command issued from there. But the Unix command has the advantage that it can be put into a Makefile to automate the compilation of an application.

The Haskell library function interact makes the trans program work like a Unix filter, which reads from standard input and writes to standard output. Therefore it can be part of a pipe, and read and write files. The simplest way to translate is to echo input to the program.
The result is given in all languages except the input language.
If the user wants to translate many expressions in a sequence, it is cumbersome to start the translator over and over again, because reading the grammar and building the parser always takes time. The translator of the previous section is easy to modify to enable this: just change interact in the main function to loop. It is not a standard Haskell function, so its definition has to be included:

  loop :: (String -> String) -> IO ()
  loop trans = do
    s <- getLine
    if s == "quit" then putStrLn "bye" else do
      putStrLn $ trans s
      loop trans

The loop keeps on translating line by line until the input line is quit.
A question-answer system

The next application is also a translator, but it adds a transfer component to the grammar. Transfer is a function that takes the input syntax tree into some other syntax tree, which is then linearized and shown back to the user. The transfer function we are going to use is one that computes a question into an answer. The program accepts simple questions about arithmetic and answers "yes" or "no" in the language in which the question was made.
The main change needed in the pure translator is to give the translate function a transfer function as an extra argument:

  translate :: (Tree -> Tree) -> MultiGrammar -> String -> String

You can think of ordinary translation as the special case where the transfer is the identity function (id in Haskell). Also, the behaviour of returning the reply in all languages should be changed, so that the reply is returned in the same language as the question. Here is the complete definition of translate in the new form:

  translate tr gr s = case parseAllLang gr (startCat gr) s of
    (lg,t:_):_ -> linearize gr lg (tr t)
    _          -> "NO PARSE"
To complete the system, we have to define the transfer function. So, how can we write a function from abstract syntax trees to abstract syntax trees? The embedded API does not make the constructors of the type Tree available to users. Even if it did, the type would be quite complicated to use, and programs would be likely to produce trees that are ill-typed in GF and therefore cannot be linearized.

The way to go in defining transfer is to use GF's tree constructors, that is, the fun functions, as if they were Haskell's data constructors. There is enough resemblance between GF and Haskell to make this possible in most cases; it is even possible in Java, as we shall see later. Thus every category of GF is translated into a Haskell datatype, where the functions producing a value of that category are treated as constructors. The translation is obtained by using the batch compiler with the command

  % gfc -haskell Food.gfcc

which generates a module named GSyntax.
As an example, we take the grammar we are going to use for queries. The abstract syntax is

  abstract Math = {

    flags startcat = Question ;

    cat
      Answer ; Question ; Object ;

    fun
      Even : Object -> Question ;
      ...
      Yes : Answer ;
      No  : Answer ;
  }
It is translated to the following system of datatypes:

  newtype GInt = GInt Integer
  ...

All type and constructor names are prefixed with a G to prevent clashes. Now it is possible to define functions from and to these datatypes in Haskell.
Haskell's type checker guarantees that the functions are well-typed also with respect to GF. Here is a question-to-answer function for this language:

  answer :: GQuestion -> GAnswer
  ...

  test :: (Int -> Bool) -> GObject -> GAnswer
  test f x = if f (value x) then GYes else GNo

So it is the function answer that we want to apply as the transfer. The only problem is the type of this function: the parsing and linearization methods of the API work with Trees, and not with GQuestions and GAnswers.
Fortunately, the Haskell translation of GF takes care of translating between trees and the generated datatypes. This is done by using a class with the required translation methods:

  class Gf a where
    gf :: a -> Tree
    fg :: Tree -> a

The Haskell code generator also generates instances of this class for each datatype, for example,

  instance Gf GQuestion where
    gf (GEven x1) = DTr [] (AC (CId "Even")) [gf x1]
    gf (GOdd x1)  = DTr [] (AC (CId "Odd")) [gf x1]
    ...
      _ -> error ("no Question " ++ show t)

Needless to say, GSyntax is a module that a programmer never needs to look into, let alone change. It is enough to know that it contains a systematic encoding and decoding between an abstract syntax and Haskell datatypes, where

- G is the prefix of all Haskell type and constructor names
- gf translates from Haskell to GF
- fg translates from GF to Haskell
The Main module, given in full below, just needs a function transfer from a module TransferDef in order to compile. In the current application, this module looks as follows:
  module TransferDef where
  ...
    sieve (p:xs) = p : sieve [ n | n <- xs, n `mod` p > 0 ]
    sieve []     = []
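The arithmetic side of the transfer can be tried out separately in plain Haskell. Here is a self-contained sketch of the kind of number predicate such a transfer uses (the wiring to GSyntax trees is as in the module above; isPrime is a hypothetical helper name):

```haskell
-- Number predicates for the question-answer transfer, in isolation.
-- The sieve is the one from TransferDef above.
primes :: [Int]
primes = sieve [2 ..]
  where
    sieve (p:xs) = p : sieve [ n | n <- xs, n `mod` p > 0 ]
    sieve []     = []

isPrime :: Int -> Bool
isPrime n = n `elem` takeWhile (<= n) primes

main :: IO ()
main = print (filter isPrime [1 .. 20])
```

Running it prints [2,3,5,7,11,13,17,19]; in the full system, the result of such a predicate is mapped to GYes or GNo.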
This module, in turn, needs GSyntax to compile, and the main module needs Math.gfcc to run.
Here is the complete code of the main module, in the Haskell file TransferLoop.hs:

  module Main where

  import GF.GFCC.API
  import TransferDef (transfer)

  main :: IO ()
  main = do
    gr <- file2grammar "Math.gfcc"
    loop (translate transfer gr)

  loop :: (String -> String) -> IO ()
  loop trans = do
    s <- getLine
    if s == "quit" then putStrLn "bye" else do
      putStrLn $ trans s
      loop trans

  translate :: (Tree -> Tree) -> MultiGrammar -> String -> String
  translate tr gr s = case parseAllLang gr (startCat gr) s of
    (lg,t:_):_ -> linearize gr lg (tr t)
    _          -> "NO PARSE"
To automate the production of the system, we write a Makefile as follows:

  all:
  ...
    strip math

(Notice that the whitespace starting the command lines in a Makefile must consist of tabs.) Now we can compile the whole system by just typing
  make

Then you can run it by typing

  ./math

You will of course need some concrete syntaxes of Math in order to succeed; we have defined ours by using the resource functor design pattern, as explained here.

Just to summarize, the source of the application consists of the following files:

  ...
  TransferLoop.hs  -- Haskell Main module
When an API for GFCC in Java is available, we will write the same applications in Java as were written in Haskell above. Until then, we will build another kind of application, which does not require modification of generated Java code. More information on embedded GF grammars in Java can be found in the document

  www.cs.chalmers.se/~bringert/gf/gf-java.html

by Björn Bringert.
A Java system needs many more files than a Haskell system. To get started, fetch the package gfc2java from

  www.cs.chalmers.se/~bringert/darcs/gfc2java/

by using the Darcs version control system, as described on the gf-java home page.

The gfc2java package contains a script build-translet, which can be applied to any .gfcm file to create a translet, a small translation GUI. For the Food grammars of the third chapter, we first create a file food.gfcm by
@@ -7769,14 +5932,20 @@ The resulting file translate-food.jar can be run with
The translet looks like this:
-
+
-
Dialogue systems in Java

A question-answer system is a special case of a dialogue system, where the user and the computer communicate by writing or, even more properly, by speech. The gf-java home page provides an example of the most simple dialogue system imaginable, where the conversation has just two rules:
The conversation can be made in both English and Swedish; the user's initiative decides which language the system replies in. Thus the structure is very similar to the math program here. The GF and Java sources of the program can be found at

  www.cs.chalmers.se/~bringert/darcs/simpledemo

again accessible with the Darcs version control system.
The standard way of using GF in speech recognition is by building grammar-based language models. To this end, GF comes with compilers into several formats that are used in speech recognition systems. One such format is GSL, the format used in the Nuance speech recognizer. It is produced from GF simply by printing a grammar with the flag -printer=gsl. The following example uses the smart house grammar defined here:

  > import -conversion=finite SmartEng.gf
  ...
  SmartEng_5 "fan"
  SmartEng_6 "light"
Other formats are available via the -printer flag; all currently available formats can be seen in gf with help -printer.
We have used dependent types to control semantic well-formedness in grammars. This is important in traditional type-theory applications such as proof assistants, where only mathematically meaningful formulas should be constructed. But semantic filtering has also proved important in speech recognition, because it reduces the ambiguity of the results.

Now, GSL is a context-free format, so how does it cope with dependent types? In general, dependent types can give rise to infinitely many basic types (exercise!), whereas a context-free grammar can by definition only have finitely many nonterminals.
This is where the flag -conversion=finite is needed in the import command. Its effect is to convert a GF grammar with dependent types to one without, so that each instance of a dependent type is replaced by an atomic type, which can then be used as a nonterminal in a context-free grammar. The finite conversion presupposes that every dependent type has only finitely many instances, which is in fact the case in the Smart grammar.
Exercise. If you have access to the Nuance speech recognizer, test it with GF-generated language models for SmartEng. Do this both with and without -conversion=finite.

Exercise. Construct an abstract syntax with infinitely many instances of dependent types.
An alternative to grammar-based language models is statistical language models (SLMs). An SLM is built from a corpus, i.e. a set of utterances. It specifies the probability of each n-gram, i.e. sequence of n words. The typical value of n is 2 (bigrams) or 3 (trigrams).
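To make the notion of n-grams concrete, here is a small Haskell sketch that counts bigrams in a toy corpus (illustrative only, with made-up sentences; real SLM toolkits also smooth the counts into probabilities):

```haskell
import Data.List (group, sort)

-- Bigrams of a sentence: each word paired with its successor.
bigrams :: [String] -> [(String, String)]
bigrams ws = zip ws (drop 1 ws)

-- Count equal items by sorting and grouping.
counts :: Ord a => [a] -> [(a, Int)]
counts xs = [ (head g, length g) | g <- group (sort xs) ]

main :: IO ()
main = print (counts (concatMap (bigrams . words) corpus))
  where corpus = ["dim the light", "dim the fan"]
```

Running it shows that the bigram ("dim","the") occurs twice, while ("the","light") and ("the","fan") occur once each; an SLM turns such counts into probabilities.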
One advantage of SLMs over grammar-based models is that they are robust: they can be used to recognize sequences that would be out of the grammar or the corpus. Another advantage is that an SLM can be built "for free" if a corpus is available.
However, collecting a corpus can require a lot of work, and writing a grammar can be less demanding, especially with tools such as GF or Regulus. This advantage of grammars can be combined with robustness by creating a back-up SLM from a synthesized corpus, which simply means that the grammar is used for generating the corpus. In GF, this can be done with the generate_trees command. As with grammar-based models, the quality of the SLM is better if meaningless utterances are excluded from the corpus. Thus a good way to generate an SLM from a GF grammar is to use dependent types and filter the results through the type checker:

  > generate_trees | put_trees -transform=solve | linearize

The method of creating statistical language models from corpora synthesized from GF grammars is applied and evaluated in (Jonson 2006).
Exercise. Measure the size of the corpus generated from SmartEng (defined here), with and without type-checker filtering.