forked from GitHub/gf-core
Update doc/gf-help-full.txt (GF shell reference manual)
@@ -453,7 +453,7 @@ sequences; see example.
 | ``-from_urdu`` | from unicode to GF Urdu transliteration
 | ``-from_utf8`` | decode from utf8 (default)
 | ``-lexcode`` | code-like lexer
-| ``-lexmixed`` | mixture of text and code (code between $...$)
+| ``-lexmixed`` | mixture of text and code, as in LaTeX (code between $...$, \(...\), \[...\])
 | ``-lextext`` | text-like lexer
 | ``-to_amharic`` | from GF Amharic transliteration to unicode
 | ``-to_ancientgreek`` | from GF ancient Greek transliteration to unicode
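The revised ``-lexmixed`` description adds the LaTeX-style math delimiters ``\(...\)`` and ``\[...\]`` to the original ``$...$``. A hypothetical GF shell session illustrating the flag (output omitted, since exact tokenization details vary by GF version and are not verified here):

```
> ps -lexmixed "the sum $x + y$ is even"
> ps -lexmixed "the sum \(x + y\) is even"
```

Per the updated description, both inputs would be handled by the mixed lexer: the prose lexed as running text and the delimited formula lexed as code.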
@@ -473,7 +473,7 @@ sequences; see example.
 | ``-to_utf8`` | encode to utf8 (default)
 | ``-unchars`` | unlexer that puts no spaces between tokens
 | ``-unlexcode`` | code-like unlexer
-| ``-unlexmixed`` | mixture of text and code (code between $...$)
+| ``-unlexmixed`` | mixture of text and code (code between $...$, \(...\), \[...\])
 | ``-unlextext`` | text-like unlexer
 | ``-unwords`` | unlexer that puts a single space between tokens (default)
 | ``-words`` | lexer that assumes tokens separated by spaces (default)
@@ -526,7 +526,7 @@ trees where a function node is a metavariable.
 | ``-from_urdu`` | from unicode to GF Urdu transliteration
 | ``-from_utf8`` | decode from utf8 (default)
 | ``-lexcode`` | code-like lexer
-| ``-lexmixed`` | mixture of text and code (code between $...$)
+| ``-lexmixed`` | mixture of text and code, as in LaTeX (code between $...$, \(...\), \[...\])
 | ``-lextext`` | text-like lexer
 | ``-to_amharic`` | from GF Amharic transliteration to unicode
 | ``-to_ancientgreek`` | from GF ancient Greek transliteration to unicode
@@ -546,7 +546,7 @@ trees where a function node is a metavariable.
 | ``-to_utf8`` | encode to utf8 (default)
 | ``-unchars`` | unlexer that puts no spaces between tokens
 | ``-unlexcode`` | code-like unlexer
-| ``-unlexmixed`` | mixture of text and code (code between $...$)
+| ``-unlexmixed`` | mixture of text and code (code between $...$, \(...\), \[...\])
 | ``-unlextext`` | text-like unlexer
 | ``-unwords`` | unlexer that puts a single space between tokens (default)
 | ``-words`` | lexer that assumes tokens separated by spaces (default)
@@ -747,6 +747,7 @@ To see transliteration tables, use command ut.
 - Syntax: ``ps OPT? STRING``
 - Options:
 
+| ``-lines`` | apply the operation separately to each input line, returning a list of lines
 | ``-bind`` | bind tokens separated by Prelude.BIND, i.e. &+
 | ``-chars`` | lexer that makes every non-space character a token
 | ``-from_amharic`` | from unicode to GF Amharic transliteration
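For orientation, ``ps`` is the GF shell command that applies string operations such as the lexers and unlexers listed here; this hunk adds the ``-lines`` option to it. A sketch of a session combining the new option with an existing one (semantics taken from the option descriptions above; outputs not verified against a GF build, so none are shown):

```
> ps -bind "walk &+ ed"
> ps -lines -bind "walk &+ ed"
```

With ``-bind``, the tokens around ``&+`` are expected to be glued together; adding ``-lines`` would apply that same operation separately to each line of a multi-line input, returning a list of lines.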
@@ -765,7 +766,7 @@ To see transliteration tables, use command ut.
 | ``-from_urdu`` | from unicode to GF Urdu transliteration
 | ``-from_utf8`` | decode from utf8 (default)
 | ``-lexcode`` | code-like lexer
-| ``-lexmixed`` | mixture of text and code (code between $...$)
+| ``-lexmixed`` | mixture of text and code, as in LaTeX (code between $...$, \(...\), \[...\])
 | ``-lextext`` | text-like lexer
 | ``-to_amharic`` | from GF Amharic transliteration to unicode
 | ``-to_ancientgreek`` | from GF ancient Greek transliteration to unicode
@@ -785,7 +786,7 @@ To see transliteration tables, use command ut.
 | ``-to_utf8`` | encode to utf8 (default)
 | ``-unchars`` | unlexer that puts no spaces between tokens
 | ``-unlexcode`` | code-like unlexer
-| ``-unlexmixed`` | mixture of text and code (code between $...$)
+| ``-unlexmixed`` | mixture of text and code (code between $...$, \(...\), \[...\])
 | ``-unlextext`` | text-like unlexer
 | ``-unwords`` | unlexer that puts a single space between tokens (default)
 | ``-words`` | lexer that assumes tokens separated by spaces (default)
@@ -1059,22 +1060,6 @@ This command must be a line of its own, and thus cannot be a part of a pipe.
 #NORMAL
 
 
-#VSPACE
-
-====t = tokenize====
-#NOINDENT
-``t`` = ``tokenize``: //Tokenize string using the vocabulary.//
-
-#TINY
-
-- Flags:
-
-| ``-lang`` | The name of the concrete to use
-
-
-#NORMAL
-
-
 #VSPACE
 
 ====tq = translation_quiz====