forked from GitHub/gf-core
The problem is that lower-case "a" with a grave accent (à) is encoded in UTF-8 as \195\160. Unicode character \160 is a non-breaking space, so Haskell's words function breaks a UTF-8 encoded string at that byte. String literals in the .gfo file are UTF-8 encoded in generateModuleCode, just before the call to prGrammar (which uses compactPrint, which used words).

The real solution would be to pretty-print the grammar as Unicode, and only then encode it as UTF-8. The problem with that is Latin-1 identifiers: they are currently kept in Latin-1 in the .gfo file, since Alex can't handle Unicode. The real fix for that would be to make Alex handle Unicode, but that is non-trivial; GHC internally uses a very hacky .x file to be able to lex UTF-8 source files.

An alternative solution, which doesn't address the weirdness of using two different encodings in the same .gfo file as we do now, is to incorporate compactPrint into the grammar printer, so that no postprocessing is needed.
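A minimal sketch of the bug itself: GHC's Data.Char.isSpace treats '\160' (non-breaking space) as whitespace, so running words over a UTF-8 byte string splits "à" (\195\160) in the middle of the character.

```haskell
import Data.Char (isSpace)

main :: IO ()
main = do
  -- '\160' is classified as whitespace by GHC's isSpace.
  print (isSpace '\160')        -- True
  -- On UTF-8 bytes-as-Chars, words therefore cuts "à" in half,
  -- keeping '\195' but dropping '\160'.
  print (words "caf\195\160")   -- ["caf\\195"]
```

This is exactly why the compaction pipeline below had to be disabled.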
24 lines
711 B
Haskell
module GF.Infra.CompactPrint where

import Data.Char (isAlpha, isDigit, isSpace)

compactPrint :: String -> String
compactPrint = compactPrintCustom keywordGF (const False)

compactPrintGFCC :: String -> String
compactPrintGFCC = compactPrintCustom (const False) keywordGFCC

-- FIXME: using 'words' is not safe, since this is run on UTF-8 encoded
-- data: the byte '\160' (e.g. the second byte of "à" = \195\160) is a
-- non-breaking space, so 'words' splits in the middle of a character.
-- Compaction is therefore disabled for now.
compactPrintCustom :: (String -> Bool) -> (String -> Bool) -> String -> String
compactPrintCustom _pre _post = id
  -- dps . concat . map (spaceIf pre post) . words

-- | Drop leading whitespace.
dps :: String -> String
dps = dropWhile isSpace

-- | Decide how a token attaches to the preceding output: tokens matching
-- 'pre' start a new line, tokens matching 'post' end one, and tokens that
-- begin with an identifier character get a separating space.
spaceIf :: (String -> Bool) -> (String -> Bool) -> String -> String
spaceIf pre post w = case w of
  _   | pre w                  -> "\n" ++ w
  _   | post w                 -> w ++ "\n"
  c:_ | isAlpha c || isDigit c -> " " ++ w
  '_':_                        -> " " ++ w
  _                            -> w

keywordGF :: String -> Bool
keywordGF w = elem w ["cat","fun","lin","lincat","lindef","oper","param"]

keywordGFCC :: String -> Bool
keywordGFCC w =
  -- Guard against the empty string: 'last' is partial.
  (not (null w) && last w == ';') ||
  elem w ["flags","fun","cat","lin","oper","lincat","lindef","printname","param"]
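One way to make the disabled pipeline safe without a full move to Unicode would be to split only on ASCII whitespace, so that UTF-8 continuation bytes such as '\160' are never treated as separators. This is a hypothetical sketch, not part of GF; asciiWords and isAsciiSpace are names introduced here for illustration.

```haskell
-- | Byte-safe replacement for Prelude 'words': split only on ASCII
-- whitespace, so bytes like '\160' (the second byte of "\195\160", "à")
-- stay inside their token.
isAsciiSpace :: Char -> Bool
isAsciiSpace c = c `elem` " \t\n\r\f\v"

asciiWords :: String -> [String]
asciiWords s = case dropWhile isAsciiSpace s of
  ""  -> []
  s'  -> w : asciiWords s''
    where (w, s'') = break isAsciiSpace s'

main :: IO ()
main = print (asciiWords " caf\195\160 lin x ")
  -- the accented token survives intact: ["caf\\195\\160","lin","x"]
```

With this in place, compactPrintCustom could use asciiWords instead of words and re-enable the commented-out compaction.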