leex
(parsetools) Lexical analyzer generator for Erlang
A regular expression based lexical analyzer generator for Erlang, similar to lex or flex.
Note!
The Leex module should be considered experimental as it will be subject to changes in future releases.
DATA TYPES
ErrorInfo = {ErrorLine, module(), error_descriptor()}
ErrorLine = integer()
Token = tuple()
Functions
file(FileName) -> FileReturn
file(FileName, Options) -> FileReturn
FileName = filename()
Options = Option | [Option]
Option = - see below -
FileReturn = {ok, Scannerfile} | {ok, Scannerfile, Warnings} | error | {error, Warnings, Errors}
Scannerfile = filename()
Warnings = Errors = [{filename(), [ErrorInfo]}]
Generates a lexical analyzer from the definition in the input
file. The input file has the extension .xrl, which is
added to the filename if it is not given. The name of the
resulting module is the input filename without the .xrl
extension.
The current options are:
dfa_graph - Generates a .dot file which contains a
description of the DFA in a format which can be viewed with
Graphviz, www.graphviz.org.
{includefile, Includefile} - Uses a specific or customised
prologue file instead of the default
lib/parsetools/include/leexinc.hrl which is
otherwise included.
{report_errors, bool()} - Causes errors to be printed as they
occur. Default is true.
{report_warnings, bool()} - Causes warnings to be printed as
they occur. Default is true.
warnings_as_errors - Causes warnings to be treated as errors.
{report, bool()} - A short form for both report_errors and
report_warnings.
{return_errors, bool()} - If this flag is set, {error, Errors,
Warnings} is returned when there are errors. Default is false.
{return_warnings, bool()} - If this flag is set, an extra field
containing Warnings is added to the tuple returned upon
success. Default is false.
{return, bool()} - A short form for both return_errors and
return_warnings.
{scannerfile, Scannerfile} - Scannerfile is the name of the file
that will contain the generated Erlang scanner code.
The default ("") is to add the extension .erl to
FileName stripped of the .xrl extension.
{verbose, bool()} - Outputs information from parsing the input
file and generating the internal tables.
Any of the Boolean options can be set to true by
stating the name of the option. For example, verbose
is equivalent to {verbose, true}.
Leex will add the extension .hrl to the
Includefile name and the extension .erl to the
Scannerfile name, unless the extension is already
there.
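As an illustration, a scanner can be generated and compiled from the Erlang shell or from build code. This sketch assumes a definition file named scan.xrl in the current directory (the file name is hypothetical):

```erlang
%% Sketch: generate scan.erl from scan.xrl (the .xrl extension
%% is added automatically if omitted) and compile the result.
{ok, Scannerfile} = leex:file("scan"),
{ok, scan} = compile:file(Scannerfile, []).
```

With the default options a successful run returns {ok, Scannerfile}; adding {return, true} would instead return {ok, Scannerfile, Warnings} on success.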
format_error(ErrorInfo) -> Chars
Chars = [char() | Chars]
Returns a string which describes the error
ErrorInfo returned when there is an error in a
regular expression.
GENERATED SCANNER EXPORTS
The following functions are exported by the generated scanner.
Functions
string(String) -> StringRet
string(String, StartLine) -> StringRet
String = string()
StringRet = {ok, Tokens, EndLine} | ErrorInfo
Tokens = [Token]
EndLine = StartLine = integer()
Scans String and returns all the tokens in it, or an
error.
Note!
It is an error if not all of the characters in
String are consumed.
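As a sketch, assuming a generated scanner module named scan (a hypothetical name) whose rules tokenize integers and skip whitespace, scanning a complete string might look like:

```erlang
%% Sketch: scan a whole string with a hypothetical generated
%% module 'scan'. Every character in the string must be matched
%% by some rule, or an error is returned.
{ok, Tokens, EndLine} = scan:string("42 17").
%% Tokens would be a list such as [{integer,1,42},{integer,1,17}]
```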
token(Cont, Chars) -> {more,Cont1} | {done,TokenRet,RestChars}
token(Cont, Chars, StartLine) -> {more,Cont1} | {done,TokenRet,RestChars}
Cont = [] | Cont1
Cont1 = tuple()
Chars = RestChars = string() | eof
TokenRet = {ok, Token, EndLine} | {eof, EndLine} | ErrorInfo
StartLine = EndLine = integer()
This is a re-entrant call to try to scan one token from
Chars. If there are enough characters in Chars
to either scan a token or detect an error then this will be
returned with {done,...}. Otherwise
{more,Cont} will be returned, where Cont is
used in the next call to token() with more characters
to try to scan the token. This is continued until a token
has been scanned. Cont is initially [].
It is not designed to be called directly by an application, but is used through the I/O system, where it can typically be invoked in an application by:
io:request(InFile, {get_until,Prompt,Module,token,[Line]})
-> TokenRet
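The re-entrant protocol can also be driven by hand. This sketch feeds a list of input chunks to token/2 of a hypothetical generated module scan, looping until a token is done:

```erlang
%% Sketch: drive token/2 manually over a list of chunks.
%% Cont starts as [] and the chunk list must end with eof.
read_one_token(Cont, [Chunk | Rest]) ->
    case scan:token(Cont, Chunk) of
        {done, TokenRet, _RestChars} ->
            TokenRet;
        {more, Cont1} ->
            read_one_token(Cont1, Rest)
    end.
```

A call such as read_one_token([], ["12", "34 ", eof]) would return the first token scanned from the concatenated input.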
tokens(Cont, Chars) -> {more,Cont1} | {done,TokensRet,RestChars}
tokens(Cont, Chars, StartLine) -> {more,Cont1} | {done,TokensRet,RestChars}
Cont = [] | Cont1
Cont1 = tuple()
Chars = RestChars = string() | eof
TokensRet = {ok, Tokens, EndLine} | {eof, EndLine} | ErrorInfo
Tokens = [Token]
StartLine = EndLine = integer()
This is a re-entrant call to try to scan tokens from
Chars. If there are enough characters in Chars
to either scan tokens or detect an error then this will be
returned with {done,...}. Otherwise
{more,Cont} will be returned, where Cont is
used in the next call to tokens() with more
characters to try to scan the tokens. This is continued
until all tokens have been scanned. Cont is initially
[].
This function differs from token in that it will
continue to scan tokens up to and including an
{end_token,Token} return (see the next
section). It will then return all the tokens. This is
typically used for scanning grammars like Erlang, where there
is an explicit end token, '.'. If no end token is
found then the whole file will be scanned and returned. If
an error occurs then all tokens up to and including the next
end token will be skipped.
It is not designed to be called directly by an application, but is used through the I/O system, where it can typically be invoked in an application by:
io:request(InFile, {get_until,Prompt,Module,tokens,[Line]})
-> TokensRet
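For input that does not come through the I/O system, tokens/2 can be driven by hand in the same re-entrant style. A sketch, again assuming a hypothetical generated module scan, reading line by line from an open device:

```erlang
%% Sketch: scan one complete form (up to an end token) from a
%% device. io:get_line/2 returns eof at end of input, which
%% matches the Chars = string() | eof contract. A real caller
%% would keep _RestChars for the next form instead of
%% discarding it.
scan_form(Cont, Device) ->
    case scan:tokens(Cont, io:get_line(Device, '')) of
        {done, TokensRet, _RestChars} ->
            TokensRet;
        {more, Cont1} ->
            scan_form(Cont1, Device)
    end.
```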
Input File Format
Erlang style comments starting with a % are allowed in
scanner files. A definition file has the following format:
<Header>
Definitions.
<Macro Definitions>
Rules.
<Token Rules>
Erlang code.
<Erlang code>
The "Definitions.", "Rules." and "Erlang code." headings are mandatory and must occur at the beginning of a source line. The <Header>, <Macro Definitions> and <Erlang code> sections may be empty but there must be at least one rule.
Macro definitions have the following format:
NAME = VALUE
and there must be spaces around =. Macros can be used in
the regular expressions of rules by writing {NAME}.
Note!
When macros are expanded in expressions the macro calls are replaced by the macro value without any form of quoting or enclosing in parentheses.
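Because of this verbatim substitution, a macro containing an alternation should be parenthesized at the point of use. A sketch of the pitfall (macro and token names are made up for illustration):

```
Definitions.

SIGN = \+|-
D    = [0-9]

Rules.

% {SIGN}{D}+ would expand to \+|-[0-9]+, which matches a lone
% "+" or a "-" followed by digits. Parenthesizing at the point
% of use gives the intended meaning:
({SIGN}){D}+ : {token,{signed,TokenLine,TokenChars}}.
```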
Rules have the following format:
<Regexp> : <Erlang code>.
The <Regexp> must occur at the start of a line and must not
include any blanks; use \t and \s to include TAB
and SPACE characters in the regular expression. If <Regexp>
matches then the corresponding <Erlang code> is evaluated to
generate a token. Within the Erlang code the following predefined
variables are available:
TokenChars - A list of the characters in the matched token.
TokenLen - The number of characters in the matched token.
TokenLine - The line number where the token occurred.
The code must return:
{token,Token} - Return Token to the caller.
{end_token,Token} - Return Token; it is the last token in a
call to tokens.
skip_token - Skip this token completely.
{error,ErrString} - An error in the token; ErrString is a
string describing the error.
It is also possible to push characters back into the input with the following returns:
{token,Token,PushBackList}
{end_token,Token,PushBackList}
{skip_token,PushBackList}
These have the same meanings as the normal returns but the
characters in PushBackList will be prepended to the input
characters and scanned for the next token. Note that pushing
back a newline will mean the line numbering will no longer be
correct.
Note!
Pushing back characters makes it possible to inadvertently cause the scanner to loop!
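As a sketch of pushback, a rule can match one character too many to disambiguate a token and then return that character to the input. The rule below (a made-up example; lists:droplast/1 is assumed to be available) recognizes a lower-case name directly followed by "(" as a function name, pushing the parenthesis back to be scanned as the next token:

```
[a-z][0-9a-zA-Z_]*\( :
    {token,{fun_name,TokenLine,lists:droplast(TokenChars)},"("}.
```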
The following example would match a simple Erlang integer or float and return a token which could be sent to the Erlang parser:
D = [0-9]
{D}+ :
{token,{integer,TokenLine,list_to_integer(TokenChars)}}.
{D}+\.{D}+((E|e)(\+|\-)?{D}+)? :
{token,{float,TokenLine,list_to_float(TokenChars)}}.
The Erlang code in the "Erlang code." section is written into the output file directly after the module declaration and predefined exports declaration so it is possible to add extra exports, define imports and other attributes which are then visible in the whole file.
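A sketch of a complete definition file using this section for a helper function (the file contents and names are illustrative):

```
Definitions.

D = [0-9]

Rules.

{D}+ : {token,{integer,TokenLine,to_int(TokenChars)}}.

Erlang code.

%% This helper is visible to all rule bodies above; extra
%% -export([...]) attributes could also be declared here.
to_int(Chars) -> list_to_integer(Chars).
```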
Regular Expressions
The regular expressions allowed here are a subset of the set
found in egrep and in the AWK programming language, as
defined in the book The AWK Programming Language, by A. V. Aho,
B. W. Kernighan, and P. J. Weinberger. They are composed of the
following characters:
c - Matches the non-metacharacter c.
\c - Matches the escape sequence or literal character c.
. - Matches any character.
^ - Matches the beginning of a string.
$ - Matches the end of a string.
[abc...] - Character class, which matches any of the characters
abc.... Character ranges are specified by a pair of
characters separated by a -.
[^abc...] - Negated character class, which matches any character
except abc....
r1 | r2 - Alternation. It matches either r1 or r2.
r1r2 - Concatenation. It matches r1 and then r2.
r+ - Matches one or more rs.
r* - Matches zero or more rs.
r? - Matches zero or one r.
(r) - Grouping. It matches r.
The escape sequences allowed are the same as for Erlang strings:
\b - Backspace.
\f - Form feed.
\n - Newline (line feed).
\r - Carriage return.
\t - Tab.
\e - Escape.
\v - Vertical tab.
\s - Space.
\d - Delete.
\ddd - The octal value ddd.
\xhh - The hexadecimal value hh.
\x{h...} - The hexadecimal value h....
\c - Any other character literally, for example \\ for
backslash, \" for ".
The following examples define Erlang data types:
Atoms [a-z][0-9a-zA-Z_]*
Variables [A-Z_][0-9a-zA-Z_]*
Floats (\+|-)?[0-9]+\.[0-9]+((E|e)(\+|-)?[0-9]+)?
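These expressions can be put to use as macros in a definition file. A sketch (rule bodies and the whitespace rule are illustrative additions):

```
Definitions.

ATOM  = [a-z][0-9a-zA-Z_]*
VAR   = [A-Z_][0-9a-zA-Z_]*
FLOAT = (\+|-)?[0-9]+\.[0-9]+((E|e)(\+|-)?[0-9]+)?

Rules.

{ATOM}  : {token,{atom,TokenLine,list_to_atom(TokenChars)}}.
{VAR}   : {token,{var,TokenLine,TokenChars}}.
{FLOAT} : {token,{float,TokenLine,list_to_float(TokenChars)}}.
[\s\t\n]+ : skip_token.

Erlang code.
```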
Note!
Anchoring a regular expression with ^ and $
is not implemented in the current version of Leex and just
generates a parse error.