Reference
This document describes all components of the tdparser package:
Exception classes

exception tdparser.Error
    The base class for all tdparser-related exceptions.

exception tdparser.ParserError(Error)
    Raised whenever an unexpected token is encountered in the flow of
    tokens.

exception tdparser.MissingTokensError(ParserError)
    Raised when the parsing logic expects more tokens than are
    available.

exception tdparser.InvalidTokenError(ParserError)
    Raised when an unexpected token is encountered while parsing the
    data flow, for instance when Parser.consume() receives a token that
    does not match the expected class.
Defining tokens

A token must inherit from the Token class, and override a few elements
depending on its role.
class tdparser.Token
    The base class for all tokens.

    regexp
        Class attribute. An optional regular expression (see re)
        describing text that should be lexed into this token class.

        Type: str

    lbp
        Class attribute. The "left binding power" of the token: an
        integer describing its precedence when it stands at the left of
        an expression. Tokens with a higher binding power absorb the
        next tokens in priority: in 1 + 2 * 3 + 4, if + has a lbp of 10
        and * a lbp of 20, the 2 * 3 part will be computed first and
        its result passed as a right expression to the first +.

        Type: int
    nud(self, context)
        Compute the "null denotation" of this token.

        This method should only be overridden for tokens that may
        appear at the beginning of an expression: for instance a
        number, a variable name, or the "-" sign when it denotes "the
        opposite of the next expression".

        The context argument is the Parser currently running. This
        gives easy access to:

        - The next token in the flow (Parser.current_token)
        - The position in the flow of tokens (Parser.current_pos)
        - The next sub-expression from the parser (Parser.expression())

        Parameters:
            context (tdparser.Parser): The active Parser
        Returns:
            The value this token evaluates to
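For illustration, here is a minimal sketch of a token whose nud() turns the matched text into a value. The Number class below is hypothetical, not part of tdparser:

```python
class Number:
    """A hypothetical literal token; names here are illustrative only."""
    regexp = r'\d+'
    lbp = 0  # a literal never binds to anything on its left

    def __init__(self, text):
        self.text = text

    def nud(self, context):
        # A literal evaluates to itself; the parser context is unused.
        return int(self.text)

print(Number("42").nud(None))  # 42
```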
    led(self, left, context)
        Compute the "left denotation" of this token.

        This method is called whenever a token appears to the right of
        another token within an expression (typically infix or postfix
        operators). It receives two arguments:

        - left is the value of the previous token or expression in the
          flow
        - context is the active Parser instance, providing access to
          Parser.expression() to fetch the next expression

        Parameters:
            left: Whatever the previous expression evaluated to
            context (tdparser.Parser): The active Parser
        Returns:
            The value this token evaluates to
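A minimal sketch of an infix token's led(); both Add and FakeParser below are made-up stand-ins (a real led() would call the actual Parser.expression()):

```python
class FakeParser:
    """A stub standing in for tdparser.Parser, just enough for this demo."""
    def expression(self, rbp=0):
        return 2  # pretend the next sub-expression evaluates to 2

class Add:
    """A hypothetical infix '+' token."""
    lbp = 10

    def __init__(self, text='+'):
        self.text = text

    def led(self, left, context):
        # left operand + whatever the next sub-expression evaluates to
        return left + context.expression(rbp=self.lbp)

print(Add().led(1, FakeParser()))  # 3
```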
class tdparser.LeftParen(Token)
    A simple Token subclass matching an opening bracket, (.

    When parsed, this token fetches the next subexpression, asserts
    that the expression is followed by a RightParen token, and returns
    the value of the fetched expression.
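The behavior described above can be sketched with made-up stand-ins; LeftParenSketch, RightParen and StubParser below are illustrative, and a real implementation would raise InvalidTokenError via Parser.consume():

```python
class RightParen:
    """Stand-in for the closing-bracket token."""
    lbp = 0
    def __init__(self, text=')'):
        self.text = text

class LeftParenSketch:
    """Illustrative sketch of a LeftParen-style token's nud()."""
    lbp = 0
    def __init__(self, text='('):
        self.text = text

    def nud(self, context):
        value = context.expression()              # fetch the enclosed expression
        context.consume(expect_class=RightParen)  # must be closed by ')'
        return value

class StubParser:
    """Stub parser: the enclosed expression is 7, followed by ')'."""
    def __init__(self):
        self.current_token = RightParen()

    def expression(self, rbp=0):
        return 7

    def consume(self, expect_class=None):
        if expect_class and not isinstance(self.current_token, expect_class):
            raise ValueError("unexpected token")  # tdparser raises InvalidTokenError
        return self.current_token

print(LeftParenSketch().nud(StubParser()))  # 7
```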
Parsing a flow of tokens

The actual parsing occurs in the Parser class, which takes a flow of
Token instances. Parsing is performed through the parse() method, which
returns the next parsed expression.
class tdparser.Parser
    Handles parsing of a flow of tokens, maintaining a pointer to the
    current Token.

    current_pos
        The current position within the token flow. Starts at 0.

        Type: int
    current_token
        The next Token to parse. When calling a token's nud() or led(),
        this attribute points to the next token, not to the token whose
        method has been called.

        Type: Token
    tokens
        The iterable of tokens to parse. Can be any kind of iterable;
        it will only be walked once.

        Type: iterable of Token
    consume(self, expect_class=None)
        Consume the active current_token, and advance to the next
        token.

        If expect_class is provided, ensure that the current_token is
        an instance of that token class, and raise an InvalidTokenError
        otherwise.

        Parameters:
            expect_class (tdparser.Token): Optional Token subclass that
                the current_token should be an instance of
        Returns:
            The current_token at the time of calling
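The contract of consume() can be sketched like this; MiniConsumer and the token classes are made up for the demo, and a ValueError stands in for tdparser's InvalidTokenError:

```python
class NumberTok:
    pass

class PlusTok:
    pass

class MiniConsumer:
    """A sketch of consume()'s contract; names are made up for the demo."""
    def __init__(self, tokens):
        self._tokens = iter(tokens)
        self.current_token = next(self._tokens)

    def consume(self, expect_class=None):
        if expect_class is not None and not isinstance(self.current_token, expect_class):
            # tdparser raises InvalidTokenError in this situation
            raise ValueError("unexpected token: %r" % self.current_token)
        token = self.current_token
        self.current_token = next(self._tokens, None)
        return token  # the current_token at the time of calling

parser = MiniConsumer([NumberTok(), PlusTok()])
first = parser.consume(expect_class=NumberTok)
print(type(first).__name__, type(parser.current_token).__name__)  # NumberTok PlusTok
```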
    expression(self, rbp=0)
        Retrieve the next expression from the flow of tokens.

        The rbp argument is the "right binding power" of the calling
        token: parsing of the expression stops at the first token whose
        left binding power is lower than this right binding power. In
        other words, "fetch an expression, and stop whenever you meet
        an operator with a lower precedence".

        Example: in 1 + 2 * 3 ** 4 + 5, the led() method of the * token
        calls context.expression(20). This call absorbs the 3 ** 4 part
        as a single expression, and stops when meeting the +, whose
        left binding power, 10, is lower than 20.

        Parameters:
            rbp (int): The optional right binding power to use when
                fetching the next subexpression
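The loop driving expression() can be sketched as a self-contained mini-parser. All classes below are illustrative rather than tdparser's actual code, but they reproduce the precedence behavior described above:

```python
class Token:
    lbp = 0
    def __init__(self, text):
        self.text = text

class Number(Token):
    def nud(self, context):
        return int(self.text)

class Add(Token):
    lbp = 10
    def led(self, left, context):
        return left + context.expression(self.lbp)

class Mul(Token):
    lbp = 20
    def led(self, left, context):
        return left * context.expression(self.lbp)

class Pow(Token):
    lbp = 30
    def led(self, left, context):
        # Right-associative: use a slightly lower right binding power.
        return left ** context.expression(self.lbp - 1)

class End(Token):
    lbp = 0  # never binds, so any expression() loop stops here

class MiniParser:
    def __init__(self, tokens):
        self._tokens = iter(tokens)
        self.current_token = next(self._tokens)

    def _advance(self):
        token = self.current_token
        self.current_token = next(self._tokens)
        return token

    def expression(self, rbp=0):
        # The first token's "null denotation" starts the expression...
        left = self._advance().nud(self)
        # ...then each operator binds while its lbp exceeds our rbp.
        while self.current_token.lbp > rbp:
            left = self._advance().led(left, self)
        return left

OPERATORS = {'+': Add, '*': Mul, '**': Pow}

def tokenize(text):
    for part in text.split():
        yield OPERATORS[part](part) if part in OPERATORS else Number(part)
    yield End('')

print(MiniParser(tokenize("1 + 2 * 3 ** 4 + 5")).expression())  # 168
```

Here ** (lbp 30) binds tighter than * (lbp 20), which binds tighter than + (lbp 10), so the input evaluates as 1 + (2 * (3 ** 4)) + 5 = 168.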
    parse(self)
        Compute the first expression from the flow of tokens.
Generating tokens from a string

The Parser class works on an iterable of tokens. The simplest way to
retrieve those tokens is to use the Lexer class.
class tdparser.Lexer
    Handles converting a string into an iterable of tokens.

    Once initialized, a Lexer must be passed a set of tokens to handle.
    The lexer parses strings according to the following algorithm:

    1. Try each regexp in order for a match at the start of the string.
    2. If none match:
       - If the first character is a blank (see blank_chars), remove it
         from the beginning of the string and go back to step 1.
       - Otherwise, raise a ValueError.
    3. If more than one regexp matches, keep the one with the longest
       match; among those with the same, longest match, keep the first
       registered one.
    4. Instantiate the Token associated with that best regexp, passing
       its constructor the substring that was matched by the regexp.
    5. Yield that Token instance.
    6. Strip the matched substring from the text, and go back to step 1.

    Note: the Lexer can be used as a standalone parser: the tokens
    passed to Lexer.register_token() are simply instantiated with the
    matching text as their first argument.
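The algorithm above can be sketched as follows. The token names and regexps are arbitrary examples (not tdparser's), with MUL deliberately registered before POW to show the longest match winning:

```python
import re

# (compiled regexp, token name) pairs, in registration order; MUL is
# registered before POW on purpose, to show that the longest match wins.
TOKEN_DEFS = [
    (re.compile(r'\d+'), 'NUMBER'),
    (re.compile(r'\*'), 'MUL'),
    (re.compile(r'\*\*'), 'POW'),
    (re.compile(r'\+'), 'ADD'),
]
BLANK_CHARS = ' \t'

def best_match(text):
    """Return (name, matched text) for the longest, earliest-registered match."""
    best = None
    for regexp, name in TOKEN_DEFS:
        match = regexp.match(text)
        # A strictly longer match wins; on a tie, the first registered stays.
        if match and (best is None or len(match.group()) > len(best[1])):
            best = (name, match.group())
    return best

def lex(text):
    while text:
        if text[0] in BLANK_CHARS:
            text = text[1:]            # step 2: drop a leading blank
            continue
        best = best_match(text)        # steps 1 and 3
        if best is None:
            raise ValueError("no token matches %r" % text)
        yield best                     # steps 4 and 5 (names instead of instances)
        text = text[len(best[1]):]     # step 6: strip the matched substring

print(list(lex("1 ** 2")))  # [('NUMBER', '1'), ('POW', '**'), ('NUMBER', '2')]
```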
    tokens
        A TokenRegistry holding the set of known tokens.

        Type: TokenRegistry
    blank_chars
        An iterable of characters that should be considered "blank",
        and thus not parsed into a Token.

        Type: iterable of str
    register_token(self, token_class[, regexp=None])
        Register a token class in the lexer (actually, in the
        TokenRegistry at tokens).

        There are two ways to provide the regular expression used for
        token extraction:

        - Through the regexp parameter of register_token()
        - If that parameter isn't provided, the Lexer looks for a
          regexp string attribute on the provided token_class

        Parameters:
            token_class (tdparser.Token): The Token subclass to add to
                the list of available tokens
            regexp (str): The regular expression to use when extracting
                tokens from some text; if empty, the regexp attribute
                of the token_class is used instead
    register_tokens(self, token_class[, token_class[, ...]])
        Register a batch of Token subclasses. This is equivalent to
        calling lexer.register_token(token_class) for each passed
        token_class.

        The regular expression associated with each token must be set
        on its regexp attribute; no overrides are available with this
        method.

        Parameters:
            token_class (tdparser.Token): The token classes to register
    lex(self, text)
        Read a text and lex it, yielding Token instances.

        This walks the text, eating chunks that can be paired to a
        Token through its associated regular expression. It yields
        Token instances while parsing the text, and ends with an
        instance of the EndToken class set in the lexer's end_token
        attribute.

        Parameters:
            text (str): The text to lex
        Returns:
            Iterable of Token instances
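Putting it together, here is a simplified, self-contained imitation of lex(), using hypothetical token classes and skipping the longest-match tie-breaking (the regexps below don't overlap); note the trailing end-of-stream token:

```python
import re

class Tok:
    def __init__(self, text):
        self.text = text

class Number(Tok): pass
class Plus(Tok): pass
class EndToken(Tok): pass

# Non-overlapping regexps, so plain first-match ordering is enough here.
TOKEN_DEFS = [(re.compile(r'\d+'), Number), (re.compile(r'\+'), Plus)]

def lex(text):
    while text:
        if text[0] == ' ':
            text = text[1:]
            continue
        for regexp, token_class in TOKEN_DEFS:
            match = regexp.match(text)
            if match:
                yield token_class(match.group())
                text = text[match.end():]
                break
        else:
            raise ValueError("cannot lex %r" % text)
    yield EndToken('')  # the flow always ends with the end-of-stream token

tokens = list(lex("1 + 2"))
print([type(t).__name__ for t in tokens])  # ['Number', 'Plus', 'Number', 'EndToken']
```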