This document describes all components of the tdparser package:

Exception classes

exception tdparser.Error

This exception is the base class for all tdparser-related exceptions.

exception tdparser.ParserError(Error)

This exception will be raised whenever an unexpected token is encountered in the flow of tokens.

exception tdparser.MissingTokensError(ParserError)

This exception is raised when the parsing logic expects more tokens than are available.

exception tdparser.InvalidTokenError(ParserError)

This exception is raised when an unexpected token is encountered while parsing the data flow.

Defining tokens

A token must inherit from the Token class, and override a few elements depending on its role.

class tdparser.Token

The base class for all tokens.


regexp

Class attribute.

Optional regular expression (see re) describing text that should be lexed into this token class.


lbp

Class attribute.

“Left binding power”. This integer describes the precedence of the token when it stands at the left of an expression.

Tokens with a higher binding power will absorb the next tokens in priority:

In 1 + 2 * 3 + 4, if + has a lbp of 10 and * of 20, the 2 * 3 part will be computed and its result passed as a right expression to the first +.


text

The text that matched regexp.

nud(self, context)

Compute the “Null denotation” of this token.

This method should only be overridden for tokens that may appear at the beginning of an expression.

For instance, a number, a variable name, the “-” sign when denoting “the opposite of the next expression”.

The context argument is the Parser currently running. This gives easy access to methods such as Parser.expression().

Parameters:context (tdparser.Parser) – The active Parser
Returns:The value this token evaluates to
led(self, left, context)

Compute the “Left denotation” of this token.

This method is called whenever a token appears to the right of another token within an expression — typically infix or postfix operators.

It receives two arguments:

  • left is the value of the previous token or expression in the flow
  • context is the active Parser instance, providing calls to Parser.expression() to fetch the next expression.

Parameters:
  • left – Whatever the previous expression evaluated to
  • context (tdparser.Parser) – The active Parser
Returns:The value this token evaluates to
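The nud()/led() split can be illustrated with a self-contained sketch. Nothing below is tdparser's actual code: the Flow class is a hypothetical stand-in for Parser, and the token classes are invented for the example. The same "-" token acts as a prefix operator through nud() and as an infix operator through led():

```python
# Illustrative sketch (not tdparser's code) of nud() vs led().

class End:
    lbp = 0

class Num:
    lbp = 0
    def __init__(self, value):
        self.value = value
    def nud(self, context):
        # A number stands on its own at the start of an expression.
        return self.value

class Minus:
    lbp = 10
    def nud(self, context):
        # Prefix position: "the opposite of the next expression".
        return -context.expression(100)
    def led(self, left, context):
        # Infix position: subtraction.
        return left - context.expression(self.lbp)

class Flow:
    """Hypothetical stand-in for tdparser.Parser."""
    def __init__(self, tokens):
        self.tokens = iter(tokens)
        self.current_token = next(self.tokens)
    def consume(self):
        token = self.current_token
        self.current_token = next(self.tokens, End())
        return token
    def expression(self, rbp=0):
        left = self.consume().nud(self)            # first token: nud()
        while self.current_token.lbp > rbp:
            left = self.consume().led(left, self)  # following tokens: led()
        return left

# "- 5 - 2": the first "-" is parsed with nud(), the second with led().
result = Flow([Minus(), Num(5), Minus(), Num(2)]).expression()
```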

class tdparser.LeftParen(Token)

A simple Token subclass matching an opening bracket, (.

When parsed, this token will fetch the next subexpression, assert that this expression is followed by a RightParen token, and return the value of the fetched expression.


match

The token class to expect at the end of the subexpression. This simplifies writing similar “bracket” tokens with different opening/closing signs.

class tdparser.RightParen(Token)

A simple, passive Token (returns no value).

Used by the LeftParen token to check that the sub-expression was properly enclosed in left/right brackets.

class tdparser.EndToken(Token)

This specific Token marks the end of the input stream.

Parsing a flow of tokens

The actual parsing occurs in the Parser class, which takes a flow of Token instances.

Parsing is performed through the parse() method, which will return the next parsed expression.

class tdparser.Parser

Handles parsing of a flow of tokens. Maintains a pointer to the current Token.


Stores the current position within the token flow. Starts at 0.


current_token

The next Token to parse. When calling a token’s nud() or led(), this attribute points to the next token, not the token whose method has been called.


tokens

Iterable of tokens to parse. Can be any kind of iterable; it will only be walked once.

Type:iterable of Token
consume(self, expect_class=None)

Consume the active current_token, and advance to the next token.

If expect_class is provided, this will ensure that the current_token matches that token class, and raise an InvalidTokenError otherwise.

Parameters:expect_class (tdparser.Token) – Optional Token subclass that the current_token should be an instance of
Returns:The current_token at the time of calling.
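The expect_class check can be sketched as follows. This is a minimal stand-in, not tdparser's code: it raises a plain ValueError where tdparser raises InvalidTokenError, and Number and Plus are hypothetical token classes:

```python
# Illustrative sketch of the expect_class check in Parser.consume().

class Number:
    pass

class Plus:
    pass

class SketchParser:
    def __init__(self, tokens):
        self.tokens = iter(tokens)
        self.current_token = next(self.tokens)

    def consume(self, expect_class=None):
        # Check the *current* token before advancing to the next one.
        if expect_class and not isinstance(self.current_token, expect_class):
            raise ValueError(
                "Unexpected token %r, expected %s"
                % (self.current_token, expect_class.__name__))
        token = self.current_token
        self.current_token = next(self.tokens, None)
        return token

parser = SketchParser([Number(), Plus()])
first = parser.consume(expect_class=Number)  # succeeds: current token is a Number
```

A second consume(expect_class=Number) would now fail, since the current token is a Plus.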
expression(self, rbp=0)

Retrieve the next expression from the flow of tokens.

The rbp argument describes the “right binding power” of the calling token. This means that the parsing of the expression will stop at the first token whose left binding power is lower than this right binding power.

Put more simply, rbp expresses the right precedence of the calling token: “fetch an expression, and stop whenever you meet an operator with a lower precedence”.


In 1 + 2 * 3 ** 4 + 5, the led() method of the * token will call context.expression(20). This call will absorb the 3 ** 4 part as a single expression, and stop when meeting the following +, whose left binding power, 10, is lower than 20.

Parameters:rbp (int) – The (optional) right binding power to use when fetching the next subexpression.

parse(self)

Compute the first expression from the flow of tokens.
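The precedence walk-through above (1 + 2 * 3 ** 4 + 5) can be reproduced with a self-contained sketch. Nothing here is tdparser's actual code; handling right associativity of ** by fetching with expression(self.lbp - 1) is a common Pratt-parsing idiom, shown as an assumption rather than the library's implementation:

```python
# Minimal sketch of expression(rbp)'s stopping rule.

class End:
    lbp = 0

class Num:
    lbp = 0
    def __init__(self, value):
        self.value = value
    def nud(self, context):
        return self.value

class Add:
    lbp = 10
    def led(self, left, context):
        return left + context.expression(self.lbp)

class Mul:
    lbp = 20
    def led(self, left, context):
        return left * context.expression(self.lbp)

class Pow:
    lbp = 30
    def led(self, left, context):
        # Right-associative: fetch with an rbp just below our own lbp,
        # so that a following ** (lbp 30 > 29) is absorbed first.
        return left ** context.expression(self.lbp - 1)

class Flow:
    """Hypothetical stand-in for tdparser.Parser."""
    def __init__(self, tokens):
        self.tokens = iter(tokens)
        self.current_token = next(self.tokens)
    def consume(self):
        token = self.current_token
        self.current_token = next(self.tokens, End())
        return token
    def expression(self, rbp=0):
        left = self.consume().nud(self)
        # Stop as soon as the next token binds no tighter than rbp.
        while self.current_token.lbp > rbp:
            left = self.consume().led(left, self)
        return left

# 1 + 2 * 3 ** 4 + 5  ->  1 + (2 * (3 ** 4)) + 5
flow = Flow([Num(1), Add(), Num(2), Mul(), Num(3), Pow(), Num(4), Add(), Num(5)])
result = flow.expression()
```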

Generating tokens from a string

The Parser class works on an iterable of tokens.

In order to retrieve those tokens, the simplest way is to use the Lexer class.

class tdparser.Lexer

This class handles converting a string into an iterable of tokens.

Once initialized, a Lexer must be passed a set of tokens to handle.

The lexer parses strings according to the following algorithm:

  • Try each regexp in order for a match at the start of the string
  • If none match:
    • If the first character is a blank (see blank_chars), remove it from the beginning of the string and go back to step 1
    • Otherwise, raise a ValueError.
  • If more than one regexp matches, keep the one with the longest match
  • Among regexps with equally long matches, keep the first one registered
  • Instantiate the Token associated with that best regexp, passing its constructor the substring that was matched by the regexp
  • Yield that Token instance
  • Strip the matched substring from the text, and go back to step 1.
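The matching loop above can be sketched in a few lines. The names (TOKEN_DEFS, BLANK_CHARS, lex) are invented for the example and do not belong to tdparser:

```python
# Condensed sketch of the lexing algorithm described above:
# skip blanks, try every regexp at the start of the string,
# keep the longest (first-registered) match.
import re

TOKEN_DEFS = [  # (name, compiled regexp), in registration order
    ("integer", re.compile(r"\d+")),
    ("plus", re.compile(r"\+")),
]
BLANK_CHARS = (" ", "\t")

def lex(text):
    while text:
        if text[0] in BLANK_CHARS:
            # Step 2: drop one blank char and retry.
            text = text[1:]
            continue
        matches = [(name, regexp.match(text)) for name, regexp in TOKEN_DEFS]
        matches = [(name, m) for name, m in matches if m]
        if not matches:
            raise ValueError("No token matches %r" % text[:10])
        # Longest match wins; max() keeps the first of equal lengths,
        # i.e. the first registered one.
        name, best = max(matches, key=lambda pair: len(pair[1].group()))
        yield (name, best.group())
        text = text[best.end():]

tokens = list(lex("1 + 23"))
```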


The Lexer can be used as a standalone parser: the tokens passed to Lexer.register_token() are simply instantiated with the matching text as first argument.


tokens

A TokenRegistry holding the set of known tokens.


blank_chars

An iterable of chars that should be considered as “blank” and thus not parsed into a Token.

Type:iterable of str

end_token

The Token subclass to use to mark the end of the flow.

register_token(self, token_class[, regexp=None])

Registers a token class in the lexer (actually, in the TokenRegistry at tokens).

There are two methods to provide the regular expression for token extraction:

  • In the regexp parameter to register_token()
  • If that parameter isn’t provided, the Lexer will look for a regexp string attribute on the provided token_class.

Parameters:
  • token_class (tdparser.Token) – The Token subclass to add to the list of available tokens
  • regexp (str) – The regular expression to use when extracting tokens from some text; if empty, the regexp attribute of the token_class will be used instead.
register_tokens(self, token_class[, token_class[, ...]])

Register a batch of Token subclasses. This is equivalent to calling lexer.register_token(token_class) for each passed token_class.

The regular expression associated with each token must be set on its regexp attribute; no overrides are available with this method.

Parameters:token_class (tdparser.Token) – token classes to register
lex(self, text)

Read a text, and lex it, yielding Token instances.

This will walk the text, eating chunks that can be paired to a Token through its associated regular expression.

It will yield Token instances while parsing the text, and end with an instance of the EndToken class as set in the lexer’s end_token attribute.

Parameters:text (str) – The text to lex
Returns:Iterable of Token instances
parse(self, text)

Shortcut method for lexing and parsing a text.

Will lex() the text, then instantiate a Parser with the resulting Token flow and call its parse() method.
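Tying the section together, here is a condensed, self-contained sketch of what Lexer.parse() amounts to: lex a string into token instances, then hand the flow to a parser and fetch the first expression. The classes below mimic the API described in this document but are illustrative re-implementations, not tdparser's code:

```python
# End-to-end sketch: text -> tokens -> parsed value.
import re

class Token:
    lbp = 0
    def __init__(self, text):
        self.text = text

class Integer(Token):
    regexp = r"\d+"
    def nud(self, context):
        return int(self.text)

class Add(Token):
    regexp = r"\+"
    lbp = 10
    def led(self, left, context):
        return left + context.expression(self.lbp)

class Mul(Token):
    regexp = r"\*"
    lbp = 20
    def led(self, left, context):
        return left * context.expression(self.lbp)

class LeftParen(Token):
    regexp = r"\("
    def nud(self, context):
        # Fetch the subexpression, then require a closing bracket.
        value = context.expression()
        context.consume(expect_class=RightParen)
        return value

class RightParen(Token):
    regexp = r"\)"

class EndToken(Token):
    def __init__(self):
        super().__init__("")

class Parser:
    def __init__(self, tokens):
        self.tokens = iter(tokens)
        self.current_token = next(self.tokens)
    def consume(self, expect_class=None):
        if expect_class and not isinstance(self.current_token, expect_class):
            raise ValueError("Expected %s" % expect_class.__name__)
        token = self.current_token
        self.current_token = next(self.tokens, EndToken())
        return token
    def expression(self, rbp=0):
        left = self.consume().nud(self)
        while self.current_token.lbp > rbp:
            left = self.consume().led(left, self)
        return left

TOKENS = [Integer, Add, Mul, LeftParen, RightParen]

def lex(text):
    while text := text.lstrip():
        for token_class in TOKENS:
            match = re.match(token_class.regexp, text)
            if match:
                yield token_class(match.group())
                text = text[match.end():]
                break
        else:
            raise ValueError("Cannot lex %r" % text)
    yield EndToken()

def parse(text):
    # The shortcut described above: lex, then parse the token flow.
    return Parser(lex(text)).expression()

result = parse("(1 + 2) * 3")
```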