
Tokenizer improvements

Bitbucket Importer requested to merge bitbucket/merged-pr-110 into branch/2.4

Created originally on Bitbucket by felipeochoa (Felipe)

Was already merged in Bitbucket before import, marked as merged by the import user


(Closes issue #582)

This PR addresses most of the issues raised in issue #582. Specifically, it makes the following changes:

  • #GETTING_DATA is now recognized as a valid error code

  • Scientific notation now accepts both capital and lowercase E (this change and the new error code are exercised in the sketch after this list)

  • Tokenizer.parse() is renamed to Tokenizer._parse() and is now called automatically upon creating a Tokenizer instance. Client code can now simply do:

    tok = Tokenizer(formula)
    for token in tok.items:
        # ...

  • The Tokenizer.parse_* methods have similarly been renamed to Tokenizer._parse_* and are now private

  • DefinedName is updated to use the new tokenizer pattern
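
For illustration, here is a minimal sketch of the new behavior. It assumes the tokenizer is importable from openpyxl.formula.tokenizer and that tokens expose value, type, and subtype attributes; adjust the import to the actual module layout in this branch:

    from openpyxl.formula.tokenizer import Tokenizer

    # Parsing now happens in the constructor, so no explicit parse() call is needed.
    # #GETTING_DATA should come back as an error operand, and the lowercase
    # exponent in 2.5e-2 should be accepted as part of a number token.
    tok = Tokenizer("=IF(ISERROR(#GETTING_DATA), 2.5e-2, A1)")
    for token in tok.items:
        print(token.value, token.type, token.subtype)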

Note: I wasn't able to get the tox tests running on my Windows machine. I tested under Python 3.4, which I think should be enough, given that these changes are unlikely to introduce Python 2 vs. 3 issues or binary issues.
