It seems that the _preprocess() function's comment-stripping regex wrongly matches that // as a comment, so everything from the // to the end of the line is replaced, and the quoted filename in csource is left cut off and unterminated after the substitution.
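A minimal illustration, using the _r_comment pattern currently in cffi/cparser.py; the #line directive and its filename are just an assumed example:

import re

# Current pattern from cffi/cparser.py (before any patch).
_r_comment = re.compile(r"/\*.*?\*/|//([^\n\\]|\\.)*?$",
                        re.DOTALL | re.MULTILINE)

csource = '#line 10 "c:/project//include/foo.h"\nint foo(int);\n'
print(_r_comment.sub(' ', csource))
# Output -- note the unterminated filename string:
#   #line 10 "c:/project 
#   int foo(int);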
# HG changeset patch
# User Dylan Young <dylanyoungmeijer@gmail.com>
# Date 1590462572 10800
#      Tue May 26 00:09:32 2020 -0300
# Branch HP-454
# Node ID f0affb19306123d9738a8757a4f06907c3943dcf
# Parent  3011cd0272f3b4cd116521e2d395867c6def5d55
#454 Allow '//' in line directive filenames
- '/*' still won't work in most cases

diff --git a/cffi/cparser.py b/cffi/cparser.py
--- a/cffi/cparser.py
+++ b/cffi/cparser.py
@@ -24,8 +24,12 @@ def _workaround_for_static_import_finder
     import pycparser.lextab
 
 CDEF_SOURCE_STRING = "<cdef source string>"
-_r_comment = re.compile(r"/\*.*?\*/|//([^\n\\]|\\.)*?$",
+# exclude #line directives as they might have // as part of the file name
+_r_comment = re.compile(r"/\*.*?\*/|^(?! *#line)//([^\n\\]|\\.)*?$",
                         re.DOTALL | re.MULTILINE)
+# strop the single line comments following a #line directive
+_r_line_directive_comment = re.compile(r'(^\s*#line\s*[0-9]*\s*(".*?")?\s*)//.*$')
+
 _r_define  = re.compile(r"^\s*#\s*define\s+([A-Za-z_][A-Za-z_0-9]*)"
                         r"\b((?:[^\n\\]|\\.)*?)$", re.DOTALL | re.MULTILINE)
@@ -165,8 +169,10 @@ def _warn_for_non_extern_non_static_glob
 
 def _preprocess(csource):
     # Remove comments.  NOTE: this only work because the cdef() section
-    # should not contain any string literal!
+    # should not contain any string literals (except in line directives)!
     csource = _r_comment.sub(' ', csource)
+    # Handle the string literals in line directives (filename may contain //, but not /* currently)
+    csource = _r_line_directive_comment.sub(r'\1 ', csource)
     # Remove the "#define FOO x" lines
     macros = {}
     for match in _r_define.finditer(csource):
I think random filenames might still confuse some other parts of pre-parsing, say if they contain ... (unusual in a filename, but you never know). We should instead remove the #line lines completely, do the pre-parsing, and then put them back afterwards.
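For illustration only, a rough sketch of that approach, assuming a hypothetical helper name and placeholder format (none of this is actual cffi code): pull the #line lines out, run the existing pre-parsing on what remains, then splice the directives back in.

import re

_r_line_directive = re.compile(r"^[ \t]*#line\b.*$", re.MULTILINE)

def _preprocess_with_line_directives(csource, preprocess):
    # 1) Pull every '#line ...' line out of the source, leaving a
    #    placeholder that the comment/#define regexes cannot misread.
    saved = []
    def _extract(match):
        saved.append(match.group(0))
        return "#line@%d@" % (len(saved) - 1)
    csource = _r_line_directive.sub(_extract, csource)
    # 2) Run the normal pre-parsing (comment stripping, #define handling)
    #    on the remaining text; 'preprocess' stands in for that step here.
    csource, macros = preprocess(csource)
    # 3) Put the original directives back afterwards.
    csource = re.sub(r"#line@(\d+)@",
                     lambda m: saved[int(m.group(1))],
                     csource)
    return csource, macros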
It didn't work! My apologies! I deleted my original comment when I realized the patch wasn't working as I thought (but I guess the deletion didn't go through either); I should have said something, but it was late ;)
I think the one attached now works, but I haven't done thorough testing. I see you've fixed it already, but feel free to use this one if you prefer it.
Here's the actual patch in case the attachment doesn't work again:
# HG changeset patch
# User Dylan Young <dylanyoungmeijer@gmail.com>
# Date 1590462572 10800
#      Tue May 26 00:09:32 2020 -0300
# Branch HP-454
# Node ID ba363fdcbced352e8f0e5d77392f341172a8943f
# Parent  3011cd0272f3b4cd116521e2d395867c6def5d55
#454 Allow '//' in line directive filenames
- '/*' still won't work in most cases

diff --git a/cffi/cparser.py b/cffi/cparser.py
--- a/cffi/cparser.py
+++ b/cffi/cparser.py
@@ -24,8 +24,15 @@ def _workaround_for_static_import_finder
     import pycparser.lextab
 
 CDEF_SOURCE_STRING = "<cdef source string>"
-_r_comment = re.compile(r"/\*.*?\*/|//([^\n\\]|\\.)*?$",
-                        re.DOTALL | re.MULTILINE)
+_r_block_comment = re.compile(r"/\*.*?\*/",
+                              re.DOTALL | re.MULTILINE)
+# exclude #line directives as they might have // as part of the file name
+_r_line_comment = re.compile(r"^(?!\s*#line\s*)([^\n]*?)//(?:[^\n\\]|\\.)*?$",
+                             re.DOTALL | re.MULTILINE)
+# strip the single line comments following a #line directive
+_r_line_directive_comment = re.compile(r'(^\s*#line\s*\d*\s*(?:"[^\n]*?")?\s*)//(?:[^\n\\]|\\.)*?$',
+                                       re.DOTALL | re.MULTILINE)
+
 _r_define  = re.compile(r"^\s*#\s*define\s+([A-Za-z_][A-Za-z_0-9]*)"
                         r"\b((?:[^\n\\]|\\.)*?)$", re.DOTALL | re.MULTILINE)
@@ -165,8 +172,12 @@ def _warn_for_non_extern_non_static_glob
 
 def _preprocess(csource):
     # Remove comments.  NOTE: this only work because the cdef() section
-    # should not contain any string literal!
-    csource = _r_comment.sub(' ', csource)
+    # should not contain any string literals (except in line directives)!
+    csource = _r_block_comment.sub(' ', csource)
+    csource = _r_line_comment.sub(r'\1 ', csource)
+    # Handle the string literals in line directives (filename may contain //, but not /* currently)
+    csource = _r_line_directive_comment.sub(r'\1 ', csource)
+
     # Remove the "#define FOO x" lines
     macros = {}
     for match in _r_define.finditer(csource):
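To show what this version is intended to do, here are the three patterns from the patch applied in the same order as in the patched _preprocess(); the sample cdef text and the filename are invented:

import re

_r_block_comment = re.compile(r"/\*.*?\*/", re.DOTALL | re.MULTILINE)
_r_line_comment = re.compile(r"^(?!\s*#line\s*)([^\n]*?)//(?:[^\n\\]|\\.)*?$",
                             re.DOTALL | re.MULTILINE)
_r_line_directive_comment = re.compile(
    r'(^\s*#line\s*\d*\s*(?:"[^\n]*?")?\s*)//(?:[^\n\\]|\\.)*?$',
    re.DOTALL | re.MULTILINE)

csource = ('#line 12 "c:/generated//headers/foo.h"  // emitted by a generator\n'
           'int foo(int);  /* block comment */  // trailing comment\n')
csource = _r_block_comment.sub(' ', csource)
csource = _r_line_comment.sub(r'\1 ', csource)
csource = _r_line_directive_comment.sub(r'\1 ', csource)
print(csource)
# The '//' inside the quoted filename survives; both kinds of real
# comments are stripped.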
I think I prefer your solution though. @arigo Curious if you'd be interested in a more unified implementation of the preprocessing? Maybe a push-down automaton of sorts? I'd be happy to work on such a thing and I feel like it would make further syntax enhancements a fair bit easier.
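Just to make the idea concrete, one possible shape for such a unified pass is a small character-level scanner that tracks whether it is inside a quoted string, a block comment, or a line comment; this is only a sketch, not a worked-out proposal, and the function name is made up:

def _strip_comments_single_pass(csource):
    # Single pass that removes /*...*/ and //... comments but copies
    # quoted strings (e.g. #line filenames) through untouched.
    out = []
    i, n = 0, len(csource)
    while i < n:
        if csource[i] == '"':                     # quoted string
            j = i + 1
            while j < n and csource[j] != '"':
                j += 2 if csource[j] == '\\' else 1
            out.append(csource[i:j + 1])
            i = j + 1
        elif csource.startswith('/*', i):         # block comment
            j = csource.find('*/', i + 2)
            i = n if j < 0 else j + 2
            out.append(' ')
        elif csource.startswith('//', i):         # line comment
            j = csource.find('\n', i + 2)
            i = n if j < 0 else j                 # keep the newline itself
            out.append(' ')
        else:
            out.append(csource[i])
            i += 1
    return ''.join(out)

The point being that the string-vs-comment decision is made once, in one place, instead of being re-encoded in every regex.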
I also tried to split the comment-removing regexp in two, but realized I couldn't---there are tests showing why. Did you even try to run the existing tests? Notably testing/cffi0/test_parsing.py :-) That's what tests are for: such a change also needs its own new tests. Otherwise, the next time we need to make a tweak, we're likely to break this change here---just like now we would have broken these older details if they weren't tested.
Agreed. Never would have suggested merging it without tests :)
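For what it's worth, a regression test along these lines could cover the new case; the test name and the cdef text here are illustrative, not the tests that actually exist in testing/cffi0/test_parsing.py:

from cffi import FFI

def test_line_directive_filename_with_double_slash():
    # The '//' inside the quoted filename must not be stripped as a
    # comment, while the real trailing comment still must be; cdef()
    # should simply not raise here.
    ffi = FFI()
    ffi.cdef('#line 7 "c:/project//include/foo.h"\n'
             'int foo(int);   // a regular comment\n')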
With that in mind, are there tests somewhere for setuptools_ext? I'd love to add tests to my other patch (adding support for setup.cfg, pyproject.toml, etc.).
I'll take a look at those failing tests; I probably made a stupid assumption.
Sorry, you asked that question already but I forgot! Yes, tests for that are in testing/cffi0/test_zintegration where we try to make a virtualenv and install dummy cffi-based extensions inside.
Ok, I'm afraid the patch doesn't work for us. I'm sorry if my comment was confusing, but the compiler is using the shorter # <num> ... syntax, as pasted in the samples.
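Concretely, the patched patterns only special-case the literal #line keyword, so a short-form linemarker (the kind compilers emit when preprocessing with -E) is still mangled; the filename below is invented:

import re

# _r_line_comment from the patch above: it only excludes lines that
# start with the literal '#line' keyword.
_r_line_comment = re.compile(r"^(?!\s*#line\s*)([^\n]*?)//(?:[^\n\\]|\\.)*?$",
                             re.DOTALL | re.MULTILINE)

csource = '# 1 "c:/project//include/foo.h"\nint foo(int);\n'
print(_r_line_comment.sub(r'\1 ', csource))
# Output -- the filename is still cut off at the '//':
#   # 1 "c:/project 
#   int foo(int);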