
0.5.3: pytest is failing in some units #194

Open
kloczek opened this issue May 20, 2023 · 11 comments

kloczek commented May 20, 2023

I'm packaging your module as an RPM, so I'm using the typical PEP 517 build, install, and test cycle used when building packages from a non-root account:

  • python3 -sBm build -w --no-isolation
  • because build is called with --no-isolation, only locally installed modules are used throughout the whole process
  • install the .whl file into </install/prefix> using the `installer` module
  • run pytest with $PYTHONPATH pointing to the sitearch and sitelib directories inside </install/prefix>
  • the build is performed in an environment cut off from the public network (pytest is executed with -m "not network")
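The cycle above can be sketched roughly as follows (the staging prefix and wheel path are illustrative placeholders, not the actual buildroot paths used here):

```shell
# Build a wheel using only locally installed modules (no build isolation)
python3 -sBm build -w --no-isolation

# Install the wheel into a staging prefix using the 'installer' module
# (PREFIX stands in for the real </install/prefix> buildroot path)
PREFIX=/tmp/install-prefix
python3 -m installer --destdir "$PREFIX" dist/*.whl

# Run the test suite against the staged install, skipping network tests
PYTHONPATH="$PREFIX/usr/lib64/python3.8/site-packages:$PREFIX/usr/lib/python3.8/site-packages" \
    pytest -ra -m "not network"
```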

Here is the list of modules installed in the build environment:

Package                       Version
----------------------------- -------
alabaster                     0.7.13
asttokens                     2.2.1
Babel                         2.12.1
backcall                      0.2.0
build                         0.10.0
charset-normalizer            3.1.0
contourpy                     1.0.7
cycler                        0.11.0
decorator                     5.1.1
distro                        1.8.0
docutils                      0.19
exceptiongroup                1.1.1
executing                     1.2.0
fonttools                     4.39.4
gpg                           1.20.0
idna                          3.4
imagesize                     1.4.1
importlib-metadata            6.6.0
iniconfig                     2.0.0
installer                     0.7.0
ipython                       8.12.0
jedi                          0.18.2
Jinja2                        3.1.2
kiwisolver                    1.4.4
libcomps                      0.1.19
MarkupSafe                    2.1.2
matplotlib                    3.6.3
matplotlib-inline             0.1.6
numpy                         1.24.3
olefile                       0.46
packaging                     23.1
parso                         0.8.3
pexpect                       4.8.0
pickleshare                   0.7.5
Pillow                        9.5.0
pluggy                        1.0.0
prompt-toolkit                3.0.38
ptyprocess                    0.7.0
pure-eval                     0.2.2
Pygments                      2.15.1
pyparsing                     3.0.9
pyproject_hooks               1.0.0
pytest                        7.3.1
python-dateutil               2.8.2
pytz                          2023.2
requests                      2.30.0
setuptools                    67.7.2
six                           1.16.0
snowballstemmer               2.2.0
Sphinx                        6.2.1
sphinxcontrib-applehelp       1.0.4
sphinxcontrib-devhelp         1.0.2
sphinxcontrib-htmlhelp        2.0.0
sphinxcontrib-jsmath          1.0.1
sphinxcontrib-qthelp          1.0.3
sphinxcontrib-serializinghtml 1.1.5
stack-data                    0.6.2
tomli                         2.0.1
traitlets                     5.9.0
typing_extensions             4.5.0
urllib3                       1.26.15
wcwidth                       0.2.6
wheel                         0.40.0
zipp                          3.15.0

kloczek commented May 20, 2023

Here is the pytest output:

+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-patsy-0.5.3-3.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-patsy-0.5.3-3.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra -m 'not network'
==================================================================================== test session starts ====================================================================================
platform linux -- Python 3.8.16, pytest-7.3.1, pluggy-1.0.0
rootdir: /home/tkloczko/rpmbuild/BUILD/patsy-0.5.3
configfile: setup.cfg
testpaths: patsy
collected 148 items

patsy/build.py ......F                                                                                                                                                                [  4%]
patsy/builtins.py ..                                                                                                                                                                  [  6%]
patsy/categorical.py ....                                                                                                                                                             [  8%]
patsy/constraint.py .....                                                                                                                                                             [ 12%]
patsy/contrasts.py .F.......                                                                                                                                                          [ 18%]
patsy/desc.py ..FFFF                                                                                                                                                                  [ 22%]
patsy/design_info.py ........                                                                                                                                                         [ 27%]
patsy/eval.py ..........FFFFF                                                                                                                                                         [ 37%]
patsy/infix_parser.py .                                                                                                                                                               [ 38%]
patsy/mgcv_cubic_splines.py ......FF.FFF                                                                                                                                              [ 46%]
patsy/missing.py .....                                                                                                                                                                [ 50%]
patsy/origin.py .                                                                                                                                                                     [ 50%]
patsy/parse_formula.py FFFFF                                                                                                                                                          [ 54%]
patsy/redundancy.py ....                                                                                                                                                              [ 56%]
patsy/splines.py .FFF                                                                                                                                                                 [ 59%]
patsy/test_build.py ................F                                                                                                                                                 [ 70%]
patsy/test_highlevel.py F.FFFFFFF.FFFFFFF.                                                                                                                                            [ 83%]
patsy/test_regressions.py F                                                                                                                                                           [ 83%]
patsy/test_state.py ...                                                                                                                                                               [ 85%]
patsy/tokens.py ..                                                                                                                                                                    [ 87%]
patsy/user_util.py ...                                                                                                                                                                [ 89%]
patsy/util.py .....F..........                                                                                                                                                        [100%]

========================================================================================= FAILURES ==========================================================================================
________________________________________________________________________________ test__examine_factor_types _________________________________________________________________________________

    def test__examine_factor_types():
        from patsy.categorical import C
        class MockFactor(object):
            def __init__(self):
                # You should check this using 'is', not '=='
                from patsy.origin import Origin
                self.origin = Origin("MOCK", 1, 2)

            def eval(self, state, data):
                return state[data]

            def name(self):
                return "MOCK MOCK"

        # This hacky class can only be iterated over once, but it keeps track of
        # how far it got.
        class DataIterMaker(object):
            def __init__(self):
                self.i = -1

            def __call__(self):
                return self

            def __iter__(self):
                return self

            def __next__(self):
                self.i += 1
                if self.i > 1:
                    raise StopIteration
                return self.i
            __next__ = next

        num_1dim = MockFactor()
        num_1col = MockFactor()
        num_4col = MockFactor()
        categ_1col = MockFactor()
        bool_1col = MockFactor()
        string_1col = MockFactor()
        object_1col = MockFactor()
        object_levels = (object(), object(), object())
        factor_states = {
            num_1dim: ([1, 2, 3], [4, 5, 6]),
            num_1col: ([[1], [2], [3]], [[4], [5], [6]]),
            num_4col: (np.zeros((3, 4)), np.ones((3, 4))),
            categ_1col: (C(["a", "b", "c"], levels=("a", "b", "c"),
                           contrast="MOCK CONTRAST"),
                         C(["c", "b", "a"], levels=("a", "b", "c"),
                           contrast="MOCK CONTRAST")),
            bool_1col: ([True, True, False], [False, True, True]),
            # It has to read through all the data to see all the possible levels:
            string_1col: (["a", "a", "a"], ["c", "b", "a"]),
            object_1col: ([object_levels[0]] * 3, object_levels),
            }

        it = DataIterMaker()
        (num_column_counts, cat_levels_contrasts,
>        ) = _examine_factor_types(list(factor_states.keys()), factor_states, it,
                                   NAAction())

patsy/build.py:523:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

factors = [<patsy.build.test__examine_factor_types.<locals>.MockFactor object at 0x7f69542fdc70>, <patsy.build.test__examine_fac... object at 0x7f6954244b50>, <patsy.build.test__examine_factor_types.<locals>.MockFactor object at 0x7f6954244400>, ...]
factor_states = {<patsy.build.test__examine_factor_types.<locals>.MockFactor object at 0x7f69542fdc70>: ([1, 2, 3], [4, 5, 6]), <patsy...egorical._CategoricalBox object at 0x7f69542444c0>, <patsy.categorical._CategoricalBox object at 0x7f6954244dc0>), ...}
data_iter_maker = <patsy.build.test__examine_factor_types.<locals>.DataIterMaker object at 0x7f6954244e80>, NA_action = <patsy.missing.NAAction object at 0x7f69542447c0>

    def _examine_factor_types(factors, factor_states, data_iter_maker, NA_action):
        num_column_counts = {}
        cat_sniffers = {}
        examine_needed = set(factors)
>       for data in data_iter_maker():
E       TypeError: next expected at least 1 argument, got 0

patsy/build.py:441: TypeError
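For what it's worth, the `TypeError: next expected at least 1 argument, got 0` is consistent with the `__next__ = next` class-body assignment visible in the test source above: once the method is already named `__next__`, that line rebinds `__next__` to the builtin `next()`, which the iteration machinery then invokes without an argument. A minimal standalone sketch of that mechanism (hypothetical class, not patsy code):

```python
class BrokenIter:
    """Iterator whose __next__ gets accidentally clobbered by the builtin next()."""

    def __init__(self):
        self.i = -1

    def __iter__(self):
        return self

    def __next__(self):
        self.i += 1
        if self.i > 1:
            raise StopIteration
        return self.i

    # A Python 2 era alias like "next = __next__" is harmless, but with the
    # method already named __next__, the right-hand "next" resolves to the
    # builtin next() and overwrites the method defined above.
    __next__ = next


try:
    list(BrokenIter())
except TypeError as exc:
    # Raised by the for/list machinery calling the builtin next() with no args
    print(type(exc).__name__)
```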
_________________________________________________________________________________ test__obj_to_readable_str _________________________________________________________________________________

    def test__obj_to_readable_str():
        def t(obj, expected):
            got = _obj_to_readable_str(obj)
            assert type(got) is str
            assert got == expected
        t(1, "1")
        t(1.0, "1.0")
        t("asdf", "asdf")
        t(six.u("asdf"), "asdf")
        if sys.version_info >= (3,):
            # we can use "foo".encode here b/c this is python 3!
            # a utf-8 encoded euro-sign comes out as a real euro sign.
            t("\\u20ac".encode("utf-8"), six.u("\\u20ac"))
            # but a iso-8859-15 euro sign can't be decoded, and we fall back on
            # repr()
>           t("\\u20ac".encode("iso-8859-15"), "b'\\xa4'")

patsy/contrasts.py:100:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

obj = b'\\u20ac', expected = "b'\\xa4'"

    def t(obj, expected):
        got = _obj_to_readable_str(obj)
        assert type(got) is str
>       assert got == expected
E       assert '\\u20ac' == "b'\\xa4'"
E         - b'\xa4'
E         + \u20ac

patsy/contrasts.py:89: AssertionError
________________________________________________________________________________ test_ModelDesc_from_formula ________________________________________________________________________________

    def test_ModelDesc_from_formula():
>       for input in ("y ~ x", parse_formula("y ~ x")):

patsy/desc.py:189:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'y ~ x', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_eval_formula _____________________________________________________________________________________

    def test_eval_formula():
>       _do_eval_formula_tests(_eval_tests)

patsy/desc.py:612:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/desc.py:601: in _do_eval_formula_tests
    model_desc = ModelDesc.from_formula(code)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________ test_eval_formula_error_reporting _____________________________________________________________________________

    def test_eval_formula_error_reporting():
        from patsy.parse_formula import _parsing_error_test
        parse_fn = lambda formula: ModelDesc.from_formula(formula)
>       _parsing_error_test(parse_fn, _eval_error_tests)

patsy/desc.py:617:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:273: in _parsing_error_test
    parse_fn(bad_code)
patsy/desc.py:616: in <lambda>
    parse_fn = lambda formula: ModelDesc.from_formula(formula)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a +', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
----------------------------------------------------------------------------------- Captured stdout call ------------------------------------------------------------------------------------
a <+>
'a +' 2 3
________________________________________________________________________________ test_formula_factor_origin _________________________________________________________________________________

    def test_formula_factor_origin():
        from patsy.origin import Origin
>       desc = ModelDesc.from_formula("a + b")

patsy/desc.py:621:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a + b', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
__________________________________________________________________________ test_EvalFactor_memorize_passes_needed ___________________________________________________________________________

    def test_EvalFactor_memorize_passes_needed():
        from patsy.state import stateful_transform
        foo = stateful_transform(lambda: "FOO-OBJ")
        bar = stateful_transform(lambda: "BAR-OBJ")
        quux = stateful_transform(lambda: "QUUX-OBJ")
        e = EvalFactor("foo(x) + bar(foo(y)) + quux(z, w)")

        state = {}
        eval_env = EvalEnvironment.capture(0)
>       passes = e.memorize_passes_needed(state, eval_env)

patsy/eval.py:595:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:496: in memorize_passes_needed
    eval_code = replace_bare_funcalls(self.code, new_name_maker)
patsy/eval.py:736: in replace_bare_funcalls
    for (token_type, token, origin, props) in annotated_tokens(code):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'foo(x) + bar(foo(y)) + quux(z, w)'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
________________________________________________________________________________ test_EvalFactor_end_to_end _________________________________________________________________________________

    def test_EvalFactor_end_to_end():
        from patsy.state import stateful_transform
        foo = stateful_transform(_MockTransform)
        e = EvalFactor("foo(x) + foo(foo(y))")
        state = {}
        eval_env = EvalEnvironment.capture(0)
>       passes = e.memorize_passes_needed(state, eval_env)

patsy/eval.py:652:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:496: in memorize_passes_needed
    eval_code = replace_bare_funcalls(self.code, new_name_maker)
patsy/eval.py:736: in replace_bare_funcalls
    for (token_type, token, origin, props) in annotated_tokens(code):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'foo(x) + foo(foo(y))'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
___________________________________________________________________________________ test_annotated_tokens ___________________________________________________________________________________

    def test_annotated_tokens():
>       tokens_without_origins = [(token_type, token, props)
                                  for (token_type, token, origin, props)
                                  in (annotated_tokens("a(b) + c.d"))]

patsy/eval.py:710:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:710: in <listcomp>
    tokens_without_origins = [(token_type, token, props)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a(b) + c.d'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
________________________________________________________________________________ test_replace_bare_funcalls _________________________________________________________________________________

    def test_replace_bare_funcalls():
        def replacer1(token):
            return {"a": "b", "foo": "_internal.foo.process"}.get(token, token)
        def t1(code, expected):
            replaced = replace_bare_funcalls(code, replacer1)
            print(("%r -> %r" % (code, replaced)))
            print(("(wanted %r)" % (expected,)))
            assert replaced == expected
>       t1("foobar()", "foobar()")

patsy/eval.py:750:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:746: in t1
    replaced = replace_bare_funcalls(code, replacer1)
patsy/eval.py:736: in replace_bare_funcalls
    for (token_type, token, origin, props) in annotated_tokens(code):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'foobar()'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
_______________________________________________________________________________ test_capture_obj_method_calls _______________________________________________________________________________

    def test_capture_obj_method_calls():
>       assert (capture_obj_method_calls("foo", "a + foo.baz(bar) + b.c(d)")
                == [("foo.baz", "foo.baz(bar)")])

patsy/eval.py:798:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:789: in capture_obj_method_calls
    for (token_type, token, origin, props) in annotated_tokens(code):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a + foo.baz(bar) + b.c(d)'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
______________________________________________________________________________________ test_crs_compat ______________________________________________________________________________________

knots = array([-2216.83782005,   -13.921875  ,     9.28125   ,  1477.89188004])

    def _get_natural_f(knots):
        """Returns mapping of natural cubic spline values to 2nd derivatives.

        .. note:: See 'Generalized Additive Models', Simon N. Wood, 2006, pp 145-146

        :param knots: The 1-d array knots used for cubic spline parametrization,
         must be sorted in ascending order.
        :return: A 2-d array mapping natural cubic spline values at
         knots to second derivatives.

        :raise ImportError: if scipy is not found, required for
         ``linalg.solve_banded()``
        """
        try:
>           from scipy import linalg
E           ModuleNotFoundError: No module named 'scipy'

patsy/mgcv_cubic_splines.py:34: ModuleNotFoundError

During handling of the above exception, another exception occurred:

    def test_crs_compat():
        from patsy.test_state import check_stateful
        from patsy.test_splines_crs_data import (R_crs_test_x,
                                                 R_crs_test_data,
                                                 R_crs_num_tests)
        lines = R_crs_test_data.split("\n")
        tests_ran = 0
        start_idx = lines.index("--BEGIN TEST CASE--")
        while True:
            if not lines[start_idx] == "--BEGIN TEST CASE--":
                break
            start_idx += 1
            stop_idx = lines.index("--END TEST CASE--", start_idx)
            block = lines[start_idx:stop_idx]
            test_data = {}
            for line in block:
                key, value = line.split("=", 1)
                test_data[key] = value
            # Translate the R output into Python calling conventions
            adjust_df = 0
            if test_data["spline_type"] == "cr" or test_data["spline_type"] == "cs":
                spline_type = CR
            elif test_data["spline_type"] == "cc":
                spline_type = CC
                adjust_df += 1
            else:
                raise ValueError("Unrecognized spline type %r"
                                 % (test_data["spline_type"],))
            kwargs = {}
            if test_data["absorb_cons"] == "TRUE":
                kwargs["constraints"] = "center"
                adjust_df += 1
            if test_data["knots"] != "None":
                all_knots = np.asarray(eval(test_data["knots"]))
                all_knots.sort()
                kwargs["knots"] = all_knots[1:-1]
                kwargs["lower_bound"] = all_knots[0]
                kwargs["upper_bound"] = all_knots[-1]
            else:
                kwargs["df"] = eval(test_data["nb_knots"]) - adjust_df
            output = np.asarray(eval(test_data["output"]))
            # Do the actual test
>           check_stateful(spline_type, False, R_crs_test_x, output, **kwargs)

patsy/mgcv_cubic_splines.py:815:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_state.py:81: in check_stateful
    output_chunk = t.transform(input_chunk, *args, **kwargs)
patsy/mgcv_cubic_splines.py:680: in transform
    dm = _get_crs_dmatrix(x, self._all_knots,
patsy/mgcv_cubic_splines.py:365: in _get_crs_dmatrix
    dm = _get_free_crs_dmatrix(x, knots, cyclic)
patsy/mgcv_cubic_splines.py:339: in _get_free_crs_dmatrix
    f = _get_natural_f(knots)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

knots = array([-2216.83782005,   -13.921875  ,     9.28125   ,  1477.89188004])

    def _get_natural_f(knots):
        """Returns mapping of natural cubic spline values to 2nd derivatives.

        .. note:: See 'Generalized Additive Models', Simon N. Wood, 2006, pp 145-146

        :param knots: The 1-d array knots used for cubic spline parametrization,
         must be sorted in ascending order.
        :return: A 2-d array mapping natural cubic spline values at
         knots to second derivatives.

        :raise ImportError: if scipy is not found, required for
         ``linalg.solve_banded()``
        """
        try:
            from scipy import linalg
        except ImportError: # pragma: no cover
>           raise ImportError("Cubic spline functionality requires scipy.")
E           ImportError: Cubic spline functionality requires scipy.

patsy/mgcv_cubic_splines.py:36: ImportError
----------------------------------------------------------------------------------- Captured stdout call ------------------------------------------------------------------------------------
[array([ 1.00000000e+00, -1.50000000e+00,  2.25000000e+00, -3.37500000e+00,
        5.06250000e+00, -7.59375000e+00,  1.13906250e+01, -1.70859375e+01,
        2.56289062e+01, -3.84433594e+01,  5.76650391e+01, -8.64975586e+01,
        1.29746338e+02, -1.94619507e+02,  2.91929260e+02, -4.37893890e+02,
        6.56840836e+02, -9.85261253e+02,  1.47789188e+03, -2.21683782e+03])]
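Note: this failure family has a single root cause — `scipy` is an optional dependency of patsy and is absent from the installed-module list above, so every cubic-spline and B-spline test aborts with the `ImportError` raised by `_get_natural_f`. As a hedged sketch (not patsy's actual test code), such tests are commonly guarded so a missing optional dependency skips them instead of failing the run:

```python
import importlib.util

# Hypothetical guard, not taken from patsy's test suite: detect scipy
# without importing it, so spline tests can be skipped rather than error.
HAVE_SCIPY = importlib.util.find_spec("scipy") is not None

# In a pytest module this flag would drive a skip marker, e.g.:
#   @pytest.mark.skipif(not HAVE_SCIPY, reason="cubic splines require scipy")
# or, at module level:
#   pytest.importorskip("scipy")
print(HAVE_SCIPY)
```

With a guard like this, an offline rpm build without scipy would report these tests as skipped instead of errored.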
_____________________________________________________________________________ test_crs_with_specific_constraint _____________________________________________________________________________

    def test_crs_with_specific_constraint():
        from patsy.highlevel import incr_dbuilder, build_design_matrices, dmatrix
        x = (-1.5)**np.arange(20)
        # Hard coded R values for smooth: s(x, bs="cr", k=5)
        # R> knots <- smooth$xp
        knots_R = np.array([-2216.837820053100585937,
                            -50.456909179687500000,
                            -0.250000000000000000,
                            33.637939453125000000,
                            1477.891880035400390625])
        # R> centering.constraint <- t(qr.X(attr(smooth, "qrc")))
        centering_constraint_R = np.array([[0.064910676323168478574,
                                            1.4519875239407085132,
                                            -2.1947446912471946234,
                                            1.6129783104357671153,
                                            0.064868180547550072235]])
        # values for which we want a prediction
        new_x = np.array([-3000., -200., 300., 2000.])
>       result1 = dmatrix("cr(new_x, knots=knots_R[1:-1], "
                          "lower_bound=knots_R[0], upper_bound=knots_R[-1], "
                          "constraints=centering_constraint_R)")

patsy/mgcv_cubic_splines.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'cr(new_x, knots=knots_R[1:-1], lower_bound=knots_R[0], upper_bound=knots_R[-1], constraints=centering_constraint_R)', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
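The second failure family is unrelated to scipy: every formula-parsing test dies inside `_tokenize_formula` with `TypeError: next expected at least 1 argument, got 0`. That message is the builtin `next()` complaining it was called with no iterator argument, which suggests something in the iteration chain behind the `PushbackAdapter` loop above ends up invoking a bare `next()` on this Python/module combination. A minimal reproduction of the error message itself:

```python
# Minimal reproduction of the TypeError in the parse_formula tracebacks:
# the builtin next() requires an iterator as its first argument.
try:
    next()
except TypeError as exc:
    message = str(exc)

print(message)  # e.g. "next expected at least 1 argument, got 0"
```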
______________________________________________________________________________________ test_te_1smooth ______________________________________________________________________________________

knots = array([-2216.83782005,  -108.12194824,    -5.0625    ,     3.375     ,
          72.08129883,  1477.89188004])

    def _get_natural_f(knots):
        """Returns mapping of natural cubic spline values to 2nd derivatives.

        .. note:: See 'Generalized Additive Models', Simon N. Wood, 2006, pp 145-146

        :param knots: The 1-d array knots used for cubic spline parametrization,
         must be sorted in ascending order.
        :return: A 2-d array mapping natural cubic spline values at
         knots to second derivatives.

        :raise ImportError: if scipy is not found, required for
         ``linalg.solve_banded()``
        """
        try:
>           from scipy import linalg
E           ModuleNotFoundError: No module named 'scipy'

patsy/mgcv_cubic_splines.py:34: ModuleNotFoundError

During handling of the above exception, another exception occurred:

    def test_te_1smooth():
        from patsy.splines import bs
        # Tensor product of 1 smooth covariate should be the same
        # as the smooth alone
        x = (-1.5)**np.arange(20)
>       assert np.allclose(cr(x, df=6), te(cr(x, df=6)))

patsy/mgcv_cubic_splines.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/state.py:48: in stateful_transform_wrapper
    return transform.transform(*args, **kwargs)
patsy/mgcv_cubic_splines.py:680: in transform
    dm = _get_crs_dmatrix(x, self._all_knots,
patsy/mgcv_cubic_splines.py:365: in _get_crs_dmatrix
    dm = _get_free_crs_dmatrix(x, knots, cyclic)
patsy/mgcv_cubic_splines.py:339: in _get_free_crs_dmatrix
    f = _get_natural_f(knots)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

knots = array([-2216.83782005,  -108.12194824,    -5.0625    ,     3.375     ,
          72.08129883,  1477.89188004])

    def _get_natural_f(knots):
        """Returns mapping of natural cubic spline values to 2nd derivatives.

        .. note:: See 'Generalized Additive Models', Simon N. Wood, 2006, pp 145-146

        :param knots: The 1-d array knots used for cubic spline parametrization,
         must be sorted in ascending order.
        :return: A 2-d array mapping natural cubic spline values at
         knots to second derivatives.

        :raise ImportError: if scipy is not found, required for
         ``linalg.solve_banded()``
        """
        try:
            from scipy import linalg
        except ImportError: # pragma: no cover
>           raise ImportError("Cubic spline functionality requires scipy.")
E           ImportError: Cubic spline functionality requires scipy.

patsy/mgcv_cubic_splines.py:36: ImportError
_____________________________________________________________________________________ test_te_2smooths ______________________________________________________________________________________

    def test_te_2smooths():
        from patsy.highlevel import incr_dbuilder, build_design_matrices
        x1 = (-1.5)**np.arange(20)
        x2 = (1.6)**np.arange(20)
        # Hard coded R results for smooth: te(x1, x2, bs=c("cs", "cc"), k=c(5,7))
        # Without centering constraint:
        dmatrix_R_nocons = \
            np.array([[-4.4303024184609255207e-06,  7.9884438387230142235e-06,
                       9.7987758194797719025e-06,   -7.2894213245475212959e-08,
                       1.5907686862964493897e-09,   -3.2565884983072595159e-11,
                       0.0170749607855874667439,    -3.0788499835965849050e-02,
                       -3.7765754357352458725e-02,  2.8094376299826799787e-04,
                       -6.1310290747349201414e-06,  1.2551314933193442915e-07,
                       -0.26012671685838206770,     4.6904420337437874311e-01,
                       0.5753384627946153129230,    -4.2800085814700449330e-03,
                       9.3402525733484874533e-05,   -1.9121170389937518131e-06,
                       -0.0904312240489447832781,   1.6305991924427923334e-01,
                       2.0001237112941641638e-01,   -1.4879148887003382663e-03,
                       3.2470731316462736135e-05,   -6.6473404365914134499e-07,
                       2.0447857920168824846e-05,   -3.6870296695050991799e-05,
                       -4.5225801045409022233e-05,  3.3643990293641665710e-07,
                       -7.3421200200015877329e-09,  1.5030635073660743297e-10],
                      [-9.4006130602653794302e-04,  7.8681398069163730347e-04,
                       2.4573006857381437217e-04,   -1.4524712230452725106e-04,
                       7.8216741353106329551e-05,   -3.1304283003914264551e-04,
                       3.6231183382798337611064,    -3.0324832476174168328e+00,
                       -9.4707559178211142559e-01,  5.5980126937492580286e-01,
                       -3.0145747744342332730e-01,  1.2065077148806895302e+00,
                       -35.17561267504181188315,    2.9441339255948005160e+01,
                       9.1948319320782125885216,    -5.4349184288245195873e+00,
                       2.9267472035096449012e+00,   -1.1713569391233907169e+01,
                       34.0275626863976370373166,   -2.8480442582712722555e+01,
                       -8.8947340548151565542e+00,  5.2575353623762932642e+00,
                       -2.8312249982592527786e+00,  1.1331265795534763541e+01,
                       7.9462158845078978420e-01,   -6.6508361863670617531e-01,
                       -2.0771242914526857892e-01,  1.2277550230353953542e-01,
                       -6.6115593588420035198e-02,  2.6461103043402139923e-01]])
        # With centering constraint:
        dmatrix_R_cons = \
            np.array([[0.00329998606323867252343,   1.6537431155796576600e-04,
                       -1.2392262709790753433e-04,  6.5405304166706783407e-05,
                       -6.6764045799537624095e-05,  -0.1386431081763726258504,
                       0.124297283800864313830,     -3.5487293655619825405e-02,
                       -3.0527115315785902268e-03,  5.2009247643311604277e-04,
                       -0.00384203992301702674378,  -0.058901915802819435064,
                       0.266422358491648914036,     0.5739281693874087597607,
                       -1.3171008503525844392e-03,  8.2573456631878912413e-04,
                       6.6730833453016958831e-03,   -0.1467677784718444955470,
                       0.220757650934837484913,     0.1983127687880171796664,
                       -1.6269930328365173316e-03,  -1.7785892412241208812e-03,
                       -3.2702835436351201243e-03,  -4.3252183044300757109e-02,
                       4.3403766976235179376e-02,   3.5973406402893762387e-05,
                       -5.4035858568225075046e-04,  2.9565209382794241247e-04,
                       -2.2769990750264097637e-04],
                      [0.41547954838956052681098,   1.9843570584107707994e-02,
                       -1.5746590234791378593e-02,  8.3171184312221431434e-03,
                       -8.7233014052017516377e-03,  -15.9926770785086258541696,
                       16.503663226274017716833,    -6.6005803955894726265e-01,
                       1.3986092022708346283e-01,   -2.3516913533670955050e-01,
                       0.72251037497207359905360,   -9.827337059999853963177,
                       3.917078117294827688255,     9.0171773596973618936090,
                       -5.0616811270787671617e+00,  3.0189990249009683865e+00,
                       -1.0872720629943064097e+01,  26.9308504460453121964747,
                       -21.212262927009287949431,   -9.1088328555582247503253,
                       5.2400156972500298025e+00,   -3.0593641098325474736e+00,
                       1.0919392118399086300e+01,   -4.6564290223265718538e+00,
                       4.8071307441606982991e+00,   -1.9748377005689798924e-01,
                       5.4664183716965096538e-02,   -2.8871392916916285148e-02,
                       2.3592766838010845176e-01]])
        new_x1 = np.array([11.390625, 656.84083557128906250])
        new_x2 = np.array([16.777216000000006346, 1844.6744073709567147])
        new_data = {"x1": new_x1, "x2": new_x2}
        data_chunked = [{"x1": x1[:10], "x2": x2[:10]},
                        {"x1": x1[10:], "x2": x2[10:]}]

>       builder = incr_dbuilder("te(cr(x1, df=5), cc(x2, df=6)) - 1",
                                lambda: iter(data_chunked))

patsy/mgcv_cubic_splines.py:1051:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'te(cr(x1, df=5), cc(x2, df=6)) - 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_te_3smooths ______________________________________________________________________________________

    def test_te_3smooths():
        from patsy.highlevel import incr_dbuilder, build_design_matrices
        x1 = (-1.5)**np.arange(20)
        x2 = (1.6)**np.arange(20)
        x3 = (-1.2)**np.arange(20)
        # Hard coded R results for smooth:  te(x1, x2, x3, bs=c("cr", "cs", "cc"), k=c(3,3,4))
        design_matrix_R = \
            np.array([[7.2077663709837084334e-05,   2.0648333344343273131e-03,
                       -4.7934014082310591768e-04,  2.3923430783992746568e-04,
                       6.8534265421922660466e-03,   -1.5909867344112936776e-03,
                       -6.8057712777151204314e-09,  -1.9496724335203412851e-07,
                       4.5260614658693259131e-08,   0.0101479754187435277507,
                       0.290712501531622591333,     -0.067487370093906928759,
                       0.03368233306025386619709,   0.9649092451763204847381,
                       -0.2239985793289433757547,   -9.5819975394704535133e-07,
                       -2.7449874082511405643e-05,  6.3723431275833230217e-06,
                       -1.5205851762850489204e-04,  -0.00435607204539782688624,
                       0.00101123909269346416370,   -5.0470024059694933508e-04,
                       -1.4458319360584082416e-02,  3.3564223914790921634e-03,
                       1.4357783514933466209e-08,   4.1131230514870551983e-07,
                       -9.5483976834512651038e-08]])
        new_data = {"x1": -38.443359375000000000,
                    "x2": 68.719476736000032702,
                    "x3": -5.1597803519999985156}
        data_chunked = [{"x1": x1[:10], "x2": x2[:10], "x3": x3[:10]},
                        {"x1": x1[10:], "x2": x2[10:], "x3": x3[10:]}]
>       builder = incr_dbuilder("te(cr(x1, df=3), cr(x2, df=3), cc(x3, df=3)) - 1",
                                lambda: iter(data_chunked))

patsy/mgcv_cubic_splines.py:1089:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'te(cr(x1, df=3), cr(x2, df=3), cc(x3, df=3)) - 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
__________________________________________________________________________________ test__tokenize_formula ___________________________________________________________________________________

    def test__tokenize_formula():
        code = "y ~ a + (foo(b,c +   2)) + -1 + 0 + 10"
>       tokens = list(_tokenize_formula(code, ["+", "-", "~"]))

patsy/parse_formula.py:98:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'y ~ a + (foo(b,c +   2)) + -1 + 0 + 10', operator_strings = ['+', '-', '~']

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
____________________________________________________________________________________ test_parse_formula _____________________________________________________________________________________

    def test_parse_formula():
>       _do_parse_test(_parser_tests, [])

patsy/parse_formula.py:208:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:202: in _do_parse_test
    actual = parse_formula(code, extra_operators=extra_operators)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_parse_origin _____________________________________________________________________________________

    def test_parse_origin():
>       tree = parse_formula("a ~ b + c")

patsy/parse_formula.py:211:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a ~ b + c', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_parse_errors _____________________________________________________________________________________

extra_operators = []

    def test_parse_errors(extra_operators=[]):
        def parse_fn(code):
            return parse_formula(code, extra_operators=extra_operators)
>       _parsing_error_test(parse_fn, _parser_error_tests)

patsy/parse_formula.py:285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:273: in _parsing_error_test
    parse_fn(bad_code)
patsy/parse_formula.py:284: in parse_fn
    return parse_formula(code, extra_operators=extra_operators)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a +', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
----------------------------------------------------------------------------------- Captured stdout call ------------------------------------------------------------------------------------
a <+>
'a +' 2 3
____________________________________________________________________________________ test_parse_extra_op ____________________________________________________________________________________

    def test_parse_extra_op():
        extra_operators = [Operator("|", 2, 250)]
>       _do_parse_test(_parser_tests,
                       extra_operators=extra_operators)

patsy/parse_formula.py:294:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:202: in _do_parse_test
    actual = parse_formula(code, extra_operators=extra_operators)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
______________________________________________________________________________________ test_bs_compat _______________________________________________________________________________________

x = array([1.00000000e+00, 1.50000000e+00, 2.25000000e+00, 3.37500000e+00,
       5.06250000e+00, 7.59375000e+00, 1.139062...1.94619507e+02, 2.91929260e+02, 4.37893890e+02,
       6.56840836e+02, 9.85261253e+02, 1.47789188e+03, 2.21683782e+03])
knots = array([1.00000000e+00, 1.00000000e+00, 4.80541992e+01, 2.21683782e+03,
       2.21683782e+03]), degree = 1

    def _eval_bspline_basis(x, knots, degree):
        try:
>           from scipy.interpolate import splev
E           ModuleNotFoundError: No module named 'scipy'

patsy/splines.py:20: ModuleNotFoundError

During handling of the above exception, another exception occurred:

    def test_bs_compat():
        from patsy.test_state import check_stateful
        from patsy.test_splines_bs_data import (R_bs_test_x,
                                                R_bs_test_data,
                                                R_bs_num_tests)
        lines = R_bs_test_data.split("\n")
        tests_ran = 0
        start_idx = lines.index("--BEGIN TEST CASE--")
        while True:
            if not lines[start_idx] == "--BEGIN TEST CASE--":
                break
            start_idx += 1
            stop_idx = lines.index("--END TEST CASE--", start_idx)
            block = lines[start_idx:stop_idx]
            test_data = {}
            for line in block:
                key, value = line.split("=", 1)
                test_data[key] = value
            # Translate the R output into Python calling conventions
            kwargs = {
                "degree": int(test_data["degree"]),
                # integer, or None
                "df": eval(test_data["df"]),
                # np.array() call, or None
                "knots": eval(test_data["knots"]),
                }
            if test_data["Boundary.knots"] != "None":
                lower, upper = eval(test_data["Boundary.knots"])
                kwargs["lower_bound"] = lower
                kwargs["upper_bound"] = upper
            kwargs["include_intercept"] = (test_data["intercept"] == "TRUE")
            # Special case: in R, setting intercept=TRUE increases the effective
            # dof by 1. Adjust our arguments to match.
            # if kwargs["df"] is not None and kwargs["include_intercept"]:
            #     kwargs["df"] += 1
            output = np.asarray(eval(test_data["output"]))
            if kwargs["df"] is not None:
                assert output.shape[1] == kwargs["df"]
            # Do the actual test
>           check_stateful(BS, False, R_bs_test_x, output, **kwargs)

patsy/splines.py:291:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_state.py:81: in check_stateful
    output_chunk = t.transform(input_chunk, *args, **kwargs)
patsy/splines.py:239: in transform
    basis = _eval_bspline_basis(x, self._all_knots, self._degree)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

x = array([1.00000000e+00, 1.50000000e+00, 2.25000000e+00, 3.37500000e+00,
       5.06250000e+00, 7.59375000e+00, 1.139062...1.94619507e+02, 2.91929260e+02, 4.37893890e+02,
       6.56840836e+02, 9.85261253e+02, 1.47789188e+03, 2.21683782e+03])
knots = array([1.00000000e+00, 1.00000000e+00, 4.80541992e+01, 2.21683782e+03,
       2.21683782e+03]), degree = 1

    def _eval_bspline_basis(x, knots, degree):
        try:
            from scipy.interpolate import splev
        except ImportError: # pragma: no cover
>           raise ImportError("spline functionality requires scipy")
E           ImportError: spline functionality requires scipy

patsy/splines.py:22: ImportError
----------------------------------------------------------------------------------- Captured stdout call ------------------------------------------------------------------------------------
[array([1.00000000e+00, 1.50000000e+00, 2.25000000e+00, 3.37500000e+00,
       5.06250000e+00, 7.59375000e+00, 1.13906250e+01, 1.70859375e+01,
       2.56289062e+01, 3.84433594e+01, 5.76650391e+01, 8.64975586e+01,
       1.29746338e+02, 1.94619507e+02, 2.91929260e+02, 4.37893890e+02,
       6.56840836e+02, 9.85261253e+02, 1.47789188e+03, 2.21683782e+03])]
______________________________________________________________________________________ test_bs_0degree ______________________________________________________________________________________

x = array([ 0.1       ,  0.16681005,  0.27825594,  0.46415888,  0.77426368,
        1.29154967,  2.15443469,  3.59381366,  5.9948425 , 10.        ]), knots = array([ 0.1,  1. ,  4. , 10. ])
degree = 0

    def _eval_bspline_basis(x, knots, degree):
        try:
>           from scipy.interpolate import splev
E           ModuleNotFoundError: No module named 'scipy'

patsy/splines.py:20: ModuleNotFoundError

During handling of the above exception, another exception occurred:

    def test_bs_0degree():
        x = np.logspace(-1, 1, 10)
>       result = bs(x, knots=[1, 4], degree=0, include_intercept=True)

patsy/splines.py:303:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/state.py:48: in stateful_transform_wrapper
    return transform.transform(*args, **kwargs)
patsy/splines.py:239: in transform
    basis = _eval_bspline_basis(x, self._all_knots, self._degree)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

x = array([ 0.1       ,  0.16681005,  0.27825594,  0.46415888,  0.77426368,
        1.29154967,  2.15443469,  3.59381366,  5.9948425 , 10.        ]), knots = array([ 0.1,  1. ,  4. , 10. ])
degree = 0

    def _eval_bspline_basis(x, knots, degree):
        try:
            from scipy.interpolate import splev
        except ImportError: # pragma: no cover
>           raise ImportError("spline functionality requires scipy")
E           ImportError: spline functionality requires scipy

patsy/splines.py:22: ImportError
______________________________________________________________________________________ test_bs_errors _______________________________________________________________________________________

x = array([-10.        ,  -8.94736842,  -7.89473684,  -6.84210526,
        -5.78947368,  -4.73684211,  -3.68421053,  -2.63...  2.63157895,   3.68421053,   4.73684211,   5.78947368,
         6.84210526,   7.89473684,   8.94736842,  10.        ])
knots = array([ 0.,  0.,  0.,  0., 10., 10., 10., 10.]), degree = 3

    def _eval_bspline_basis(x, knots, degree):
        try:
>           from scipy.interpolate import splev
E           ModuleNotFoundError: No module named 'scipy'

patsy/splines.py:20: ModuleNotFoundError

During handling of the above exception, another exception occurred:

    def test_bs_errors():
        import pytest
        x = np.linspace(-10, 10, 20)
        # error checks:
        # out of bounds
>       pytest.raises(NotImplementedError, bs, x, 3, lower_bound=0)

patsy/splines.py:333:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/state.py:48: in stateful_transform_wrapper
    return transform.transform(*args, **kwargs)
patsy/splines.py:239: in transform
    basis = _eval_bspline_basis(x, self._all_knots, self._degree)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

x = array([-10.        ,  -8.94736842,  -7.89473684,  -6.84210526,
        -5.78947368,  -4.73684211,  -3.68421053,  -2.63...  2.63157895,   3.68421053,   4.73684211,   5.78947368,
         6.84210526,   7.89473684,   8.94736842,  10.        ])
knots = array([ 0.,  0.,  0.,  0., 10., 10., 10., 10.]), degree = 3

    def _eval_bspline_basis(x, knots, degree):
        try:
            from scipy.interpolate import splev
        except ImportError: # pragma: no cover
>           raise ImportError("spline functionality requires scipy")
E           ImportError: spline functionality requires scipy

patsy/splines.py:22: ImportError
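The three spline failures above are all the same root cause: `scipy` is an optional dependency of patsy, and the network-isolated build environment (see the package list at the top of this issue) does not provide it. One workaround is to deselect the scipy-dependent test modules when scipy is absent — this is a sketch for a local `conftest.py` on the packaging side, not anything patsy ships, and the exact module paths to ignore are my assumption:

```python
import importlib.util

# conftest.py sketch: skip collection of scipy-dependent test modules
# when scipy is not installed in the (network-isolated) build env.
HAVE_SCIPY = importlib.util.find_spec("scipy") is not None

collect_ignore = []
if not HAVE_SCIPY:
    # Hypothetical list: modules whose tests call scipy.interpolate.
    collect_ignore += ["patsy/splines.py", "patsy/mgcv_cubic_splines.py"]
```

The alternative is simply adding scipy to the rpm `BuildRequires` so the full suite runs.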
__________________________________________________________________________________ test_DesignInfo_subset ___________________________________________________________________________________

    def test_DesignInfo_subset():
        # For each combination of:
        #   formula, term names, term objects, mixed term name and term objects
        # check that results match subset of full build
        # and that removed variables don't hurt
        all_data = {"x": [1, 2],
                    "y": [[3.1, 3.2],
                          [4.1, 4.2]],
                    "z": [5, 6]}
        all_terms = make_termlist("x", "y", "z")
        def iter_maker():
            yield all_data
        all_builder = design_matrix_builders([all_terms], iter_maker, 0)[0]
        full_matrix = build_design_matrices([all_builder], all_data)[0]

        def t(which_terms, variables, columns):
            sub_design_info = all_builder.subset(which_terms)
            sub_data = {}
            for variable in variables:
                sub_data[variable] = all_data[variable]
            sub_matrix = build_design_matrices([sub_design_info], sub_data)[0]
            sub_full_matrix = full_matrix[:, columns]
            if not isinstance(which_terms, six.string_types):
                assert len(which_terms) == len(sub_design_info.terms)
            assert np.array_equal(sub_matrix, sub_full_matrix)

>       t("~ 0 + x + y + z", ["x", "y", "z"], slice(None))

patsy/test_build.py:700:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_build.py:690: in t
    sub_design_info = all_builder.subset(which_terms)
patsy/design_info.py:630: in subset
    desc = ModelDesc.from_formula(which_terms)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 0 + x + y + z', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
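All of the remaining formula-parsing failures below end in the same `TypeError: next expected at least 1 argument, got 0` while iterating a `PushbackAdapter`. That exact message is what you get when a class ends up with the *builtin* `next` as its `__next__` attribute: builtin functions are not descriptors, so they never bind `self`, and the for-loop ends up calling `next()` with zero arguments. Whether this is precisely what happens inside patsy's `PushbackAdapter` on this Python version is my guess, but the mechanism reproduces the message from the log:

```python
import builtins

class Broken:
    """Iterator whose __next__ is the builtin next() -- a hypothetical
    stand-in for whatever PushbackAdapter resolves to here."""
    def __init__(self, it):
        self._it = iter(it)

    def __iter__(self):
        return self

    # Builtin functions don't implement __get__, so this attribute is
    # never bound to the instance: iteration calls next() with 0 args.
    __next__ = builtins.next

try:
    for _ in Broken([1, 2, 3]):
        pass
except TypeError as exc:
    print(exc)  # next expected at least 1 argument, got 0
```

If that is the cause, defining a real `def __next__(self): ...` method on the adapter (and having `next` alias it, rather than the other way around) would fix it.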
____________________________________________________________________________________ test_formula_likes _____________________________________________________________________________________

    def test_formula_likes():
        # Plain array-like, rhs only
        t([[1, 2, 3], [4, 5, 6]], {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"])
        t((None, [[1, 2, 3], [4, 5, 6]]), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"])
        t(np.asarray([[1, 2, 3], [4, 5, 6]]), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"])
        t((None, np.asarray([[1, 2, 3], [4, 5, 6]])), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"])
        dm = DesignMatrix([[1, 2, 3], [4, 5, 6]], default_column_prefix="foo")
        t(dm, {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["foo0", "foo1", "foo2"])
        t((None, dm), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["foo0", "foo1", "foo2"])

        # Plain array-likes, lhs and rhs
        t(([1, 2], [[1, 2, 3], [4, 5, 6]]), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"],
          [[1], [2]], ["y0"])
        t(([[1], [2]], [[1, 2, 3], [4, 5, 6]]), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"],
          [[1], [2]], ["y0"])
        t((np.asarray([1, 2]), np.asarray([[1, 2, 3], [4, 5, 6]])), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"],
          [[1], [2]], ["y0"])
        t((np.asarray([[1], [2]]), np.asarray([[1, 2, 3], [4, 5, 6]])), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"],
          [[1], [2]], ["y0"])
        x_dm = DesignMatrix([[1, 2, 3], [4, 5, 6]], default_column_prefix="foo")
        y_dm = DesignMatrix([1, 2], default_column_prefix="bar")
        t((y_dm, x_dm), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["foo0", "foo1", "foo2"],
          [[1], [2]], ["bar0"])
        # number of rows must match
        t_invalid(([1, 2, 3], [[1, 2, 3], [4, 5, 6]]), {}, 0)

        # tuples must have the right size
        t_invalid(([[1, 2, 3]],), {}, 0)
        t_invalid(([[1, 2, 3]], [[1, 2, 3]], [[1, 2, 3]]), {}, 0)

        # plain Series and DataFrames
        if have_pandas:
            # Names are extracted
            t(pandas.DataFrame({"x": [1, 2, 3]}), {}, 0,
              False,
              [[1], [2], [3]], ["x"])
            t(pandas.Series([1, 2, 3], name="asdf"), {}, 0,
              False,
              [[1], [2], [3]], ["asdf"])
            t((pandas.DataFrame({"y": [4, 5, 6]}),
               pandas.DataFrame({"x": [1, 2, 3]})), {}, 0,
              False,
              [[1], [2], [3]], ["x"],
              [[4], [5], [6]], ["y"])
            t((pandas.Series([4, 5, 6], name="y"),
               pandas.Series([1, 2, 3], name="x")), {}, 0,
              False,
              [[1], [2], [3]], ["x"],
              [[4], [5], [6]], ["y"])
            # Or invented
            t((pandas.DataFrame([[4, 5, 6]]),
               pandas.DataFrame([[1, 2, 3]], columns=[7, 8, 9])), {}, 0,
              False,
              [[1, 2, 3]], ["x7", "x8", "x9"],
              [[4, 5, 6]], ["y0", "y1", "y2"])
            t(pandas.Series([1, 2, 3]), {}, 0,
              False,
              [[1], [2], [3]], ["x0"])
            # indices must match
            t_invalid((pandas.DataFrame([[1]], index=[1]),
                       pandas.DataFrame([[1]], index=[2])),
                      {}, 0)

        # Foreign ModelDesc factories
        class ForeignModelSource(object):
            def __patsy_get_model_desc__(self, data):
                return ModelDesc([Term([LookupFactor("Y")])],
                                 [Term([LookupFactor("X")])])
        foreign_model = ForeignModelSource()
        t(foreign_model,
          {"Y": [1, 2],
           "X": [[1, 2], [3, 4]]},
          0,
          True,
          [[1, 2], [3, 4]], ["X[0]", "X[1]"],
          [[1], [2]], ["Y"])
        class BadForeignModelSource(object):
            def __patsy_get_model_desc__(self, data):
                return data
        t_invalid(BadForeignModelSource(), {}, 0)

        # string formulas
>       t("y ~ x", {"y": [1, 2], "x": [3, 4]}, 0,
          True,
          [[1, 3], [1, 4]], ["Intercept", "x"],
          [[1], [2]], ["y"])

patsy/test_highlevel.py:252:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:87: in t
    builders = incr_dbuilders(formula_like, data_iter_maker, depth)
patsy/highlevel.py:129: in incr_dbuilders
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'y ~ x', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
______________________________________________________________________________________ test_term_info _______________________________________________________________________________________

    def test_term_info():
        data = balanced(a=2, b=2)
>       rhs = dmatrix("a:b", data)

patsy/test_highlevel.py:400:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a:b', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
______________________________________________________________________________________ test_data_types ______________________________________________________________________________________

    def test_data_types():
        data = {"a": [1, 2, 3],
                "b": [1.0, 2.0, 3.0],
                "c": np.asarray([1, 2, 3], dtype=np.float32),
                "d": [True, False, True],
                "e": ["foo", "bar", "baz"],
                "f": C([1, 2, 3]),
                "g": C(["foo", "bar", "baz"]),
                "h": np.array(["foo", 1, (1, "hi")], dtype=object),
                }
>       t("~ 0 + a", data, 0, True,
          [[1], [2], [3]], ["a"])

patsy/test_highlevel.py:417:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 0 + a', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_categorical ______________________________________________________________________________________

    def test_categorical():
        data = balanced(a=2, b=2)
        # There are more exhaustive tests for all the different coding options in
        # test_build; let's just make sure that C() and stuff works.
>       t("~ C(a)", data, 0,
          True,
          [[1, 0], [1, 0], [1, 1], [1, 1]], ["Intercept", "C(a)[T.a2]"])

patsy/test_highlevel.py:440:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ C(a)', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_______________________________________________________________________________________ test_builtins _______________________________________________________________________________________

    def test_builtins():
        data = {"x": [1, 2, 3],
                "y": [4, 5, 6],
                "a b c": [10, 20, 30]}
>       t("0 + I(x + y)", data, 0,
          True,
          [[1], [2], [3], [4], [5], [6]], ["I(x + y)"])

patsy/test_highlevel.py:492:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '0 + I(x + y)', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_incremental ______________________________________________________________________________________

    def test_incremental():
        # incr_dbuilder(s)
        # stateful transformations
        datas = [
            {"a": ["a2", "a2", "a2"],
             "x": [1, 2, 3]},
            {"a": ["a2", "a2", "a1"],
             "x": [4, 5, 6]},
            ]
        x = np.asarray([1, 2, 3, 4, 5, 6])
        sin_center_x = np.sin(x - np.mean(x))
        x_col = sin_center_x - np.mean(sin_center_x)
        def data_iter_maker():
            return iter(datas)
>       builders = incr_dbuilders("1 ~ a + center(np.sin(center(x)))",
                                  data_iter_maker)

patsy/test_highlevel.py:516:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:129: in incr_dbuilders
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '1 ~ a + center(np.sin(center(x)))', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
____________________________________________________________________________________ test_env_transform _____________________________________________________________________________________

    def test_env_transform():
>       t("~ np.sin(x)", {"x": [1, 2, 3]}, 0,
          True,
          [[1, np.sin(1)], [1, np.sin(2)], [1, np.sin(3)]],
          ["Intercept", "np.sin(x)"])

patsy/test_highlevel.py:543:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ np.sin(x)', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
______________________________________________________________________________________ test_term_order ______________________________________________________________________________________

    def test_term_order():
        data = balanced(a=2, b=2)
        data["x1"] = np.linspace(0, 1, 4)
        data["x2"] = data["x1"] ** 2

        def t_terms(formula, order):
            m = dmatrix(formula, data)
            assert m.design_info.term_names == order

>       t_terms("a + b + x1 + x2", ["Intercept", "a", "b", "x1", "x2"])

patsy/test_highlevel.py:567:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:564: in t_terms
    m = dmatrix(formula, data)
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a + b + x1 + x2', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_multicolumn ______________________________________________________________________________________

    def test_multicolumn():
        data = {
            "a": ["a1", "a2"],
            "X": [[1, 2], [3, 4]],
            "Y": [[1, 3], [2, 4]],
            }
>       t("X*Y", data, 0,
          True,
          [[1, 1, 2, 1, 3, 1 * 1, 2 * 1, 1 * 3, 2 * 3],
           [1, 3, 4, 2, 4, 3 * 2, 4 * 2, 3 * 4, 4 * 4]],
          ["Intercept", "X[0]", "X[1]", "Y[0]", "Y[1]",
           "X[0]:Y[0]", "X[1]:Y[0]", "X[0]:Y[1]", "X[1]:Y[1]"])

patsy/test_highlevel.py:607:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'X*Y', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
______________________________________________________________________________ test_dmatrix_dmatrices_no_data _______________________________________________________________________________

    def test_dmatrix_dmatrices_no_data():
        x = [1, 2, 3]
        y = [4, 5, 6]
>       assert np.allclose(dmatrix("x"), [[1, 1], [1, 2], [1, 3]])

patsy/test_highlevel.py:624:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'x', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_________________________________________________________________________________ test_designinfo_describe __________________________________________________________________________________

    def test_designinfo_describe():
>       lhs, rhs = dmatrices("y ~ x + a", {"y": [1, 2, 3],
                                           "x": [4, 5, 6],
                                           "a": ["a1", "a2", "a3"]})

patsy/test_highlevel.py:630:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:309: in dmatrices
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'y ~ x + a', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
__________________________________________________________________________________ test_evalfactor_reraise __________________________________________________________________________________

    def test_evalfactor_reraise():
        # This will produce a PatsyError, but buried inside the factor evaluation,
        # so the original code has no way to give it an appropriate origin=
        # attribute. EvalFactor should notice this, and add a useful origin:
        def raise_patsy_error(x):
            raise PatsyError("WHEEEEEE")
        formula = "raise_patsy_error(X) + Y"
        try:
>           dmatrix(formula, {"X": [1, 2, 3], "Y": [4, 5, 6]})

patsy/test_highlevel.py:644:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'raise_patsy_error(X) + Y', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
__________________________________________________________________________________ test_dmatrix_NA_action ___________________________________________________________________________________

    def test_dmatrix_NA_action():
        data = {"x": [1, 2, 3, np.nan], "y": [np.nan, 20, 30, 40]}

        return_types = ["matrix"]
        if have_pandas:
            return_types.append("dataframe")

        for return_type in return_types:
>           mat = dmatrix("x + y", data=data, return_type=return_type)

patsy/test_highlevel.py:671:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'x + y', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_______________________________________________________________________________________ test_0d_data ________________________________________________________________________________________

    def test_0d_data():
        # Use case from statsmodels/statsmodels#1881
        data_0d = {"x1": 1.1, "x2": 1.2, "a": "a1"}

        for formula, expected in [
                ("x1 + x2", [[1, 1.1, 1.2]]),
                ("C(a, levels=('a1', 'a2')) + x1", [[1, 0, 1.1]]),
                ]:
>           mat = dmatrix(formula, data_0d)

patsy/test_highlevel.py:710:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'x1 + x2', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_______________________________________________________________________________ test_env_not_saved_in_builder _______________________________________________________________________________

    def test_env_not_saved_in_builder():
        x_in_env = [1, 2, 3]
>       design_matrix = dmatrix("x_in_env", {})

patsy/test_highlevel.py:726:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'x_in_env', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_______________________________________________________________________________________ test_issue_11 _______________________________________________________________________________________

    def test_issue_11():
        # Give a sensible error message for level mismatches
        # (At some points we've failed to put an origin= on these errors)
        env = EvalEnvironment.capture()
        data = {"X" : [0,1,2,3], "Y" : [1,2,3,4]}
        formula = "C(X) + Y"
        new_data = {"X" : [0,0,1,2,3,3,4], "Y" : [1,2,3,4,5,6,7]}
>       info = dmatrix(formula, data)

patsy/test_regressions.py:18:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'C(X) + Y', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
___________________________________________________________________________________ test_PushbackAdapter ____________________________________________________________________________________

    def test_PushbackAdapter():
        it = PushbackAdapter(iter([1, 2, 3, 4]))
>       assert it.has_more()

patsy/util.py:370:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/util.py:362: in has_more
    self.peek()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <patsy.util.PushbackAdapter object at 0x7f69547dc5b0>

    def peek(self):
        try:
>           obj = six.advance_iterator(self)
E           TypeError: next expected at least 1 argument, got 0

patsy/util.py:354: TypeError
================================================================================== short test summary info ==================================================================================
FAILED patsy/build.py::test__examine_factor_types - TypeError: next expected at least 1 argument, got 0
FAILED patsy/contrasts.py::test__obj_to_readable_str - assert '\\u20ac' == "b'\\xa4'"
FAILED patsy/desc.py::test_ModelDesc_from_formula - TypeError: next expected at least 1 argument, got 0
FAILED patsy/desc.py::test_eval_formula - TypeError: next expected at least 1 argument, got 0
FAILED patsy/desc.py::test_eval_formula_error_reporting - TypeError: next expected at least 1 argument, got 0
FAILED patsy/desc.py::test_formula_factor_origin - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_EvalFactor_memorize_passes_needed - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_EvalFactor_end_to_end - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_annotated_tokens - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_replace_bare_funcalls - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_capture_obj_method_calls - TypeError: next expected at least 1 argument, got 0
FAILED patsy/mgcv_cubic_splines.py::test_crs_compat - ImportError: Cubic spline functionality requires scipy.
FAILED patsy/mgcv_cubic_splines.py::test_crs_with_specific_constraint - TypeError: next expected at least 1 argument, got 0
FAILED patsy/mgcv_cubic_splines.py::test_te_1smooth - ImportError: Cubic spline functionality requires scipy.
FAILED patsy/mgcv_cubic_splines.py::test_te_2smooths - TypeError: next expected at least 1 argument, got 0
FAILED patsy/mgcv_cubic_splines.py::test_te_3smooths - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test__tokenize_formula - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test_parse_formula - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test_parse_origin - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test_parse_errors - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test_parse_extra_op - TypeError: next expected at least 1 argument, got 0
FAILED patsy/splines.py::test_bs_compat - ImportError: spline functionality requires scipy
FAILED patsy/splines.py::test_bs_0degree - ImportError: spline functionality requires scipy
FAILED patsy/splines.py::test_bs_errors - ImportError: spline functionality requires scipy
FAILED patsy/test_build.py::test_DesignInfo_subset - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_formula_likes - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_term_info - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_data_types - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_categorical - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_builtins - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_incremental - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_env_transform - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_term_order - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_multicolumn - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_dmatrix_dmatrices_no_data - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_designinfo_describe - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_evalfactor_reraise - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_dmatrix_NA_action - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_0d_data - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_env_not_saved_in_builder - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_regressions.py::test_issue_11 - TypeError: next expected at least 1 argument, got 0
FAILED patsy/util.py::test_PushbackAdapter - TypeError: next expected at least 1 argument, got 0
============================================================================== 42 failed, 106 passed in 38.91s ==============================================================================

@kloczek
Author

kloczek commented Nov 30, 2023

Just tested the new version and those units are still failing.
Is it at least possible to reproduce this issue? 🤔

@matthewwardrop
Collaborator

Hi @kloczek! Sorry for the delay, and thanks for reporting. That does look like an annoying issue.

I cannot reproduce that error here, and things are working in the CI tests. I don't have a whole lot of time to dig into obscure patsy issues, since I'd much rather spend my time on formulaic or other work. If you do some digging and find out what's going on, let me know, and I'll happily work on a fix.

Alternatively, perhaps @njsmith has some insight?

@kloczek
Author

kloczek commented Dec 25, 2023

Here is the pytest output of the new 0.5.5:
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-patsy-0.5.5-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-patsy-0.5.5-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra -m 'not network'
==================================================================================== test session starts ====================================================================================
platform linux -- Python 3.8.18, pytest-7.4.3, pluggy-1.3.0
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/tkloczko/rpmbuild/BUILD/patsy-0.5.5
configfile: setup.cfg
testpaths: patsy
plugins: hypothesis-6.84.3, xdist-3.5.0, flaky-3.7.0, anyio-4.2.0, asyncio-0.23.2, benchmark-4.0.0, mock-3.12.0, Faker-21.0.0, services-2.2.1
asyncio: mode=strict
collected 148 items

patsy/build.py ......F                                                                                                                                                                [  4%]
patsy/builtins.py ..                                                                                                                                                                  [  6%]
patsy/categorical.py ....                                                                                                                                                             [  8%]
patsy/constraint.py .....                                                                                                                                                             [ 12%]
patsy/contrasts.py .F.......                                                                                                                                                          [ 18%]
patsy/desc.py ..FFFF                                                                                                                                                                  [ 22%]
patsy/design_info.py ........                                                                                                                                                         [ 27%]
patsy/eval.py ..........FFFFF                                                                                                                                                         [ 37%]
patsy/infix_parser.py .                                                                                                                                                               [ 38%]
patsy/mgcv_cubic_splines.py ......FF.FFF                                                                                                                                              [ 46%]
patsy/missing.py .....                                                                                                                                                                [ 50%]
patsy/origin.py .                                                                                                                                                                     [ 50%]
patsy/parse_formula.py FFFFF                                                                                                                                                          [ 54%]
patsy/redundancy.py ....                                                                                                                                                              [ 56%]
patsy/splines.py .FFF                                                                                                                                                                 [ 59%]
patsy/test_build.py ................F                                                                                                                                                 [ 70%]
patsy/test_highlevel.py FFFFFFFFF.FFFFFFFF                                                                                                                                            [ 83%]
patsy/test_regressions.py F                                                                                                                                                           [ 83%]
patsy/test_state.py ...                                                                                                                                                               [ 85%]
patsy/tokens.py ..                                                                                                                                                                    [ 87%]
patsy/user_util.py ...                                                                                                                                                                [ 89%]
patsy/util.py .....F..........                                                                                                                                                        [100%]

========================================================================================= FAILURES ==========================================================================================
________________________________________________________________________________ test__examine_factor_types _________________________________________________________________________________

    def test__examine_factor_types():
        from patsy.categorical import C
        class MockFactor(object):
            def __init__(self):
                # You should check this using 'is', not '=='
                from patsy.origin import Origin
                self.origin = Origin("MOCK", 1, 2)

            def eval(self, state, data):
                return state[data]

            def name(self):
                return "MOCK MOCK"

        # This hacky class can only be iterated over once, but it keeps track of
        # how far it got.
        class DataIterMaker(object):
            def __init__(self):
                self.i = -1

            def __call__(self):
                return self

            def __iter__(self):
                return self

            def __next__(self):
                self.i += 1
                if self.i > 1:
                    raise StopIteration
                return self.i
            __next__ = next

        num_1dim = MockFactor()
        num_1col = MockFactor()
        num_4col = MockFactor()
        categ_1col = MockFactor()
        bool_1col = MockFactor()
        string_1col = MockFactor()
        object_1col = MockFactor()
        object_levels = (object(), object(), object())
        factor_states = {
            num_1dim: ([1, 2, 3], [4, 5, 6]),
            num_1col: ([[1], [2], [3]], [[4], [5], [6]]),
            num_4col: (np.zeros((3, 4)), np.ones((3, 4))),
            categ_1col: (C(["a", "b", "c"], levels=("a", "b", "c"),
                           contrast="MOCK CONTRAST"),
                         C(["c", "b", "a"], levels=("a", "b", "c"),
                           contrast="MOCK CONTRAST")),
            bool_1col: ([True, True, False], [False, True, True]),
            # It has to read through all the data to see all the possible levels:
            string_1col: (["a", "a", "a"], ["c", "b", "a"]),
            object_1col: ([object_levels[0]] * 3, object_levels),
            }

        it = DataIterMaker()
        (num_column_counts, cat_levels_contrasts,
>        ) = _examine_factor_types(list(factor_states.keys()), factor_states, it,
                                   NAAction())

patsy/build.py:523:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

factors = [<patsy.build.test__examine_factor_types.<locals>.MockFactor object at 0x7f88202559d0>, <patsy.build.test__examine_fac... object at 0x7f88202551f0>, <patsy.build.test__examine_factor_types.<locals>.MockFactor object at 0x7f8820255f10>, ...]
factor_states = {<patsy.build.test__examine_factor_types.<locals>.MockFactor object at 0x7f88202559d0>: ([1, 2, 3], [4, 5, 6]), <patsy...egorical._CategoricalBox object at 0x7f8820255310>, <patsy.categorical._CategoricalBox object at 0x7f88202558e0>), ...}
data_iter_maker = <patsy.build.test__examine_factor_types.<locals>.DataIterMaker object at 0x7f8820255a30>, NA_action = <patsy.missing.NAAction object at 0x7f8820255640>

    def _examine_factor_types(factors, factor_states, data_iter_maker, NA_action):
        num_column_counts = {}
        cat_sniffers = {}
        examine_needed = set(factors)
>       for data in data_iter_maker():
E       TypeError: next expected at least 1 argument, got 0

patsy/build.py:441: TypeError
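Most of the failures below share this same `TypeError`. My guess at the mechanism (a hypothetical minimal reproduction, not code taken from patsy): if a class body assigns `__next__ = next` *without* a preceding `def next(self)` method (the pattern that the MockFactor/DataIterMaker helpers above rely on), the name `next` resolves to the builtin, and the for-loop machinery then invokes `builtins.next()` with zero arguments:

```python
# Hypothetical reproduction of the repeated TypeError; this is my guess
# at the root cause, not code taken from patsy. With no `def next(self)`
# defined earlier in the class body, `__next__ = next` aliases the
# builtin next(), which the iteration protocol then calls with no args.
class BrokenIter:
    def __init__(self, items):
        self._items = list(items)

    def __iter__(self):
        return self

    # no `def next(self)` above, so this binds builtins.next
    __next__ = next

msg = ""
try:
    for _ in BrokenIter([1, 2, 3]):
        pass
except TypeError as exc:
    msg = str(exc)

print(msg)  # next expected at least 1 argument, got 0
```

The printed message matches the one in the traceback above, which is why I suspect some `def next(self)` definitions got dropped (e.g. by a six-removal patch) while their `__next__ = next` aliases stayed behind.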
_________________________________________________________________________________ test__obj_to_readable_str _________________________________________________________________________________

    def test__obj_to_readable_str():
        def t(obj, expected):
            got = _obj_to_readable_str(obj)
            assert type(got) is str
            assert got == expected
        t(1, "1")
        t(1.0, "1.0")
        t("asdf", "asdf")
        t(six.u("asdf"), "asdf")
        if sys.version_info >= (3,):
            # we can use "foo".encode here b/c this is python 3!
            # a utf-8 encoded euro-sign comes out as a real euro sign.
            t("\\u20ac".encode("utf-8"), six.u("\\u20ac"))
            # but a iso-8859-15 euro sign can't be decoded, and we fall back on
            # repr()
>           t("\\u20ac".encode("iso-8859-15"), "b'\\xa4'")

patsy/contrasts.py:100:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

obj = b'\\u20ac', expected = "b'\\xa4'"

    def t(obj, expected):
        got = _obj_to_readable_str(obj)
        assert type(got) is str
>       assert got == expected
E       assert '\\u20ac' == "b'\\xa4'"
E         - b'\xa4'
E         + \u20ac

patsy/contrasts.py:89: AssertionError
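This one looks different from the `TypeError` failures. My reading of the assert diff (illustrative sketch only, not patsy code): the installed test source appears to contain the six literal characters `\u20ac` instead of an actual euro sign, so the bytes round-trip cleanly through UTF-8 and the `repr()` fallback that the test expects is never exercised:

```python
# Illustration of the _obj_to_readable_str failure as I read the diff
# above (an assumption about the installed sources, not patsy code).
literal = "\\u20ac"   # six ASCII characters, matching obj = b'\u20ac' in the log
euro = "\u20ac"       # a real euro sign, presumably what the test intended

# The literal decodes fine as UTF-8, so no fallback to repr():
assert literal.encode("utf-8").decode("utf-8") == literal

# A real euro sign in ISO-8859-15 is the single byte 0xA4, which is
# not valid UTF-8, so decoding fails and repr() would be used instead:
assert euro.encode("iso-8859-15") == b"\xa4"
try:
    euro.encode("iso-8859-15").decode("utf-8")
    decoded_ok = True
except UnicodeDecodeError:
    decoded_ok = False
print(decoded_ok)  # False
```

If that reading is right, the escaping was mangled somewhere between the upstream sources and the installed package.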
________________________________________________________________________________ test_ModelDesc_from_formula ________________________________________________________________________________

    def test_ModelDesc_from_formula():
>       for input in ("y ~ x", parse_formula("y ~ x")):

patsy/desc.py:189:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'y ~ x', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_eval_formula _____________________________________________________________________________________

    def test_eval_formula():
>       _do_eval_formula_tests(_eval_tests)

patsy/desc.py:612:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/desc.py:601: in _do_eval_formula_tests
    model_desc = ModelDesc.from_formula(code)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________ test_eval_formula_error_reporting _____________________________________________________________________________

    def test_eval_formula_error_reporting():
        from patsy.parse_formula import _parsing_error_test
        parse_fn = lambda formula: ModelDesc.from_formula(formula)
>       _parsing_error_test(parse_fn, _eval_error_tests)

patsy/desc.py:617:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:273: in _parsing_error_test
    parse_fn(bad_code)
patsy/desc.py:616: in <lambda>
    parse_fn = lambda formula: ModelDesc.from_formula(formula)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a +', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
----------------------------------------------------------------------------------- Captured stdout call ------------------------------------------------------------------------------------
a <+>
'a +' 2 3
________________________________________________________________________________ test_formula_factor_origin _________________________________________________________________________________

    def test_formula_factor_origin():
        from patsy.origin import Origin
>       desc = ModelDesc.from_formula("a + b")

patsy/desc.py:621:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a + b', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
__________________________________________________________________________ test_EvalFactor_memorize_passes_needed ___________________________________________________________________________

    def test_EvalFactor_memorize_passes_needed():
        from patsy.state import stateful_transform
        foo = stateful_transform(lambda: "FOO-OBJ")
        bar = stateful_transform(lambda: "BAR-OBJ")
        quux = stateful_transform(lambda: "QUUX-OBJ")
        e = EvalFactor("foo(x) + bar(foo(y)) + quux(z, w)")

        state = {}
        eval_env = EvalEnvironment.capture(0)
>       passes = e.memorize_passes_needed(state, eval_env)

patsy/eval.py:595:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:496: in memorize_passes_needed
    eval_code = replace_bare_funcalls(self.code, new_name_maker)
patsy/eval.py:736: in replace_bare_funcalls
    for (token_type, token, origin, props) in annotated_tokens(code):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'foo(x) + bar(foo(y)) + quux(z, w)'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
________________________________________________________________________________ test_EvalFactor_end_to_end _________________________________________________________________________________

    def test_EvalFactor_end_to_end():
        from patsy.state import stateful_transform
        foo = stateful_transform(_MockTransform)
        e = EvalFactor("foo(x) + foo(foo(y))")
        state = {}
        eval_env = EvalEnvironment.capture(0)
>       passes = e.memorize_passes_needed(state, eval_env)

patsy/eval.py:652:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:496: in memorize_passes_needed
    eval_code = replace_bare_funcalls(self.code, new_name_maker)
patsy/eval.py:736: in replace_bare_funcalls
    for (token_type, token, origin, props) in annotated_tokens(code):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'foo(x) + foo(foo(y))'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
___________________________________________________________________________________ test_annotated_tokens ___________________________________________________________________________________

    def test_annotated_tokens():
>       tokens_without_origins = [(token_type, token, props)
                                  for (token_type, token, origin, props)
                                  in (annotated_tokens("a(b) + c.d"))]

patsy/eval.py:710:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:710: in <listcomp>
    tokens_without_origins = [(token_type, token, props)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a(b) + c.d'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
________________________________________________________________________________ test_replace_bare_funcalls _________________________________________________________________________________

    def test_replace_bare_funcalls():
        def replacer1(token):
            return {"a": "b", "foo": "_internal.foo.process"}.get(token, token)
        def t1(code, expected):
            replaced = replace_bare_funcalls(code, replacer1)
            print(("%r -> %r" % (code, replaced)))
            print(("(wanted %r)" % (expected,)))
            assert replaced == expected
>       t1("foobar()", "foobar()")

patsy/eval.py:750:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:746: in t1
    replaced = replace_bare_funcalls(code, replacer1)
patsy/eval.py:736: in replace_bare_funcalls
    for (token_type, token, origin, props) in annotated_tokens(code):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'foobar()'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
_______________________________________________________________________________ test_capture_obj_method_calls _______________________________________________________________________________

    def test_capture_obj_method_calls():
>       assert (capture_obj_method_calls("foo", "a + foo.baz(bar) + b.c(d)")
                == [("foo.baz", "foo.baz(bar)")])

patsy/eval.py:798:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:789: in capture_obj_method_calls
    for (token_type, token, origin, props) in annotated_tokens(code):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a + foo.baz(bar) + b.c(d)'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
______________________________________________________________________________________ test_crs_compat ______________________________________________________________________________________

knots = array([-2216.83782005,   -13.921875  ,     9.28125   ,  1477.89188004])

    def _get_natural_f(knots):
        """Returns mapping of natural cubic spline values to 2nd derivatives.

        .. note:: See 'Generalized Additive Models', Simon N. Wood, 2006, pp 145-146

        :param knots: The 1-d array knots used for cubic spline parametrization,
         must be sorted in ascending order.
        :return: A 2-d array mapping natural cubic spline values at
         knots to second derivatives.

        :raise ImportError: if scipy is not found, required for
         ``linalg.solve_banded()``
        """
        try:
>           from scipy import linalg
E           ModuleNotFoundError: No module named 'scipy'

patsy/mgcv_cubic_splines.py:34: ModuleNotFoundError

During handling of the above exception, another exception occurred:

    def test_crs_compat():
        from patsy.test_state import check_stateful
        from patsy.test_splines_crs_data import (R_crs_test_x,
                                                 R_crs_test_data,
                                                 R_crs_num_tests)
        lines = R_crs_test_data.split("\n")
        tests_ran = 0
        start_idx = lines.index("--BEGIN TEST CASE--")
        while True:
            if not lines[start_idx] == "--BEGIN TEST CASE--":
                break
            start_idx += 1
            stop_idx = lines.index("--END TEST CASE--", start_idx)
            block = lines[start_idx:stop_idx]
            test_data = {}
            for line in block:
                key, value = line.split("=", 1)
                test_data[key] = value
            # Translate the R output into Python calling conventions
            adjust_df = 0
            if test_data["spline_type"] == "cr" or test_data["spline_type"] == "cs":
                spline_type = CR
            elif test_data["spline_type"] == "cc":
                spline_type = CC
                adjust_df += 1
            else:
                raise ValueError("Unrecognized spline type %r"
                                 % (test_data["spline_type"],))
            kwargs = {}
            if test_data["absorb_cons"] == "TRUE":
                kwargs["constraints"] = "center"
                adjust_df += 1
            if test_data["knots"] != "None":
                all_knots = np.asarray(eval(test_data["knots"]))
                all_knots.sort()
                kwargs["knots"] = all_knots[1:-1]
                kwargs["lower_bound"] = all_knots[0]
                kwargs["upper_bound"] = all_knots[-1]
            else:
                kwargs["df"] = eval(test_data["nb_knots"]) - adjust_df
            output = np.asarray(eval(test_data["output"]))
            # Do the actual test
>           check_stateful(spline_type, False, R_crs_test_x, output, **kwargs)

patsy/mgcv_cubic_splines.py:815:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_state.py:81: in check_stateful
    output_chunk = t.transform(input_chunk, *args, **kwargs)
patsy/mgcv_cubic_splines.py:680: in transform
    dm = _get_crs_dmatrix(x, self._all_knots,
patsy/mgcv_cubic_splines.py:365: in _get_crs_dmatrix
    dm = _get_free_crs_dmatrix(x, knots, cyclic)
patsy/mgcv_cubic_splines.py:339: in _get_free_crs_dmatrix
    f = _get_natural_f(knots)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

knots = array([-2216.83782005,   -13.921875  ,     9.28125   ,  1477.89188004])

    def _get_natural_f(knots):
        """Returns mapping of natural cubic spline values to 2nd derivatives.

        .. note:: See 'Generalized Additive Models', Simon N. Wood, 2006, pp 145-146

        :param knots: The 1-d array knots used for cubic spline parametrization,
         must be sorted in ascending order.
        :return: A 2-d array mapping natural cubic spline values at
         knots to second derivatives.

        :raise ImportError: if scipy is not found, required for
         ``linalg.solve_banded()``
        """
        try:
            from scipy import linalg
        except ImportError: # pragma: no cover
>           raise ImportError("Cubic spline functionality requires scipy.")
E           ImportError: Cubic spline functionality requires scipy.

patsy/mgcv_cubic_splines.py:36: ImportError
----------------------------------------------------------------------------------- Captured stdout call ------------------------------------------------------------------------------------
[array([ 1.00000000e+00, -1.50000000e+00,  2.25000000e+00, -3.37500000e+00,
        5.06250000e+00, -7.59375000e+00,  1.13906250e+01, -1.70859375e+01,
        2.56289062e+01, -3.84433594e+01,  5.76650391e+01, -8.64975586e+01,
        1.29746338e+02, -1.94619507e+02,  2.91929260e+02, -4.37893890e+02,
        6.56840836e+02, -9.85261253e+02,  1.47789188e+03, -2.21683782e+03])]
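The scipy-related failures are a separate issue: `mgcv_cubic_splines` hard-requires scipy at runtime, and this build env has no scipy installed. A possible packaging-side guard (a sketch using stdlib `unittest` with a hypothetical `CubicSplineTests` class, not patsy's actual test code) would be to skip the spline tests when scipy is absent:

```python
import importlib.util
import unittest

# Detect scipy without importing it (cheap and safe in a build chroot).
HAVE_SCIPY = importlib.util.find_spec("scipy") is not None

class CubicSplineTests(unittest.TestCase):
    @unittest.skipUnless(HAVE_SCIPY, "cubic spline functionality requires scipy")
    def test_crs_compat(self):
        # placeholder for the real test body, which needs scipy
        self.assertTrue(HAVE_SCIPY)
```

With pytest the equivalent at the top of the test is `pytest.importorskip("scipy")`; either way the suite would report a skip instead of an `ImportError` here.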
_____________________________________________________________________________ test_crs_with_specific_constraint _____________________________________________________________________________

    def test_crs_with_specific_constraint():
        from patsy.highlevel import incr_dbuilder, build_design_matrices, dmatrix
        x = (-1.5)**np.arange(20)
        # Hard coded R values for smooth: s(x, bs="cr", k=5)
        # R> knots <- smooth$xp
        knots_R = np.array([-2216.837820053100585937,
                            -50.456909179687500000,
                            -0.250000000000000000,
                            33.637939453125000000,
                            1477.891880035400390625])
        # R> centering.constraint <- t(qr.X(attr(smooth, "qrc")))
        centering_constraint_R = np.array([[0.064910676323168478574,
                                            1.4519875239407085132,
                                            -2.1947446912471946234,
                                            1.6129783104357671153,
                                            0.064868180547550072235]])
        # values for which we want a prediction
        new_x = np.array([-3000., -200., 300., 2000.])
>       result1 = dmatrix("cr(new_x, knots=knots_R[1:-1], "
                          "lower_bound=knots_R[0], upper_bound=knots_R[-1], "
                          "constraints=centering_constraint_R)")

patsy/mgcv_cubic_splines.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'cr(new_x, knots=knots_R[1:-1], lower_bound=knots_R[0], upper_bound=knots_R[-1], constraints=centering_constraint_R)', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
______________________________________________________________________________________ test_te_1smooth ______________________________________________________________________________________

knots = array([-2216.83782005,  -108.12194824,    -5.0625    ,     3.375     ,
          72.08129883,  1477.89188004])

    def _get_natural_f(knots):
        """Returns mapping of natural cubic spline values to 2nd derivatives.

        .. note:: See 'Generalized Additive Models', Simon N. Wood, 2006, pp 145-146

        :param knots: The 1-d array knots used for cubic spline parametrization,
         must be sorted in ascending order.
        :return: A 2-d array mapping natural cubic spline values at
         knots to second derivatives.

        :raise ImportError: if scipy is not found, required for
         ``linalg.solve_banded()``
        """
        try:
>           from scipy import linalg
E           ModuleNotFoundError: No module named 'scipy'

patsy/mgcv_cubic_splines.py:34: ModuleNotFoundError

During handling of the above exception, another exception occurred:

    def test_te_1smooth():
        from patsy.splines import bs
        # Tensor product of 1 smooth covariate should be the same
        # as the smooth alone
        x = (-1.5)**np.arange(20)
>       assert np.allclose(cr(x, df=6), te(cr(x, df=6)))

patsy/mgcv_cubic_splines.py:964:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/state.py:48: in stateful_transform_wrapper
    return transform.transform(*args, **kwargs)
patsy/mgcv_cubic_splines.py:680: in transform
    dm = _get_crs_dmatrix(x, self._all_knots,
patsy/mgcv_cubic_splines.py:365: in _get_crs_dmatrix
    dm = _get_free_crs_dmatrix(x, knots, cyclic)
patsy/mgcv_cubic_splines.py:339: in _get_free_crs_dmatrix
    f = _get_natural_f(knots)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

knots = array([-2216.83782005,  -108.12194824,    -5.0625    ,     3.375     ,
          72.08129883,  1477.89188004])

    def _get_natural_f(knots):
        """Returns mapping of natural cubic spline values to 2nd derivatives.

        .. note:: See 'Generalized Additive Models', Simon N. Wood, 2006, pp 145-146

        :param knots: The 1-d array knots used for cubic spline parametrization,
         must be sorted in ascending order.
        :return: A 2-d array mapping natural cubic spline values at
         knots to second derivatives.

        :raise ImportError: if scipy is not found, required for
         ``linalg.solve_banded()``
        """
        try:
            from scipy import linalg
        except ImportError: # pragma: no cover
>           raise ImportError("Cubic spline functionality requires scipy.")
E           ImportError: Cubic spline functionality requires scipy.

patsy/mgcv_cubic_splines.py:36: ImportError
_____________________________________________________________________________________ test_te_2smooths ______________________________________________________________________________________

    def test_te_2smooths():
        from patsy.highlevel import incr_dbuilder, build_design_matrices
        x1 = (-1.5)**np.arange(20)
        x2 = (1.6)**np.arange(20)
        # Hard coded R results for smooth: te(x1, x2, bs=c("cs", "cc"), k=c(5,7))
        # Without centering constraint:
        dmatrix_R_nocons = \
            np.array([[-4.4303024184609255207e-06,  7.9884438387230142235e-06,
                       9.7987758194797719025e-06,   -7.2894213245475212959e-08,
                       1.5907686862964493897e-09,   -3.2565884983072595159e-11,
                       0.0170749607855874667439,    -3.0788499835965849050e-02,
                       -3.7765754357352458725e-02,  2.8094376299826799787e-04,
                       -6.1310290747349201414e-06,  1.2551314933193442915e-07,
                       -0.26012671685838206770,     4.6904420337437874311e-01,
                       0.5753384627946153129230,    -4.2800085814700449330e-03,
                       9.3402525733484874533e-05,   -1.9121170389937518131e-06,
                       -0.0904312240489447832781,   1.6305991924427923334e-01,
                       2.0001237112941641638e-01,   -1.4879148887003382663e-03,
                       3.2470731316462736135e-05,   -6.6473404365914134499e-07,
                       2.0447857920168824846e-05,   -3.6870296695050991799e-05,
                       -4.5225801045409022233e-05,  3.3643990293641665710e-07,
                       -7.3421200200015877329e-09,  1.5030635073660743297e-10],
                      [-9.4006130602653794302e-04,  7.8681398069163730347e-04,
                       2.4573006857381437217e-04,   -1.4524712230452725106e-04,
                       7.8216741353106329551e-05,   -3.1304283003914264551e-04,
                       3.6231183382798337611064,    -3.0324832476174168328e+00,
                       -9.4707559178211142559e-01,  5.5980126937492580286e-01,
                       -3.0145747744342332730e-01,  1.2065077148806895302e+00,
                       -35.17561267504181188315,    2.9441339255948005160e+01,
                       9.1948319320782125885216,    -5.4349184288245195873e+00,
                       2.9267472035096449012e+00,   -1.1713569391233907169e+01,
                       34.0275626863976370373166,   -2.8480442582712722555e+01,
                       -8.8947340548151565542e+00,  5.2575353623762932642e+00,
                       -2.8312249982592527786e+00,  1.1331265795534763541e+01,
                       7.9462158845078978420e-01,   -6.6508361863670617531e-01,
                       -2.0771242914526857892e-01,  1.2277550230353953542e-01,
                       -6.6115593588420035198e-02,  2.6461103043402139923e-01]])
        # With centering constraint:
        dmatrix_R_cons = \
            np.array([[0.00329998606323867252343,   1.6537431155796576600e-04,
                       -1.2392262709790753433e-04,  6.5405304166706783407e-05,
                       -6.6764045799537624095e-05,  -0.1386431081763726258504,
                       0.124297283800864313830,     -3.5487293655619825405e-02,
                       -3.0527115315785902268e-03,  5.2009247643311604277e-04,
                       -0.00384203992301702674378,  -0.058901915802819435064,
                       0.266422358491648914036,     0.5739281693874087597607,
                       -1.3171008503525844392e-03,  8.2573456631878912413e-04,
                       6.6730833453016958831e-03,   -0.1467677784718444955470,
                       0.220757650934837484913,     0.1983127687880171796664,
                       -1.6269930328365173316e-03,  -1.7785892412241208812e-03,
                       -3.2702835436351201243e-03,  -4.3252183044300757109e-02,
                       4.3403766976235179376e-02,   3.5973406402893762387e-05,
                       -5.4035858568225075046e-04,  2.9565209382794241247e-04,
                       -2.2769990750264097637e-04],
                      [0.41547954838956052681098,   1.9843570584107707994e-02,
                       -1.5746590234791378593e-02,  8.3171184312221431434e-03,
                       -8.7233014052017516377e-03,  -15.9926770785086258541696,
                       16.503663226274017716833,    -6.6005803955894726265e-01,
                       1.3986092022708346283e-01,   -2.3516913533670955050e-01,
                       0.72251037497207359905360,   -9.827337059999853963177,
                       3.917078117294827688255,     9.0171773596973618936090,
                       -5.0616811270787671617e+00,  3.0189990249009683865e+00,
                       -1.0872720629943064097e+01,  26.9308504460453121964747,
                       -21.212262927009287949431,   -9.1088328555582247503253,
                       5.2400156972500298025e+00,   -3.0593641098325474736e+00,
                       1.0919392118399086300e+01,   -4.6564290223265718538e+00,
                       4.8071307441606982991e+00,   -1.9748377005689798924e-01,
                       5.4664183716965096538e-02,   -2.8871392916916285148e-02,
                       2.3592766838010845176e-01]])
        new_x1 = np.array([11.390625, 656.84083557128906250])
        new_x2 = np.array([16.777216000000006346, 1844.6744073709567147])
        new_data = {"x1": new_x1, "x2": new_x2}
        data_chunked = [{"x1": x1[:10], "x2": x2[:10]},
                        {"x1": x1[10:], "x2": x2[10:]}]

>       builder = incr_dbuilder("te(cr(x1, df=5), cc(x2, df=6)) - 1",
                                lambda: iter(data_chunked))

patsy/mgcv_cubic_splines.py:1051:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'te(cr(x1, df=5), cc(x2, df=6)) - 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_te_3smooths ______________________________________________________________________________________

    def test_te_3smooths():
        from patsy.highlevel import incr_dbuilder, build_design_matrices
        x1 = (-1.5)**np.arange(20)
        x2 = (1.6)**np.arange(20)
        x3 = (-1.2)**np.arange(20)
        # Hard coded R results for smooth:  te(x1, x2, x3, bs=c("cr", "cs", "cc"), k=c(3,3,4))
        design_matrix_R = \
            np.array([[7.2077663709837084334e-05,   2.0648333344343273131e-03,
                       -4.7934014082310591768e-04,  2.3923430783992746568e-04,
                       6.8534265421922660466e-03,   -1.5909867344112936776e-03,
                       -6.8057712777151204314e-09,  -1.9496724335203412851e-07,
                       4.5260614658693259131e-08,   0.0101479754187435277507,
                       0.290712501531622591333,     -0.067487370093906928759,
                       0.03368233306025386619709,   0.9649092451763204847381,
                       -0.2239985793289433757547,   -9.5819975394704535133e-07,
                       -2.7449874082511405643e-05,  6.3723431275833230217e-06,
                       -1.5205851762850489204e-04,  -0.00435607204539782688624,
                       0.00101123909269346416370,   -5.0470024059694933508e-04,
                       -1.4458319360584082416e-02,  3.3564223914790921634e-03,
                       1.4357783514933466209e-08,   4.1131230514870551983e-07,
                       -9.5483976834512651038e-08]])
        new_data = {"x1": -38.443359375000000000,
                    "x2": 68.719476736000032702,
                    "x3": -5.1597803519999985156}
        data_chunked = [{"x1": x1[:10], "x2": x2[:10], "x3": x3[:10]},
                        {"x1": x1[10:], "x2": x2[10:], "x3": x3[10:]}]
>       builder = incr_dbuilder("te(cr(x1, df=3), cr(x2, df=3), cc(x3, df=3)) - 1",
                                lambda: iter(data_chunked))

patsy/mgcv_cubic_splines.py:1089:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'te(cr(x1, df=3), cr(x2, df=3), cc(x3, df=3)) - 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
__________________________________________________________________________________ test__tokenize_formula ___________________________________________________________________________________

    def test__tokenize_formula():
        code = "y ~ a + (foo(b,c +   2)) + -1 + 0 + 10"
>       tokens = list(_tokenize_formula(code, ["+", "-", "~"]))

patsy/parse_formula.py:98:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'y ~ a + (foo(b,c +   2)) + -1 + 0 + 10', operator_strings = ['+', '-', '~']

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
____________________________________________________________________________________ test_parse_formula _____________________________________________________________________________________

    def test_parse_formula():
>       _do_parse_test(_parser_tests, [])

patsy/parse_formula.py:208:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:202: in _do_parse_test
    actual = parse_formula(code, extra_operators=extra_operators)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_parse_origin _____________________________________________________________________________________

    def test_parse_origin():
>       tree = parse_formula("a ~ b + c")

patsy/parse_formula.py:211:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a ~ b + c', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_parse_errors _____________________________________________________________________________________

extra_operators = []

    def test_parse_errors(extra_operators=[]):
        def parse_fn(code):
            return parse_formula(code, extra_operators=extra_operators)
>       _parsing_error_test(parse_fn, _parser_error_tests)

patsy/parse_formula.py:285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:273: in _parsing_error_test
    parse_fn(bad_code)
patsy/parse_formula.py:284: in parse_fn
    return parse_formula(code, extra_operators=extra_operators)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a +', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
----------------------------------------------------------------------------------- Captured stdout call ------------------------------------------------------------------------------------
a <+>
'a +' 2 3
____________________________________________________________________________________ test_parse_extra_op ____________________________________________________________________________________

    def test_parse_extra_op():
        extra_operators = [Operator("|", 2, 250)]
>       _do_parse_test(_parser_tests,
                       extra_operators=extra_operators)

patsy/parse_formula.py:294:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:202: in _do_parse_test
    actual = parse_formula(code, extra_operators=extra_operators)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
______________________________________________________________________________________ test_bs_compat _______________________________________________________________________________________

x = array([1.00000000e+00, 1.50000000e+00, 2.25000000e+00, 3.37500000e+00,
       5.06250000e+00, 7.59375000e+00, 1.139062...1.94619507e+02, 2.91929260e+02, 4.37893890e+02,
       6.56840836e+02, 9.85261253e+02, 1.47789188e+03, 2.21683782e+03])
knots = array([1.00000000e+00, 1.00000000e+00, 4.80541992e+01, 2.21683782e+03,
       2.21683782e+03]), degree = 1

    def _eval_bspline_basis(x, knots, degree):
        try:
>           from scipy.interpolate import splev
E           ModuleNotFoundError: No module named 'scipy'

patsy/splines.py:20: ModuleNotFoundError

During handling of the above exception, another exception occurred:

    def test_bs_compat():
        from patsy.test_state import check_stateful
        from patsy.test_splines_bs_data import (R_bs_test_x,
                                                R_bs_test_data,
                                                R_bs_num_tests)
        lines = R_bs_test_data.split("\n")
        tests_ran = 0
        start_idx = lines.index("--BEGIN TEST CASE--")
        while True:
            if not lines[start_idx] == "--BEGIN TEST CASE--":
                break
            start_idx += 1
            stop_idx = lines.index("--END TEST CASE--", start_idx)
            block = lines[start_idx:stop_idx]
            test_data = {}
            for line in block:
                key, value = line.split("=", 1)
                test_data[key] = value
            # Translate the R output into Python calling conventions
            kwargs = {
                "degree": int(test_data["degree"]),
                # integer, or None
                "df": eval(test_data["df"]),
                # np.array() call, or None
                "knots": eval(test_data["knots"]),
                }
            if test_data["Boundary.knots"] != "None":
                lower, upper = eval(test_data["Boundary.knots"])
                kwargs["lower_bound"] = lower
                kwargs["upper_bound"] = upper
            kwargs["include_intercept"] = (test_data["intercept"] == "TRUE")
            # Special case: in R, setting intercept=TRUE increases the effective
            # dof by 1. Adjust our arguments to match.
            # if kwargs["df"] is not None and kwargs["include_intercept"]:
            #     kwargs["df"] += 1
            output = np.asarray(eval(test_data["output"]))
            if kwargs["df"] is not None:
                assert output.shape[1] == kwargs["df"]
            # Do the actual test
>           check_stateful(BS, False, R_bs_test_x, output, **kwargs)

patsy/splines.py:291:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_state.py:81: in check_stateful
    output_chunk = t.transform(input_chunk, *args, **kwargs)
patsy/splines.py:239: in transform
    basis = _eval_bspline_basis(x, self._all_knots, self._degree)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

x = array([1.00000000e+00, 1.50000000e+00, 2.25000000e+00, 3.37500000e+00,
       5.06250000e+00, 7.59375000e+00, 1.139062...1.94619507e+02, 2.91929260e+02, 4.37893890e+02,
       6.56840836e+02, 9.85261253e+02, 1.47789188e+03, 2.21683782e+03])
knots = array([1.00000000e+00, 1.00000000e+00, 4.80541992e+01, 2.21683782e+03,
       2.21683782e+03]), degree = 1

    def _eval_bspline_basis(x, knots, degree):
        try:
            from scipy.interpolate import splev
        except ImportError: # pragma: no cover
>           raise ImportError("spline functionality requires scipy")
E           ImportError: spline functionality requires scipy

patsy/splines.py:22: ImportError
----------------------------------------------------------------------------------- Captured stdout call ------------------------------------------------------------------------------------
[array([1.00000000e+00, 1.50000000e+00, 2.25000000e+00, 3.37500000e+00,
       5.06250000e+00, 7.59375000e+00, 1.13906250e+01, 1.70859375e+01,
       2.56289062e+01, 3.84433594e+01, 5.76650391e+01, 8.64975586e+01,
       1.29746338e+02, 1.94619507e+02, 2.91929260e+02, 4.37893890e+02,
       6.56840836e+02, 9.85261253e+02, 1.47789188e+03, 2.21683782e+03])]
______________________________________________________________________________________ test_bs_0degree ______________________________________________________________________________________

x = array([ 0.1       ,  0.16681005,  0.27825594,  0.46415888,  0.77426368,
        1.29154967,  2.15443469,  3.59381366,  5.9948425 , 10.        ]), knots = array([ 0.1,  1. ,  4. , 10. ])
degree = 0

    def _eval_bspline_basis(x, knots, degree):
        try:
>           from scipy.interpolate import splev
E           ModuleNotFoundError: No module named 'scipy'

patsy/splines.py:20: ModuleNotFoundError

During handling of the above exception, another exception occurred:

    def test_bs_0degree():
        x = np.logspace(-1, 1, 10)
>       result = bs(x, knots=[1, 4], degree=0, include_intercept=True)

patsy/splines.py:303:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/state.py:48: in stateful_transform_wrapper
    return transform.transform(*args, **kwargs)
patsy/splines.py:239: in transform
    basis = _eval_bspline_basis(x, self._all_knots, self._degree)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

x = array([ 0.1       ,  0.16681005,  0.27825594,  0.46415888,  0.77426368,
        1.29154967,  2.15443469,  3.59381366,  5.9948425 , 10.        ]), knots = array([ 0.1,  1. ,  4. , 10. ])
degree = 0

    def _eval_bspline_basis(x, knots, degree):
        try:
            from scipy.interpolate import splev
        except ImportError: # pragma: no cover
>           raise ImportError("spline functionality requires scipy")
E           ImportError: spline functionality requires scipy

patsy/splines.py:22: ImportError
______________________________________________________________________________________ test_bs_errors _______________________________________________________________________________________

x = array([-10.        ,  -8.94736842,  -7.89473684,  -6.84210526,
        -5.78947368,  -4.73684211,  -3.68421053,  -2.63...  2.63157895,   3.68421053,   4.73684211,   5.78947368,
         6.84210526,   7.89473684,   8.94736842,  10.        ])
knots = array([ 0.,  0.,  0.,  0., 10., 10., 10., 10.]), degree = 3

    def _eval_bspline_basis(x, knots, degree):
        try:
>           from scipy.interpolate import splev
E           ModuleNotFoundError: No module named 'scipy'

patsy/splines.py:20: ModuleNotFoundError

During handling of the above exception, another exception occurred:

    def test_bs_errors():
        import pytest
        x = np.linspace(-10, 10, 20)
        # error checks:
        # out of bounds
>       pytest.raises(NotImplementedError, bs, x, 3, lower_bound=0)

patsy/splines.py:333:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/state.py:48: in stateful_transform_wrapper
    return transform.transform(*args, **kwargs)
patsy/splines.py:239: in transform
    basis = _eval_bspline_basis(x, self._all_knots, self._degree)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

x = array([-10.        ,  -8.94736842,  -7.89473684,  -6.84210526,
        -5.78947368,  -4.73684211,  -3.68421053,  -2.63...  2.63157895,   3.68421053,   4.73684211,   5.78947368,
         6.84210526,   7.89473684,   8.94736842,  10.        ])
knots = array([ 0.,  0.,  0.,  0., 10., 10., 10., 10.]), degree = 3

    def _eval_bspline_basis(x, knots, degree):
        try:
            from scipy.interpolate import splev
        except ImportError: # pragma: no cover
>           raise ImportError("spline functionality requires scipy")
E           ImportError: spline functionality requires scipy

patsy/splines.py:22: ImportError
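All of the `patsy/splines.py` failures above (`test_bs_compat`, `test_bs_0degree`, `test_bs_errors`) share one root cause: `scipy` is not installed in the build environment, so `_eval_bspline_basis` re-raises the import failure as "spline functionality requires scipy". A minimal sketch of the same optional-dependency probe (the `have_scipy` helper name is my own, not part of patsy's API; only the attempted import matches the traceback):

```python
# Probe for the optional scipy dependency the same way patsy/splines.py does.
# The helper name is hypothetical; only the import it attempts is from the log.
def have_scipy():
    try:
        from scipy.interpolate import splev  # noqa: F401 -- what patsy imports
        return True
    except ImportError:
        return False


print(have_scipy())
```

A packager could use such a probe to deselect or skip the spline tests when scipy is absent, in the same spirit as the `-m "not network"` selection already used in this build.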
__________________________________________________________________________________ test_DesignInfo_subset ___________________________________________________________________________________

    def test_DesignInfo_subset():
        # For each combination of:
        #   formula, term names, term objects, mixed term name and term objects
        # check that results match subset of full build
        # and that removed variables don't hurt
        all_data = {"x": [1, 2],
                    "y": [[3.1, 3.2],
                          [4.1, 4.2]],
                    "z": [5, 6]}
        all_terms = make_termlist("x", "y", "z")
        def iter_maker():
            yield all_data
        all_builder = design_matrix_builders([all_terms], iter_maker, 0)[0]
        full_matrix = build_design_matrices([all_builder], all_data)[0]

        def t(which_terms, variables, columns):
            sub_design_info = all_builder.subset(which_terms)
            sub_data = {}
            for variable in variables:
                sub_data[variable] = all_data[variable]
            sub_matrix = build_design_matrices([sub_design_info], sub_data)[0]
            sub_full_matrix = full_matrix[:, columns]
            if not isinstance(which_terms, six.string_types):
                assert len(which_terms) == len(sub_design_info.terms)
            assert np.array_equal(sub_matrix, sub_full_matrix)

>       t("~ 0 + x + y + z", ["x", "y", "z"], slice(None))

patsy/test_build.py:700:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_build.py:690: in t
    sub_design_info = all_builder.subset(which_terms)
patsy/design_info.py:630: in subset
    desc = ModelDesc.from_formula(which_terms)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 0 + x + y + z', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
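Every `TypeError: next expected at least 1 argument, got 0` above originates from the same spot: iterating `PushbackAdapter` at `patsy/parse_formula.py:89`. One plausible way to get exactly this message (an assumption on my part, not confirmed against the sources used in this build) is a class body that keeps a `__next__ = next` alias after the Python-2-style `def next(self)` method has been stripped, e.g. by a downstream compatibility patch. The name `next` then resolves to the builtin, which the iteration protocol ends up calling with zero arguments:

```python
# Hypothetical reproduction of the failure mode -- NOT patsy's actual code.
# If `def next(self)` is removed but the `__next__ = next` alias remains,
# the name `next` in the class body resolves to the *builtin* next().
class BrokenAdapter:
    def __init__(self, it):
        self._it = it

    def __iter__(self):
        return self

    # Builtin functions are not descriptors, so the iteration protocol
    # calls this attribute with no arguments at all.
    __next__ = next


try:
    for _ in BrokenAdapter(iter([1, 2])):
        pass
except TypeError as exc:
    print(exc)  # next expected at least 1 argument, got 0
```

If this is indeed the mechanism, the fix is to restore a real `def __next__(self)` method on the adapter rather than aliasing the builtin.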
____________________________________________________________________________________ test_formula_likes _____________________________________________________________________________________

    def test_formula_likes():
        # Plain array-like, rhs only
        t([[1, 2, 3], [4, 5, 6]], {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"])
        t((None, [[1, 2, 3], [4, 5, 6]]), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"])
        t(np.asarray([[1, 2, 3], [4, 5, 6]]), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"])
        t((None, np.asarray([[1, 2, 3], [4, 5, 6]])), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"])
        dm = DesignMatrix([[1, 2, 3], [4, 5, 6]], default_column_prefix="foo")
        t(dm, {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["foo0", "foo1", "foo2"])
        t((None, dm), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["foo0", "foo1", "foo2"])

        # Plain array-likes, lhs and rhs
        t(([1, 2], [[1, 2, 3], [4, 5, 6]]), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"],
          [[1], [2]], ["y0"])
        t(([[1], [2]], [[1, 2, 3], [4, 5, 6]]), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"],
          [[1], [2]], ["y0"])
        t((np.asarray([1, 2]), np.asarray([[1, 2, 3], [4, 5, 6]])), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"],
          [[1], [2]], ["y0"])
        t((np.asarray([[1], [2]]), np.asarray([[1, 2, 3], [4, 5, 6]])), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"],
          [[1], [2]], ["y0"])
        x_dm = DesignMatrix([[1, 2, 3], [4, 5, 6]], default_column_prefix="foo")
        y_dm = DesignMatrix([1, 2], default_column_prefix="bar")
        t((y_dm, x_dm), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["foo0", "foo1", "foo2"],
          [[1], [2]], ["bar0"])
        # number of rows must match
        t_invalid(([1, 2, 3], [[1, 2, 3], [4, 5, 6]]), {}, 0)

        # tuples must have the right size
        t_invalid(([[1, 2, 3]],), {}, 0)
        t_invalid(([[1, 2, 3]], [[1, 2, 3]], [[1, 2, 3]]), {}, 0)

        # plain Series and DataFrames
        if have_pandas:
            # Names are extracted
            t(pandas.DataFrame({"x": [1, 2, 3]}), {}, 0,
              False,
              [[1], [2], [3]], ["x"])
            t(pandas.Series([1, 2, 3], name="asdf"), {}, 0,
              False,
              [[1], [2], [3]], ["asdf"])
            t((pandas.DataFrame({"y": [4, 5, 6]}),
               pandas.DataFrame({"x": [1, 2, 3]})), {}, 0,
              False,
              [[1], [2], [3]], ["x"],
              [[4], [5], [6]], ["y"])
            t((pandas.Series([4, 5, 6], name="y"),
               pandas.Series([1, 2, 3], name="x")), {}, 0,
              False,
              [[1], [2], [3]], ["x"],
              [[4], [5], [6]], ["y"])
            # Or invented
            t((pandas.DataFrame([[4, 5, 6]]),
               pandas.DataFrame([[1, 2, 3]], columns=[7, 8, 9])), {}, 0,
              False,
              [[1, 2, 3]], ["x7", "x8", "x9"],
              [[4, 5, 6]], ["y0", "y1", "y2"])
            t(pandas.Series([1, 2, 3]), {}, 0,
              False,
              [[1], [2], [3]], ["x0"])
            # indices must match
            t_invalid((pandas.DataFrame([[1]], index=[1]),
                       pandas.DataFrame([[1]], index=[2])),
                      {}, 0)

        # Foreign ModelDesc factories
        class ForeignModelSource(object):
            def __patsy_get_model_desc__(self, data):
                return ModelDesc([Term([LookupFactor("Y")])],
                                 [Term([LookupFactor("X")])])
        foreign_model = ForeignModelSource()
        t(foreign_model,
          {"Y": [1, 2],
           "X": [[1, 2], [3, 4]]},
          0,
          True,
          [[1, 2], [3, 4]], ["X[0]", "X[1]"],
          [[1], [2]], ["Y"])
        class BadForeignModelSource(object):
            def __patsy_get_model_desc__(self, data):
                return data
        t_invalid(BadForeignModelSource(), {}, 0)

        # string formulas
>       t("y ~ x", {"y": [1, 2], "x": [3, 4]}, 0,
          True,
          [[1, 3], [1, 4]], ["Intercept", "x"],
          [[1], [2]], ["y"])

patsy/test_highlevel.py:252:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:87: in t
    builders = incr_dbuilders(formula_like, data_iter_maker, depth)
patsy/highlevel.py:129: in incr_dbuilders
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'y ~ x', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
____________________________________________________________________________________ test_return_pandas _____________________________________________________________________________________

    def test_return_pandas():
        if not have_pandas:
            return
        # basic check of pulling a Series out of the environment
        s1 = pandas.Series([1, 2, 3], name="AA", index=[10, 20, 30])
        s2 = pandas.Series([4, 5, 6], name="BB", index=[10, 20, 30])
>       df1 = dmatrix("s1", return_type="dataframe")

patsy/test_highlevel.py:349:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 's1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
______________________________________________________________________________________ test_term_info _______________________________________________________________________________________

    def test_term_info():
        data = balanced(a=2, b=2)
>       rhs = dmatrix("a:b", data)

patsy/test_highlevel.py:400:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a:b', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
______________________________________________________________________________________ test_data_types ______________________________________________________________________________________

    def test_data_types():
        data = {"a": [1, 2, 3],
                "b": [1.0, 2.0, 3.0],
                "c": np.asarray([1, 2, 3], dtype=np.float32),
                "d": [True, False, True],
                "e": ["foo", "bar", "baz"],
                "f": C([1, 2, 3]),
                "g": C(["foo", "bar", "baz"]),
                "h": np.array(["foo", 1, (1, "hi")], dtype=object),
                }
>       t("~ 0 + a", data, 0, True,
          [[1], [2], [3]], ["a"])

patsy/test_highlevel.py:417:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 0 + a', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_categorical ______________________________________________________________________________________

    def test_categorical():
        data = balanced(a=2, b=2)
        # There are more exhaustive tests for all the different coding options in
        # test_build; let's just make sure that C() and stuff works.
>       t("~ C(a)", data, 0,
          True,
          [[1, 0], [1, 0], [1, 1], [1, 1]], ["Intercept", "C(a)[T.a2]"])

patsy/test_highlevel.py:440:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ C(a)', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_______________________________________________________________________________________ test_builtins _______________________________________________________________________________________

    def test_builtins():
        data = {"x": [1, 2, 3],
                "y": [4, 5, 6],
                "a b c": [10, 20, 30]}
>       t("0 + I(x + y)", data, 0,
          True,
          [[1], [2], [3], [4], [5], [6]], ["I(x + y)"])

patsy/test_highlevel.py:492:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '0 + I(x + y)', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_incremental ______________________________________________________________________________________

    def test_incremental():
        # incr_dbuilder(s)
        # stateful transformations
        datas = [
            {"a": ["a2", "a2", "a2"],
             "x": [1, 2, 3]},
            {"a": ["a2", "a2", "a1"],
             "x": [4, 5, 6]},
            ]
        x = np.asarray([1, 2, 3, 4, 5, 6])
        sin_center_x = np.sin(x - np.mean(x))
        x_col = sin_center_x - np.mean(sin_center_x)
        def data_iter_maker():
            return iter(datas)
>       builders = incr_dbuilders("1 ~ a + center(np.sin(center(x)))",
                                  data_iter_maker)

patsy/test_highlevel.py:516:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:129: in incr_dbuilders
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '1 ~ a + center(np.sin(center(x)))', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
____________________________________________________________________________________ test_env_transform _____________________________________________________________________________________

    def test_env_transform():
>       t("~ np.sin(x)", {"x": [1, 2, 3]}, 0,
          True,
          [[1, np.sin(1)], [1, np.sin(2)], [1, np.sin(3)]],
          ["Intercept", "np.sin(x)"])

patsy/test_highlevel.py:543:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ np.sin(x)', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
______________________________________________________________________________________ test_term_order ______________________________________________________________________________________

    def test_term_order():
        data = balanced(a=2, b=2)
        data["x1"] = np.linspace(0, 1, 4)
        data["x2"] = data["x1"] ** 2

        def t_terms(formula, order):
            m = dmatrix(formula, data)
            assert m.design_info.term_names == order

>       t_terms("a + b + x1 + x2", ["Intercept", "a", "b", "x1", "x2"])

patsy/test_highlevel.py:567:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:564: in t_terms
    m = dmatrix(formula, data)
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a + b + x1 + x2', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_multicolumn ______________________________________________________________________________________

    def test_multicolumn():
        data = {
            "a": ["a1", "a2"],
            "X": [[1, 2], [3, 4]],
            "Y": [[1, 3], [2, 4]],
            }
>       t("X*Y", data, 0,
          True,
          [[1, 1, 2, 1, 3, 1 * 1, 2 * 1, 1 * 3, 2 * 3],
           [1, 3, 4, 2, 4, 3 * 2, 4 * 2, 3 * 4, 4 * 4]],
          ["Intercept", "X[0]", "X[1]", "Y[0]", "Y[1]",
           "X[0]:Y[0]", "X[1]:Y[0]", "X[0]:Y[1]", "X[1]:Y[1]"])

patsy/test_highlevel.py:607:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'X*Y', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
______________________________________________________________________________ test_dmatrix_dmatrices_no_data _______________________________________________________________________________

    def test_dmatrix_dmatrices_no_data():
        x = [1, 2, 3]
        y = [4, 5, 6]
>       assert np.allclose(dmatrix("x"), [[1, 1], [1, 2], [1, 3]])

patsy/test_highlevel.py:624:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'x', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_________________________________________________________________________________ test_designinfo_describe __________________________________________________________________________________

    def test_designinfo_describe():
>       lhs, rhs = dmatrices("y ~ x + a", {"y": [1, 2, 3],
                                           "x": [4, 5, 6],
                                           "a": ["a1", "a2", "a3"]})

patsy/test_highlevel.py:630:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:309: in dmatrices
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'y ~ x + a', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
__________________________________________________________________________________ test_evalfactor_reraise __________________________________________________________________________________

    def test_evalfactor_reraise():
        # This will produce a PatsyError, but buried inside the factor evaluation,
        # so the original code has no way to give it an appropriate origin=
        # attribute. EvalFactor should notice this, and add a useful origin:
        def raise_patsy_error(x):
            raise PatsyError("WHEEEEEE")
        formula = "raise_patsy_error(X) + Y"
        try:
>           dmatrix(formula, {"X": [1, 2, 3], "Y": [4, 5, 6]})

patsy/test_highlevel.py:644:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'raise_patsy_error(X) + Y', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
__________________________________________________________________________________ test_dmatrix_NA_action ___________________________________________________________________________________

    def test_dmatrix_NA_action():
        data = {"x": [1, 2, 3, np.nan], "y": [np.nan, 20, 30, 40]}

        return_types = ["matrix"]
        if have_pandas:
            return_types.append("dataframe")

        for return_type in return_types:
>           mat = dmatrix("x + y", data=data, return_type=return_type)

patsy/test_highlevel.py:671:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'x + y', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_______________________________________________________________________________________ test_0d_data ________________________________________________________________________________________

    def test_0d_data():
        # Use case from statsmodels/statsmodels#1881
        data_0d = {"x1": 1.1, "x2": 1.2, "a": "a1"}

        for formula, expected in [
                ("x1 + x2", [[1, 1.1, 1.2]]),
                ("C(a, levels=('a1', 'a2')) + x1", [[1, 0, 1.1]]),
                ]:
>           mat = dmatrix(formula, data_0d)

patsy/test_highlevel.py:710:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'x1 + x2', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_______________________________________________________________________________ test_env_not_saved_in_builder _______________________________________________________________________________

    def test_env_not_saved_in_builder():
        x_in_env = [1, 2, 3]
>       design_matrix = dmatrix("x_in_env", {})

patsy/test_highlevel.py:726:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'x_in_env', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_______________________________________________________________________________ test_C_and_pandas_categorical _______________________________________________________________________________

    def test_C_and_pandas_categorical():
        if not have_pandas_categorical:
            return

        objs = [pandas_Categorical_from_codes([1, 0, 1], ["b", "a"])]
        if have_pandas_categorical_dtype:
            objs.append(pandas.Series(objs[0]))
        for obj in objs:
            d = {"obj": obj}
>           assert np.allclose(dmatrix("obj", d),
                               [[1, 1],
                                [1, 0],
                                [1, 1]])

patsy/test_highlevel.py:742:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'obj', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_______________________________________________________________________________________ test_issue_11 _______________________________________________________________________________________

    def test_issue_11():
        # Give a sensible error message for level mismatches
        # (At some points we've failed to put an origin= on these errors)
        env = EvalEnvironment.capture()
        data = {"X" : [0,1,2,3], "Y" : [1,2,3,4]}
        formula = "C(X) + Y"
        new_data = {"X" : [0,0,1,2,3,3,4], "Y" : [1,2,3,4,5,6,7]}
>       info = dmatrix(formula, data)

patsy/test_regressions.py:18:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'C(X) + Y', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
___________________________________________________________________________________ test_PushbackAdapter ____________________________________________________________________________________

    def test_PushbackAdapter():
        it = PushbackAdapter(iter([1, 2, 3, 4]))
>       assert it.has_more()

patsy/util.py:379:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/util.py:371: in has_more
    self.peek()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <patsy.util.PushbackAdapter object at 0x7f8820055130>

    def peek(self):
        try:
>           obj = six.advance_iterator(self)
E           TypeError: next expected at least 1 argument, got 0

patsy/util.py:363: TypeError
================================================================================== short test summary info ==================================================================================
FAILED patsy/build.py::test__examine_factor_types - TypeError: next expected at least 1 argument, got 0
FAILED patsy/contrasts.py::test__obj_to_readable_str - assert '\\u20ac' == "b'\\xa4'"
FAILED patsy/desc.py::test_ModelDesc_from_formula - TypeError: next expected at least 1 argument, got 0
FAILED patsy/desc.py::test_eval_formula - TypeError: next expected at least 1 argument, got 0
FAILED patsy/desc.py::test_eval_formula_error_reporting - TypeError: next expected at least 1 argument, got 0
FAILED patsy/desc.py::test_formula_factor_origin - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_EvalFactor_memorize_passes_needed - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_EvalFactor_end_to_end - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_annotated_tokens - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_replace_bare_funcalls - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_capture_obj_method_calls - TypeError: next expected at least 1 argument, got 0
FAILED patsy/mgcv_cubic_splines.py::test_crs_compat - ImportError: Cubic spline functionality requires scipy.
FAILED patsy/mgcv_cubic_splines.py::test_crs_with_specific_constraint - TypeError: next expected at least 1 argument, got 0
FAILED patsy/mgcv_cubic_splines.py::test_te_1smooth - ImportError: Cubic spline functionality requires scipy.
FAILED patsy/mgcv_cubic_splines.py::test_te_2smooths - TypeError: next expected at least 1 argument, got 0
FAILED patsy/mgcv_cubic_splines.py::test_te_3smooths - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test__tokenize_formula - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test_parse_formula - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test_parse_origin - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test_parse_errors - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test_parse_extra_op - TypeError: next expected at least 1 argument, got 0
FAILED patsy/splines.py::test_bs_compat - ImportError: spline functionality requires scipy
FAILED patsy/splines.py::test_bs_0degree - ImportError: spline functionality requires scipy
FAILED patsy/splines.py::test_bs_errors - ImportError: spline functionality requires scipy
FAILED patsy/test_build.py::test_DesignInfo_subset - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_formula_likes - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_return_pandas - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_term_info - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_data_types - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_categorical - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_builtins - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_incremental - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_env_transform - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_term_order - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_multicolumn - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_dmatrix_dmatrices_no_data - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_designinfo_describe - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_evalfactor_reraise - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_dmatrix_NA_action - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_0d_data - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_env_not_saved_in_builder - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_C_and_pandas_categorical - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_regressions.py::test_issue_11 - TypeError: next expected at least 1 argument, got 0
FAILED patsy/util.py::test_PushbackAdapter - TypeError: next expected at least 1 argument, got 0
============================================================================== 44 failed, 104 passed in 42.37s ==============================================================================
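For what it's worth, the recurring `TypeError: next expected at least 1 argument, got 0` looks like a classic Python 2-to-3 porting pitfall: the quoted sources still contain the compat alias `__next__ = next` even though the method itself is already spelled `__next__` (see the `DataIterMaker` class in the `test__examine_factor_types` traceback above). In that situation the alias rebinds `__next__` to the *builtin* `next()`, and since builtin functions are not descriptors they never bind to the instance, so the iterator protocol ends up calling `next()` with zero arguments. A minimal reproduction (hypothetical class, not patsy's actual code), assuming something like a distro-side 2to3 pass renamed `def next` to `def __next__` while leaving the alias in place:

```python
class Broken:
    """Reproduces the porting pitfall suggested by the tracebacks above."""

    def __next__(self):  # method already uses the Python 3 spelling...
        return 1

    # ...so this Python 2 compat alias no longer refers to the method
    # defined above: at class-body execution time, the name 'next'
    # resolves to the *builtin* next(), which overwrites __next__.
    __next__ = next

    def __iter__(self):
        return self


b = Broken()
try:
    # Builtin functions do not implement the descriptor protocol, so
    # b.__next__ is the unbound builtin next() and is called with zero
    # arguments here -- the exact error seen throughout the test run.
    b.__next__()
except TypeError as e:
    print(e)  # next expected at least 1 argument, got 0
```

If the rpm build pipeline applied a 2to3-style rename to the sources, it would produce exactly this failure mode, which would explain why the tests pass on an unmodified checkout.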

@bashtage
Contributor

bashtage commented Jan 2, 2024

I also ran the latest locally, and there are no failures. It seems likely that the failures stem from some fragility around not having network access, rather than a fundamental issue with patsy.

@bashtage
Contributor

bashtage commented Jan 2, 2024

At least some of the failures are because you do not have scipy installed, which is required to run the test suite.

@kloczek
Author

kloczek commented Jan 2, 2024

I also ran the latest locally, and there are no failures. It seems likely that the failures are some fragility around not having network access, but not a fundamental issue with patsy.

I've provided a list of the modules installed in the build env. Could you please compare that with your env?

@bashtage
Contributor

bashtage commented Jan 2, 2024

What happens if you add scipy?

@kloczek
Author

kloczek commented Jan 2, 2024

Not much of a change .. however, the number of failing units decreased (42 -> 37).

Here is the pytest output with scipy added to the build env:
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-patsy-0.5.5-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-patsy-0.5.5-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra -m 'not network'
==================================================================================== test session starts ====================================================================================
platform linux -- Python 3.8.18, pytest-7.4.4, pluggy-1.3.0
rootdir: /home/tkloczko/rpmbuild/BUILD/patsy-0.5.5
configfile: setup.cfg
testpaths: patsy
collected 148 items

patsy/build.py ......F                                                                                                                                                                [  4%]
patsy/builtins.py ..                                                                                                                                                                  [  6%]
patsy/categorical.py ....                                                                                                                                                             [  8%]
patsy/constraint.py .....                                                                                                                                                             [ 12%]
patsy/contrasts.py .F.......                                                                                                                                                          [ 18%]
patsy/desc.py ..FFFF                                                                                                                                                                  [ 22%]
patsy/design_info.py ........                                                                                                                                                         [ 27%]
patsy/eval.py ..........FFFFF                                                                                                                                                         [ 37%]
patsy/infix_parser.py .                                                                                                                                                               [ 38%]
patsy/mgcv_cubic_splines.py .......F..FF                                                                                                                                              [ 46%]
patsy/missing.py .....                                                                                                                                                                [ 50%]
patsy/origin.py .                                                                                                                                                                     [ 50%]
patsy/parse_formula.py FFFFF                                                                                                                                                          [ 54%]
patsy/redundancy.py ....                                                                                                                                                              [ 56%]
patsy/splines.py ....                                                                                                                                                                 [ 59%]
patsy/test_build.py ................F                                                                                                                                                 [ 70%]
patsy/test_highlevel.py F.FFFFFFF.FFFFFFF.                                                                                                                                            [ 83%]
patsy/test_regressions.py F                                                                                                                                                           [ 83%]
patsy/test_state.py ...                                                                                                                                                               [ 85%]
patsy/tokens.py ..                                                                                                                                                                    [ 87%]
patsy/user_util.py ...                                                                                                                                                                [ 89%]
patsy/util.py .....F..........                                                                                                                                                        [100%]

========================================================================================= FAILURES ==========================================================================================
________________________________________________________________________________ test__examine_factor_types _________________________________________________________________________________

    def test__examine_factor_types():
        from patsy.categorical import C
        class MockFactor(object):
            def __init__(self):
                # You should check this using 'is', not '=='
                from patsy.origin import Origin
                self.origin = Origin("MOCK", 1, 2)

            def eval(self, state, data):
                return state[data]

            def name(self):
                return "MOCK MOCK"

        # This hacky class can only be iterated over once, but it keeps track of
        # how far it got.
        class DataIterMaker(object):
            def __init__(self):
                self.i = -1

            def __call__(self):
                return self

            def __iter__(self):
                return self

            def __next__(self):
                self.i += 1
                if self.i > 1:
                    raise StopIteration
                return self.i
            __next__ = next

        num_1dim = MockFactor()
        num_1col = MockFactor()
        num_4col = MockFactor()
        categ_1col = MockFactor()
        bool_1col = MockFactor()
        string_1col = MockFactor()
        object_1col = MockFactor()
        object_levels = (object(), object(), object())
        factor_states = {
            num_1dim: ([1, 2, 3], [4, 5, 6]),
            num_1col: ([[1], [2], [3]], [[4], [5], [6]]),
            num_4col: (np.zeros((3, 4)), np.ones((3, 4))),
            categ_1col: (C(["a", "b", "c"], levels=("a", "b", "c"),
                           contrast="MOCK CONTRAST"),
                         C(["c", "b", "a"], levels=("a", "b", "c"),
                           contrast="MOCK CONTRAST")),
            bool_1col: ([True, True, False], [False, True, True]),
            # It has to read through all the data to see all the possible levels:
            string_1col: (["a", "a", "a"], ["c", "b", "a"]),
            object_1col: ([object_levels[0]] * 3, object_levels),
            }

        it = DataIterMaker()
        (num_column_counts, cat_levels_contrasts,
>        ) = _examine_factor_types(list(factor_states.keys()), factor_states, it,
                                   NAAction())

patsy/build.py:523:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

factors = [<patsy.build.test__examine_factor_types.<locals>.MockFactor object at 0x7f2629d15fd0>, <patsy.build.test__examine_fac... object at 0x7f24a4d36c40>, <patsy.build.test__examine_factor_types.<locals>.MockFactor object at 0x7f24a4d36670>, ...]
factor_states = {<patsy.build.test__examine_factor_types.<locals>.MockFactor object at 0x7f2629d15fd0>: ([1, 2, 3], [4, 5, 6]), <patsy...egorical._CategoricalBox object at 0x7f24a4d36cd0>, <patsy.categorical._CategoricalBox object at 0x7f24a4d36130>), ...}
data_iter_maker = <patsy.build.test__examine_factor_types.<locals>.DataIterMaker object at 0x7f24a4d363d0>, NA_action = <patsy.missing.NAAction object at 0x7f24a4d36fa0>

    def _examine_factor_types(factors, factor_states, data_iter_maker, NA_action):
        num_column_counts = {}
        cat_sniffers = {}
        examine_needed = set(factors)
>       for data in data_iter_maker():
E       TypeError: next expected at least 1 argument, got 0

patsy/build.py:441: TypeError
_________________________________________________________________________________ test__obj_to_readable_str _________________________________________________________________________________

    def test__obj_to_readable_str():
        def t(obj, expected):
            got = _obj_to_readable_str(obj)
            assert type(got) is str
            assert got == expected
        t(1, "1")
        t(1.0, "1.0")
        t("asdf", "asdf")
        t(six.u("asdf"), "asdf")
        if sys.version_info >= (3,):
            # we can use "foo".encode here b/c this is python 3!
            # a utf-8 encoded euro-sign comes out as a real euro sign.
            t("\\u20ac".encode("utf-8"), six.u("\\u20ac"))
            # but a iso-8859-15 euro sign can't be decoded, and we fall back on
            # repr()
>           t("\\u20ac".encode("iso-8859-15"), "b'\\xa4'")

patsy/contrasts.py:100:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

obj = b'\\u20ac', expected = "b'\\xa4'"

    def t(obj, expected):
        got = _obj_to_readable_str(obj)
        assert type(got) is str
>       assert got == expected
E       assert '\\u20ac' == "b'\\xa4'"
E         - b'\xa4'
E         + \u20ac

patsy/contrasts.py:89: AssertionError
________________________________________________________________________________ test_ModelDesc_from_formula ________________________________________________________________________________

    def test_ModelDesc_from_formula():
>       for input in ("y ~ x", parse_formula("y ~ x")):

patsy/desc.py:189:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'y ~ x', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_eval_formula _____________________________________________________________________________________

    def test_eval_formula():
>       _do_eval_formula_tests(_eval_tests)

patsy/desc.py:612:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/desc.py:601: in _do_eval_formula_tests
    model_desc = ModelDesc.from_formula(code)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________ test_eval_formula_error_reporting _____________________________________________________________________________

    def test_eval_formula_error_reporting():
        from patsy.parse_formula import _parsing_error_test
        parse_fn = lambda formula: ModelDesc.from_formula(formula)
>       _parsing_error_test(parse_fn, _eval_error_tests)

patsy/desc.py:617:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:273: in _parsing_error_test
    parse_fn(bad_code)
patsy/desc.py:616: in <lambda>
    parse_fn = lambda formula: ModelDesc.from_formula(formula)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a +', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
----------------------------------------------------------------------------------- Captured stdout call ------------------------------------------------------------------------------------
a <+>
'a +' 2 3
________________________________________________________________________________ test_formula_factor_origin _________________________________________________________________________________

    def test_formula_factor_origin():
        from patsy.origin import Origin
>       desc = ModelDesc.from_formula("a + b")

patsy/desc.py:621:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a + b', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
__________________________________________________________________________ test_EvalFactor_memorize_passes_needed ___________________________________________________________________________

    def test_EvalFactor_memorize_passes_needed():
        from patsy.state import stateful_transform
        foo = stateful_transform(lambda: "FOO-OBJ")
        bar = stateful_transform(lambda: "BAR-OBJ")
        quux = stateful_transform(lambda: "QUUX-OBJ")
        e = EvalFactor("foo(x) + bar(foo(y)) + quux(z, w)")

        state = {}
        eval_env = EvalEnvironment.capture(0)
>       passes = e.memorize_passes_needed(state, eval_env)

patsy/eval.py:595:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:496: in memorize_passes_needed
    eval_code = replace_bare_funcalls(self.code, new_name_maker)
patsy/eval.py:736: in replace_bare_funcalls
    for (token_type, token, origin, props) in annotated_tokens(code):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'foo(x) + bar(foo(y)) + quux(z, w)'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
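
Assuming the diagnosis above is right, the fix is mechanical: make `__next__` a real method and, if the Python-2-era `next` name must be kept, alias it to that method (plain functions bind correctly, builtins don't). A sketch under that assumption; names are mine and do not claim to match `patsy/util.py` exactly:

```python
class PushbackAdapterFixed:
    """Iterator wrapper with push-back support, safe on Python 3.

    Hypothetical replacement for the adapter implicated in the log;
    the real patsy class lives in patsy/util.py."""

    def __init__(self, it):
        self._it = iter(it)
        self._pushed = []

    def __iter__(self):
        return self

    def __next__(self):
        # Serve pushed-back items first, then fall through to the iterator.
        if self._pushed:
            return self._pushed.pop()
        # Calling the builtin *with an argument* is fine; storing the bare
        # builtin as __next__ is what breaks.
        return next(self._it)

    next = __next__  # legacy alias; a plain function binds self correctly

    def push_back(self, obj):
        self._pushed.append(obj)


it = PushbackAdapterFixed([1, 2, 3])
first = next(it)   # no TypeError: __next__ is a bound method
it.push_back(first)
print(list(it))    # [1, 2, 3]
```

With an adapter shaped like this, all of the `_tokenize_formula` / `annotated_tokens` loops above iterate normally again.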
________________________________________________________________________________ test_EvalFactor_end_to_end _________________________________________________________________________________

    def test_EvalFactor_end_to_end():
        from patsy.state import stateful_transform
        foo = stateful_transform(_MockTransform)
        e = EvalFactor("foo(x) + foo(foo(y))")
        state = {}
        eval_env = EvalEnvironment.capture(0)
>       passes = e.memorize_passes_needed(state, eval_env)

patsy/eval.py:652:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:496: in memorize_passes_needed
    eval_code = replace_bare_funcalls(self.code, new_name_maker)
patsy/eval.py:736: in replace_bare_funcalls
    for (token_type, token, origin, props) in annotated_tokens(code):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'foo(x) + foo(foo(y))'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
___________________________________________________________________________________ test_annotated_tokens ___________________________________________________________________________________

    def test_annotated_tokens():
>       tokens_without_origins = [(token_type, token, props)
                                  for (token_type, token, origin, props)
                                  in (annotated_tokens("a(b) + c.d"))]

patsy/eval.py:710:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:710: in <listcomp>
    tokens_without_origins = [(token_type, token, props)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a(b) + c.d'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
________________________________________________________________________________ test_replace_bare_funcalls _________________________________________________________________________________

    def test_replace_bare_funcalls():
        def replacer1(token):
            return {"a": "b", "foo": "_internal.foo.process"}.get(token, token)
        def t1(code, expected):
            replaced = replace_bare_funcalls(code, replacer1)
            print(("%r -> %r" % (code, replaced)))
            print(("(wanted %r)" % (expected,)))
            assert replaced == expected
>       t1("foobar()", "foobar()")

patsy/eval.py:750:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:746: in t1
    replaced = replace_bare_funcalls(code, replacer1)
patsy/eval.py:736: in replace_bare_funcalls
    for (token_type, token, origin, props) in annotated_tokens(code):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'foobar()'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
_______________________________________________________________________________ test_capture_obj_method_calls _______________________________________________________________________________

    def test_capture_obj_method_calls():
>       assert (capture_obj_method_calls("foo", "a + foo.baz(bar) + b.c(d)")
                == [("foo.baz", "foo.baz(bar)")])

patsy/eval.py:798:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/eval.py:789: in capture_obj_method_calls
    for (token_type, token, origin, props) in annotated_tokens(code):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a + foo.baz(bar) + b.c(d)'

    def annotated_tokens(code):
        prev_was_dot = False
        it = PushbackAdapter(python_tokenize(code))
>       for (token_type, token, origin) in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/eval.py:701: TypeError
_____________________________________________________________________________ test_crs_with_specific_constraint _____________________________________________________________________________

    def test_crs_with_specific_constraint():
        from patsy.highlevel import incr_dbuilder, build_design_matrices, dmatrix
        x = (-1.5)**np.arange(20)
        # Hard coded R values for smooth: s(x, bs="cr", k=5)
        # R> knots <- smooth$xp
        knots_R = np.array([-2216.837820053100585937,
                            -50.456909179687500000,
                            -0.250000000000000000,
                            33.637939453125000000,
                            1477.891880035400390625])
        # R> centering.constraint <- t(qr.X(attr(smooth, "qrc")))
        centering_constraint_R = np.array([[0.064910676323168478574,
                                            1.4519875239407085132,
                                            -2.1947446912471946234,
                                            1.6129783104357671153,
                                            0.064868180547550072235]])
        # values for which we want a prediction
        new_x = np.array([-3000., -200., 300., 2000.])
>       result1 = dmatrix("cr(new_x, knots=knots_R[1:-1], "
                          "lower_bound=knots_R[0], upper_bound=knots_R[-1], "
                          "constraints=centering_constraint_R)")

patsy/mgcv_cubic_splines.py:841:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'cr(new_x, knots=knots_R[1:-1], lower_bound=knots_R[0], upper_bound=knots_R[-1], constraints=centering_constraint_R)', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_te_2smooths ______________________________________________________________________________________

    def test_te_2smooths():
        from patsy.highlevel import incr_dbuilder, build_design_matrices
        x1 = (-1.5)**np.arange(20)
        x2 = (1.6)**np.arange(20)
        # Hard coded R results for smooth: te(x1, x2, bs=c("cs", "cc"), k=c(5,7))
        # Without centering constraint:
        dmatrix_R_nocons = \
            np.array([[-4.4303024184609255207e-06,  7.9884438387230142235e-06,
                       9.7987758194797719025e-06,   -7.2894213245475212959e-08,
                       1.5907686862964493897e-09,   -3.2565884983072595159e-11,
                       0.0170749607855874667439,    -3.0788499835965849050e-02,
                       -3.7765754357352458725e-02,  2.8094376299826799787e-04,
                       -6.1310290747349201414e-06,  1.2551314933193442915e-07,
                       -0.26012671685838206770,     4.6904420337437874311e-01,
                       0.5753384627946153129230,    -4.2800085814700449330e-03,
                       9.3402525733484874533e-05,   -1.9121170389937518131e-06,
                       -0.0904312240489447832781,   1.6305991924427923334e-01,
                       2.0001237112941641638e-01,   -1.4879148887003382663e-03,
                       3.2470731316462736135e-05,   -6.6473404365914134499e-07,
                       2.0447857920168824846e-05,   -3.6870296695050991799e-05,
                       -4.5225801045409022233e-05,  3.3643990293641665710e-07,
                       -7.3421200200015877329e-09,  1.5030635073660743297e-10],
                      [-9.4006130602653794302e-04,  7.8681398069163730347e-04,
                       2.4573006857381437217e-04,   -1.4524712230452725106e-04,
                       7.8216741353106329551e-05,   -3.1304283003914264551e-04,
                       3.6231183382798337611064,    -3.0324832476174168328e+00,
                       -9.4707559178211142559e-01,  5.5980126937492580286e-01,
                       -3.0145747744342332730e-01,  1.2065077148806895302e+00,
                       -35.17561267504181188315,    2.9441339255948005160e+01,
                       9.1948319320782125885216,    -5.4349184288245195873e+00,
                       2.9267472035096449012e+00,   -1.1713569391233907169e+01,
                       34.0275626863976370373166,   -2.8480442582712722555e+01,
                       -8.8947340548151565542e+00,  5.2575353623762932642e+00,
                       -2.8312249982592527786e+00,  1.1331265795534763541e+01,
                       7.9462158845078978420e-01,   -6.6508361863670617531e-01,
                       -2.0771242914526857892e-01,  1.2277550230353953542e-01,
                       -6.6115593588420035198e-02,  2.6461103043402139923e-01]])
        # With centering constraint:
        dmatrix_R_cons = \
            np.array([[0.00329998606323867252343,   1.6537431155796576600e-04,
                       -1.2392262709790753433e-04,  6.5405304166706783407e-05,
                       -6.6764045799537624095e-05,  -0.1386431081763726258504,
                       0.124297283800864313830,     -3.5487293655619825405e-02,
                       -3.0527115315785902268e-03,  5.2009247643311604277e-04,
                       -0.00384203992301702674378,  -0.058901915802819435064,
                       0.266422358491648914036,     0.5739281693874087597607,
                       -1.3171008503525844392e-03,  8.2573456631878912413e-04,
                       6.6730833453016958831e-03,   -0.1467677784718444955470,
                       0.220757650934837484913,     0.1983127687880171796664,
                       -1.6269930328365173316e-03,  -1.7785892412241208812e-03,
                       -3.2702835436351201243e-03,  -4.3252183044300757109e-02,
                       4.3403766976235179376e-02,   3.5973406402893762387e-05,
                       -5.4035858568225075046e-04,  2.9565209382794241247e-04,
                       -2.2769990750264097637e-04],
                      [0.41547954838956052681098,   1.9843570584107707994e-02,
                       -1.5746590234791378593e-02,  8.3171184312221431434e-03,
                       -8.7233014052017516377e-03,  -15.9926770785086258541696,
                       16.503663226274017716833,    -6.6005803955894726265e-01,
                       1.3986092022708346283e-01,   -2.3516913533670955050e-01,
                       0.72251037497207359905360,   -9.827337059999853963177,
                       3.917078117294827688255,     9.0171773596973618936090,
                       -5.0616811270787671617e+00,  3.0189990249009683865e+00,
                       -1.0872720629943064097e+01,  26.9308504460453121964747,
                       -21.212262927009287949431,   -9.1088328555582247503253,
                       5.2400156972500298025e+00,   -3.0593641098325474736e+00,
                       1.0919392118399086300e+01,   -4.6564290223265718538e+00,
                       4.8071307441606982991e+00,   -1.9748377005689798924e-01,
                       5.4664183716965096538e-02,   -2.8871392916916285148e-02,
                       2.3592766838010845176e-01]])
        new_x1 = np.array([11.390625, 656.84083557128906250])
        new_x2 = np.array([16.777216000000006346, 1844.6744073709567147])
        new_data = {"x1": new_x1, "x2": new_x2}
        data_chunked = [{"x1": x1[:10], "x2": x2[:10]},
                        {"x1": x1[10:], "x2": x2[10:]}]

>       builder = incr_dbuilder("te(cr(x1, df=5), cc(x2, df=6)) - 1",
                                lambda: iter(data_chunked))

patsy/mgcv_cubic_splines.py:1051:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'te(cr(x1, df=5), cc(x2, df=6)) - 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_te_3smooths ______________________________________________________________________________________

    def test_te_3smooths():
        from patsy.highlevel import incr_dbuilder, build_design_matrices
        x1 = (-1.5)**np.arange(20)
        x2 = (1.6)**np.arange(20)
        x3 = (-1.2)**np.arange(20)
        # Hard coded R results for smooth:  te(x1, x2, x3, bs=c("cr", "cs", "cc"), k=c(3,3,4))
        design_matrix_R = \
            np.array([[7.2077663709837084334e-05,   2.0648333344343273131e-03,
                       -4.7934014082310591768e-04,  2.3923430783992746568e-04,
                       6.8534265421922660466e-03,   -1.5909867344112936776e-03,
                       -6.8057712777151204314e-09,  -1.9496724335203412851e-07,
                       4.5260614658693259131e-08,   0.0101479754187435277507,
                       0.290712501531622591333,     -0.067487370093906928759,
                       0.03368233306025386619709,   0.9649092451763204847381,
                       -0.2239985793289433757547,   -9.5819975394704535133e-07,
                       -2.7449874082511405643e-05,  6.3723431275833230217e-06,
                       -1.5205851762850489204e-04,  -0.00435607204539782688624,
                       0.00101123909269346416370,   -5.0470024059694933508e-04,
                       -1.4458319360584082416e-02,  3.3564223914790921634e-03,
                       1.4357783514933466209e-08,   4.1131230514870551983e-07,
                       -9.5483976834512651038e-08]])
        new_data = {"x1": -38.443359375000000000,
                    "x2": 68.719476736000032702,
                    "x3": -5.1597803519999985156}
        data_chunked = [{"x1": x1[:10], "x2": x2[:10], "x3": x3[:10]},
                        {"x1": x1[10:], "x2": x2[10:], "x3": x3[10:]}]
>       builder = incr_dbuilder("te(cr(x1, df=3), cr(x2, df=3), cc(x3, df=3)) - 1",
                                lambda: iter(data_chunked))

patsy/mgcv_cubic_splines.py:1089:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'te(cr(x1, df=3), cr(x2, df=3), cc(x3, df=3)) - 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
__________________________________________________________________________________ test__tokenize_formula ___________________________________________________________________________________

    def test__tokenize_formula():
        code = "y ~ a + (foo(b,c +   2)) + -1 + 0 + 10"
>       tokens = list(_tokenize_formula(code, ["+", "-", "~"]))

patsy/parse_formula.py:98:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'y ~ a + (foo(b,c +   2)) + -1 + 0 + 10', operator_strings = ['+', '-', '~']

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
____________________________________________________________________________________ test_parse_formula _____________________________________________________________________________________

    def test_parse_formula():
>       _do_parse_test(_parser_tests, [])

patsy/parse_formula.py:208:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:202: in _do_parse_test
    actual = parse_formula(code, extra_operators=extra_operators)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_parse_origin _____________________________________________________________________________________

    def test_parse_origin():
>       tree = parse_formula("a ~ b + c")

patsy/parse_formula.py:211:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a ~ b + c', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_____________________________________________________________________________________ test_parse_errors _____________________________________________________________________________________

extra_operators = []

    def test_parse_errors(extra_operators=[]):
        def parse_fn(code):
            return parse_formula(code, extra_operators=extra_operators)
>       _parsing_error_test(parse_fn, _parser_error_tests)

patsy/parse_formula.py:285:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:273: in _parsing_error_test
    parse_fn(bad_code)
patsy/parse_formula.py:284: in parse_fn
    return parse_formula(code, extra_operators=extra_operators)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a +', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
----------------------------------------------------------------------------------- Captured stdout call ------------------------------------------------------------------------------------
a <+>
'a +' 2 3
____________________________________________________________________________________ test_parse_extra_op ____________________________________________________________________________________

    def test_parse_extra_op():
        extra_operators = [Operator("|", 2, 250)]
>       _do_parse_test(_parser_tests,
                       extra_operators=extra_operators)

patsy/parse_formula.py:294:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/parse_formula.py:202: in _do_parse_test
    actual = parse_formula(code, extra_operators=extra_operators)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 1', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    [identical `_tokenize_formula` listing ending in `TypeError: next expected at least 1 argument, got 0` at patsy/parse_formula.py:89, same as the first failure above]
__________________________________________________________________________________ test_DesignInfo_subset ___________________________________________________________________________________

    def test_DesignInfo_subset():
        # For each combination of:
        #   formula, term names, term objects, mixed term name and term objects
        # check that results match subset of full build
        # and that removed variables don't hurt
        all_data = {"x": [1, 2],
                    "y": [[3.1, 3.2],
                          [4.1, 4.2]],
                    "z": [5, 6]}
        all_terms = make_termlist("x", "y", "z")
        def iter_maker():
            yield all_data
        all_builder = design_matrix_builders([all_terms], iter_maker, 0)[0]
        full_matrix = build_design_matrices([all_builder], all_data)[0]

        def t(which_terms, variables, columns):
            sub_design_info = all_builder.subset(which_terms)
            sub_data = {}
            for variable in variables:
                sub_data[variable] = all_data[variable]
            sub_matrix = build_design_matrices([sub_design_info], sub_data)[0]
            sub_full_matrix = full_matrix[:, columns]
            if not isinstance(which_terms, six.string_types):
                assert len(which_terms) == len(sub_design_info.terms)
            assert np.array_equal(sub_matrix, sub_full_matrix)

>       t("~ 0 + x + y + z", ["x", "y", "z"], slice(None))

patsy/test_build.py:700:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_build.py:690: in t
    sub_design_info = all_builder.subset(which_terms)
patsy/design_info.py:630: in subset
    desc = ModelDesc.from_formula(which_terms)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 0 + x + y + z', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    [identical `_tokenize_formula` listing ending in `TypeError: next expected at least 1 argument, got 0` at patsy/parse_formula.py:89, same as the first failure above]
____________________________________________________________________________________ test_formula_likes _____________________________________________________________________________________

    def test_formula_likes():
        # Plain array-like, rhs only
        t([[1, 2, 3], [4, 5, 6]], {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"])
        t((None, [[1, 2, 3], [4, 5, 6]]), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"])
        t(np.asarray([[1, 2, 3], [4, 5, 6]]), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"])
        t((None, np.asarray([[1, 2, 3], [4, 5, 6]])), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"])
        dm = DesignMatrix([[1, 2, 3], [4, 5, 6]], default_column_prefix="foo")
        t(dm, {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["foo0", "foo1", "foo2"])
        t((None, dm), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["foo0", "foo1", "foo2"])

        # Plain array-likes, lhs and rhs
        t(([1, 2], [[1, 2, 3], [4, 5, 6]]), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"],
          [[1], [2]], ["y0"])
        t(([[1], [2]], [[1, 2, 3], [4, 5, 6]]), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"],
          [[1], [2]], ["y0"])
        t((np.asarray([1, 2]), np.asarray([[1, 2, 3], [4, 5, 6]])), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"],
          [[1], [2]], ["y0"])
        t((np.asarray([[1], [2]]), np.asarray([[1, 2, 3], [4, 5, 6]])), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["x0", "x1", "x2"],
          [[1], [2]], ["y0"])
        x_dm = DesignMatrix([[1, 2, 3], [4, 5, 6]], default_column_prefix="foo")
        y_dm = DesignMatrix([1, 2], default_column_prefix="bar")
        t((y_dm, x_dm), {}, 0,
          False,
          [[1, 2, 3], [4, 5, 6]], ["foo0", "foo1", "foo2"],
          [[1], [2]], ["bar0"])
        # number of rows must match
        t_invalid(([1, 2, 3], [[1, 2, 3], [4, 5, 6]]), {}, 0)

        # tuples must have the right size
        t_invalid(([[1, 2, 3]],), {}, 0)
        t_invalid(([[1, 2, 3]], [[1, 2, 3]], [[1, 2, 3]]), {}, 0)

        # plain Series and DataFrames
        if have_pandas:
            # Names are extracted
            t(pandas.DataFrame({"x": [1, 2, 3]}), {}, 0,
              False,
              [[1], [2], [3]], ["x"])
            t(pandas.Series([1, 2, 3], name="asdf"), {}, 0,
              False,
              [[1], [2], [3]], ["asdf"])
            t((pandas.DataFrame({"y": [4, 5, 6]}),
               pandas.DataFrame({"x": [1, 2, 3]})), {}, 0,
              False,
              [[1], [2], [3]], ["x"],
              [[4], [5], [6]], ["y"])
            t((pandas.Series([4, 5, 6], name="y"),
               pandas.Series([1, 2, 3], name="x")), {}, 0,
              False,
              [[1], [2], [3]], ["x"],
              [[4], [5], [6]], ["y"])
            # Or invented
            t((pandas.DataFrame([[4, 5, 6]]),
               pandas.DataFrame([[1, 2, 3]], columns=[7, 8, 9])), {}, 0,
              False,
              [[1, 2, 3]], ["x7", "x8", "x9"],
              [[4, 5, 6]], ["y0", "y1", "y2"])
            t(pandas.Series([1, 2, 3]), {}, 0,
              False,
              [[1], [2], [3]], ["x0"])
            # indices must match
            t_invalid((pandas.DataFrame([[1]], index=[1]),
                       pandas.DataFrame([[1]], index=[2])),
                      {}, 0)

        # Foreign ModelDesc factories
        class ForeignModelSource(object):
            def __patsy_get_model_desc__(self, data):
                return ModelDesc([Term([LookupFactor("Y")])],
                                 [Term([LookupFactor("X")])])
        foreign_model = ForeignModelSource()
        t(foreign_model,
          {"Y": [1, 2],
           "X": [[1, 2], [3, 4]]},
          0,
          True,
          [[1, 2], [3, 4]], ["X[0]", "X[1]"],
          [[1], [2]], ["Y"])
        class BadForeignModelSource(object):
            def __patsy_get_model_desc__(self, data):
                return data
        t_invalid(BadForeignModelSource(), {}, 0)

        # string formulas
>       t("y ~ x", {"y": [1, 2], "x": [3, 4]}, 0,
          True,
          [[1, 3], [1, 4]], ["Intercept", "x"],
          [[1], [2]], ["y"])

patsy/test_highlevel.py:252:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:87: in t
    builders = incr_dbuilders(formula_like, data_iter_maker, depth)
patsy/highlevel.py:129: in incr_dbuilders
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'y ~ x', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    [identical `_tokenize_formula` listing ending in `TypeError: next expected at least 1 argument, got 0` at patsy/parse_formula.py:89, same as the first failure above]
______________________________________________________________________________________ test_term_info _______________________________________________________________________________________

    def test_term_info():
        data = balanced(a=2, b=2)
>       rhs = dmatrix("a:b", data)

patsy/test_highlevel.py:400:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a:b', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    [identical `_tokenize_formula` listing ending in `TypeError: next expected at least 1 argument, got 0` at patsy/parse_formula.py:89, same as the first failure above]
______________________________________________________________________________________ test_data_types ______________________________________________________________________________________

    def test_data_types():
        data = {"a": [1, 2, 3],
                "b": [1.0, 2.0, 3.0],
                "c": np.asarray([1, 2, 3], dtype=np.float32),
                "d": [True, False, True],
                "e": ["foo", "bar", "baz"],
                "f": C([1, 2, 3]),
                "g": C(["foo", "bar", "baz"]),
                "h": np.array(["foo", 1, (1, "hi")], dtype=object),
                }
>       t("~ 0 + a", data, 0, True,
          [[1], [2], [3]], ["a"])

patsy/test_highlevel.py:417:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ 0 + a', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    [identical `_tokenize_formula` listing ending in `TypeError: next expected at least 1 argument, got 0` at patsy/parse_formula.py:89, same as the first failure above]
_____________________________________________________________________________________ test_categorical ______________________________________________________________________________________

    def test_categorical():
        data = balanced(a=2, b=2)
        # There are more exhaustive tests for all the different coding options in
        # test_build; let's just make sure that C() and stuff works.
>       t("~ C(a)", data, 0,
          True,
          [[1, 0], [1, 0], [1, 1], [1, 1]], ["Intercept", "C(a)[T.a2]"])

patsy/test_highlevel.py:440:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ C(a)', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    [identical `_tokenize_formula` listing ending in `TypeError: next expected at least 1 argument, got 0` at patsy/parse_formula.py:89, same as the first failure above]
_______________________________________________________________________________________ test_builtins _______________________________________________________________________________________

    def test_builtins():
        data = {"x": [1, 2, 3],
                "y": [4, 5, 6],
                "a b c": [10, 20, 30]}
>       t("0 + I(x + y)", data, 0,
          True,
          [[1], [2], [3], [4], [5], [6]], ["I(x + y)"])

patsy/test_highlevel.py:492:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '0 + I(x + y)', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    [identical `_tokenize_formula` listing ending in `TypeError: next expected at least 1 argument, got 0` at patsy/parse_formula.py:89, same as the first failure above]
_____________________________________________________________________________________ test_incremental ______________________________________________________________________________________

    def test_incremental():
        # incr_dbuilder(s)
        # stateful transformations
        datas = [
            {"a": ["a2", "a2", "a2"],
             "x": [1, 2, 3]},
            {"a": ["a2", "a2", "a1"],
             "x": [4, 5, 6]},
            ]
        x = np.asarray([1, 2, 3, 4, 5, 6])
        sin_center_x = np.sin(x - np.mean(x))
        x_col = sin_center_x - np.mean(sin_center_x)
        def data_iter_maker():
            return iter(datas)
>       builders = incr_dbuilders("1 ~ a + center(np.sin(center(x)))",
                                  data_iter_maker)

patsy/test_highlevel.py:516:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:129: in incr_dbuilders
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '1 ~ a + center(np.sin(center(x)))', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    [identical `_tokenize_formula` listing ending in `TypeError: next expected at least 1 argument, got 0` at patsy/parse_formula.py:89, same as the first failure above]
____________________________________________________________________________________ test_env_transform _____________________________________________________________________________________

    def test_env_transform():
>       t("~ np.sin(x)", {"x": [1, 2, 3]}, 0,
          True,
          [[1, np.sin(1)], [1, np.sin(2)], [1, np.sin(3)]],
          ["Intercept", "np.sin(x)"])

patsy/test_highlevel.py:543:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = '~ np.sin(x)', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    [identical `_tokenize_formula` listing ending in `TypeError: next expected at least 1 argument, got 0` at patsy/parse_formula.py:89, same as the first failure above]
______________________________________________________________________________________ test_term_order ______________________________________________________________________________________

    def test_term_order():
        data = balanced(a=2, b=2)
        data["x1"] = np.linspace(0, 1, 4)
        data["x2"] = data["x1"] ** 2

        def t_terms(formula, order):
            m = dmatrix(formula, data)
            assert m.design_info.term_names == order

>       t_terms("a + b + x1 + x2", ["Intercept", "a", "b", "x1", "x2"])

patsy/test_highlevel.py:567:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:564: in t_terms
    m = dmatrix(formula, data)
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'a + b + x1 + x2', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    [identical `_tokenize_formula` listing ending in `TypeError: next expected at least 1 argument, got 0` at patsy/parse_formula.py:89, same as the first failure above]
_____________________________________________________________________________________ test_multicolumn ______________________________________________________________________________________

    def test_multicolumn():
        data = {
            "a": ["a1", "a2"],
            "X": [[1, 2], [3, 4]],
            "Y": [[1, 3], [2, 4]],
            }
>       t("X*Y", data, 0,
          True,
          [[1, 1, 2, 1, 3, 1 * 1, 2 * 1, 1 * 3, 2 * 3],
           [1, 3, 4, 2, 4, 3 * 2, 4 * 2, 3 * 4, 4 * 4]],
          ["Intercept", "X[0]", "X[1]", "Y[0]", "Y[1]",
           "X[0]:Y[0]", "X[1]:Y[0]", "X[0]:Y[1]", "X[1]:Y[1]"])

patsy/test_highlevel.py:607:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/test_highlevel.py:83: in t
    builder = incr_dbuilder(formula_like, data_iter_maker, depth)
patsy/highlevel.py:111: in incr_dbuilder
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'X*Y', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    [identical `_tokenize_formula` listing ending in `TypeError: next expected at least 1 argument, got 0` at patsy/parse_formula.py:89, same as the first failure above]
______________________________________________________________________________ test_dmatrix_dmatrices_no_data _______________________________________________________________________________

    def test_dmatrix_dmatrices_no_data():
        x = [1, 2, 3]
        y = [4, 5, 6]
>       assert np.allclose(dmatrix("x"), [[1, 1], [1, 2], [1, 3]])

patsy/test_highlevel.py:624:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'x', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    [identical `_tokenize_formula` listing ending in `TypeError: next expected at least 1 argument, got 0` at patsy/parse_formula.py:89, same as the first failure above]
_________________________________________________________________________________ test_designinfo_describe __________________________________________________________________________________

    def test_designinfo_describe():
>       lhs, rhs = dmatrices("y ~ x + a", {"y": [1, 2, 3],
                                           "x": [4, 5, 6],
                                           "a": ["a1", "a2", "a3"]})

patsy/test_highlevel.py:630:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:309: in dmatrices
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'y ~ x + a', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
__________________________________________________________________________________ test_evalfactor_reraise __________________________________________________________________________________

    def test_evalfactor_reraise():
        # This will produce a PatsyError, but buried inside the factor evaluation,
        # so the original code has no way to give it an appropriate origin=
        # attribute. EvalFactor should notice this, and add a useful origin:
        def raise_patsy_error(x):
            raise PatsyError("WHEEEEEE")
        formula = "raise_patsy_error(X) + Y"
        try:
>           dmatrix(formula, {"X": [1, 2, 3], "Y": [4, 5, 6]})

patsy/test_highlevel.py:644:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'raise_patsy_error(X) + Y', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
__________________________________________________________________________________ test_dmatrix_NA_action ___________________________________________________________________________________

    def test_dmatrix_NA_action():
        data = {"x": [1, 2, 3, np.nan], "y": [np.nan, 20, 30, 40]}

        return_types = ["matrix"]
        if have_pandas:
            return_types.append("dataframe")

        for return_type in return_types:
>           mat = dmatrix("x + y", data=data, return_type=return_type)

patsy/test_highlevel.py:671:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'x + y', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_______________________________________________________________________________________ test_0d_data ________________________________________________________________________________________

    def test_0d_data():
        # Use case from statsmodels/statsmodels#1881
        data_0d = {"x1": 1.1, "x2": 1.2, "a": "a1"}

        for formula, expected in [
                ("x1 + x2", [[1, 1.1, 1.2]]),
                ("C(a, levels=('a1', 'a2')) + x1", [[1, 0, 1.1]]),
                ]:
>           mat = dmatrix(formula, data_0d)

patsy/test_highlevel.py:710:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'x1 + x2', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_______________________________________________________________________________ test_env_not_saved_in_builder _______________________________________________________________________________

    def test_env_not_saved_in_builder():
        x_in_env = [1, 2, 3]
>       design_matrix = dmatrix("x_in_env", {})

patsy/test_highlevel.py:726:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'x_in_env', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
_______________________________________________________________________________________ test_issue_11 _______________________________________________________________________________________

    def test_issue_11():
        # Give a sensible error message for level mismatches
        # (At some points we've failed to put an origin= on these errors)
        env = EvalEnvironment.capture()
        data = {"X" : [0,1,2,3], "Y" : [1,2,3,4]}
        formula = "C(X) + Y"
        new_data = {"X" : [0,0,1,2,3,3,4], "Y" : [1,2,3,4,5,6,7]}
>       info = dmatrix(formula, data)

patsy/test_regressions.py:18:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/highlevel.py:290: in dmatrix
    (lhs, rhs) = _do_highlevel_design(formula_like, data, eval_env,
patsy/highlevel.py:164: in _do_highlevel_design
    design_infos = _try_incr_builders(formula_like, data_iter_maker, eval_env,
patsy/highlevel.py:62: in _try_incr_builders
    formula_like = ModelDesc.from_formula(formula_like)
patsy/desc.py:164: in from_formula
    tree = parse_formula(tree_or_string)
patsy/parse_formula.py:146: in parse_formula
    tree = infix_parse(_tokenize_formula(code, operator_strings),
patsy/infix_parser.py:210: in infix_parse
    for token in token_source:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

code = 'C(X) + Y', operator_strings = ['~', '~', '+', '-', '*', '/', ...]

    def _tokenize_formula(code, operator_strings):
        assert "(" not in operator_strings
        assert ")" not in operator_strings
        magic_token_types = {"(": Token.LPAREN,
                             ")": Token.RPAREN,
                             }
        for operator_string in operator_strings:
            magic_token_types[operator_string] = operator_string
        # Once we enter a Python expression, a ( does not end it, but any other
        # "magic" token does:
        end_tokens = set(magic_token_types)
        end_tokens.remove("(")

        it = PushbackAdapter(python_tokenize(code))
>       for pytype, token_string, origin in it:
E       TypeError: next expected at least 1 argument, got 0

patsy/parse_formula.py:89: TypeError
___________________________________________________________________________________ test_PushbackAdapter ____________________________________________________________________________________

    def test_PushbackAdapter():
        it = PushbackAdapter(iter([1, 2, 3, 4]))
>       assert it.has_more()

patsy/util.py:379:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
patsy/util.py:371: in has_more
    self.peek()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <patsy.util.PushbackAdapter object at 0x7f2628b69e20>

    def peek(self):
        try:
>           obj = six.advance_iterator(self)
E           TypeError: next expected at least 1 argument, got 0

patsy/util.py:363: TypeError
================================================================================== short test summary info ==================================================================================
FAILED patsy/build.py::test__examine_factor_types - TypeError: next expected at least 1 argument, got 0
FAILED patsy/contrasts.py::test__obj_to_readable_str - assert '\\u20ac' == "b'\\xa4'"
FAILED patsy/desc.py::test_ModelDesc_from_formula - TypeError: next expected at least 1 argument, got 0
FAILED patsy/desc.py::test_eval_formula - TypeError: next expected at least 1 argument, got 0
FAILED patsy/desc.py::test_eval_formula_error_reporting - TypeError: next expected at least 1 argument, got 0
FAILED patsy/desc.py::test_formula_factor_origin - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_EvalFactor_memorize_passes_needed - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_EvalFactor_end_to_end - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_annotated_tokens - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_replace_bare_funcalls - TypeError: next expected at least 1 argument, got 0
FAILED patsy/eval.py::test_capture_obj_method_calls - TypeError: next expected at least 1 argument, got 0
FAILED patsy/mgcv_cubic_splines.py::test_crs_with_specific_constraint - TypeError: next expected at least 1 argument, got 0
FAILED patsy/mgcv_cubic_splines.py::test_te_2smooths - TypeError: next expected at least 1 argument, got 0
FAILED patsy/mgcv_cubic_splines.py::test_te_3smooths - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test__tokenize_formula - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test_parse_formula - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test_parse_origin - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test_parse_errors - TypeError: next expected at least 1 argument, got 0
FAILED patsy/parse_formula.py::test_parse_extra_op - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_build.py::test_DesignInfo_subset - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_formula_likes - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_term_info - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_data_types - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_categorical - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_builtins - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_incremental - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_env_transform - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_term_order - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_multicolumn - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_dmatrix_dmatrices_no_data - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_designinfo_describe - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_evalfactor_reraise - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_dmatrix_NA_action - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_0d_data - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_highlevel.py::test_env_not_saved_in_builder - TypeError: next expected at least 1 argument, got 0
FAILED patsy/test_regressions.py::test_issue_11 - TypeError: next expected at least 1 argument, got 0
FAILED patsy/util.py::test_PushbackAdapter - TypeError: next expected at least 1 argument, got 0
============================================================================== 37 failed, 111 passed in 54.11s ==============================================================================

@matthewwardrop
Collaborator

@kloczek There's something screwy going on in your setup with the PushbackAdapter... but I can see nothing wrong with the code. It looks like the next method of PushbackAdapter is being invoked as a static method rather than an instance method... but it is being called on an instance, so that's just weird.

Is your version of Python modified in any way from standard CPython?
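For reference, CPython's builtin `next()` raises exactly the `TypeError` seen in the tracebacks when it is called with zero arguments. The sketch below is illustrative only (a toy class standing in for patsy's `PushbackAdapter`, not the real code); it shows that both the bound and explicit unbound call forms work fine, so the failure mode suggests `next` is somehow being invoked with no argument at all:

```python
# Minimal illustration of the error message (toy class, not patsy code).
class PushbackLike:
    """Toy iterator with a Python-2-style ``next`` alias, similar in
    spirit to patsy's PushbackAdapter."""
    def __init__(self, data):
        self._it = iter(data)

    def __next__(self):
        return next(self._it)

    next = __next__  # py2-compat alias

it = PushbackLike([1, 2, 3])
print(next(it))               # bound call through the builtin -> 1
print(PushbackLike.next(it))  # explicit unbound call -> 2

try:
    next()                    # builtin next() with zero arguments
except TypeError as exc:
    print(exc)                # "next expected at least 1 argument, got 0"
```

So the reported `TypeError` can only arise if the builtin `next` (or an alias of it, such as `six.advance_iterator`) ends up being called with no argument, which would point at the environment rather than patsy itself.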

@bashtage
Contributor

bashtage commented Jan 4, 2024

I also noticed that all of the failures were explicitly about next needing an argument (aside from the 6 related to SciPy), which seems very surprising.
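Since the failures all route through `six.advance_iterator`, one way to narrow this down in the build environment would be to check which `six` is actually imported and what `advance_iterator` resolves to. This is a hedged diagnostic sketch (it assumes a distro-provided `six` shim might be shadowing upstream six, which is only a guess about the reporter's setup):

```python
# Diagnostic sketch: check where "six" comes from and what
# advance_iterator resolves to. On Python 3, upstream six defines
# advance_iterator as the builtin next.
import importlib.util

spec = importlib.util.find_spec("six")
if spec is None:
    print("six is not importable in this environment")
else:
    print("six resolves to:", spec.origin)
    import six
    print("advance_iterator is builtin next:",
          six.advance_iterator is next)
```

If `advance_iterator is next` prints `False`, or `six` resolves to an unexpected path, that would explain a one-argument alias of `next` being replaced by something with a different signature.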
