Fix skipTest inside subTest #169
base: main
Conversation
src/pytest_subtests/plugin.py (outdated)

    TestCaseFunction._originaladdSkip = copy.copy(TestCaseFunction.addSkip)  # type: ignore[attr-defined]
    TestCaseFunction.addSkip = _addSkip  # type: ignore[method-assign]
The `_addSkip` in `unittest.case` doesn't contain logic to call `addSubTest` for `outcome.skipped` (unlike `_feedErrorsToResult`), and it turns out this causes the reporting issue described in the PR.

Unlike `addSubTest` (which the original `TestCaseFunction` doesn't have), `addSkip` is defined on `TestCaseFunction`, so here we save the original implementation and patch `TestCaseFunction.addSkip` to a newly defined `_addSkip` that lets us call `addSubTest`.

The saved `_originaladdSkip` lets us restore the original in `pytest_unconfigure`, and it also turns out to be necessary to call it inside the newly defined `_addSkip`.
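A minimal sketch of the save-and-patch pattern described here (the hook bodies are simplified assumptions for illustration, not the plugin's exact code):

```python
import copy

from _pytest.unittest import TestCaseFunction


def pytest_configure(config) -> None:
    # Save the original implementation so the patched _addSkip can
    # delegate to it, and so it can be restored on unconfigure.
    TestCaseFunction._originaladdSkip = copy.copy(TestCaseFunction.addSkip)
    TestCaseFunction.addSkip = _addSkip  # _addSkip defined in the diff below


def pytest_unconfigure() -> None:
    # Undo the monkeypatch when the plugin is torn down.
    if hasattr(TestCaseFunction, "_originaladdSkip"):
        TestCaseFunction.addSkip = TestCaseFunction._originaladdSkip
        del TestCaseFunction._originaladdSkip
```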
Why is the whole test marked as passed if all subtests either failed or were skipped? Or do we need to consider that there might be test code in the test outside of the subtest code? If so, I wish for a documentation example of that, with a description. I also wonder how Python's unittest reports such failures; is pytest-subtests' behavior aligned?
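For illustration, the situation this question describes might look like the following (a hypothetical test, not taken from the PR):

```python
import unittest


class Example(unittest.TestCase):
    def test_mixed(self) -> None:
        for i in range(3):
            with self.subTest(i=i):
                if i == 0:
                    self.skipTest("not applicable here")
                self.assertEqual(i % 2, 1)  # fails for i == 2
        # Test code outside any subTest: does this passing assertion
        # justify marking the whole test as passed?
        self.assertTrue(True)
```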
I am not 100% sure though, as I didn't dive into the whole …

… is what I observed (even on the current …

well, unittest gives …
@@ -98,6 +100,24 @@ def _from_test_report(cls, test_report: TestReport) -> SubTestReport:
        return super()._from_json(test_report._to_json())


def _addSkip(self: TestCaseFunction, testcase: TestCase, reason: str) -> None:
    if isinstance(testcase, _SubTest):
        self._originaladdSkip(testcase, reason)  # type: ignore[attr-defined]
It seems we don't really need to call `self._originaladdSkip` for subtest skips; i.e., the following also works:

    try:
        raise pytest.skip.Exception(reason, _use_item_location=True)
    except pytest.skip.Exception:
        exc_info = sys.exc_info()
    self.addSubTest(testcase.test_case, testcase, exc_info)
cc @nicoddemus
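Putting the thread's pieces together, the patched hook might read roughly like this (a sketch assembled from the discussion, not the PR's final code):

```python
import sys
from unittest.case import TestCase, _SubTest

import pytest
from _pytest.unittest import TestCaseFunction


def _addSkip(self: TestCaseFunction, testcase: TestCase, reason: str) -> None:
    if isinstance(testcase, _SubTest):
        # Surface the skip through addSubTest so it is reported
        # against the subtest rather than lost.
        try:
            raise pytest.skip.Exception(reason, _use_item_location=True)
        except pytest.skip.Exception:
            exc_info = sys.exc_info()
        self.addSubTest(testcase.test_case, testcase, exc_info)
    else:
        # Non-subtest skips keep the original unittest behavior; see
        # the version-gated handling discussed further down the diff.
        self._originaladdSkip(testcase, reason)  # type: ignore[attr-defined]
```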
Hi @ydshieh, thanks for the PR!

Other than my comments, we need a test to ensure this does not regress in the future.
Good point! I will add some tests.
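Such a regression test might be sketched with pytest's `pytester` fixture (a hypothetical test, not the one eventually added; `pytester` must be enabled via `pytest_plugins = ["pytester"]` in conftest.py):

```python
def test_skip_inside_subtest_is_reported(pytester):
    pytester.makepyfile(
        """
        import unittest

        class T(unittest.TestCase):
            def test_it(self):
                with self.subTest("sub"):
                    self.skipTest("skip me")
        """
    )
    result = pytester.runpytest("-v", "-rs")
    # The skip raised inside the subTest should appear in the report.
    result.stdout.fnmatch_lines(["*skip me*"])
```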
    else:
        # For python < 3.11: the non-subtest skips have to be added by `_originaladdSkip` only after all subtest
        # failures are processed by `_addSubTest`.
        if sys.version_info < (3, 11):
Python's unittest has some changes (since 3.11) which lead to the if/else here.
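To make the version gate concrete, the non-subtest branch implies a deferral along these lines (the buffer name `_pending_skips` is a hypothetical illustration, not the PR's identifier):

```python
import sys


def _add_non_subtest_skip(self, testcase, reason) -> None:
    """Handle a skip raised outside any subTest (sketch; name hypothetical)."""
    if sys.version_info < (3, 11):
        # Per the diff comment: before 3.11 the skip must be handed to
        # _originaladdSkip only after _addSubTest has processed all
        # subtest failures, so buffer it for later replay.
        self._pending_skips.append((testcase, reason))  # hypothetical buffer
    else:
        # From 3.11 on, unittest's ordering lets us delegate directly.
        self._originaladdSkip(testcase, reason)
```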
Currently, the reporting has issues in cases where `skipTest` is used inside `subTest`. For example, … outputs …, which is obviously wrong.

This PR fixes the above issue. The new output is …
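As an illustration of the pattern being fixed (a hypothetical reproduction, not the PR's exact example):

```python
import unittest


class Sample(unittest.TestCase):
    def test_skip_inside_subtest(self) -> None:
        for i in range(2):
            with self.subTest(i=i):
                if i == 1:
                    self.skipTest("skipped inside a subTest")
                self.assertEqual(i, 0)
```

Run under pytest with pytest-subtests, the skip raised inside the `subTest` should be reported against that subtest rather than mis-attributed to the whole test.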