Failed Test Case Statistics based on Failure Reason #362
Comments
I like this idea a lot; this sounds like a good addition for the next release.
@bischoffdev - I don't see any reference to the DaggerClucumberCoreGraph class. Should it be part of another repo? I downloaded the project to play around with it but couldn't proceed due to compilation issues.
If I'm not mistaken, that should be a generated class based on the Dagger annotations.
@gdonati78 - I didn't try that. Thanks for the lead.
Exactly, the Dagger code needs to be generated.
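For context, Dagger generates its Dagger-prefixed implementation classes (such as DaggerClucumberCoreGraph) at compile time via annotation processing, so they are not checked into the repository. Below is a minimal sketch of that pattern; ExampleGraph and ExampleService are hypothetical names used for illustration, not Cluecumber's actual component or bindings.

```java
import dagger.Component;
import javax.inject.Inject;

// Hypothetical component interface. The Dagger annotation processor
// (dagger-compiler) generates an implementation named DaggerExampleGraph
// at compile time, analogous to how DaggerClucumberCoreGraph is generated.
@Component
interface ExampleGraph {
    ExampleService exampleService();
}

class ExampleService {
    @Inject
    ExampleService() {
        // No dependencies; Dagger can construct this directly.
    }
}
```

Running a regular build (for example `mvn compile`) with the Dagger compiler on the annotation processor path produces these generated sources, after which the Dagger-prefixed class resolves and the compilation issues should go away.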
@bischoffdev - I think a while back you mentioned that you planned to generate a PDF report, but not as part of this project, since it focuses only on HTML reporting. Is there any other project for generating PDF reports that I can use?
Hi, no, there is no plan to do this.
@bischoffdev - Any plan for a new release this year, or will the next version come in Q1 2025?
We might have another release this year.
Is your feature request related to a problem? Please describe.
With a large test suite that has a high number of failures, it is currently difficult to prioritize groups of failed test cases in order to increase the execution rate.
For example, in a report with 300 test failures, if 100 fail because an element is not accessible due to a changed locator, there is currently no way to figure out how many failures fall under which exception reason.
Describe the solution you'd like
Just like the existing All Steps view, a view for failure reasons (the messages shown in red in the screenshot below) would solve this. It would help the automation team understand how many failures are intermittent or related to environment issues and which ones need attention. For the failed test cases that do need fixing, the team can pick the category with the highest failure count and start there; it could be a minor fix that resolves the majority of the scripting-related test failures.
Describe alternatives you've considered
Currently we capture the failures in a JSON file via a listener and do our analysis on those entries.
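The grouping requested above can be approximated outside the report as well. Below is a minimal sketch, assuming a hypothetical FailedScenario record rather than Cluecumber's real model classes, that counts failures per reason using the first line of each error message (usually the exception type and message).

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch of counting failed scenarios per failure reason.
// FailedScenario and its fields are hypothetical placeholders.
public class FailureReasonStatistics {

    record FailedScenario(String name, String errorMessage) {
    }

    // Groups failures by the first line of the error message and counts
    // how often each reason occurs.
    static Map<String, Long> countByReason(List<FailedScenario> failures) {
        Map<String, Long> counts = new TreeMap<>();
        for (FailedScenario failure : failures) {
            String reason = failure.errorMessage().lines().findFirst().orElse("unknown");
            counts.merge(reason, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<FailedScenario> failures = List.of(
                new FailedScenario("Login", "org.openqa.selenium.NoSuchElementException: #loginButton"),
                new FailedScenario("Checkout", "org.openqa.selenium.NoSuchElementException: #payNow"),
                new FailedScenario("Search", "java.net.SocketTimeoutException: connect timed out"));
        countByReason(failures).forEach((reason, count) -> System.out.println(count + " x " + reason));
    }
}
```

In the report itself, the same counts could back a "Failure Reasons" view similar to the existing All Steps view.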