hsye185 edited this page Aug 27, 2015 · 3 revisions

Summary

The test script was designed with configurability and simplicity in mind. Nothing is hard-coded: to add a test to our script, place it either in the project root (for acceptance tests) or in the /se306Project1/tests/ directory (for unit tests). Any file with the prefix "Test_" will be picked up by the script and run.

How it works

  • Initializing Files

    1. The script reads all files with the prefix "Test_" in the project root and the /se306Project1/tests/ directory
    2. Those in the project root are placed into the acceptance-test queue
    3. Those in the /se306Project1/tests/ directory are placed into the unit-test queue
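The discovery step above can be sketched roughly as follows. This is a minimal illustration, not the script's actual code; the function name `collect_tests` is hypothetical, and only the "Test_" prefix and the two directories come from the description above.

```python
import os

def collect_tests(project_root):
    """Scan the project root and the unit-test directory for files whose
    names start with "Test_", sorting them into the two queues described
    above (hypothetical helper, not the real script)."""
    unit_dir = os.path.join(project_root, "se306Project1", "tests")
    acceptance_queue = []
    unit_queue = []
    # Files prefixed "Test_" in the project root -> acceptance queue
    for name in sorted(os.listdir(project_root)):
        path = os.path.join(project_root, name)
        if name.startswith("Test_") and os.path.isfile(path):
            acceptance_queue.append(path)
    # Files prefixed "Test_" in /se306Project1/tests/ -> unit queue
    if os.path.isdir(unit_dir):
        for name in sorted(os.listdir(unit_dir)):
            if name.startswith("Test_"):
                unit_queue.append(os.path.join(unit_dir, name))
    return acceptance_queue, unit_queue
```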
  • Acceptance Tests

    1. Take the test at the top of the queue
    2. Invoke a process call to make the file executable
    3. Run the test using a sub process
    4. Wait for the process to end
    5. Log the results and display the result to the user
    6. Repeat from (1) until queue is empty
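The drain loop above (chmod, run as a subprocess, wait, log, repeat) could look something like this sketch. The function name `run_queue` and the log format are assumptions for illustration only.

```python
import os
import stat
import subprocess

def run_queue(queue, log):
    """Drain a queue of test files: make each file executable, run it as
    a subprocess, wait for it to finish, then log the result
    (hypothetical sketch of the loop described above)."""
    results = []
    while queue:
        test_path = queue.pop(0)                    # take the test at the top of the queue
        mode = os.stat(test_path).st_mode
        os.chmod(test_path, mode | stat.S_IEXEC)    # make the file executable
        proc = subprocess.Popen([test_path],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()               # wait for the process to end
        results.append((test_path, proc.returncode))
        log.write("%s -> exit %d\n" % (test_path, proc.returncode))
    return results
```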
  • Unit Tests

    1. Generate the world file
    2. Run roscore
    3. Take the test at the top of the queue
    4. Invoke a process call to make the file executable
    5. Run the test using a sub process
    6. Wait for the process to end
    7. Log the results and display the result to the user
    8. Repeat from (3) until queue is empty
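The unit-test phase adds two setup steps before the same drain loop: generating the world file and starting roscore. A rough sketch is below; `run_unit_tests` is a hypothetical name, and the world generator and roscore command are injectable parameters here purely so the sketch can be exercised without a ROS install.

```python
import subprocess
import time

def run_unit_tests(queue, run_queue, core_cmd=("roscore",), world_generator=None):
    """Hypothetical sketch of the unit-test phase: generate the world
    file, start roscore in the background, then drain the unit-test
    queue with the same loop used for acceptance tests."""
    if world_generator is not None:
        world_generator()                  # 1. generate the world file
    core = subprocess.Popen(core_cmd)      # 2. run roscore in the background
    time.sleep(1)                          # give the ROS master time to come up
    try:
        return run_queue(queue)            # 3-8. drain the queue as before
    finally:
        core.terminate()                   # shut roscore down afterwards
        core.wait()
```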

Modes

Please note that regardless of the mode, the logging remains the same.

  • Regular Mode

    1. Run with "python test.py"
    2. Regular mode displays a short version of the result visually: the test file's name in either green or red, depending on whether that file's tests ran to 100% completion, with the number of tests passed out of the total tests in that file shown in brackets.
    3. Regular Mode
  • Verbose Mode

    1. Run with "python test.py -v"
    2. Verbose mode displays a little extra for the user to examine. It is similar to regular mode, but instead of just numbers it shows which specific tests didn't pass due to failure and which didn't pass due to errors, explicitly naming each failing test. This mode is better suited to the tester than to the average coder who is just checking that the build is stable before committing.
    3. Verbose Mode
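The per-file lines in both modes could be produced by something like the sketch below. The ANSI colour codes are standard, but the function name `format_result` and the exact layout are assumptions; the wiki only specifies the colour, the bracketed counts, and the extra failure/error names in verbose mode.

```python
GREEN = "\033[92m"
RED = "\033[91m"
RESET = "\033[0m"

def format_result(name, passed, total, verbose=False, failures=(), errors=()):
    """Format one file's result line as described above: green if every
    test passed, red otherwise, with "(passed/total)" in brackets.
    Verbose mode appends the names of failing and erroring tests
    (hypothetical sketch)."""
    colour = GREEN if passed == total else RED
    line = "%s%s (%d/%d)%s" % (colour, name, passed, total, RESET)
    if verbose:
        for f in failures:
            line += "\n  FAIL: %s" % f
        for e in errors:
            line += "\n  ERROR: %s" % e
    return line
```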

Logging

  • test.log

This provides a very comprehensive breakdown of all the tests. For each file there is a summary and a breakdown of each test within that file, showing failures, errors, and the explanations for assertion failures.

  • test_errors.log

We keep a separate file for errors because some errors produce gigantic stack traces, which make it hard to decipher what's happening. With this second log file, when an error with a big stack trace occurs we can still read the regular log file and see the summary without being confused.

Final Result

For the tester's convenience, the final outcome of the test run has a colour rating. It helps the user quickly gauge the seriousness of the outcome.

  • Green: Testing is at 100%, ready for commit.
  • Yellow: Testing is slightly broken, fix a few tests before doing a commit.
  • Red: Start panicking, someone has seriously broken something!
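The traffic-light rating could be computed with a sketch like this. Note the yellow threshold (90%) is an invented assumption for illustration; the wiki only says "slightly broken" and does not give a cut-off.

```python
def overall_rating(passed, total):
    """Map the aggregate pass count to the colour rating above.
    The 90% boundary between yellow and red is an assumed value,
    not taken from the wiki."""
    if passed == total:
        return "green"    # 100%: ready for commit
    if passed >= 0.9 * total:
        return "yellow"   # slightly broken: fix a few tests first
    return "red"          # seriously broken
```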
