## 2.5.1 Test Script
The test script was designed with configuration and simplicity in mind. Nothing is hard-coded: to add tests to the script, simply place the test file either in the project root (for acceptance tests) or in the /se306Project1/tests/ directory (for unit tests). Any test file whose name starts with the prefix "Test_" will be picked up by the script and run.
### Initializing Files
- The script reads all files with the prefix "Test_" in the project root and in the /se306Project1/tests/ directory
- Files found in the project root are placed into the acceptance-test queue
- Files found in the /se306Project1/tests/ directory are placed into the unit-test queue
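The discovery step could look something like the sketch below; the queue names and the collect_tests() helper are illustrative, and the .py extension check is an assumption about how the test files are named.

```python
# Sketch of the discovery step: gather "Test_" files into the two queues.
import os

PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
UNIT_TEST_DIR = os.path.join(PROJECT_ROOT, "se306Project1", "tests")

def collect_tests():
    acceptance_queue = []
    unit_queue = []
    # Acceptance tests live directly in the project root.
    for name in sorted(os.listdir(PROJECT_ROOT)):
        if name.startswith("Test_") and name.endswith(".py"):
            acceptance_queue.append(os.path.join(PROJECT_ROOT, name))
    # Unit tests live in se306Project1/tests/.
    for name in sorted(os.listdir(UNIT_TEST_DIR)):
        if name.startswith("Test_") and name.endswith(".py"):
            unit_queue.append(os.path.join(UNIT_TEST_DIR, name))
    return acceptance_queue, unit_queue
```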
### Acceptance Tests
1. Take the test at the head of the queue
2. Invoke a process call to make the file executable
3. Run the test in a sub-process
4. Wait for the process to end
5. Log the result and display it to the user
6. Repeat from (1) until the queue is empty
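As a rough illustration, this loop might be implemented along the following lines; run_queue() and the PASS/FAIL log format are assumptions, not the script's actual code.

```python
# Sketch of the acceptance-test loop, assuming each test file is a
# self-contained executable script.
import subprocess

def run_queue(queue, log):
    while queue:
        test_path = queue.pop(0)                     # (1) take the head of the queue
        subprocess.call(["chmod", "+x", test_path])  # (2) make the file executable
        proc = subprocess.Popen(                     # (3) run the test in a sub-process
            [test_path],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        out, err = proc.communicate()                # (4) wait for the process to end
        status = "PASS" if proc.returncode == 0 else "FAIL"
        log.write("%s: %s\n" % (test_path, status))  # (5) log the result...
        print("%s: %s" % (test_path, status))        # ...and display it to the user
```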
### Unit Tests
1. Generate the world file
2. Run roscore
3. Take the test at the head of the queue
4. Invoke a process call to make the file executable
5. Run the test in a sub-process
6. Wait for the process to end
7. Log the result and display it to the user
8. Repeat from (3) until the queue is empty
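A minimal sketch of the extra set-up for unit tests, assuming the world file is produced by a generator script (generate_world.py is a hypothetical name) and reusing the run_queue() helper sketched above; the fixed two-second wait for the ROS master is a crude placeholder.

```python
# Sketch of the unit-test wrapper: world generation plus a roscore session.
import subprocess
import time

def run_unit_tests(unit_queue, log):
    subprocess.call(["python", "generate_world.py"])  # (1) generate the world file
    roscore = subprocess.Popen(["roscore"])           # (2) start roscore in the background
    time.sleep(2.0)                                   # crude wait for the ROS master to come up
    try:
        run_queue(unit_queue, log)                    # (3)-(8) same loop as the acceptance tests
    finally:
        roscore.terminate()                           # stop the master once the queue is empty
        roscore.wait()
```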
Please note that regardless of the mode, the logging remains the same.
### Regular Mode
- Run with "python test.py"
- Regular mode displays a short, visual version of the result: the test name is printed in green or red depending on whether every test in that file passed (100% completion), with the number of tests passed out of the total tests in that file shown in brackets.
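For illustration, a coloured one-line summary like this can be produced with standard ANSI escape codes; the report() helper, its output format, and the example test name are assumptions, not the script's actual code.

```python
# Sketch of the regular-mode output line.
GREEN, RED, RESET = "\033[92m", "\033[91m", "\033[0m"

def report(test_name, passed, total):
    # Green only when every test in the file passed; red otherwise.
    colour = GREEN if passed == total else RED
    print("%s%s (%d/%d)%s" % (colour, test_name, passed, total, RESET))

report("Test_RobotMovement", 4, 5)  # prints "Test_RobotMovement (4/5)" in red
```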
### Verbose Mode
- Run with "python test.py -v"
- Verbose mode displays a little more for the user to examine. It is similar to regular mode, but instead of just numbers it shows which specific tests did not pass due to failures and which did not pass due to errors. Because it explicitly names the individual tests that failed, this mode is better suited to the tester than to the average coder who is just checking that the build is stable before committing.
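The failure/error distinction could be recovered as in this sketch, assuming the test files are standard unittest cases that the script can import as modules; verbose_report() is an illustrative name.

```python
# Sketch of the verbose-mode breakdown using unittest's result object,
# which keeps assertion failures and unexpected errors in separate lists.
import unittest

RED, RESET = "\033[91m", "\033[0m"

def verbose_report(module):
    suite = unittest.defaultTestLoader.loadTestsFromModule(module)
    result = unittest.TestResult()
    suite.run(result)
    for test, trace in result.failures:   # assertions that did not hold
        print(RED + "FAIL:  " + str(test) + RESET)
    for test, trace in result.errors:     # unexpected exceptions raised
        print(RED + "ERROR: " + str(test) + RESET)
```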
### test.log
This provides a very comprehensive breakdown of all the tests. For each file there is a summary and a breakdown of each test within that file, showing failures, errors, and explanations for assertion failures.
### test_errors.log
We decided to have a separate file for errors because some errors produce a gigantic stack trace, which makes it hard to decipher what is happening. With this second log file, when an error with a big stack trace occurs we can still read the regular log file and see the summary without being confused.
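This split could be achieved with two handlers from Python's standard logging module, as in the sketch below; the logger names and example messages are illustrative.

```python
# Sketch of the two-file logging split: summaries in test.log,
# full stack traces routed to test_errors.log only.
import logging

summary = logging.getLogger("summary")
summary.addHandler(logging.FileHandler("test.log"))
summary.setLevel(logging.INFO)

errors = logging.getLogger("errors")
errors.addHandler(logging.FileHandler("test_errors.log"))
errors.setLevel(logging.ERROR)

# Summaries and assertion failures go to test.log...
summary.info("Test_RobotMovement: 4/5 passed, 1 failure")
# ...while a gigantic stack trace lands in test_errors.log instead.
try:
    raise RuntimeError("example error with a long traceback")
except RuntimeError:
    errors.exception("Test_RobotMovement raised an error")
```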
For the tester's convenience, the final outcome of the test run is given a colour rating, which helps the user quickly gauge the seriousness of the outcome.
- Green: Testing is at 100%, ready for commit.
- Yellow: Testing is slightly broken, fix a few tests before doing a commit.
- Red: Start panicking, someone has seriously broken something!
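As a sketch, the rating might be computed like this; the ANSI codes are standard, but the 90% cut-off between yellow and red is purely an illustrative assumption.

```python
# Sketch of the final three-colour rating for a test run.
GREEN, YELLOW, RED = "\033[92m", "\033[93m", "\033[91m"

def overall_rating(passed, total):
    if passed == total:
        return GREEN   # 100%: ready for commit
    if passed >= 0.9 * total:
        return YELLOW  # slightly broken: fix a few tests before committing
    return RED         # seriously broken: start panicking
```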