GRASS Test Suite
We aim to create a comprehensive test suite for GRASS modules and libraries.
Background
See what has been done so far by Sören Gebbert and others at Development#QA, and the [| VKT test suite].
Keep an eye on the GRASS 7 ideas collection.
Main picture
We plan to run unittests and integration tests for both libraries and modules. The test suite will be run after compilation, with a command like:
$ make tests [ proj ]
Options:
- proj: run the tests with a [list of] CRS, so that reprojection and map unit handling are tested.
- ...
Tests are executed recursively. If "make tests" is run in the root of the source tree, all libraries and modules are tested. To test only the libraries, change into the lib directory and run "make tests"; to test only the raster modules, change into the raster directory and run "make tests", and likewise for the other module directories. To test a single module, change into that module's directory and run "make tests".
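Assuming the usual GRASS source tree layout (directory names here are illustrative), the recursive invocation could look like:

```shell
# Run the complete suite from the source root
cd grass7 && make tests

# Test only the libraries
cd grass7/lib && make tests

# Test only the raster modules
cd grass7/raster && make tests

# Test a single module
cd grass7/raster/r.mapcalc && make tests
```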
Tests
Modules
Tests should cover all modules as well as the library test modules.
The module tests should be as independent as possible from other GRASS modules.
Tests are written as simple shell scripts with annotations. A test script name starts with test., followed by the module name (e.g. r.mapcalc.), and ends with .sh; for example, test.r.mapcalc.1.sh. All tests should be well documented using shell comments.
Annotations are embedded in the test documentation (simple shell comments) and specify preprocessing steps, the tests, and the type of data to validate. The following annotations should be supported:
- The @preprocess annotation
- All commands below this annotation are handled as preprocessing steps for the tests.
- If any of the preprocessing commands fails, the framework should stop testing and create a detailed error report.
- Preprocessing steps may be data generation, region settings and so on.
- The preprocess annotation remains in effect until a @test annotation marks the beginning of a test.
- Preprocess annotations can also be specified between tests.
- The @test annotation
- All commands below this annotation are handled as tests by the framework.
- The test annotation must be integrated in the comment block which describes the test.
- Data validation is performed by the framework for tests if reference data is present.
- The test annotation remains in effect until a @preprocess annotation marks the beginning of a preprocess block for further test runs.
- The data type annotations
- Data type annotations should be specified in the same comment block as the @test annotation.
- Data type annotations specify the GRASS data types which should be validated against reference data.
- The following data type annotations should be supported:
- @file: the framework compares the reference data with files
- @raster: the framework compares the reference data with raster maps, using r.out.ascii for export
- @vector: the framework compares the reference data with vector maps, using v.out.ascii for export
- @raster3d: the framework compares the reference data with raster3d maps, using r3.out.ascii for export
- @color: the framework compares the reference data with color rules, using r.color.out for export
- @table: the framework compares the reference data with SQL tables, using db.select for export
- ... please add more
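The annotation scheme above can be handled by a small state machine that walks the shell comments. A minimal Python sketch of such a parser (the function name and return structure are hypothetical, not part of any existing framework; the embedded script is only an illustration of the annotation style):

```python
# Data-type annotations the parser recognizes (from the list above)
DATA_TYPE_ANNOTATIONS = {"@file", "@raster", "@vector", "@raster3d", "@color", "@table"}

def parse_test_script(text):
    """Split an annotated shell test script into preprocess and test blocks.

    Returns a list of dicts: {"kind": "preprocess" or "test",
                              "types": set of data-type annotations,
                              "commands": list of shell commands}.
    """
    blocks = []
    current = None
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("#"):
            comment = stripped.lstrip("#").strip()
            if "@preprocess" in comment:
                current = {"kind": "preprocess", "types": set(), "commands": []}
                blocks.append(current)
            elif "@test" in comment:
                current = {"kind": "test", "types": set(), "commands": []}
                blocks.append(current)
            elif current is not None and current["kind"] == "test":
                # Data-type annotations live in the same comment block as @test
                current["types"] |= {tok for tok in comment.split()
                                     if tok in DATA_TYPE_ANNOTATIONS}
        elif stripped and current is not None:
            current["commands"].append(stripped)
    return blocks

# Example annotated script in the style described above
script = """\
# @preprocess
# Set the region and generate input data
g.region n=100 s=0 w=0 e=100 res=1
r.mapcalc expression="testmap = 1.0"
# @test
# @raster
r.series input=testmap output=result method=average
"""
for block in parse_test_script(script):
    print(block["kind"], sorted(block["types"]), len(block["commands"]))
```

The preprocess block collects two commands, and the test block is marked for raster validation.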
Reference data for validation must be located in the module/library directory. The reference data names must be identical to the names of the generated data (files, maps, ...), except that reference data always has a .ref suffix.
Tests live in each module's and each library's folder. To test library functions, special test modules must be implemented. Library test modules test the library functions directly and should be written in C. Have a look at the test directories in the g3d, gpde and gmath libraries of GRASS 7.
The framework should be able to generate and compare GRASS data types: raster, vector, raster3d, general, db, icon, imagery, d.*?
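Comparison of exported ASCII data could work roughly like this. A minimal sketch assuming r.out.ascii-style output (header lines as "key: value" pairs followed by rows of cell values); the function name and tolerance handling are illustrative assumptions, not the framework's actual API:

```python
def compare_ascii_rasters(generated, reference, threshold=0.0):
    """Compare two r.out.ascii-style exports cell by cell.

    Header lines (key: value pairs) must match exactly; numeric cells
    may differ by at most `threshold`. Returns True if the maps match.
    """
    gen_lines = generated.strip().splitlines()
    ref_lines = reference.strip().splitlines()
    if len(gen_lines) != len(ref_lines):
        return False
    for gen_line, ref_line in zip(gen_lines, ref_lines):
        if ":" in gen_line:  # header line, e.g. "north: 100"
            if gen_line.split() != ref_line.split():
                return False
        else:  # data row: whitespace-separated cell values
            gen_cells = [float(v) for v in gen_line.split()]
            ref_cells = [float(v) for v in ref_line.split()]
            if len(gen_cells) != len(ref_cells):
                return False
            if any(abs(g - r) > threshold
                   for g, r in zip(gen_cells, ref_cells)):
                return False
    return True

# Two tiny 2x2 exports that differ by 0.05 in one cell
a = "north: 100\nsouth: 0\n1.0 2.0\n3.0 4.0"
b = "north: 100\nsouth: 0\n1.0 2.05\n3.0 4.0"
print(compare_ascii_rasters(a, b, threshold=0.1))   # True: within tolerance
print(compare_ascii_rasters(a, b, threshold=0.01))  # False: 0.05 > 0.01
```

The same pattern (export to text, then compare with a tolerance) would apply to vector, raster3d, color, and table data using the export modules listed above.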
wxGUI functionality should be tested separately.
Automated tests on a server generate an HTML report. Several platforms should be tested.
Test framework:
The test framework could be structured as follows:
- BaseTest:
- Creation of a test mapset for each test case in a dedicated test location within the GRASS sources
- Setting the environment variables to external data sources (for import and data comparison) and to the test location
- Handling of region settings specific to single module tests. Because each test uses its own mapset, the region settings of one test do not affect the region settings of another test.
- Handles module test result codes and messages
- Creates an HTML report from these
- Deletion of the generated test mapset to clean up the test location
- RasterTest(BaseTest):
- Raster data generator functions
- Raster data comparison functions (map by map, map by file, map by md5 value)
- VectorTest(BaseTest):
- Vector data comparison functions
- ...
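The hierarchy above could be sketched as follows. The class and method names follow the r.series example below, but the bodies are illustrative stubs (no GRASS calls), not the real framework:

```python
class BaseTest:
    """Base class: mapset setup, region handling, reporting, cleanup."""

    def SetDescription(self, text):
        self.description = text

    def SetRegion(self, **bounds):
        # Stub: would call g.region with the given bounds
        self.region = bounds

    def Run(self):
        """Run SetUp, then all test* methods in alphabetical order, then Finish."""
        results = {}
        self.SetUp()
        for name in sorted(dir(self)):
            if name.startswith("test"):
                try:
                    getattr(self, name)()
                    results[name] = "ok"
                except AssertionError as err:
                    results[name] = "failed: %s" % err
        self.Finish()
        return results

    def SetUp(self):
        pass

    def Finish(self):
        # Stub: would delete the generated test mapset
        pass


class RasterTest(BaseTest):
    """Adds raster data generation and comparison helpers (stubs here)."""

    def CompareRasterMaps(self, result, reference, threshold=0, mesgOnError=""):
        # Stub: would export both maps with r.out.ascii and compare them
        pass


# A dummy subclass showing the lifecycle: SetUp, test* methods, Finish
class DummyTest(RasterTest):
    def SetUp(self):
        self.SetRegion(n=100, s=0, w=0, e=100, res=1)

    def testA(self):
        assert 1 + 1 == 2

    def testB(self):
        assert True


print(DummyTest().Run())  # {'testA': 'ok', 'testB': 'ok'}
```

Collecting results per test method in Run() is what would feed the HTML report generation mentioned above.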
Using these classes, each module will have its own test class, like:
- RMapcalcTest(RasterTest):
- sets the region
- generates data
- runs r.mapcalc
- compares results with the expected ones
- outputs the results of the test as messages, to be prettified in HTML
Example Python code for r.series:
# Example test for the raster module r.series
# Import the GRASS test suite package
import GrassTestSuite

# Create the test class
class RSeriesTest(GrassTestSuite.Raster.TestCase):
    # Add the dependencies in the constructor; these may be modules or libraries (gdal, proj, ...)
    def __init__(self):
        self.SetDescription("Simple r.series test case testing the average, sum and max methods with DCELL maps")
        self.AddModuleDependency("r.mapcalc")

    # Set up the input data and the expected results using r.mapcalc.
    # This method is specified by the framework and is called before any test method.
    def SetUp(self):
        self.SetRegion(n=100, s=0, w=0, e=100, res=1)
        self.CreateData("r.mapcalc", expression="testmap1 = 1.0")
        self.CreateData("r.mapcalc", expression="testmap2 = 2.0")
        self.CreateData("r.mapcalc", expression="average = (testmap1 + testmap2)/2.0")
        self.CreateData("r.mapcalc", expression="sum = (testmap1 + testmap2)")
        self.CreateData("r.mapcalc", expression="max = testmap2")

    # This framework method is called at the end of the test to remove the test-specific
    # mapset in the test location. Only override this method for specific needs.
    def Finish(self):
        self.CleanUp()

    # The test functions. All test methods should start with "test" and will be called in
    # alphabetical order by the framework. RunModuleTest and CompareRasterMaps are methods
    # specified by the framework and must be used to run tests and compare results.

    # Test the average method
    def testAverage(self):
        self.RunModuleTest("r.series", input="testmap1,testmap2", output="resmap_av", method="average")
        self.CompareRasterMaps(result="resmap_av", reference="average", threshold=3, mesgOnError="Error in average method of r.series")

    # Test the sum method
    def testSum(self):
        self.RunModuleTest("r.series", input="testmap1,testmap2", output="resmap_sum", method="sum")
        self.CompareRasterMaps(result="resmap_sum", reference="sum", threshold=3, mesgOnError="Error in sum method of r.series")

    # Test the max method
    def testMax(self):
        self.RunModuleTest("r.series", input="testmap1,testmap2", output="resmap_max", method="max")
        self.CompareRasterMaps(result="resmap_max", reference="max", threshold=3, mesgOnError="Error in max method of r.series")
Timeline and status
TBD
Interested people
- Sören Gebbert
- Anne Ghisla
- Martin Landa
- Add your name here