Test Suite
GRASS Test Suite
We aim to create a comprehensive test suite for GRASS modules and libraries.
Background
See what has been done so far by Sören Gebbert and others: Development#QA and the VTK test suite
Keep an eye on GRASS 7 ideas collection
Main picture
We plan to run unit tests and integration tests for both libraries and modules. The test suite will be run after compilation, with a command like:
$ make tests [ proj ]
Options:
- proj: run the tests with a [list of] CRS, so that reprojection and map unit handling are tested.
- ...
Tests are executed recursively. If "make tests" is run in the root source directory, all libraries and modules are tested. To test only the libraries, change into the lib directory and run "make tests". To test only raster modules, change into the raster directory and run "make tests"; the same applies to the other module directories. To test a single module, change into that module's directory and run "make tests".
Tests
Modules
Tests are targeted to cover all modules as well as library test modules.
The module tests should be as independent as possible from other GRASS modules.
Tests are written as simple shell scripts with annotations. Test script names start with test., followed by the module name (e.g. r.mapcalc.) and end with .sh, for example test.r.mapcalc.1.sh. All tests should be well documented using shell comments.
The framework will execute all test scripts starting with test. and ending with .sh located in module or library directories.
Annotations are embedded in the test documentation (simple shell comments) and specify the preprocessing steps, the tests, and the type of data to validate. The following annotations should be supported:
- The @preprocess annotation
- All commands below this annotation are handled as preprocessing steps for the tests.
- If any of the preprocess commands fails, the framework should stop testing and create a detailed error report.
- Preprocessing steps may be data generation, region settings and so on.
- The preprocess annotation is valid until a @test annotation marks the beginning of a test
- Preprocess annotations can also be specified between tests
- The @test annotation
- All commands below this annotation are handled as tests by the framework
- The test annotation must be integrated in the comment block which describes the test
- Data validation is performed by the framework for tests if reference data is present
- The test annotation is valid until a @preprocess annotation marks the beginning of a preprocess block for further test runs
- The data type annotations
- Data type annotations should be specified in the same comment block as the @test annotation
- Data type annotations specify the grass data types which should be validated with reference data
- The following data type annotations should be supported:
- @file the framework should compare the reference data with files
- @raster the framework should compare the reference data with raster maps using r.out.ascii for export
- @vector the framework should compare the reference data with vector maps using v.out.ascii for export
- @raster3d the framework should compare the reference data with raster3d maps using r3.out.ascii for export
- @color the framework should compare the reference data with color rules using r.colors.out for export
- @color3d the framework should compare the reference data with 3d color rules using r3.colors.out for export
- @table the framework should compare the reference data with SQL tables using db.select for export
- ... please add more
- The @precision=[positive integer] annotation
- Should be located in a test comment block
- Specifies the precision to use to export grass data for validation with reference files
- If the precision is not provided, the default behavior of the export module is used
Reference data for validation must be located in the module/library directory. The reference data names must be identical to the names of the generated data (files, maps, ...), except that reference data always has a .ref suffix.
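As a minimal sketch of how the framework could interpret these annotations (the function name and the block structure are illustrative assumptions, not existing GRASS code), the comment lines of a test script could be scanned for the markers described above:

import re

# Illustrative sketch: split an annotated test script into preprocess and
# test blocks and attach the validation hints (@raster, @precision=..., ...)
# found in the comment blocks to the following commands.
def parse_test_script(path):
    blocks = []
    current = {"type": "preprocess", "commands": [],
               "datatypes": [], "precision": None}
    with open(path) as script:
        for line in script:
            line = line.strip()
            if line.startswith("#"):
                if "@preprocess" in line or "@test" in line:
                    # A new annotation starts a new block
                    blocks.append(current)
                    block_type = "test" if "@test" in line else "preprocess"
                    current = {"type": block_type, "commands": [],
                               "datatypes": [], "precision": None}
                # Collect the data type annotations for validation
                current["datatypes"] += re.findall(
                    r"@(file|raster3d|raster|vector|color3d|color|table)\b", line)
                match = re.search(r"@precision=(\d+)", line)
                if match:
                    current["precision"] = int(match.group(1))
            elif line:
                # Non-comment lines are the commands of the current block
                current["commands"].append(line)
    blocks.append(current)
    # Drop empty blocks, e.g. the initial one before the first annotation
    return [block for block in blocks if block["commands"]]

The CreateExecutionOrder() method described further below could build on such a scan.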
Tests reside in each module's and each library's directory. To test library functions, special test modules must be implemented. Library test modules call the library functions directly and should be written in C. Have a look at the test directories in the g3d, gpde and gmath libraries of grass7.
The framework should be able to generate and compare grass data types: raster, vector, raster3d, general, db, icon, imagery, d.*?
The wxGUI should be tested separately.
Automated tests run on a server should generate an HTML report. Several platforms should be tested.
Here are some examples already available in grass7:
test.v.random.sh
# This is a simple test for v.random
# We create several identical pseudo-random point maps :)
# using the seed option for rand and drand48
# In the @preprocess step we set up a suitable region
g.region n=80 s=0 w=0 e=120 res=10 -p
# First @test the rand function. Create a 3d vector map with attribute table
# The validation is based on @vector map with a @precision=3
v.random --o -z output=test_random_vect_1 n=20 zmin=0 zmax=100 seed=501
# Now the attribute @table should be validated. Both maps are identical
v.random --o -z output=test_random_vect_2 n=20 zmin=0 zmax=100 column=height seed=501
# Second @test the drand48 function. Create a 3d vector map with attribute table
# The validation is based on @vector map with a @precision=3
v.random --o -zd output=test_random_vect_3 n=20 zmin=0 zmax=100 seed=501
# Now the attribute @table should be validated. Both maps are identical
v.random --o -zd output=test_random_vect_4 n=20 zmin=0 zmax=100 column=height seed=501
# Export the generated data as references
# v.out.ascii --o format=point dp=3 input=test_random_vect_1 output=test_random_vect_1.ref
# db.select "select * from test_random_vect_2" > test_random_vect_2.ref
# v.out.ascii --o format=point dp=3 input=test_random_vect_3 output=test_random_vect_3.ref
# db.select "select * from test_random_vect_4" > test_random_vect_4.ref
The following reference files are located in the v.random directory:
-rw-r--r-- 1 soeren users 460 18. Jun 01:20 test_random_vect_1.ref
-rw-r--r-- 1 soeren users 257 18. Jun 01:20 test_random_vect_2.ref
-rw-r--r-- 1 soeren users 455 18. Jun 01:20 test_random_vect_3.ref
-rw-r--r-- 1 soeren users 256 18. Jun 01:20 test_random_vect_4.ref
test.r3.out.vtk.sh
# This script tests the export of voxel data
# into the VTK format. Almost all options of
# r3.out.vtk are tested. Validation data for each test
# is located in the module source directory
# We need to set a specific region in the
# @preprocess step of this test. We generate
# raster and voxel data with r.mapcalc and r3.mapcalc
# The region setting should work for UTM and LL test locations
g.region s=0 n=80 w=0 e=120 b=0 t=50 res=10 res3=10 -p3
# Now generate two elevation maps, we have 8 rows and use
# them for elevation computation. The rows are counted from north
# to south. So in the south the elevation must have a maximum.
r.mapcalc --o expr="elev_bottom = row()"
r.mapcalc --o expr="elev_top = row() + 50"
# Now create a voxel map with value = col + row + depth.
r3.mapcalc --o expr="volume = col() + row() + depth()"
# Add null value information
r3.mapcalc --o expr="volume_null = if(row() == 2 || row() == 7, null(), volume)"
# Create the rgb maps
r3.mapcalc --o expr="volume_rgb = volume_null * 5"
# The first @test just exports the volume map as cell and point data
# using a low precision and replaces the default null value with 0
# the created @files should be compared with the reference data
r3.out.vtk --o input=volume_null output=test_volume_null_1_cells.vtk dp=3 null=0
r3.out.vtk -p --o input=volume_null output=test_volume_null_1_points.vtk dp=3 null=0
# The second @test adds rgb and vector maps. We re-use the created volume map
# for vector creation. The rgb values must range from 0 to 255. The generated @files
# should be compared with the reference data.
r3.out.vtk --o rgbmaps=volume_rgb,volume_rgb,volume_rgb vectormaps=volume_null,volume_null,volume_null input=volume_null output=test_volume_null_1_cells_rgb_vect.vtk dp=3 null=-1.0
r3.out.vtk -p --o rgbmaps=volume_rgb,volume_rgb,volume_rgb vectormaps=volume_null,volume_null,volume_null input=volume_null output=test_volume_null_1_points_rgb_vect.vtk dp=3 null=-1.0
# The third @test uses raster maps to create volume data with an elevation surface
# The maximum elevation should be in the south. Reference @files are present for validation.
r3.out.vtk -s --o top=elev_top bottom=elev_bottom input=volume_null output=test_volume_null_1_cells_elevation.vtk dp=3 null=0
r3.out.vtk -sp --o top=elev_top bottom=elev_bottom input=volume_null output=test_volume_null_1_points_elevation.vtk dp=3 null=0
The following reference files are located in the r3.out.vtk directory:
-rw-r--r-- 1 soeren users 104556 16. Jun 18:14 test_volume_null_1_cells_elevation.ref
-rw-r--r-- 1 soeren users   3434 16. Jun 18:14 test_volume_null_1_cells.ref
-rw-r--r-- 1 soeren users  22749 16. Jun 18:14 test_volume_null_1_cells_rgb_vect.ref
-rw-r--r-- 1 soeren users  13360 16. Jun 18:14 test_volume_null_1_points_elevation.ref
-rw-r--r-- 1 soeren users   3435 16. Jun 18:14 test_volume_null_1_points.ref
-rw-r--r-- 1 soeren users  22750 16. Jun 18:14 test_volume_null_1_points_rgb_vect.ref
test.r3.cross.rast.sh
# This script tests the r3.cross.rast module to compute
# cross section raster maps based on a raster3d and elevation map
# We need to set a specific region in the
# @preprocess step of this test. We generate
# raster and voxel data with r.mapcalc and r3.mapcalc
# The region setting should work for UTM and LL test locations
g.region s=0 n=80 w=0 e=100 b=0 t=50 res=10 res3=10 -p3
# We create several elevation maps to create slices of the voxel map
# We start at the bottom and rise to the top. Values equal to or greater than 50
# should generate grass NULL values
r.mapcalc --o expr="elev_0 = 0"
r.mapcalc --o expr="elev_1 = 5"
r.mapcalc --o expr="elev_2 = 15"
r.mapcalc --o expr="elev_3 = 25"
r.mapcalc --o expr="elev_4 = 35"
r.mapcalc --o expr="elev_5 = 45"
r.mapcalc --o expr="elev_NAN = 50"
r.mapcalc --o expr="elev_cross = float(col()* 5)"
# Now create a voxel map with value = col + row + depth.
r3.mapcalc --o expr="volume = col() + row() + depth()"
# Add null value information
r3.mapcalc --o expr="volume_null = if(row() == 1 || row() == 5, null(), volume)"
# We @test the creation of slices and a cross section of the voxel map. Reference data
# for @raster map validation is located in the r3.cross.rast source directory.
# Slice 0 and 1 should be identical. The last slice should consist only of grass NULL values
r3.cross.rast --o input=volume_null elevation=elev_0 output=test_cross_section_slice_0
r3.cross.rast --o input=volume_null elevation=elev_1 output=test_cross_section_slice_1
r3.cross.rast --o input=volume_null elevation=elev_2 output=test_cross_section_slice_2
r3.cross.rast --o input=volume_null elevation=elev_3 output=test_cross_section_slice_3
r3.cross.rast --o input=volume_null elevation=elev_4 output=test_cross_section_slice_4
r3.cross.rast --o input=volume_null elevation=elev_5 output=test_cross_section_slice_5
r3.cross.rast --o input=volume_null elevation=elev_NAN output=test_cross_section_slice_NAN
r3.cross.rast --o input=volume_null elevation=elev_cross output=test_cross_section_result
The following reference files are located in the r3.cross.rast directory:
-rw-r--r-- 1 soeren users 264 16. Jun 18:14 test_cross_section_result.ref
-rw-r--r-- 1 soeren users 264 16. Jun 18:14 test_cross_section_slice_0.ref
-rw-r--r-- 1 soeren users 264 16. Jun 18:14 test_cross_section_slice_1.ref
-rw-r--r-- 1 soeren users 269 16. Jun 18:14 test_cross_section_slice_2.ref
-rw-r--r-- 1 soeren users 273 16. Jun 18:14 test_cross_section_slice_3.ref
-rw-r--r-- 1 soeren users 276 16. Jun 18:14 test_cross_section_slice_4.ref
-rw-r--r-- 1 soeren users 279 16. Jun 18:14 test_cross_section_slice_5.ref
-rw-r--r-- 1 soeren users 222 16. Jun 18:14 test_cross_section_slice_NAN.ref
test.r3.out.ascii.sh
# Tests for r3.out.ascii and r3.in.ascii
# This script tests the export of voxel data using r3.out.ascii
# as well as the import of the generated data with r3.in.ascii
# using different row and depth ordering options.
# We set up a specific region in the
# @preprocess step of this test. We generate
# voxel data with r3.mapcalc. The region setting
# should work for UTM and LL test locations
g.region s=0 n=80 w=0 e=120 b=0 t=50 res=10 res3=10 -p3
# Now create several (float, double, null value) voxel maps
# with value = col + row + depth.
r3.mapcalc --o expr="volume_float = float(col() + row() + depth())"
r3.mapcalc --o expr="volume_double = double(col() + row() + depth())"
# Add null value information
r3.mapcalc --o expr="volume_float_null = if(row() == 1 || row() == 5, null(), volume_float)"
r3.mapcalc --o expr="volume_double_null = if(row() == 1 || row() == 5, null(), volume_double)"
# We export float data in the first @test using different order and precision
# as text @files for validation of correct ordering and null data handling
r3.out.ascii --o input=volume_float_null output=test_float_nsbt_null.txt dp=0 null=*
r3.out.ascii --o -r input=volume_float_null output=test_float_snbt_null.txt dp=0 null=*
r3.out.ascii --o -d input=volume_float_null output=test_float_nstb_null.txt dp=0 null=*
r3.out.ascii --o -rd input=volume_float_null output=test_float_sntb_null.txt dp=0 null=*
# Different precision and null values than default
r3.out.ascii --o input=volume_float_null output=test_float_nsbt_null_prec5.txt dp=5 null=-1000
r3.out.ascii --o -rd input=volume_float_null output=test_float_sntb_null_prec8.txt dp=8 null=-2000
# Test the no header and grass6 compatibility flags
r3.out.ascii --o -h input=volume_float_null output=test_float_nsbt_null_no_header.txt dp=3 null=*
r3.out.ascii --o -c input=volume_float_null output=test_float_nsbt_null_grass6_comp_1.txt dp=3 null=*
# Any row or depth order should be ignored in case grass6 compatibility is enabled
# The result of comp_1, _2 and _3 must be identical
r3.out.ascii --o -cr input=volume_float_null output=test_float_nsbt_null_grass6_comp_2.txt dp=3 null=*
r3.out.ascii --o -crd input=volume_float_null output=test_float_nsbt_null_grass6_comp_3.txt dp=3 null=*
# We export double data in the second @test using different order and precision
# as text @files for validation of correct ordering and null data handling. It's the same
# procedure as with the float data
r3.out.ascii --o input=volume_double_null output=test_double_nsbt_null.txt dp=0 null=*
r3.out.ascii --o -r input=volume_double_null output=test_double_snbt_null.txt dp=0 null=*
r3.out.ascii --o -d input=volume_double_null output=test_double_nstb_null.txt dp=0 null=*
r3.out.ascii --o -rd input=volume_double_null output=test_double_sntb_null.txt dp=0 null=*
# Different precision and null values than default
r3.out.ascii --o input=volume_double_null output=test_double_nsbt_null_prec5.txt dp=5 null=-1000
r3.out.ascii --o -rd input=volume_double_null output=test_double_sntb_null_prec8.txt dp=8 null=-2000
# Test the no header and grass6 compatibility flags
r3.out.ascii --o -h input=volume_double_null output=test_double_nsbt_null_no_header.txt dp=3 null=*
r3.out.ascii --o -c input=volume_double_null output=test_double_nsbt_null_grass6_comp_1.txt dp=3 null=*
# Any row or depth order should be ignored in case grass6 compatibility is enabled
# The result of comp_1, _2 and _3 must be identical
r3.out.ascii --o -cr input=volume_double_null output=test_double_nsbt_null_grass6_comp_2.txt dp=3 null=*
r3.out.ascii --o -crd input=volume_double_null output=test_double_nsbt_null_grass6_comp_3.txt dp=3 null=*
# In the third @test we import all the generated data using r3.in.ascii
# The created @raster maps should be identical to the map "volume_double_null"
# The export of the created g3d map should use @precision=0 for data validation
# The same raster name is used for all the imported data and so for the validation reference file
r3.in.ascii --o output=test_double_nsbt_null input=test_double_nsbt_null.txt nv=*
r3.in.ascii --o output=test_double_nsbt_null input=test_double_snbt_null.txt nv=*
r3.in.ascii --o output=test_double_nsbt_null input=test_double_nstb_null.txt nv=*
r3.in.ascii --o output=test_double_nsbt_null input=test_double_sntb_null.txt nv=*
# Different precision and null values than default
r3.in.ascii --o output=test_double_nsbt_null input=test_double_nsbt_null_prec5.txt nv=-1000
r3.in.ascii --o output=test_double_nsbt_null input=test_double_sntb_null_prec8.txt nv=-2000
# Any row or depth order should be ignored in case grass6 compatibility is enabled
r3.in.ascii --o output=test_double_nsbt_null input=test_double_nsbt_null_grass6_comp_1.txt
# In this @preprocess step for the last test we create a large region and
# generate large input data to test the handling of large files
g.region s=0 n=800 w=0 e=1200 b=0 t=50 res=10 res3=1.5 -p3
r3.mapcalc --o expr="volume_double_large = double(col() + row() + depth())"
# Add null value information
r3.mapcalc --o expr="volume_double_null_large = if(row() == 1 || row() == 5, null(), volume_double_large)"
# Now @test the export and import of large data without validation
r3.out.ascii --o input=volume_double_null_large output=test_double_nsbt_null_large.txt dp=0 null=*
r3.in.ascii --o output=test_double_nsbt_null_large input=test_double_nsbt_null_large.txt nv=*
# Just for the logs
r3.info test_double_nsbt_null_large
The following reference files are located in the r3.out.ascii directory:
-rw-r--r-- 1 soeren users 2875 17. Jun 21:36 test_double_nsbt_null_grass6_comp_1.ref
-rw-r--r-- 1 soeren users 2875 17. Jun 21:36 test_double_nsbt_null_grass6_comp_2.ref
-rw-r--r-- 1 soeren users 2875 17. Jun 21:36 test_double_nsbt_null_grass6_comp_3.ref
-rw-r--r-- 1 soeren users 2751 17. Jun 21:36 test_double_nsbt_null_no_header.ref
-rw-r--r-- 1 soeren users 4103 17. Jun 21:36 test_double_nsbt_null_prec5.ref
-rw-r--r-- 1 soeren users 1463 17. Jun 21:36 test_double_nsbt_null.ref
-rw-r--r-- 1 soeren users 1463 17. Jun 21:36 test_double_nstb_null.ref
-rw-r--r-- 1 soeren users 1463 17. Jun 21:36 test_double_snbt_null.ref
-rw-r--r-- 1 soeren users 5183 17. Jun 21:36 test_double_sntb_null_prec8.ref
-rw-r--r-- 1 soeren users 1463 17. Jun 21:36 test_double_sntb_null.ref
-rw-r--r-- 1 soeren users 2875 17. Jun 21:36 test_float_nsbt_null_grass6_comp_1.ref
-rw-r--r-- 1 soeren users 2875 17. Jun 21:36 test_float_nsbt_null_grass6_comp_2.ref
-rw-r--r-- 1 soeren users 2875 17. Jun 21:36 test_float_nsbt_null_grass6_comp_3.ref
-rw-r--r-- 1 soeren users 2751 17. Jun 21:36 test_float_nsbt_null_no_header.ref
-rw-r--r-- 1 soeren users 4103 17. Jun 21:36 test_float_nsbt_null_prec5.ref
-rw-r--r-- 1 soeren users 1463 17. Jun 21:36 test_float_nsbt_null.ref
-rw-r--r-- 1 soeren users 1463 17. Jun 21:36 test_float_nstb_null.ref
-rw-r--r-- 1 soeren users 1463 17. Jun 21:36 test_float_snbt_null.ref
-rw-r--r-- 1 soeren users 5183 17. Jun 21:36 test_float_sntb_null_prec8.ref
-rw-r--r-- 1 soeren users 1463 17. Jun 21:36 test_float_sntb_null.ref
Test framework:
What the test framework should do:
- Creation of a test mapset for each test case in a specific test location located in the grass sources
- Setting the environment variables to the test location and grass installation (grass environment to run modules)
- Parsing, interpretation and execution of test scripts
- Support of several test scripts in a single module directory
- Running of location-specific test scripts (only LL or only UTM test scripts)
- Handling of module test return codes and stderr messages
- Validation of module output based on reference data and data type annotations in the test description
- Creation of an HTML report for single modules and for the whole test run
- Deletion of the generated test mapset to clean up the test location
Implementation
- All test cases must be implemented as shell-style text files, including annotations in comment blocks
- The test framework should be implemented in Python, parsing, analyzing and executing the test cases line by line
- The make system must be modified so that the tests can be started by typing "make tests"
Test framework Python approach
I suggest the following Python classes:
- TestBase class - implements the test logic
- LatLongTest class - for LatLong test location initialization and test run, derived from TestBase class
- UTMTest class - for UTM test location initialization and test run, derived from TestBase class
- CommandBase class - implements command-line specific methods and attributes and executes a command, logging the return value and stderr using the subprocess module
- PreProcess class - derived from CommandBase class for preprocess execution
- TestCase class - derived from CommandBase class for test execution
- ...
The TestBase class implements the following methods:
class TestBase:

    def StartGrassSession(self):
        """Abstract method, should be overwritten in a subclass"""
        pass

    def Run(self):
        """Create the environment variables for the grass session,
        create a mapset and execute all tests"""
        self.StartGrassSession()
        self.CreateTestMapset()
        self.CheckForReferenceFiles()
        self.ExecuteAll()

    def CreateTestMapset(self):
        """Creates a temporary mapset for the test using g.mapset"""
        pass

    def CreateExecutionOrder(self):
        """
        * Reads the entire test case into memory and parses the file line by line
        * Creates the execution order list as internal structure to store specific execution settings
        * For each command in the test case a specific CommandBase object is created and stored in the execution order list
        * TestCase objects store the validation type (@file, ...) and the validation precision (@precision=1) ... and maybe more
        """
        pass

    def ExecuteAll(self):
        """
        * Calls ExecuteTest or ExecutePreProcess for each entry in the execution order list
        * Summarizes the HTML and text summary content for each entry in the list
        """
        pass

    def ExecuteTest(self):
        """
        * Executes a single test command using the TestCase object
        * Logs stderr and return value from the TestCase object
        * Analyses return value and stderr in case of an error
        * Checks for newly generated maps/files of the command
        * Checks if reference files are present
        * Exports data with data specific commands (r/r3/v.out.ascii, r/r3.colors.out, ...)
        * Compares reference files with exported files
        * Creates HTML and text logfile entries
        """
        pass

    def CheckForReferenceFiles(self):
        """Lists all available reference files of a directory in an internal dict,
        base name, path and full name are stored"""
        pass

    def ExecutePreprocess(self):
        """
        * Executes a single preprocess command using the PreProcess object
        * Logs stderr and return value from the PreProcess object
        * Analyses return value and stderr in case of an error
        * Creates HTML and text logfile entries
        """
        pass

    def CompareVectorMap(self):
        """
        * Exports the vector map with v.out.ascii using the default or specific TestCase precision
        * Compares the exported file with the reference file
        """
        pass

    def CompareRasterMap(self):
        """
        * Exports the raster map with r.out.ascii using the default or specific TestCase precision
        * Compares the exported file with the reference file
        """
        pass

    def CompareRaster3dMap(self):
        """
        * Exports the raster3d map with r3.out.ascii using the default or specific TestCase precision
        * Compares the exported file with the reference file
        """
        pass

    def CompareColor(self):
        """
        * Exports the raster color table with r.colors.out
        * Compares the exported file with the reference file
        """
        pass

    def CompareColor3d(self):
        """
        * Exports the raster3d color table with r3.colors.out
        * Compares the exported file with the reference file
        """
        pass

    def CreateHTMLTestSummary(self):
        pass

    def CreateTextTestSummary(self):
        pass

    def CleanUp(self):
        """Remove the temporary test mapset and all generated test files"""
        pass

    # ... many more
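To illustrate the validation step described in the Compare* methods above, here is a minimal sketch, assuming the dp= precision option of the export modules and the reference file layout described earlier (the function name and the .out suffix for exported data are illustrative):

import difflib
import os
import subprocess

# Sketch of raster validation: export the generated map with r.out.ascii at
# the requested precision and diff it against the <map name>.ref file that
# is shipped in the module/library directory.
def compare_raster_map(mapname, refdir, precision=None):
    exported = mapname + ".out"
    cmd = ["r.out.ascii", "input=" + mapname, "output=" + exported]
    if precision is not None:
        cmd.append("dp=%d" % precision)
    proc = subprocess.Popen(cmd, stderr=subprocess.PIPE)
    errors = proc.communicate()[1]
    if proc.returncode != 0:
        return False, errors

    with open(exported) as new, open(os.path.join(refdir, mapname + ".ref")) as ref:
        diff = list(difflib.unified_diff(ref.readlines(), new.readlines()))
    return len(diff) == 0, "".join(diff)

The vector, raster3d, color and table comparisons would follow the same pattern with v.out.ascii, r3.out.ascii, r.colors.out/r3.colors.out and db.select.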
The LatLongTest implements the following method:
- StartGrassSession
- starts the grass7 session in a LatLong test location
The UTMTest implements the following method:
- StartGrassSession
- starts the grass7 session in a UTM test location
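A possible sketch of such a subclass, assuming the test location is selected by writing a temporary GISRC file (the location name, the gisdbase attribute and the mapset are placeholders, the exact session setup still needs to be decided):

import os
import tempfile

class LatLongTest(TestBase):
    """Runs the test scripts in the LatLong test location."""

    def StartGrassSession(self):
        # Sketch only: point the GRASS runtime at the LatLong test location
        # shipped with the sources by writing a temporary GISRC file.
        gisrc = os.path.join(tempfile.mkdtemp(), "gisrc")
        with open(gisrc, "w") as rc:
            rc.write("GISDBASE: %s\n" % self.gisdbase)  # hypothetical attribute
            rc.write("LOCATION_NAME: test_latlong\n")   # placeholder location name
            rc.write("MAPSET: PERMANENT\n")
        os.environ["GISRC"] = gisrc
        # UTMTest would do the same with the UTM test location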
The CommandBase class:
- Uses the subprocess module to execute a command
- Is able to identify some shell-specific operators in the command line ("&&", "|", "<", ">")
- Acts as a very simple shell interpreter
- If "|" is found, connects the commands in the command line with subprocess.PIPE
- If ">" or "<" is found, redirects stdout or stdin to or from a file
- No support for complex shell functionality (for, test, ...)
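A minimal sketch of the "|" handling, assuming the command line has already been read from a test script (names are illustrative):

import shlex
import subprocess

# Sketch of the "|" support: split the command line at the pipe symbols and
# chain the resulting processes with subprocess.PIPE, as described above.
def run_pipeline(command_line):
    stages = [shlex.split(part) for part in command_line.split("|")]
    previous = None
    processes = []
    for stage in stages:
        proc = subprocess.Popen(stage,
                                stdin=previous.stdout if previous else None,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        processes.append(proc)
        previous = proc
    stdout, stderr = processes[-1].communicate()
    return processes[-1].returncode, stdout, stderr

The ">" and "<" cases could be handled analogously by opening the file and passing it as stdout or stdin to subprocess.Popen.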
Make system
New rules should be added to the make system so that:
make test ll
Will recursively start library and/or module tests from the current directory in the LatLong test location. The make system should execute the test framework's main Python file in each module or library directory. I guess test-specific entries must be added to the module and library Makefiles?
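As a sketch of what the per-directory entry point invoked by such a make rule could look like (the module name, the class constructors and the errors attribute are assumptions based on the classes described above):

import glob
import sys

# Hypothetical import of the test classes sketched above
from testframework import LatLongTest, UTMTest

# Sketch of the entry point executed by the make rule in each module or
# library directory: discover all test.*.sh scripts and run them in the
# requested test location.
def main(location="ll"):
    test_class = LatLongTest if location == "ll" else UTMTest
    failed = 0
    for script in sorted(glob.glob("test.*.sh")):
        test = test_class(script)        # constructor signature is an assumption
        test.Run()
        test.CreateHTMLTestSummary()
        test.CreateTextTestSummary()
        test.CleanUp()
        failed += getattr(test, "errors", 0)  # hypothetical error counter
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "ll"))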
Timeline and status
TBD
Interested people
- Sören Gebbert
- Anne Ghisla
- Martin Landa
- Add your name here