Parallel GRASS jobs
The goal of running GRASS GIS jobs in parallel is to speed up computation.
Background
This presentation from 2022 is a "must see":
Tips for parallelization in GRASS GIS: https://htmlpreview.github.io/?https://github.com/petrasovaa/FUTURES-CONUS-talk/blob/main/foss4g2022.html
Here is what you should know about GRASS's behaviour when running multiple jobs:
- You can run multiple processes in multiple locations (see https://grass.osgeo.org/grass-stable/manuals/helptext.html for what a location is). Peaceful coexistence.
- You can run multiple processes in the same mapset, but only if the region is untouched. If you are unsure, it's recommended to launch each job in its own mapset within the location.
- You can run multiple processes in the same location, but in different mapsets. Peaceful coexistence.
Approaches
See also the Parallelizing Scripts wiki page.
File locking
GRASS doesn't perform any locking on the files within a GRASS database, so the user may end up with one process reading a file while another process is in the middle of writing it. The most problematic case is the WIND file, which contains the current region, although there are others.
If a user wants to run multiple commands concurrently, steps need to be taken to ensure that this type of conflict doesn't happen. For the current region, the user can use the WIND_OVERRIDE environment variable to specify a named region which should be used instead of the WIND file.
Or the user can use the GRASS_REGION environment variable to specify the region parameters (the syntax is the same as the WIND file, but with newlines replaced with semicolons). With this approach, the region can only be read, not modified.
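For illustration, here is a minimal Python sketch (not from the original page; it assumes GRASS 8's grass.script API and a hypothetical raster named "elevation") that passes a private region to a single command via GRASS_REGION, leaving the shared WIND file untouched:

import os
import grass.script as gs

env = os.environ.copy()
# region_env() returns g.region-style settings formatted for GRASS_REGION
env["GRASS_REGION"] = gs.region_env(raster="elevation", res=30)
# only this command sees the private region; other processes are unaffected
gs.run_command("r.univar", map="elevation", env=env)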
Problems can also arise if the user reads files from another mapset while another session is modifying those files. The WIND file isn't an issue here, nor are the files containing raster data (which are updated atomically), but the various support files may be.
See below for ways around these limitations.
Working with tiles
Huge map reprojection example:
Q: I'd like to try splitting a large raster into small chunks, projecting each one separately, and sending each r.proj command to the background. The problem is that if the GRASS command changes the region settings, things might not work.
A: r.proj doesn't change the region.
Processing the map in chunks requires setting a different region for each command. That can be done by creating named regions and using the WIND_OVERRIDE environment variable, e.g.:
g.region ... save=region1
g.region ... save=region2
...
WIND_OVERRIDE=region1 r.proj ... &
WIND_OVERRIDE=region2 r.proj ... &
...
(for python see the grass.use_temp_region() function)
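As a minimal Python sketch (assuming GRASS 8; the location, mapset, and map names are hypothetical), the equivalent of the WIND_OVERRIDE approach above looks like this:

import grass.script as gs

gs.use_temp_region()  # subsequent g.region calls affect only this process
gs.run_command("g.region", n=228500, s=215000, e=645000, w=630000, res=10)
gs.run_command("r.proj", location="ll_wgs84", mapset="PERMANENT",
               input="srtm", output="srtm_tile1", method="bilinear")
gs.del_temp_region()  # restore the previous region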
The main factor which is likely to affect parallelism is the fact that the processes won't share their caches, so there'll be some degree of inefficiency if there's substantial overlap between the source areas for the processes.
If you have more than one such map to project, processing entire maps in parallel might be a better choice (so that you get N maps projected in 10 hours rather than 1 map in 10/N hours).
Parallelized code
OpenMP
Good for a single system with a multi-core CPU.
Configure GRASS 7 with:
./configure --with-openmp
GPDE using OpenMP
The only parallelized libraries are the GRASS Partial Differential Equations Library (GPDE) in GRASS >= 6.3 and the gmath library in GRASS 7. Read more in OpenMP.
Python
PyGRASS ParallelModuleQueue: http://grass.osgeo.org/grass73/manuals/libpython/pygrass.modules.interface.html?highlight=parallelmodulequeue#pygrass.modules.interface.module.ParallelModuleQueue
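A short example of the queue (map names are hypothetical; the API is documented at the link above):

from grass.pygrass.modules import Module, ParallelModuleQueue

queue = ParallelModuleQueue(nprocs=4)  # run at most 4 modules at a time
for name in ["elev_tile1", "elev_tile2", "elev_tile3"]:
    # run_=False builds the module without executing it; the queue starts it
    m = Module("r.slope.aspect", elevation=name, slope=name + "_slope",
               run_=False)
    queue.put(m)
queue.wait()  # block until all queued modules have finished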
pthreads
Note: only used in the r.mapcalc parser!
Good for a single system with a multi-core CPU.
Configure GRASS 7 with:
./configure --with-pthread
The parser of r.mapcalc in GRASS 7 has been parallelized using GNU pthreads. The computation itself is executed serially.
Bourne and Python Scripts
Good for a single system with a multi-core CPU.
Often very easy, and can be done without modifying the main source code.
- See the Parallelizing Scripts wiki page
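For example, a minimal Python sketch (not from the original page; map names are hypothetical) using the standard multiprocessing module, run inside an existing GRASS session. It is safe here because r.slope.aspect does not modify the region:

from multiprocessing import Pool

import grass.script as gs

def worker(name):
    # each worker runs one module; the shared region is never changed
    gs.run_command("r.slope.aspect", elevation=name, slope=name + "_slope")

if __name__ == "__main__":
    maps = ["elev_tile1", "elev_tile2", "elev_tile3"]
    with Pool(processes=3) as pool:
        pool.map(worker, maps)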
OpenMPI
Good for a multi-system cluster connected by a fast network.
The GIPE i.vi.mpi addon module has been created as an MPI (Message Passing Interface) implementation of the GIPE i.vi addon module.
- See also the Agriculture and HPC wiki page.
MPI Programming
There is a sample implementation at module level in i.vi.mpi: https://github.com/OSGeo/grass-addons/tree/grass8/src/imagery/i.vi.mpi
GPU Programming
Good for certain kinds of calculations (e.g. ray-tracing) on a single system with a fast graphics card.
There is a version of the r.sun module which has been modified to use OpenCL (it works, but is still experimental).
- See GPU
Configure GRASS GIS with:
./configure --with-opencl
Cluster and Grid computing
A cluster or grid computing system consists of a number of computers that are tightly coupled together. The manager or master controls the utilization of compute nodes.
Job scheduler
Common job schedulers are SLURM (https://slurm.schedmd.com/), TORQUE (http://www.adaptivecomputing.com/products/open-source/torque/), PBS (http://www.pbspro.org/), Son of Grid Engine (https://arc.liv.ac.uk/trac/SGE), and Kubernetes (https://kubernetes.io/).
See also the Rosetta Stone of Workload Managers: https://slurm.schedmd.com/rosetta.html
A job consists of tasks, e.g. the processing of a single raster map in a time series of many raster maps. Jobs are assigned to a queue and started as soon as a slot in the queue is free; they are removed from the queue once they have finished.
GRASS on a cluster
If you want to launch several GRASS jobs in parallel, consider launching each job in its own mapset.
- set up chunks of data to be processed (temporal or spatial, temporal chunks are usually easier to handle)
- write a script with the actual processing of one chunk
- write a script that initializes GRASS, creates a unique mapset, executes the script with the actual processing, and copies the results to a common mapset
- add that script as a task to a job; create one job for each data chunk
The general concept is to create a script that
- creates a unique temporary mapset
- creates a unique temporary GISRC file for this mapset
- does the processing in this mapset
- changes to the target mapset by updating the GISRC file (verify with g.gisenv)
- copies results from the temporary mapset to the target mapset
- deletes the temporary mapset and GISRC file
Such a script should not call the grass startup program (grassXY) and must be executable outside of GRASS, because it establishes a GRASS session, performs the processing, and closes the session all by itself. See also GRASS and Shell.
Such a script takes arguments that specify the particular chunk of data to be processed.
A job specification for an HPC job scheduler would then contain this script with specific arguments.
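A minimal Python sketch of such a script, assuming GRASS >= 8.2 (where grass.script.setup.init returns a session handle); the database path, location, target mapset "results", and the r.slope.aspect processing step are all hypothetical placeholders:

import os
import shutil
import sys
import uuid

import grass.script as gs
import grass.script.setup as gsetup

gisdbase, location = "/data/grassdata", "myproject"
chunk = sys.argv[1]  # name of the raster map (data chunk) to process
tmp_mapset = "tmp_" + uuid.uuid4().hex  # unique temporary mapset name

# establish a GRASS session; this writes a private GISRC for this process
session = gsetup.init(gisdbase, location, "PERMANENT")
gs.run_command("g.mapset", flags="c", mapset=tmp_mapset)  # create and switch

# the actual processing in the temporary mapset (placeholder task)
gs.run_command("r.slope.aspect", elevation=chunk, slope=chunk + "_slope")

# switch to the common target mapset (verify with g.gisenv if unsure)
gs.run_command("g.mapset", mapset="results")
gs.run_command("g.copy", raster=f"{chunk}_slope@{tmp_mapset},{chunk}_slope")

# remove the temporary mapset and close the session (cleans up the GISRC)
shutil.rmtree(os.path.join(gisdbase, location, tmp_mapset))
session.finish()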
The most common bottleneck when using GRASS on a cluster is disk I/O. Try starting the jobs with nice/ionice to reduce strain on the storage devices.
Cloud computing
GRASS GIS runs in the cloud as a web processing service backend. Have a look at:
GRASS 7 in the cloud (by Sören Gebbert): https://www.youtube.com/watch?v=jg2pb_Xjq8Y
This Open Cloud GIS has been set up in a private Amazon-compatible cloud environment using:
- Ubuntu 10.04 LTS and 10.10 cloud server edition
- Eucalyptus Cloud
- GRASS GIS 7 latest svn
- PyWPS latest svn
- wps-grass-bridge latest svn
- QGIS 1.7 with a modified QWPS plugin
For latest development, visit: https://github.com/actinia-org/actinia-core
GRASS GIS on VPS
Instructions for running GRASS GIS on a commercial VPS for memory-intensive operations:
https://plantarum.ca/2014/08/19/medium-performance-cluster-computing/
Hints for NFS users
- AVOID script forking on the cluster (inclusion via ". script.sh" works fine). This means that the GRASS_BATCH_JOB approach is prone to fail. It is highly recommended to simply set a series of environment variables to define the GRASS session; see the GRASS and Shell wiki page for how to do that.
- Be careful with concurrent file writing (use "lockfile" locking; the lockfile command is provided by procmail).
- Store as much temporary data as possible (even the final maps) on the local blade disks, if available.
- Finally, collect all results from the local blade disks *after* the parallel job execution, in a sequential collector job, so as not to overload NFS. For example, Grid Engine offers a "hold" function to execute the collector job only after the rest have finished.
- If all else fails, and the I/O load is not too great, consider using sshfs with ssh passkeys instead of NFS.
- In some situations it is necessary to preserve the same directory structure on all nodes, and symlinks are a nice way to do that, but some (closed source 3rd party which will remain nameless) software insists on expanding symlinks. In this situation the bindfs FUSE extension can help. It is safer to use than "mount" binds, and you don't have to be root to set them up. As with sshfs there is a performance penalty so it may not be appropriate in high I/O situations.
Error: Too many open files
When working with long time series, r.series may start to complain that files are missing or not readable, or fail with the message:
Too many open files
For a solution, see Large_raster_data_processing#Number_of_open_files_limitation.
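As a stopgap within a Python script (a hedged sketch; the linked page covers the proper system-level fix via ulimit), the per-process limit can be raised up to the hard limit on Unix-like systems:

import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# raise the soft limit for open files as far as the hard limit allows
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))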
Misc Tips & Tricks
See the poor-man's multi-processing script on the Parallelizing Scripts wiki page. This approach has been used in the r3.in.xyz script.
Workshop on Parallelization
Haedrich, C., Petrasova, A.: Parallelization for big EO data processing, OpenGeoHub Summer School 2023
- https://doi.org/10.5446/63123
Schedule
Session 1
- [Slides] Introduction to Parallelization, GRASS GIS and Python
- [Lab] Introduction to Parallelization with GRASS GIS and Python Notebook
Session 2
- [Lab] Parallelization Case Study: Urban Growth Modeling
Jupyter Notebooks:
- https://github.com/ncsu-geoforall-lab/opengeohub-2023
- https://github.com/ncsu-geoforall-lab/opengeohub-2023/blob/main/01_intro_to_GRASS_parallelization.ipynb
See also
This Wiki:
- GRASS batch jobs (by setting environment variables)
- The OpenMP wiki page.
- The Parallelizing Scripts wiki page.
- GPU computing
Elsewhere:
- Building a cluster for GRASS GIS and other software from the OSGeo stack: https://neteler.org/blog/building-a-cluster-for-grass-gis-and-other-software-from-the-osgeo-stack/
- Using supercomputers for spatial analysis: https://research.csc.fi/geocomputing
- GRASS batch job and parallelization examples for a supercomputer with SLURM: https://github.com/csc-training/geocomputing/tree/master/grass