Processing lidar and UAV point clouds in GRASS GIS (workshop at FOSS4G Boston 2017)



Contributors: Robert S. Dzur and Doug Newcomb


== Preparation ==
'''OSGeo-Live'''


All needed software is included in [http://live.osgeo.org/ OSGeo-Live].


''' Ubuntu '''
<pre>
sudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable
sudo apt-get update
sudo apt-get install grass grass-dev
</pre>
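To check that the installation includes the lidar modules used below, you can, for example, start a GRASS GIS session and run the following (if <code>r.in.lidar</code> is reported as not found, the build lacks libLAS support):

<pre>
g.version
r.in.lidar --help
</pre>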


''' Linux '''


For Linux distributions other than Ubuntu, please try to find GRASS GIS in their package managers or refer to [https://grass.osgeo.org/download/software/linux/ grass.osgeo.org].


'''MS Windows'''
Download the standalone GRASS GIS binaries from grass.osgeo.org.

''' Mac OS '''

Install GRASS GIS using Homebrew osgeo4mac:

<pre>
brew tap osgeo/osgeo4mac
brew install numpy
brew install liblas --build-from-source
brew install grass7 --with-liblas
</pre>


If you have these already installed, you need to use <code>reinstall</code> instead of <code>install</code>.
 
Note that on Mac OS in some versions the {{cmd|r.in.lidar}} module is not accessible, so you need to check this and use {{cmd|r.in.ascii}} in combination with libLAS or PDAL command line tools to achieve the same or, preferably, use OSGeo-Live.
 
Note also that there is currently no recent Mac OS installation in which the 3D view is accessible.


=== Addons ===
=== Data ===


We will need the following two ZIP files, downloaded and extracted:
 
* [http://fatra.cnr.ncsu.edu/foss4g2017/nc_orthophoto_1m_spm.zip orthophoto]
* [http://fatra.cnr.ncsu.edu/foss4g2017/nc_tile_0793_016_spm.zip point cloud]
 
Later, for a bonus task, download also this (much larger) file:
 
* [http://fatra.cnr.ncsu.edu/foss4g2017/nc_uav_points_spm.zip UAV point cloud]


== Basic introduction to graphical user interface ==
=== Importing data ===


In this step we import the provided data into GRASS GIS. In menu ''File - Import raster data'' select ''Common formats import'' and in the dialog browse to find the orthophoto file, change the name to <tt>ortho</tt>, and click button ''Import''. All the imported layers should be added to GUI automatically, if not, add them manually. Point clouds will be imported later in a different way as part of the analysis.


<center><gallery perrow=3 widths=400 heights=300>
  g.region raster=ortho -p


Resolution can be set separately using the <code>res</code> parameter of the {{cmd|g.region}} module. The units are the units of the current location, in our case meters. This can be done in the ''Resolution'' tab of the {{cmd|g.region}} dialog or in the command line in the following way (using also the <code>-p</code> flag to print the new values):


  g.region res=3 -p
=== Modules ===


GRASS GIS functionality is available through ''modules'' (which are sometimes called tools, functions, or commands). Modules respect the following naming conventions:


<center>
Image:Wxgui_module_parameters_r_neighbors.png|Layout of a module dialog
Image:wxGUI_module_search.png|Search for a module in the ''Modules'' tab
Image:G search modules with c flag.png|Search for a module using advanced search with {{cmd|g.search.modules}}
</gallery></center>


Finally, you may have noticed the first line of the script which says <code>#!/usr/bin/env python</code>. This is what Linux, Mac OS, and similar systems use to determine which interpreter to use. If you get something like <code>[Errno 8] Exec format error</code>, this line is probably incorrect or missing.


== Decide whether to use GUI, command line, Python, or Jupyter Notebook ==


* GUI: can be combined anytime with command line (recommended, especially for beginners, combine with copy and paste to Console)
* command line: can be combined at any time with the GUI; most of the following instructions will be for the command line (but can be easily transferred to GUI or Python); both the system command line and the Console tab in the GUI will work well
* Python: you will need to change the syntax to Python; use snippets from this static [https://github.com/wenzeslaus/Notebook-for-processing-point-clouds-in-GRASS-GIS/blob/master/notebooks/workshop_python.ipynb notebook]
* Jupyter Notebook locally: this is recommended only if you are using OSGeoLive or if you are on Linux and are familiar with Jupyter and GRASS GIS; download notebook from [https://raw.githubusercontent.com/wenzeslaus/Notebook-for-processing-point-clouds-in-GRASS-GIS/master/notebooks/workshop_python.ipynb github.com/wenzeslaus/Notebook-for-processing-point-clouds-in-GRASS-GIS]
* Jupyter Notebook online: link distributed by the instructor (backup option if other options fail)


== Binning of the point cloud ==
</gallery></center>


The fastest way to analyze basic properties of a point cloud is to use binning and create a raster map. We will now use {{cmd|r.in.lidar}} to create a point count (point density) raster. At this point, we don't know the spatial extent of the point cloud, so we cannot set the computational region ahead of time, but we can tell the {{cmd|r.in.lidar}} module to determine the extent first and use it for the output using the <code>-e</code> flag. We are using a coarse resolution to maximize the speed of the process. Additionally, the <code>-n</code> flag sets the computational region to match the output.


  r.in.lidar input=nc_tile_0793_016_spm.las output=count_10 method=n -e -n resolution=10


Now we can see the distribution pattern, but let's examine also the numbers (in GUI using right click on the layer in ''Layer Manager'' and then Metadata or using {{cmd|r.info}} directly):
Since there are many points per cell, we can use a finer resolution:


  r.in.lidar input=nc_tile_0793_016_spm.las output=count_1 method=n -e -n resolution=1


Look at the distribution of the values using a histogram. The histogram is accessible from the context menu of a layer in ''Layer Manager'', from the ''Map Display'' toolbar ''Analyze map'' button, or using the {{cmd|d.histogram}} module (<code>d.histogram map=count_1</code>).


Now we could visualize the resulting DSM by computing shaded relief with {{cmd|r.relief}} or by using 3D view (see the following sections).
<center><gallery perrow=3 widths=400 heights=270>
Image:Map Display with DSM and legend with histogram.png|DSM with legend and histogram.
</gallery></center>


== Terrain analysis ==
[[File:R in lidar explanation of zrange option.png|300px|thumb|right|The <code>zrange</code> parameter filters out points which don't fall into the specified range of Z values]]


Although for many vegetation-related applications a coarser resolution is more appropriate because more points are needed for the statistics, we will use just:


  g.region raster=count_1
  r.in.lidar -d input=nc_tile_0793_016_spm.las output=height_above_ground method=max base_raster=terrain zrange=0,100


Now get all the areas which are higher than 2 m.
In case we want to preserve the elevation in the output,
we could use <code>if(height_above_ground > 2, height_above_ground, null())</code> in the expression,
but we want just the areas, so we use a constant (<code>1</code>).
   
   
  r.mapcalc "above_2m = if(height_above_ground > 2, 1, null())"
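To get a quick idea of how large the detected areas are, you could, for example, report the result in hectares and percent of the region:

  r.report map=above_2m units=h,p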


  r.mapcalc "trees = if(dsm_forms==2, 1, null())"
<center><gallery perrow=2 widths=400 heights=200>
Image:R geomorphon lidar dsm roofs.png|{{cmd|r.geomorphon}} settings which show well the shapes of roofs
Image:R geomorphon lidar dsm vegetation.png|Different {{cmd|r.geomorphon}} settings showing vegetated areas
Image:R geomorphon lidar dsm vegetation details.png|More common setting of {{cmd|r.geomorphon}} resulting in detailed description of DSM shapes
Image:R geomorphon lidar dsm trees.png|Selected peaks from {{cmd|r.geomorphon}} showing positions of trees (tree crowns)
</gallery></center>


== Bonus tasks ==


We can explore our study area in 3D view (use image on the right if clarification is needed for below steps):
# Add raster <tt>dsm</tt> and uncheck or remove any other layers. Note that unchecking (or removing) any other layers is important because any layer loaded in ''Layers'' is interpreted as a surface in 3D view.
# Switch to 3D view (in the right corner of Map Display).
# Adjust the view (perspective, height).
</gallery></center>


=== Analytical visualization of rasters in 3D ===

We can explore our study area in 3D view (use image on the right if clarification is needed for the steps below):
# Add rasters <tt>terrain</tt> and <tt>dsm</tt> and uncheck or remove any other layers. (Any layer in ''Layers'' will be interpreted as a surface in 3D view.)
# Switch to 3D view (in the right corner of ''Map Display'').
# Adjust the view (perspective, height). Set z-exaggeration to 1.
# In the ''Data'' tab, in Surface, set ''Fine mode resolution'' to 1 for both rasters.
# Set a different color for each surface.
# For the <tt>dsm</tt>, set the position in Z direction to 1. This is a position relative to the actual position of the raster. This small offset will help us see the relation between the terrain and the DSM surfaces.
# Go to the ''Analysis'' tab and activate the first cutting plane. Set ''Shading'' to ''bottom color'' to get the bottom color to color the space between the terrain and DSM.
# Start sliding the sliders for ''X'', ''Y'' and ''Rotation''.
# Go back to the ''View'' tab in case you want to change the view.
# When finished, switch back to 2D view.

<center><gallery perrow=3 widths=380 heights=190>
Image:Nviz data tab dsm set relative positon.png|In the Data tab of 3D view, set the same fine mode resolution for both surfaces. Choose different colors. Set the (relative) position of the DSM to 1 m (above its actual position).
Image:Nviz cutting plane analysis tab.png|In the Analysis tab of 3D view, activate the cutting plane, set shading to bottom color and start sliding the sliders for X, Y and rotation.
Image:Nviz_cutting_plane_dem_dsm.png|To see a longer part of the transect, stretch the window horizontally.
</gallery></center>


=== Classify ground and non-ground points ===
UAV point clouds usually require classification of ground points (bare earth) when we want to create ground surface.


Import all the points using the {{cmd|v.in.lidar}} command in the previous section, unless you already have them. Since the projection metadata in the file are wrong, we use the <code>-o</code> flag to skip the projection consistency check.
 
r.in.lidar -e -n -o input=nc_uav_points_spm.las output=uav_density_05 method=n resolution=0.5
 
Set a small computational region to make all the computations faster (skip this to operate in the whole area):
 
g.region  n=219415 s=219370 w=636981 e=637039
 
Import the points using the {{cmd|v.in.lidar}} module but limit the extent just to the computational region (<code>-r</code> flag), do not build topology (<code>-b</code> flag), do not create an attribute table (<code>-t</code> flag), and do not assign categories (ids) to points (<code>-c</code> flag). There are more points than we need for interpolating at the 0.5 m resolution, so we will decimate (thin) the point cloud during import, removing 75% of the points using <code>preserve=4</code> (count-based decimation, which assumes a spatially uniform distribution of the points).


  v.in.lidar -t -c -b -r -o input=nc_uav_points_spm.las output=uav_points preserve=4


Then use {{addonCmd|v.lidar.mcc}} to classify ground and non-ground points:


  v.lidar.mcc input=points_all ground=mcc_ground nonground=mcc_nonground


Interpolate the ground surface:
  v.surf.rst input=mcc_ground tension=20 smooth=5 npmin=100 elevation=mcc_ground


If you are using the UAV data, you can extract the RGB values from the points. First, import the points again, but this time with all the attributes:
 
v.in.lidar -r -o input=nc_uav_points_spm.las output=uav_points preserve=4
 
Then extract the RGB values (stored in attribute columns) into separate channels (rasters):
 
v.to.rast input=uav_points output=red use=attr attr_col=red
v.to.rast input=uav_points output=green use=attr attr_col=green
v.to.rast input=uav_points output=blue use=attr attr_col=blue
 
And then set the color table to grey:
 
r.colors map=red color=grey
r.colors map=green color=grey
r.colors map=blue color=grey
 
These channels could be used for some remote sensing tasks. Now we will just visualize them using {{cmd|d.rgb}}:
 
d.rgb red=red green=green blue=blue
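If you want a single composite raster instead of an on-the-fly display, the three channels could also be combined with {{cmd|r.composite}} (a minimal sketch; <code>rgb_composite</code> is just an example output name):

r.composite red=red green=green blue=blue output=rgb_composite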
 
If you are using the lidar data for this part, you can compare the new DEM with the previous dataset:


  r.mapcalc "mcc_lidar_difference = ground - mcc_ground"


  r.colors map=mcc_lidar_difference color=difference
=== 3D visualization of point clouds ===

Unless you have already done that, familiarize yourself with the 3D view by doing the exercise above. Then import all points but this time do not add them to the ''Layer Manager'' (there is a check box at the bottom of the module dialog).

v.in.lidar -obt input=nc_tile_0793_016_spm.las output=points_all zrange=60,200

Now we can explore point clouds in 3D view:
# Uncheck or remove any other layers. (Any layer in Layer Manager is interpreted as surface in 3D view.)
# Switch to 3D view (in the right corner of Map Display).
# When 3D is active, ''Layer Manager'' has a new icon for the default 3D view settings, which we need to open and change.
# In the settings, change settings for default point shape from sphere to X.
# Then add the <tt>points_all</tt> vector.
# You may need to adjust the size of the points if they are all merged together.
# Adjust the view (perspective, height).
# Go back to ''View'' tab and explore the different view directions using the green puck.
# Go to ''Appearance'' tab and change the light conditions (lower the light height, change directions).
# When finished, switch back to 2D view.
<center><gallery perrow=3 widths=380 heights=190>
Image:WxGUI nviz with point cloud ground and non-ground data tab.png|UAV point cloud classified into ground points (orange) and non-ground points (green).
Image:Selection 409.png|Detail showing trees and outliers.
</gallery></center>


=== Alternative DSM creation ===
Speed optimizations:


* ''Rasterize early.''
** For many use cases, there are fewer cells than points. Additionally, rasters will be faster, e.g. for visualization, because rasters have a natural spatial index. Finally, many algorithms simply use rasters anyway.
* If you cannot rasterize, see if you can decimate (thin) the point cloud (using {{cmd|v.in.lidar}}, {{cmd|r.in.lidar}} + {{cmd|r.to.vect}}, {{cmd|v.decimate}}).
* Binning (e.g. {{cmd|r.in.lidar}}, {{cmd|r3.in.lidar}}) is much faster than interpolation and can carry out part of the analysis.
* Faster count-based decimation often performs the same on a given point cloud for terrain as slower grid-based decimation (Petras et al. 2016).
* r.in.lidar
** choose computational region extent and resolution ahead of time
Memory requirements:


* for r.in.lidar
** depend on the size of the output and the type of analysis
** can be reduced by the <tt>percent</tt> option
** <tt>ERROR: G_malloc: unable to allocate ... bytes of memory</tt> means that there is not enough memory for the output raster; use the <tt>percent</tt> option, a coarser resolution, or a smaller extent
** on Linux, the available memory for a process is RAM + SWAP partition, but when SWAP is used, processing will be much slower
* for v.in.lidar
** low when not building topology (<tt>-b</tt> flag)
** to build topology while keeping memory use low, use <tt>export GRASS_VECTOR_LOWMEM=1</tt> (<tt>os.environ['GRASS_VECTOR_LOWMEM'] = '1'</tt> in Python)


Limits:
* Vector features with topology are limited to about 2 billion features per vector map (2^31 - 1).
* Points without topology are limited only by disk space and what the modules are able to process later. (Theoretically, the count is limited only by the 64bit architecture, which would be 16 exbibytes per file, but the actual value depends on the file system.)
* For large vectors with attributes, the PostgreSQL backend is recommended for the attributes ({{cmd|v.db.connect}}).
* There are more limits in the 32bit versions for operations which require memory. (Since 2016 there is a 64bit version even for MS Windows. Even the 32bit versions have large file support (LFS) and can work with files exceeding the 32bit limitations.)
* Reading and writing to disk (I/O) usually limits speed. (This can be made faster for rasters since 7.2 using different compression algorithms set using the <tt>GRASS_COMPRESSOR</tt> variable.)
* There is a limit on the number of open files set by the operating system.
** The limit is often 1024. On Linux, it can be changed using ''ulimit''.
* Individual modules may have their own limitations or behaviors with large data based on the algorithm they are using. However, in general modules are made to deal with large datasets. For example,
** {{cmd|r.watershed}} can process 90,000 x 100,000 cells (9 billion cells) in 77.2 hours (2.93GHz), and
** {{cmd|v.surf.rst}} can process 1 million points and interpolate 202,000 cells in 13 minutes.
* Read the documentation as it provides descriptions of limitations for each module and ways to deal with them.
* Write to the [https://lists.osgeo.org/listinfo/grass-user grass-user mailing list] or the [http://trac.osgeo.org/grass/ GRASS GIS bug tracker] in case you hit limits. For example, if you get a negative number as the number of points or cells, open a ticket.


== Bleeding edge ==


[[File:Tangible landscape and blender with water and trees.jpg|300px|thumb|right|Tangible Landscape uses r.in.kinect to scan a sand model which is processed and analyzed in GRASS GIS and optionally visualized in Blender]]
[[File:Seamless fusion of high-resolution DEMs from multiple sources.png|300px|thumb|right|Seamless fusion of high-resolution DEMs from multiple sources with r.patch.smooth: example of computing overlap zone for fusion]]
[[File:Raster3d example small data.png|300px|thumb|right|Vegetation fragmentation computed by {{addonCmd|r3.forestfrag}} from lidar point cloud represented as 3D raster]]


** v.profile.points
* PDAL integration
** prototypes: v.in.pdal, r.in.pdal
** use PDAL as backend for import and export – v.in.pdal, v.in.points, or v.in.lidar
** provide access to selected PDAL analyses – v.pdal or in one of v.in.pdal, v.in.points, v.in.lidar
* r.in.kinect
** Has some features from r.in.lidar, v.in.lidar, and PCL (Point Cloud Library), but the point cloud is continuously updated from a Kinect scanner using OpenKinect libfreenect2 (used in [http://tangible-landscape.github.io/ Tangible Landscape]).
* Combining elevation data from different sources (e.g. global and lidar DEM or lidar and UAV DEM)
** [https://github.com/petrasovaa/r.patch.smooth r.patch.smooth]: Seamless fusion of high-resolution DEMs from two different sources
** {{addonCmd|r.mblend}}: Blending rasters of different spatial resolution
* show large point clouds in Map Display (2D)
** prototype: d.points
* Store return and class information as category
** general support for color stored as category
** experimental implementation in v.in.lidar
* Decimations
** implemented: v.in.lidar, r.in.lidar
** v.decimate (with experimental features)


Contributions are certainly welcome. You can discuss them at the [https://lists.osgeo.org/listinfo/grass-dev grass-dev mailing list].


<center><gallery perrow=3 widths=400 heights=270>
Image:Counting ground points per cell with r.in.lidar.png|''Map Display'' with legend created using {{cmd|d.legend}}, {{cmd|r.in.lidar}} dialog, ground point density pattern in the background
Image:Tangible landscape and blender with water and trees.jpg|Tangible Landscape uses r.in.kinect to scan a sand model which is processed and analyzed in GRASS GIS and optionally visualized in Blender
Image:v.in.lidar dialog do not add into layer tree.png|Unchecked ''Add created map(s) into layer tree'' in v.in.lidar
Image:Seamless fusion of high-resolution DEMs from multiple sources.png|Seamless fusion of high-resolution DEMs from multiple sources with r.patch.smooth: example of computing overlap zone for fusion
</gallery></center>




* [http://wenzeslaus.github.io/grass-lidar-talks/ Processing lidar and general point cloud data in GRASS GIS] (presentation slides)
* [https://petrasovaa.github.io/dem-fusion-talk Seamless fusion of high-resolution DEMs from multiple sources] (presentation slides)
* [https://www.liblas.org libLAS]
* [http://pdal.io/ PDAL]
References:


* Brovelli M. A., Cannata M., Longoni U.M. 2004. ''LIDAR Data Filtering and DTM Interpolation Within GRASS.'' Transactions in GIS, April 2004, vol. 8, iss. 2, pp. 155-174(20), Blackwell Publishing Ltd. ([https://www.researchgate.net/publication/220606097_LiDAR_data_filtering_and_DTM_interpolation_within_GRASS full text at ResearchGate])
* Evans, J. S., Hudak, A. T. 2007. ''A Multiscale Curvature Algorithm for Classifying Discrete Return LiDAR in Forested Environments.'' IEEE Transactions on Geoscience and Remote Sensing 45(4): 1029 - 1038. ([http://www.fs.fed.us/rm/pubs_other/rmrs_2007_evans_j001.pdf full text])
* Hengl, T. 2006. ''Finding the right pixel size.'' Computers & Geosciences, 32(9), 1283-1298. ([https://scholar.google.com/scholar?q=Hengl+2006+Finding+the+right+pixel+size Google Scholar entries])
* Jasiewicz, J., Stepinski, T. 2013. ''Geomorphons - a pattern recognition approach to classification and mapping of landforms, Geomorphology.'' vol. 182, 147-156. [http://dx.doi.org/10.1016/j.geomorph.2012.11.005 DOI: 10.1016/j.geomorph.2012.11.005]
* Leitão, J.P., Prodanovic,  D., Maksimovic, C. 2016. ''Improving merge methods for grid-based digital elevation models.'' Computers & Geosciences, Volume 88, March 2016, Pages 115-131, ISSN 0098-3004. [http://doi.org/10.1016/j.cageo.2016.01.001 DOI: 10.1016/j.cageo.2016.01.001].
* Mitasova, H., Mitas, L. and Harmon, R.S., 2005, Simultaneous spline approximation and topographic analysis for lidar elevation data in open source GIS, IEEE GRSL 2 (4), 375- 379. ([http://www4.ncsu.edu/~hmitaso/gmslab/papers/IEEEGRSL2005.pdf full text])
* Petras, V., Newcomb, D. J., Mitasova, H. 2017. ''Generalized 3D fragmentation index derived from lidar point clouds.'' In: Open Geospatial Data, Software and Standards. [http://dx.doi.org/10.1186/s40965-017-0021-8 DOI: 10.1186/s40965-017-0021-8]
* Petras, V., Petrasova, A., Jeziorska, J., Mitasova, H. 2016. ''Processing UAV and lidar point clouds in GRASS GIS.'' ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7, 945–952, 2016 ([https://www.researchgate.net/publication/304340172_Processing_UAV_and_lidar_point_clouds_in_GRASS_GIS full text at ResearchGate])
* Petrasova, A., Mitasova, H., Petras, V., Jeziorska, J. 2017. ''Fusion of high-resolution DEMs for water flow modeling.'' In: Open Geospatial Data, Software and Standards. [http://dx.doi.org/10.1186/s40965-017-0019-2 DOI: 10.1186/s40965-017-0019-2]
* Tabrizian, P., Harmon, B.A., Petrasova, A., Petras, V., Mitasova, H. Meentemeyer, R.K. ''Tangible Immersion for Ecological Design.'' In: ACADIA 2017: Proceedings of the 37th Annual Conference of the Association for Computer Aided Design in Architecture, 600-609. Cambridge, MA, 2017. ([https://www.researchgate.net/publication/318846696_Tangible_Immersion_for_Ecological_Design full text at ResearchGate])
* Tabrizian, P., Petrasova, A., Harmon, B., Petras, V., Mitasova, H., Meentemeyer, R. (2016, October). ''Immersive tangible geospatial modeling.'' In: Proceedings of the 24th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (p. 88). ACM. [https://doi.org/10.1145/2996913.2996950 DOI: 10.1145/2996913.2996950] ([https://www.researchgate.net/publication/309458110_Immersive_Tangible_Geospatial_Modeling full text at ResearchGate])


[[Category: Tutorial]]
[[Category: Workshops]]
[[Category: 2017]]
[[Category: Lidar]]
[[Category: Terrain]]
[[Category: Vegetation]]

Latest revision as of 03:13, 2 May 2018

Description: GRASS GIS offers, among other things, numerous analytical tools for point clouds, terrain, and remote sensing. In this hands-on workshop we will explore the tools in GRASS GIS for processing point clouds obtained by lidar or through processing of UAV imagery. We will start with a brief and focused introduction into the GRASS GIS graphical user interface (GUI) and we will continue with a short introduction to the GRASS GIS Python interface. Participants will then decide if they will use GUI, command line, Python, or online Jupyter Notebook for the rest of the workshop. We will explore the properties of the point cloud, interpolate surfaces, and perform advanced terrain analyses to detect landforms and artifacts. We will go through several terrain 2D and 3D visualization techniques to get more information out of the data and finish with vegetation analysis.

Requirements: This workshop is accessible to beginners, but some basic knowledge of lidar processing or GIS is helpful for a smooth experience.

Authors: Vaclav Petras, Anna Petrasova, and Helena Mitasova from North Carolina State University

Contributors: Robert S. Dzur and Doug Newcomb

Preparation

Software

GRASS GIS 7.2 compiled with libLAS is needed (e.g. r.in.lidar should work).

OSGeo-Live

All needed software is included in OSGeo-Live.

Ubuntu

Install GRASS GIS from packages:

sudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable
sudo apt-get update
sudo apt-get install grass grass-dev

Linux

For Linux distributions other than Ubuntu, please try to find GRASS GIS in their package managers or refer to grass.osgeo.org.

MS Windows

Download the standalone GRASS GIS binaries from grass.osgeo.org.

Mac OS

Install GRASS GIS using Homebrew osgeo4mac:

brew tap osgeo/osgeo4mac
brew install numpy
brew install liblas --build-from-source
brew install grass7 --with-liblas

If you have these already installed, you need to use reinstall instead of install.

Note that on Mac OS in some versions the r.in.lidar module is not accessible, so you need to check this and use r.in.ascii in combination with libLAS or PDAL command line tools to achieve the same or, preferably, use OSGeo-Live.

Note also that there is currently no recent Mac OS installation in which the 3D view is accessible.

Addons

You will need to install the following GRASS GIS Addons. This can be done through the GUI, but for simplicity copy, paste, and execute the following lines one by one in the command line:

g.extension r.geomorphon
g.extension r.skyview
g.extension r.local.relief
g.extension r.shaded.pca
g.extension r.area
g.extension r.terrain.texture
g.extension r.fill.gaps

For bonus tasks:

g.extension v.lidar.mcc
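To verify which addons were installed (for example, after running the lines above), you can list the locally installed extensions:

g.extension -a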

Data

We will need the following two ZIP files, downloaded and extracted:

  • orthophoto: http://fatra.cnr.ncsu.edu/foss4g2017/nc_orthophoto_1m_spm.zip
  • point cloud: http://fatra.cnr.ncsu.edu/foss4g2017/nc_tile_0793_016_spm.zip

Later, for a bonus task, download also this (much larger) file:

  • UAV point cloud: http://fatra.cnr.ncsu.edu/foss4g2017/nc_uav_points_spm.zip

Basic introduction to graphical user interface

GRASS GIS Spatial Database

Here we provide an overview of GRASS GIS. For this exercise it's not necessary to have a full understanding of how to use GRASS GIS. However, you will need to know how to place your data in the correct GRASS GIS database directory, as well as some basic GRASS functionality.

GRASS uses specific database terminology and structure (GRASS GIS Spatial Database) that are important to understand for working in GRASS GIS efficiently. You will create a new location and import the required data into that location. In the following we review important terminology and give step by step directions on how to download and place your data in the correct place.

  • A GRASS GIS Spatial Database (GRASS database) consists of a directory with specific Locations (projects) where data (data layers/maps) are stored.
  • A Location is a directory with data related to one geographic location or a project. All data within one Location has the same coordinate reference system.
  • A Mapset is a collection of maps within a Location, containing data related to a specific task, user, or a smaller project.

Start GRASS GIS, a start-up screen should appear. Unless you already have a directory called grassdata in your Documents directory (on MS Windows) or in your home directory (on Linux), create one. You can use the Browse button and the dialog in the GRASS GIS start up screen to do that.

We will create a new location for our project with CRS (coordinate reference system) NC State Plane Meters with EPSG code 3358. Open Location Wizard with button New in the left part of the welcome screen. Select a name for the new location, select EPSG method and code 3358. When the wizard is finished, the new location will be listed on the start-up screen. Select the new location and mapset PERMANENT and press Start GRASS session.

Note that the current working directory is a concept which is separate from the GRASS database, location and mapset discussed above. The current working directory is a directory where any program (not just GRASS GIS) writes and reads files unless a path to the file is provided. The current working directory can be changed from the GUI using Settings → GRASS working environment → Change working directory or from the Console using the cd command. This is advantageous when we are using the command line and working with the same file again and again. This is often the case when working with lidar data. We can change the directory to the directory with the downloaded LAS file. In case we don't change the directory, we need to provide the full path to the file. Note that the command line and the GUI have their own settings of the current working directory, so it needs to be changed separately for each of them.
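For example, in the system command line (the path below is only a placeholder for wherever you extracted the downloaded files):

cd /path/to/downloaded/data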

Importing data

In this step we import the provided data into GRASS GIS. In menu File - Import raster data select Common formats import and in the dialog browse to find the orthophoto file, change the name to ortho, and click button Import. All the imported layers should be added to GUI automatically, if not, add them manually. Point clouds will be imported later in a different way as part of the analysis.

The equivalent command is:

r.import input=nc_orthophoto_1m_spm.tif output=ortho
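If you later decide to work from Python (see the Python interface section below), the same import can be done with the GRASS Python Scripting Library; a minimal sketch assuming the downloaded GeoTIFF is in the current working directory:

import grass.script as gscript

# equivalent of: r.import input=nc_orthophoto_1m_spm.tif output=ortho
gscript.run_command('r.import', input='nc_orthophoto_1m_spm.tif', output='ortho')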

Computational region

Before we use a module to compute a new raster map, we must properly set the computational region. All raster computations will be performed in the specified extent and with the given resolution.

Computational region is an important raster concept in GRASS GIS. In GRASS a computational region can be set, subsetting larger extent data for quicker testing of analysis or analysis of specific regions based on administrative units. We provide a few points to keep in mind when using the computational region function:

  • defined by region extent and raster resolution
  • applies to all raster operations
  • persists between GRASS sessions, can be different for different mapsets
  • advantages: keeps your results consistent, avoids clipping; for computationally demanding tasks, set the region to a smaller extent, check that your result is good, and then set the computational region to the entire study area and rerun the analysis
  • run g.region -p or in menu Settings - Region - Display region to see current region settings

The numeric values of computational region can be checked using:

g.region -p

After executing the command you will get something like this:

north:      220750
south:      220000
west:       638300
east:       639000
nsres:      1
ewres:      1
rows:       750
cols:       700
cells:      525000

Computational region can be set using a raster map:

g.region raster=ortho -p

Resolution can be set separately using the res parameter of the g.region module. The units are the units of the current location, in our case meters. This can be done in the Resolution tab of the g.region dialog or in the command line in the following way (using also the -p flag to print the new values):

g.region res=3 -p

The new resolution may be slightly modified in this case to fit into the extent which we are not changing. However, often we want the resolution to be the exact value we provide and we are fine with a slight modification of the extent. That's what the -a flag is for. On the other hand, if alignment with cells of a specific raster is important, the align parameter can be used to request alignment to that raster (regardless of the extent).

The following example command will use the extent from the raster named ortho, use resolution 5 meters, modify the extent to align it to this 5 meter resolution, and print the values of this new computational region settings:

g.region raster=ortho res=5 -a -p

Modules

GRASS GIS functionality is available through modules (which are sometimes called tools, functions, or commands). Modules respect the following naming conventions:

Prefix   Function                   Example
r.       raster processing          r.mapcalc: map algebra
v.       vector processing          v.surf.rst: surface interpolation
i.       imagery processing         i.segment: image segmentation
r3.      3D raster processing       r3.stats: 3D raster statistics
t.       temporal data processing   t.rast.aggregate: temporal aggregation
g.       general data management    g.remove: removes maps
d.       display                    d.rast: display raster map

These are the main groups of modules. There are a few more for specific purposes. Note also that some modules have multiple dots in their names. This often suggests further grouping. For example, modules starting with v.net. deal with vector network analysis. The name of the module helps to understand its function: for example, v.in.lidar starts with v, so it deals with vector maps; the name continues with in, which indicates that the module imports data into the GRASS GIS Spatial Database; and finally lidar indicates that it deals with lidar point clouds.

One of the advantages of GRASS GIS is the diversity and number of modules that let you analyze all manner of spatial and temporal data. GRASS GIS has over 500 different modules in the core distribution and over 300 addon modules that can be used to prepare and analyze data. The following table lists some of the main modules for point cloud analysis.

Module                 Function                                Alternatives
r.in.lidar             binning into 2D raster, statistics      r.in.xyz, v.vect.stats, r.vect.stats
v.in.lidar             import, decimation                      v.in.ascii, v.in.ogr, v.import
r3.in.lidar            binning into 3D raster                  r3.in.xyz, r.in.lidar
v.out.lidar            export of point cloud                   v.out.ascii, r.out.xyz
v.surf.rst             interpolation of surfaces from points   v.surf.bspline, v.surf.idw
v.lidar.edgedetection  ground and object (edge) detection      v.lidar.mcc, v.outlier
v.decimate             decimate (thin) a point cloud           v.in.lidar, r.in.lidar
r.slope.aspect         topographic parameters                  v.surf.rst, r.param.scale
r.relief               shaded relief computation               r.skyview, r.local.relief
r.colors               raster color table management           r.cpt2grass, r.colors.matplotlib
g.region               resolution and extent management        r.in.lidar, GUI

Modules and their descriptions with examples can be found in the documentation. The documentation is included in the local installation and is also available online.
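For example, the manual page of a module can be opened directly from the command line, and every module also prints its interface description with --help:

g.manual entry=r.in.lidar
r.in.lidar --help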

Basic introduction to Python interface

The simplest way to execute the Python code which uses GRASS GIS packages is to use Simple Python Editor integrated in GRASS GIS (accessible from the toolbar or the Python tab in the Layer Manager). Another option is to use your favorite text editor and then run the script in GRASS GIS using the main menu File → Launch script.

We will use the Simple Python Editor to run the commands. You can open it from the Python tab. When you open Simple Python Editor, you find a short code snippet. It starts with importing GRASS GIS Python Scripting Library:

import grass.script as gscript

In the main function we call g.region to see the current computational region settings:

gscript.run_command('g.region', flags='p')

Note that the syntax is similar to command line syntax (g.region -p), only the flag is specified in a parameter. Now we can run the script by pressing the Run button in the toolbar. In Layer Manager we get the output of g.region.

In this example, we set the computational extent and resolution to the raster layer ortho, which would be done using g.region raster=ortho in the command line. To use run_command to set the computational region, replace the previous g.region call with the following line:

gscript.run_command('g.region', raster='ortho')

The GRASS GIS Python Scripting Library provides functions to call GRASS modules within Python scripts as subprocesses. All functions are in a package called grass and the most common functions are in the grass.script package, which is often imported as import grass.script as gscript. The most often used functions include run_command, read_command, parse_command, and write_command.

Here we use parse_command to obtain the current region settings as a Python dictionary:

region = gscript.parse_command('g.region', flags='g')
print region['ewres'], region['nsres']

The results printed are the raster resolutions in E-W and N-S directions.

In the above examples, we were calling the g.region module. Typically, the scripts (and GRASS GIS modules) don't change the computational region and often they don't need to even read it. The computational region can be defined before running the script so that the script can be used with different computational region settings.

The library also provides several convenient wrapper functions for often called modules, for example script.raster.raster_info() (wrapper for r.info), script.core.list_grouped() (one of the wrappers for g.list), and script.core.region() (wrapper for g.region).
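A minimal sketch of these wrappers (using the ortho raster imported earlier; the printed keys are standard entries of the dictionaries these functions return):

import grass.script as gscript

# wrapper around r.info: raster metadata as a dictionary
info = gscript.raster_info('ortho')
print info['nsres'], info['ewres']

# wrapper around g.region: current computational region as a dictionary
region = gscript.region()
print region['rows'], region['cols']

# one of the wrappers around g.list: map names grouped by mapset
rasters = gscript.list_grouped(type='raster')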

When we want to run the script again, we need to either remove the created data beforehand using g.remove or we need to tell GRASS GIS to overwrite the existing data. This can be done by adding overwrite=True as an additional argument to the function call for each module, or we can do it globally using os.environ['GRASS_OVERWRITE'] = '1' (requires import os).
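For example, both approaches could look like this (count_10 is used here only as an example of an output that may already exist):

import os
import grass.script as gscript

# per call: allow this module run to replace an existing map
gscript.run_command('r.in.lidar', input='nc_tile_0793_016_spm.las',
                    output='count_10', method='n', flags='e',
                    resolution=10, overwrite=True)

# globally: all subsequent calls may overwrite existing maps
os.environ['GRASS_OVERWRITE'] = '1'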

Finally, you may have noticed the first line of the script which says #!/usr/bin/env python. This is what Linux, Mac OS, and similar systems use to determine which interpreter to use. If you get something like [Errno 8] Exec format error, this line is probably incorrect or missing.

Decide whether to use GUI, command line, Python, or Jupyter Notebook

  • GUI: can be combined anytime with command line (recommended, especially for beginners, combine with copy and paste to Console)
  • command line: can be combined at any time with the GUI; most of the following instructions will be for the command line (but can be easily transferred to GUI or Python); both the system command line and the Console tab in the GUI will work well
  • Python: you will need to change the syntax to Python; use snippets from this static notebook
  • Jupyter Notebook locally: this is recommended only if you are using OSGeoLive or if you are on Linux and are familiar with Jupyter and GRASS GIS; download notebook from github.com/wenzeslaus/Notebook-for-processing-point-clouds-in-GRASS-GIS
  • Jupyter Notebook online: link distributed by the instructor (backup option if other options fail)

Binning of the point cloud

The fastest way to analyze basic properties of a point cloud is to use binning and create a raster map. We will now use r.in.lidar to create a point count (point density) raster. At this point, we don't know the spatial extent of the point cloud, so we cannot set the computational region ahead of time, but we can tell the r.in.lidar module to determine the extent first and use it for the output using the -e flag. We are using a coarse resolution to maximize the speed of the process. Additionally, the -n flag sets the computational region to match the output.

r.in.lidar input=nc_tile_0793_016_spm.las output=count_10 method=n -e -n resolution=10

Now we can see the distribution pattern, but let's examine also the numbers (in GUI using right click on the layer in Layer Manager and then Metadata or using r.info directly):

r.info map=count_10

We can do a quick check of some of the values using the query tool in the Map Display. Since there are many points per cell, we can use a finer resolution:

r.in.lidar input=nc_tile_0793_016_spm.las output=count_1 method=n -e -n resolution=1

Look at the distribution of the values using a histogram. The histogram is accessible from the context menu of a layer in Layer Manager, from the Map Display toolbar Analyze map button, or using the d.histogram module (d.histogram map=count_1).

Now we have an appropriate number of points in most of the cells and there are no density artifacts around the edges, so we can use this raster as the base for the extent and resolution we will use from now on.

g.region raster=count_1 -p

However, this region has a lot of cells and some of them would be empty, esp. when we start filtering the point cloud. To account for that, we can modify just the resolution of the computational region:

g.region res=3 -ap

Use binning to obtain the digital surface model (DSM) (resolution and extent are taken from the computational region):

r.in.lidar input=nc_tile_0793_016_spm.las output=binned_dsm method=max

To understand what the map shows, compute statistics using r.report (Report raster statistics in the layer context menu):

r.report map=binned_dsm units=c

This shows that there is an outlier (at around 600 m). Change the color table to histogram equalized (-e flag) to see the contrast in the rest of the map (here using the elevation color table):

r.colors map=binned_dsm color=elevation -e

Let's check the outliers also using minimum (which is method=min, we already used method=max for DSM):

r.in.lidar input=nc_tile_0793_016_spm.las output=minimum method=min

Compare this with the range and mean of the values in the ground raster and decide what the permissible range is.

r.report map=minimum units=c

Again, to see the data, we can use a histogram equalized color table:

r.colors map=minimum color=elevation -e
The zrange parameter filters out points which don't fall into the specified range of Z values

Now that we know the outliers and the expected range of values (60-200 m seems to be a safe but broad enough range), use the zrange parameter to filter out any possible outliers (don't confuse it with the intensity_range parameter) when computing a new DSM:

r.in.lidar input=nc_tile_0793_016_spm.las output=binned_dsm_limited method=max zrange=60,200
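To verify that the outlier is gone, you can, for example, print just the value range of the new raster:

r.info -r map=binned_dsm_limited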

Interpolation

Now we will interpolate a digital surface model (DSM) and for that we can increase resolution to obtain as much detail as possible (we could use 0.5 m, i.e. res=0.5, for high detail or 2 m, i.e. res=2 -a, for speed):

g.region raster=count_1 -p

Before interpolating, let's confirm that the spatial distribution of the points allows us to safely interpolate. We need to use the same filters as we will use for the DSM itself:

r.in.lidar input=nc_tile_0793_016_spm.las output=count_dsm method=n return_filter=first zrange=60,200

First check the numbers:

r.report map=count_dsm units=h,c,p

Then check the spatial distribution. For that we will need to use a histogram equalized color table (the legend can be limited just to a specific range of values: d.legend raster=count_dsm range=0,5).

r.colors map=count_dsm color=viridis -e

First we import the LAS file as vector points; we keep only first-return points and limit the import vertically to avoid the outliers found in the previous step. Before running it, uncheck Add created map(s) into layer tree in the v.in.lidar dialog if you are using the GUI.

v.in.lidar -bt input=nc_tile_0793_016_spm.las output=first_returns return_filter=first zrange=60,200

Then we interpolate:

v.surf.rst input=first_returns elevation=dsm tension=25 smooth=1 npmin=80

Now we could visualize the resulting DSM by computing shaded relief with r.relief or by using 3D view (see the following sections).
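For example (dsm_relief is just an example output name; d.shade combines the two on the fly for display):

r.relief input=dsm output=dsm_relief
d.shade shade=dsm_relief color=dsm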

Terrain analysis

Set the computational region based on an existing raster map:

g.region raster=count_1 -p

Check if the point density is sufficient (ground is class number 2, resolution and extent are taken from the computational region):

r.in.lidar input=nc_tile_0793_016_spm.las output=count_ground method=n class_filter=2 zrange=60,200

Import points (the -t flag disables creation of attribute table and the -b flag disables building of topology; uncheck Add created map(s) into layer tree):

v.in.lidar -bt input=nc_tile_0793_016_spm.las output=points_ground class_filter=2 zrange=60,200

The interpolation will take some time, so we can set the computational region to a smaller area until we determine the best parameters for the interpolation. This can be done in the GUI from the Map Display toolbar, or here just quickly using the prepared coordinates:

g.region n=223751 s=223418 w=639542 e=639899 -p

Now we interpolate the ground surface using regularized spline with tension (implemented in v.surf.rst) and at the same time we also derive slope, aspect and curvatures (the following code is one long line):

v.surf.rst input=points_ground tension=25 smooth=1 npmin=100 elevation=terrain slope=slope aspect=aspect pcurvature=profile_curvature tcurvature=tangential_curvature mcurvature=mean_curvature

When we examine the results, especially the curvatures, we see a pattern which may be caused by some problems with the point cloud collection. We decrease the tension, which will cause the surface to adhere less closely to the points, and we increase the smoothing. Since the output raster maps (terrain, slope, etc.) already exist, use the overwrite flag, i.e. --overwrite in the command line or the checkbox in the GUI, to replace the existing rasters with the new ones. We use the --overwrite flag shortened to just --o.

v.surf.rst input=points_ground tension=20 smooth=5 npmin=100 elevation=terrain slope=slope aspect=aspect pcurvature=profile_curvature tcurvature=tangential_curvature mcurvature=mean_curvature --o

When we are satisfied with the result, we get back to our desired raster extent:

g.region raster=count_1 -p

And finally interpolate the ground surface in the full extent:

v.surf.rst input=points_ground tension=20 smooth=5 npmin=100 elevation=terrain slope=slope aspect=aspect pcurvature=profile_curvature tcurvature=tangential_curvature mcurvature=mean_curvature --o

Compute shaded relief:

r.relief input=terrain output=relief

Now combine the shaded relief raster and the elevation raster. This can be done in several ways. In the GUI, you can change the opacity of one of the layers. A better result can be obtained using the r.shade module, which combines the two rasters and creates a new one. Finally, this can be done on the fly, without creating any raster, using the d.shade module. The module can be used from the GUI through the toolbar or using the Console tab:

d.shade shade=relief color=terrain
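
If you prefer a new raster combining the two (rather than the on-the-fly d.shade result), r.shade mentioned above can be used; the output name here is just an example:

r.shade shade=relief color=terrain output=terrain_shaded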

Now, instead of using r.relief, we will use r.skyview:

r.skyview input=terrain output=skyview ndir=8 colorized_output=terrain_skyview

Combine the terrain and skyview also on the fly:

d.shade shade=skyview color=terrain

Analytical visualization based on shaded relief:

r.shaded.pca input=terrain output=pca_shade

Local relief model:

r.local.relief input=terrain output=lrm shaded_output=shaded_lrm

Pit density:

r.terrain.texture elevation=terrain thres=0 pitdensity=pit_density

Finally, we use automatic detection of landforms (using 50 m search window):

r.geomorphon -m elevation=terrain forms=landforms search=50

Vegetation analysis

Although for many vegetation-related applications a coarser resolution is more appropriate, because more points are needed for the statistics, here we will simply use:

g.region raster=count_1

We use all the points (not using the classification of the points), but with the Z filter to get the range of heights in each cell:

r.in.lidar input=nc_tile_0793_016_spm.las output=range method=range zrange=60,200

Cells where the range of heights is greater than 2 m are likely to contain vegetation, so we extract them with raster algebra:

r.mapcalc "vegetation_by_range = if(range > 2, 1, null())"

Another approach is to compute the height above ground directly during binning: with base_raster=terrain, r.in.lidar subtracts the terrain values from the point Z coordinates, so each cell holds the maximum height above ground (the -d flag reads the base raster at its own resolution rather than resampled to the computational region). Alternatively, lower the resolution to avoid gaps (and the possible smoothing during their filling).

r.in.lidar -d input=nc_tile_0793_016_spm.las output=height_above_ground method=max base_raster=terrain zrange=0,100

Now get all the areas which are higher than 2 m. If we wanted to preserve the elevation in the output, we could use if(height_above_ground > 2, height_above_ground, null()) in the expression, but we want just the areas, so we use a constant (1).

r.mapcalc "above_2m = if(height_above_ground > 2, 1, null())"

Use r.grow to extend the patches and fill in the holes:

r.grow input=above_2m output=vegetation_grow

We consider the result to represent the vegetated areas. We clump (connect) the individual cells into patches using r.clump:

r.clump input=vegetation_grow output=vegetation_clump

Some of the patches are very small. Using r.area we remove all patches smaller than the given threshold:

r.area input=vegetation_clump output=vegetation_by_height lesser=100

Now convert these areas to vector:

r.to.vect -s input=vegetation_by_height output=vegetation_by_height type=area

So far we have been using the elevation of the points; now we will use intensity. The intensity is used by r.in.lidar when the -j (or -i) flag is provided:

r.in.lidar input=nc_tile_0793_016_spm.las output=intensity zrange=60,200 -j

With this high resolution, there are some cells without any points, so we use r.fill.gaps to fill these cells (r.fill.gaps also smooths the raster as part of the gap-filling process).

r.fill.gaps input=intensity output=intensity_filled uncertainty=uncertainty distance=3 mode=wmean power=2.0 cells=8

A grey scale color table is more appropriate for intensity:

r.colors map=intensity_filled color=grey

There are some areas with very high intensity, so to visually explore the other areas, we use a histogram equalized color table:

r.colors -e map=intensity_filled color=grey

Let's use r.geomorphon again, but now with the DSM and different settings. The following settings show building roof shapes:

r.geomorphon elevation=dsm forms=dsm_forms search=7 skip=4

Different settings, especially a higher flatness threshold (the flat parameter), show forested areas as a combination of footslopes, slopes, and shoulders. Individual trees are represented as shoulders.

r.geomorphon elevation=dsm forms=dsm_forms search=12 skip=8 flat=10 --o

Lowering the skip parameter decreases generalization and brings in summits which often represent tree tops:

r.geomorphon elevation=dsm forms=dsm_forms search=12 skip=2 flat=10 --o

Now we extract the summits using the r.mapcalc raster algebra expression if(dsm_forms==2, 1, null()).

r.mapcalc "trees = if(dsm_forms==2, 1, null())"

Bonus tasks

3D visualization of rasters

We can explore our study area in 3D view (use the image on the right if clarification is needed for the steps below; a scripted alternative using m.nviz.image is sketched after the steps):

  1. Add raster dsm and uncheck or remove any other layers. Note that unchecking (or removing) any other layers is important because any layer loaded in Layers is interpreted as a surface in 3D view.
  2. Switch to 3D view (in the right corner of Map Display).
  3. Adjust the view (perspective, height).
  4. In Data tab, set Fine mode resolution to 1 and set ortho as the color of the surface (the orthophoto is draped over the DSM).
  5. Go back to View tab and explore the different view directions using the green puck.
  6. Go to Appearance tab and change the light conditions (lower the light height, change directions).
  7. Try also using the landforms raster as the color of the surface.
  8. When finished, switch back to 2D view.
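
As a scripted alternative to the interactive 3D view, a similar scene can be rendered to an image file with m.nviz.image; this is only a sketch and the parameter values (viewing height, perspective, output name) are examples to adjust:

m.nviz.image elevation_map=dsm color_map=ortho resolution_fine=1 height=150 perspective=20 output=dsm_3d format=ppm size=800,600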

Analytical visualization of rasters in 3D

We can explore our study area in 3D view (use the image on the right if clarification is needed for the steps below):

  1. Add rasters terrain and dsm and uncheck or remove any other layers. (Any layer in Layers will be interpreted as surface in 3D view.)
  2. Switch to 3D view (in the right corner of Map Display).
  3. Adjust the view (perspective, height). Set z-exaggeration to 1.
  4. In Data tab, in Surface, set Fine mode resolution to 1 for both rasters.
  5. Set a different color for each surface.
  6. For the dsm, set the position in the Z direction to 1. This is a position relative to the actual position of the raster. This small offset will help us see the relation between the terrain and the DSM surfaces.
  7. Go to Analysis tab and activate the first cutting plane. Set Shading to bottom color to get the bottom color to color the space between the terrain and DSM.
  8. Start sliding the sliders for X, Y and Rotation.
  9. Go back to View tab in case you want to change the view.
  10. When finished, switch back to 2D view.

Classify ground and non-ground points

UAV point clouds usually require classification of ground points (bare earth) when we want to create ground surface.

First check the density of the UAV point cloud by binning the point count at 0.5 m resolution (the -e flag extends the raster extent to cover the input data and the -n flag sets the computational region to match the new raster). Since the metadata about the projection are wrong, we use the -o flag to skip the projection consistency check (this applies to the v.in.lidar imports below as well).

r.in.lidar -e -n -o input=nc_uav_points_spm.las output=uav_density_05 method=n resolution=0.5

Set a small computational region to make all the computations faster (skip this to operate in the whole area):

g.region  n=219415 s=219370 w=636981 e=637039

Import the points using the v.in.lidar module, but limit the extent just to the computational region (-r flag), do not build topology (-b flag), do not create an attribute table (-t flag), and do not assign categories (ids) to the points (-c flag). There are more points than we need for interpolation at the 0.5 m resolution, so we will decimate (thin) the point cloud during import, removing 75% of the points using preserve=4 (count-based decimation, which assumes a spatially uniform distribution of the points).

v.in.lidar -t -c -b -r -o input=nc_uav_points_spm.las output=uav_points preserve=4

Then use v.lidar.mcc to classify ground and non-ground points:

v.lidar.mcc input=uav_points ground=mcc_ground nonground=mcc_nonground

Interpolate the ground surface:

v.surf.rst input=mcc_ground tension=20 smooth=5 npmin=100 elevation=mcc_ground

If you are using the UAV data, you can extract the RGB values from the points. First, import the points again, but this time with all the attributes (the previously imported map is replaced, hence --o):

v.in.lidar -r -o input=nc_uav_points_spm.las output=uav_points preserve=4 --o

Then extract the RGB values (stored in attribute columns) into separate channels (rasters):

v.to.rast input=uav_points output=red use=attr attribute_column=red
v.to.rast input=uav_points output=green use=attr attribute_column=green
v.to.rast input=uav_points output=blue use=attr attribute_column=blue

And then set the color table to grey:

r.colors map=red color=grey
r.colors map=green color=grey
r.colors map=blue color=grey

These channels could be used for some remote sensing tasks. Now we will just visualize them using d.rgb:

d.rgb red=red green=green blue=blue
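
If a single composite raster is needed instead of the on-the-fly display, the channels can be combined with r.composite (the output name is just an example):

r.composite red=red green=green blue=blue output=rgb_composite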

If you are using the lidar data for this part, you can compare the new DEM with the terrain raster interpolated earlier from the vendor-classified ground points:

r.mapcalc "mcc_lidar_difference = terrain - mcc_ground"

Set the color table:

r.colors map=mcc_lidar_difference color=difference

3D visualization of point clouds

Unless you have already done that, familiarize yourself with the 3D view by doing the exercise above. Then import all points but this time do not add them to the Layer Manager (there is a check box at the bottom of the module dialog).

v.in.lidar -obt input=nc_tile_0793_016_spm.las output=points_all zrange=60,200

Now we can explore point clouds in 3D view:

  1. Uncheck or remove any other layers. (Any layer in Layer Manager is interpreted as surface in 3D view.)
  2. Switch to 3D view (in the right corner of Map Display).
  3. When 3D is active, the Layer Manager has a new icon for the default 3D view settings, which we need to open and change.
  4. In the settings, change settings for default point shape from sphere to X.
  5. Then add the points vector.
  6. You may need to adjust the size of the points if they are all merged together.
  7. Adjust the view (perspective, height).
  8. Go back to View tab and explore the different view directions using the green puck.
  9. Go to Appearance tab and change the light conditions (lower the light height, change directions).
  10. When finished, switch back to 2D view.

Alternative DSM creation

First returns do not always represent the top of the canopy; they can simply be the first hit somewhere within the canopy. To account for this, we create a raster with the maximum for each cell instead of importing the raw points:

r.in.lidar input=nc_tile_0793_016_spm.las output=maximum method=max return_filter=first

The raster has many missing cells, which is why we want to interpolate. At the same time, we have also reduced the number of points, because all the points falling into one cell were replaced by just that cell. We can consider each cell to represent one point in the middle of the cell and use r.to.vect to create points from these cells:

r.to.vect input=maximum output=points_for_dsm type=point -z -b

Now we have a decimated (thinned) point cloud, decimated by binning into a raster. We interpolate these points to get the DSM:

v.surf.rst input=points_for_dsm elevation=dsm tension=25 smooth=1 npmin=80 --o

Explore layers of vegetation in a 3D raster

Set the top and bottom of the computational region to match the height of vegetation (we are interested in vegetation from 0 m to 30 m above ground).

g.region -p3 b=0 t=30

Similarly to 2D, we can perform the binning also in three dimensions using 3D rasters. This is implemented in a module called r3.in.lidar:

r3.in.lidar -d input=nc_tile_0793_016_spm.las n=count sum=intensity_sum mean=intensity proportional_n=prop_count proportional_sum=prop_intensity base_raster=terrain

Convert the horizontal layers (slices) of 3D raster into 2D rasters:

r3.to.rast input=prop_count output=slices_prop_count

Open a second Map Display and add all the created rasters. Then set a consistent color table for all of them.

r.colors map=slices_prop_count_001,slices_prop_count_002,... color=viridis
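
Instead of listing every slice by hand, the slice names can be generated with g.list; this is just a sketch assuming a Bash-like shell:

r.colors map=$(g.list type=raster pattern="slices_prop_count_*" separator=comma) color=viridis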

Now open the Animation Tool, add all the 2D rasters there, and use the slider to explore the layers.

Optimizations, troubleshooting and limitations

Speed optimizations:

  • Rasterize early.
    • For many use cases, there are fewer cells than points. Additionally, rasters will be faster, e.g. for visualization, because a raster has a natural spatial index. Finally, many algorithms simply use rasters anyway.
  • If you cannot rasterize, see if you can decimate (thin) the point cloud (using v.in.lidar, r.in.lidar + r.to.vect, v.decimate; see the sketch after this list).
  • Binning (e.g. r.in.lidar, r3.in.lidar) is much faster than interpolation and can carry out part of the analysis.
  • Faster count-based decimation often performs the same as slower grid-based decimation for terrain on a given point cloud (Petras et al. 2016).
  • r.in.lidar
    • choose the computational region extent and resolution ahead of time
    • have enough memory to avoid using percent option (high memory usage versus high I/O)
  • v.in.lidar
    • -r limit import to computation region extent
    • -t do not create attribute table
    • -b do not build topology (applicable to other modules as well)
    • -c store only coordinates, no categories or IDs
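
As mentioned in the list above, an already imported point cloud can be thinned with v.decimate; this is only a sketch where the input and output names and the decimation rate are examples (preserve=10 keeps every 10th point):

v.decimate input=points_all output=points_decimated preserve=10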

Memory requirements:

  • for r.in.lidar
    • depend on the size of the output and the type of analysis
    • can be reduced by the percent option (see the example after this list)
    • ERROR: G_malloc: unable to allocate ... bytes of memory means that there is not enough memory for the output raster; use the percent option, a coarser resolution, or a smaller extent
    • on Linux, the memory available to a process is RAM + swap partition, but when swap is used, processing will be much slower
  • for v.in.lidar
    • low when not building topology (-b flag)
    • to build topology but keeping low memory, use export GRASS_VECTOR_LOWMEM=1 (os.environ['GRASS_VECTOR_LOWMEM'] = '1' in Python)
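
A sketch of the percent option mentioned above, which processes the output raster in chunks of 25% and so trades memory for I/O; the input and output names are just examples:

r.in.lidar input=nc_tile_0793_016_spm.las output=binned_dsm method=max percent=25 --o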

Limits:

  • Vector features with topology are limited to about 2 billion features per vector map (2^31 - 1)
  • Points without topology are limited only by disk space and what the modules are able to process later. (Theoretically, the count is limited only by the 64-bit architecture, which would be 16 exbibytes per file, but the actual value depends on the file system.)
  • For large vectors with attributes, the PostgreSQL backend is recommended (see v.db.connect).
  • There are more limits in the 32-bit versions for operations which require memory. (Since 2016 there is a 64-bit version even for MS Windows. Even the 32-bit versions have large file support (LFS) and can work with files exceeding the 32-bit limitations.)
  • Reading and writing to disk (I/O) usually limits speed. (Since 7.2, this can be made faster for rasters using different compression algorithms set with the GRASS_COMPRESSOR variable; see the example after this list.)
  • There is a limit on the number of open files set by the operating system.
    • The limit is often 1024. On Linux, it can be changed using ulimit.
  • Individual modules may have their own limitations or behaviors with large data based on the algorithm they use. However, in general, modules are made to deal with large datasets. For example,
    • r.watershed can process 90,000 x 100,000 (9 billion cells) in 77.2 hours (2.93GHz), and
    • v.surf.rst can process 1 million points and interpolate 202,000 cells in 13 minutes.
  • Read the documentation as it provides descriptions of limitations for each module and ways to deal with them.
  • Write to the grass-user mailing list or the GRASS GIS bug tracker in case you hit limits. For example, if you get a negative number as the number of points or cells, open a ticket.
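
A sketch of switching the raster compression algorithm mentioned above (in a Bash-like shell, before running the modules that create rasters); the available algorithms depend on the GRASS GIS version:

export GRASS_COMPRESSOR=LZ4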

Bleeding edge

(Figure: vegetation fragmentation computed by r3.forestfrag from a lidar point cloud represented as a 3D raster)
  • Processing point clouds as 3D rasters
    • r3.forestfrag
    • r3.count.categories
    • r3.scatterplot
  • Point profiles
    • v.profile.points
  • PDAL integration
    • prototypes: v.in.pdal, r.in.pdal
    • goal: provide access to selected PDAL algorithms (filters)
  • r.in.kinect
    • Has some features from r.in.lidar, v.in.lidar, and PCL (Point Cloud Library), but the point cloud is continuously updated from a Kinect scanner using OpenKinect libfreenect2 (used in Tangible Landscape).
  • Combining elevation data from different sources (e.g. global and lidar DEM or lidar and UAV DEM)
    • r.patch.smooth: Seamless fusion of high-resolution DEMs from two different sources
    • r.mblend: Blending rasters of different spatial resolution
  • Show large point clouds in Map Display (2D)
    • prototype: d.points
  • Store return and class information as category
    • experimental implementation in v.in.lidar
  • Decimations
    • implemented: v.in.lidar, r.in.lidar
    • v.decimate (with experimental features)

Contributions are certainly welcome. You can discuss them on the grass-dev mailing list.

See also

References:

  • Brovelli M. A., Cannata M., Longoni U.M. 2004. LIDAR Data Filtering and DTM Interpolation Within GRASS. Transactions in GIS, April 2004, vol. 8, iss. 2, pp. 155-174(20), Blackwell Publishing Ltd. (full text at ResearchGate)
  • Evans, J. S., Hudak, A. T. 2007. A Multiscale Curvature Algorithm for Classifying Discrete Return LiDAR in Forested Environments. IEEE Transactions on Geoscience and Remote Sensing 45(4): 1029 - 1038. (full text)
  • Hengl, T. 2006. Finding the right pixel size. Computers & Geosciences, 32(9), 1283-1298. (Google Scholar entries)
  • Jasiewicz, J., Stepinski, T. 2013. Geomorphons - a pattern recognition approach to classification and mapping of landforms, Geomorphology. vol. 182, 147-156. DOI: 10.1016/j.geomorph.2012.11.005
  • Leitão, J.P., Prodanovic, D., Maksimovic, C. 2016. Improving merge methods for grid-based digital elevation models. Computers & Geosciences, Volume 88, March 2016, Pages 115-131, ISSN 0098-3004. DOI: 10.1016/j.cageo.2016.01.001.
  • Mitasova, H., Mitas, L. and Harmon, R.S., 2005, Simultaneous spline approximation and topographic analysis for lidar elevation data in open source GIS, IEEE GRSL 2 (4), 375- 379. (full text)
  • Petras, V., Newcomb, D. J., Mitasova, H. 2017. Generalized 3D fragmentation index derived from lidar point clouds. In: Open Geospatial Data, Software and Standards. DOI: 10.1186/s40965-017-0021-8
  • Petras, V., Petrasova, A., Jeziorska, J., Mitasova, H. 2016. Processing UAV and lidar point clouds in GRASS GIS. ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B7, 945–952, 2016 (full text at ResearchGate)
  • Petrasova, A., Mitasova, H., Petras, V., Jeziorska, J. 2017. Fusion of high-resolution DEMs for water flow modeling. In: Open Geospatial Data, Software and Standards. DOI: 10.1186/s40965-017-0019-2
  • Tabrizian, P., Harmon, B.A., Petrasova, A., Petras, V., Mitasova, H., Meentemeyer, R.K. Tangible Immersion for Ecological Design. In: ACADIA 2017: Proceedings of the 37th Annual Conference of the Association for Computer Aided Design in Architecture, 600-609. Cambridge, MA, 2017. (full text at ResearchGate)
  • Tabrizian, P., Petrasova, A., Harmon, B., Petras, V., Mitasova, H., Meentemeyer, R. (2016, October). Immersive tangible geospatial modeling. In: Proceedings of the 24th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (p. 88). ACM. DOI: 10.1145/2996913.2996950 (full text at ResearchGate)