FUTURES tutorial
[https://grass.osgeo.org/grass-stable/manuals/addons/r.futures.html r.futures.*] is an implementation of FUTure Urban-Regional Environment Simulation (FUTURES)
which is a model for multilevel simulations of emerging urban-rural landscape structure.
FUTURES produces regional projections of landscape patterns using coupled submodels that integrate nonstationary
drivers of land change: per capita demand (DEMAND submodel), site suitability (POTENTIAL submodel),
and the spatial structure of conversion events (PGA submodel).


This tutorial shows how to prepare data and run the model. You can download the [http://fatra.cnr.ncsu.edu/futures/FUTURES_triangle.zip sample dataset] and simulate urban growth
in [https://en.wikipedia.org/wiki/Research_Triangle The Triangle], a region in the Piedmont of North Carolina in the United States, with the rapidly growing cities Raleigh, Durham and Chapel Hill.
= Prerequisites =
This tutorial requires basic knowledge of GRASS GIS. If you haven't used GRASS GIS before, you may want to first go through some of the introductory tutorials or alternative workshops linked from the [[#See also|See also section]].


= Software =
Required software includes:
* GRASS GIS 7
* addons
** {{AddonCmd|r.futures.pga}}
** {{AddonCmd|r.futures.potential}}
** {{AddonCmd|r.futures.demand}}
** {{AddonCmd|r.futures.devpressure}}
** {{AddonCmd|r.sample.category}}
* R (≥ 3.0.2) - needed for {{AddonCmd|r.futures.potential}}
* SciPy - useful for {{AddonCmd|r.futures.demand}}


= Input data used in this tutorial =
Download [http://fatra.cnr.ncsu.edu/futures/FUTURES_triangle.zip sample dataset] containing:
* digital elevation model ([http://nationalmap.gov/elevation.html NED])
* [http://www.mrlc.gov/ NLCD] 2001, 2011
* NLCD 1992/2001 Retrofit Land Cover Change Product
* transportation network (TIGER)
* county boundaries (TIGER)
* protected areas (Secured Lands)
* cities above 20000 (USGS)
* county population past estimates and future projections (NC OSBM) per county (nonspatial)


= Workflow =
Download the sample data and unzip it. Create a <tt>grassdata</tt> directory on your disk (or use the one you already have) and move the Location <tt>futures_ncspm</tt> there.
Launch GRASS GIS and select your <tt>grassdata</tt> directory, Location <tt>futures_ncspm</tt> and Mapset <tt>practice1</tt>.
 
Install addons:
<source lang="bash">
g.extension r.futures
g.extension r.sample.category
</source>
 
Close the GUI and restart it:
<source lang="bash">
g.gui
</source>
 
To install the required R packages, go to the command line terminal which was started with GRASS GIS and start R:
 
<source lang="bash">
R
</source>
 
This starts the R command line where you need to execute the following code (and go through any dialogs about installation if R shows them):
 
<source lang="R">
install.packages(c("MuMIn", "lme4", "optparse", "rgrass7"))
</source>
 
== Initial steps and data preparation ==
First we will set the computational region of our analyses to an extent covering our study area, with cells aligned to one of our land use rasters:
<source lang="bash">g.region raster=landuse_2011 -p</source>


We will derive urbanized areas from the NLCD dataset for years 1992, 2001 and 2011
by extracting categories 21 - 24 into a new binary map where developed is 1,
undeveloped is 0, and NULL (no data) marks areas unsuitable for development (water, wetlands, protected areas).
First we will convert protected areas from vector to raster and set NULLs to zeros (for a simpler raster algebra expression in the next step):


<source lang="bash">
v.to.rast input=protected_areas output=protected_areas use=val
r.null map=protected_areas null=0
</source>


And then create rasters of developed/undeveloped areas using raster algebra:


<source lang="bash" style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;">
r.mapcalc "urban_1992 = if(landuse_1992 >= 21 && landuse_1992 <= 24, 1, if(landuse_1992 == 11 || landuse_1992 >= 90 || protected_areas, null(), 0))"
r.mapcalc "urban_2001 = if(landuse_2001 >= 21 && landuse_2001 <= 24, 1, if(landuse_2001 == 11 || landuse_2001 >= 90 || protected_areas, null(), 0))"
r.mapcalc "urban_2011 = if(landuse_2011 >= 21 && landuse_2011 <= 24, 1, if(landuse_2011 == 11 || landuse_2011 >= 90 || protected_areas, null(), 0))"
</source>
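The per-cell logic of these map algebra expressions can be sketched in Python (a hypothetical helper for illustration only; the actual computation is done by r.mapcalc):

```python
# Illustrative per-cell version of the r.mapcalc reclassification:
# NLCD codes 21-24 are developed (1); open water (11), wetlands (>= 90)
# and protected cells are unsuitable (NULL, here None); the rest is 0.
def classify(nlcd_code, protected):
    if 21 <= nlcd_code <= 24:
        return 1      # developed
    if nlcd_code == 11 or nlcd_code >= 90 or protected:
        return None   # unsuitable for development -> NULL
    return 0          # undeveloped

print(classify(22, False))  # developed
print(classify(41, False))  # forest -> undeveloped
print(classify(11, False))  # open water -> unsuitable
```

Note that already developed cells stay 1 even if they fall inside protected areas, matching the order of the if() tests above.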


We will convert the vector counties to raster with the values of the FIPS attribute, which links to the population file:


<source lang="bash">
v.to.rast input=counties type=area use=attr attribute_column=FIPS output=counties
</source>


If we need to run FUTURES only for a smaller number of counties, we can extract them based on the identifier:


<source lang="bash" style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;">
v.to.rast input=counties type=area where="FIPS in (37183, 37101)" use=attr attribute_column=FIPS output=counties_subset
</source>
Before further steps, we will set our working directory so that the input population files and the text files
we are going to create are saved in one directory and easily accessible.
You can do that from the menu ''Settings'' → ''GRASS working environment'' → ''Change working directory''.
Select (or create) a directory and move the downloaded files <tt>population_projection.csv</tt> and <tt>population_trend.csv</tt> there.


== Potential submodel ==
Module {{AddonCmd|r.futures.potential}} implements the POTENTIAL submodel as part of the FUTURES land change model. POTENTIAL uses a set of coefficients that relate a selection of site suitability factors to the probability of a place becoming developed, stored as a parameter table used in combination with maps of those site suitability factors (mapped predictors). The coefficients are obtained by conducting multilevel logistic regression in R with the package lme4, where the coefficients may vary by county. The best model can be selected automatically using the dredge function from the package [https://cran.r-project.org/web/packages/MuMIn/index.html MuMIn].
First, we will derive a couple of predictors.
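Given fitted coefficients, the per-cell development probability follows the standard logistic form; a minimal Python sketch with made-up numbers (not the module's actual code):

```python
import math

# Logistic model: probability of development from an intercept and
# coefficients for the site suitability predictors (made-up values).
def development_probability(intercept, coefficients, predictors):
    linear = intercept + sum(b * x for b, x in zip(coefficients, predictors))
    return 1.0 / (1.0 + math.exp(-linear))

# hypothetical intercept and coefficients for two predictors of one cell
print(development_probability(-2.0, [0.8, -0.05], [3.0, 10.0]))
```

In FUTURES the intercept (and possibly some coefficients) vary by county; that is what the multilevel regression provides.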
=== Predictors ===
==== Slope ====
We will derive slope in degrees from the digital elevation model using the {{cmd|r.slope.aspect}} module:
<source lang="bash">r.slope.aspect elevation=elevation_30m slope=slope</source>


==== Distance from lakes/rivers ====
First we will extract the category Open Water from the 2011 NLCD dataset:
<source lang="bash">r.mapcalc "water = if(landuse_2011 == 11, 1, null())"</source>

Then we compute the distance to water with module {{cmd|r.grow.distance}} and set the color table to shades of blue:
<source lang="bash">
r.grow.distance input=water distance=dist_to_water
r.colors -n map=dist_to_water color=blues
</source>


==== Distance from protected areas ====
We will use the raster of protected areas we already created, but we will first set the zero values back to NULL
so that the distance is computed from the protected cells only.
We compute the distance to protected areas with module {{cmd|r.grow.distance}} and set the color table from green to red:
<source lang="bash">
r.null map=protected_areas setnull=0
r.grow.distance input=protected_areas distance=dist_to_protected
r.colors map=dist_to_protected color=gyr
</source>
==== Forests ====
We will smooth the transition between forest (NLCD categories 41 - 43) and other land use:
<source lang="bash">
r.mapcalc "forest = if(landuse_2011 >= 41 && landuse_2011 <= 43, 1, 0)"
r.neighbors -c input=forest output=forest_smooth size=15 method=average
r.colors map=forest_smooth color=ndvi
</source>

==== Travel time to cities ====
Here we will compute travel time to cities (population > 20000) as cumulative cost distance, where cost is defined as travel time on roads.
First we specify the speed on different types of roads.
We copy the roads vector into our mapset so that we can change it
by adding a new attribute field <tt>speed</tt>. Then we assign speed values (km/h) based on the type of road:

<source lang="bash">
g.copy vector=roads,myroads
v.db.addcolumn map=myroads columns="speed double precision"
v.db.update map=myroads column=speed value=50 where="MTFCC = 'S1400'"
v.db.update map=myroads column=speed value=100 where="MTFCC IN ('S1100', 'S1200')"
</source>


Now we rasterize the selected road types using the speed values from the attribute table as raster values.

<source lang="bash" style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;">
v.to.rast input=myroads type=line where="MTFCC IN ('S1100', 'S1200', 'S1400')" output=roads_speed use=attr attribute_column=speed
</source>


We set the rest of the area to a low speed and recompute the speed as the time to travel through a 30 m cell in minutes:
<source lang="bash">
r.null map=roads_speed null=5
r.mapcalc "roads_travel_time = 1.8 / roads_speed"
</source>

Finally we compute the travel time to larger cities using {{cmd|r.cost}}:
<source lang="bash">
r.cost input=roads_travel_time output=travel_time_cities start_points=cities_20000
r.colors map=travel_time_cities color=byr
</source>
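The cost value per cell is simply the time to cross it: a 30 m cell at v km/h takes 0.03/v hours, i.e. 1.8/v minutes. A quick Python check of this arithmetic (illustrative only):

```python
# Minutes needed to cross one cell at a given speed:
# (cell_size_m / 1000) km / speed_kmh = hours; * 60 = minutes.
def cell_travel_time_minutes(speed_kmh, cell_size_m=30):
    hours = (cell_size_m / 1000.0) / speed_kmh
    return hours * 60

print(cell_travel_time_minutes(100))  # primary roads/highways
print(cell_travel_time_minutes(50))   # other roads
print(cell_travel_time_minutes(5))    # assumed low off-road speed
```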


==== Road density ====
We will rasterize the roads and use moving window analysis ({{cmd|r.neighbors}}) to compute road density:

<source lang="bash">
v.to.rast input=roads output=roads use=val
r.null map=roads null=0
r.neighbors -c input=roads output=road_dens size=25 method=average
</source>

==== Distance to interchanges ====
We will consider TIGER roads of type Ramp as interchanges, rasterize them and compute the euclidean distance to them:


<source lang="bash">
v.to.rast input=roads type=line where="MTFCC = 'S1630'" output=interchanges use=val
r.grow.distance -m input=interchanges distance=dist_interchanges
</source>


=== Development pressure ===

We compute development pressure with {{AddonCmd|r.futures.devpressure}}. Development pressure is a predictor based on the number of neighboring developed cells within a search distance, weighted by distance. The development pressure variable plays a special role in the model, allowing for a feedback between predicted change and change in subsequent steps.

<source lang="bash" style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;">
r.futures.devpressure -n input=urban_2011 output=devpressure_1 method=gravity size=10 gamma=1 scaling_factor=1
r.futures.devpressure -n input=urban_2011 output=devpressure_0_5 method=gravity size=30 gamma=0.5 scaling_factor=0.1
</source>


When gamma increases, development influence decreases more rapidly with distance;
when gamma is low, local development influences more distant places.
Size is half the size of the moving window.
To explore the effects of these parameters, you can view the development pressure as a surface with the urban_2011 raster draped over it.
We will derive 2 layers with different gamma and size parameters for the potential statistical model.
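To build intuition for the gravity approach, here is a simplified, hypothetical re-implementation for a single cell (the module itself handles scaling, NULLs and performance differently):

```python
import math

# Simplified gravity-type development pressure for one cell:
# sum over neighboring cells of state / distance^gamma, where state
# is 1 for developed and 0 for undeveloped cells within the window.
def dev_pressure(center, grid, gamma, size):
    rows, cols = len(grid), len(grid[0])
    ci, cj = center
    pressure = 0.0
    for i in range(max(0, ci - size), min(rows, ci + size + 1)):
        for j in range(max(0, cj - size), min(cols, cj + size + 1)):
            if (i, j) == (ci, cj):
                continue
            d = math.hypot(i - ci, j - cj)
            pressure += grid[i][j] / d ** gamma
    return pressure

grid = [[0, 1, 0],
        [0, 0, 1],
        [0, 0, 0]]
# a higher gamma discounts the distant developed cells more strongly
print(dev_pressure((2, 0), grid, gamma=0.5, size=2))
print(dev_pressure((2, 0), grid, gamma=2.0, size=2))
```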


=== Rescaling variables ===

First we will look at the ranges of our predictor variables by running a short Python code snippet in the Python tab of the GUI:

<source lang="python" style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;">
for name in ['slope', 'dist_to_water', 'dist_to_protected', 'forest_smooth', 'travel_time_cities', 'road_dens', 'dist_interchanges', 'devpressure_0_5', 'devpressure_1']:
    minmax = grass.raster_info(name)
    print(name, minmax['min'], minmax['max'])
</source>

We then rescale the variables, converting the distances to kilometers and the density and smoothed rasters to percentages, creating rasters <tt>dist_to_water_km</tt>, <tt>dist_to_protected_km</tt>, <tt>dist_interchanges_km</tt>, <tt>road_dens_perc</tt> and <tt>forest_smooth_perc</tt> used in the following steps.
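The rescaling itself is plain unit arithmetic, sketched below for single values (in GRASS this would be done with r.mapcalc over whole rasters; the names and factors are assumptions based on the raster names used later):

```python
# Assumed conversions behind the *_km and *_perc rasters used below:
def meters_to_km(value_m):
    return value_m / 1000.0       # e.g. dist_to_water -> dist_to_water_km

def fraction_to_percent(value):
    return value * 100.0          # e.g. road_dens -> road_dens_perc

print(meters_to_km(2500))
print(fraction_to_percent(0.37))
```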


=== Sampling ===
To sample only in the analyzed counties, we will clip the development layer:
<source lang="bash">
r.mapcalc "urban_2011_clip = if(counties, urban_2011)"
</source>
To estimate the number of sampling points, we can use {{cmd|r.report}} to report the number of developed/undeveloped cells and their ratio.
<source lang="bash">
r.report map=urban_2011 units=h,c,p
</source>
We will sample the predictors and the response variable with 10000 random points in undeveloped areas and 2000 points in developed areas:
<source lang="bash" style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;">
r.sample.category input=urban_2011_clip output=sampling sampled=counties,devpressure_0_5,devpressure_1,slope,road_dens_perc,forest_smooth_perc,dist_to_water_km,dist_to_protected_km,dist_interchanges_km,travel_time_cities npoints=10000,2000
</source>
The attribute table can be exported as a CSV file (optional step):
<source lang="bash" style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;">
v.db.select map=sampling columns=urban_2011_clip,counties,devpressure_0_5,devpressure_1,slope,road_dens_perc,forest_smooth_perc,dist_to_water_km,dist_to_protected_km,dist_interchanges_km,travel_time_cities separator=comma file=samples.csv
</source>
=== Development potential ===
Now we find the best model for predicting urbanization using {{AddonCmd|r.futures.potential}}, which wraps an R script.
We can experiment with different combinations of predictors, for example:
<source lang="bash" style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;">
r.futures.potential input=sampling output=potential.csv columns=devpressure_0_5,slope,road_dens_perc,forest_smooth_perc,dist_to_water_km,travel_time_cities developed_column=urban_2011_clip subregions_column=counties
r.futures.potential input=sampling output=potential.csv columns=devpressure_1,road_dens_perc,dist_to_water_km,dist_to_protected_km,travel_time_cities developed_column=urban_2011_clip subregions_column=counties
</source>
Or we can run the R [http://www.inside-r.org/packages/cran/MuMIn/docs/dredge dredge] function to find the "best" model. We can specify the minimum and maximum number of predictors the final model should use.
<source lang="bash" style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;">
r.futures.potential -d input=sampling output=potential.csv columns=devpressure_1,slope,road_dens_perc,forest_smooth_perc,dist_to_water_km,dist_to_protected_km,dist_interchanges_km,travel_time_cities developed_column=urban_2011_clip subregions_column=counties min_variables=4 max_variables=7
</source>
You can then open the output file <tt>potential.csv</tt>, which is a CSV file with tabs as separators.
For this tutorial, the final potential file is created with:
<source lang="bash" style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;">
r.futures.potential  input=sampling output=potential.csv columns=devpressure_1,slope,forest_smooth_perc,dist_interchanges_km,travel_time_cities developed_column=urban_2011_clip subregions_column=counties --o
</source>
== Demand submodel ==
First we will mask out roads so that they do not influence the per capita land demand relation.
<source lang="bash">
v.to.rast input=roads type=line output=roads_mask use=val
r.mask raster=roads_mask -i
</source>
We will use {{AddonCmd|r.futures.demand}}, which derives the relation between population and development. The relation can be linear, logarithmic, logarithmic2, exponential or exponential approach (exp_approach).
Look for examples of the different relations in the manual:
* linear: y = A + Bx
* logarithmic: y = A + B ln(x)
* logarithmic2: y = A + B ln(x - C) (requires SciPy)
* exponential: y = Ae^(Bx)
* exp_approach: y = (1 - e^(-A(x - B))) + C (requires SciPy)
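For intuition, fitting e.g. the linear relation y = A + Bx is ordinary least squares; a minimal self-contained sketch with made-up population numbers (the module fits and compares the relations above for you):

```python
# Least-squares fit of y = A + B*x (population -> developed area),
# with made-up example numbers, not real census data.
def fit_linear(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / float(n)
    mean_y = sum(ys) / float(n)
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

population = [100000, 120000, 150000]  # hypothetical county population
developed = [2000, 2400, 3000]         # hypothetical developed cells
a, b = fit_linear(population, developed)
# extrapolate developed area for a projected population
print(a + b * 180000)
```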
The format of the input population CSV files is described in the manual. It is important that the categories of the subregions raster and the column headers of the CSV files are synchronized (in our case the FIPS numbers). A simple way to generate the list of years (for which demand is computed) is described in the {{AddonCmd|r.futures.demand}} manual; for example, run this in the Python console:
<source lang="python">
','.join([str(i) for i in range(2011, 2036)])
</source>
Then we can create the DEMAND file:
<source lang="bash" style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;">
r.futures.demand development=urban_1992,urban_2001,urban_2011 subregions=counties observed_population=population_trend.csv projected_population=population_projection.csv simulation_times=2011,2012,2013,2014,2015,2016,2017,2018,2019,2020,2021,2022,2023,2024,2025,2026,2027,2028,2029,2030,2031,2032,2033,2034,2035 plot=plot_demand.pdf demand=demand.csv separator=comma
</source>
In your current working directory, you should find the files <tt>plot_demand.pdf</tt> and <tt>demand.csv</tt>.
If you have SciPy installed, you can experiment with other methods for fitting the functions:
<source lang="bash">
r.futures.demand ... method=logarithmic2
r.futures.demand ... method=exp_approach
</source>
If necessary, you can create a set of demand files by fitting each method separately and
then, for each county, pick the method which seems best and manually assemble a new demand file.
When you are finished, remove the mask as it is not needed for the next steps.
<source lang="bash">
r.mask -r
</source>
== Patch calibration ==
Patch calibration can be a very time consuming computation.
First we derive patches of new development by comparing the historical and the latest development.
We can run this on the entire area and keep only patches with a minimum size of 2 cells (1800 m², i.e. 2 cells of 30 m × 30 m).
<source lang="bash">
r.futures.calib development_start=urban_1992 development_end=urban_2011 subregions=counties patch_sizes=patches.txt patch_threshold=1800 -l
</source>
We obtained a file <tt>patches.txt</tt> (used later in the PGA) - a patch size distribution file containing the sizes of all found patches.
We can look at the distribution of the patch sizes:
<source lang="python">
from matplotlib import pyplot as plt
with open('patches.txt') as f:
    patches = [int(patch) for patch in f.readlines()]
plt.hist(patches, 2000)
plt.xlim(0,50)
plt.show()
</source>
At this point, we start the calibration to get the best parameters of patch shape
(for this tutorial, this step can be skipped and the suggested parameters used).
First we select only one county, to shorten the computation, and set the region to it:
<source lang="bash">
v.to.rast input=counties type=area where="FIPS == 37183" use=attr attribute_column=FIPS output=calib_county
g.region raster=calib_county zoom=calib_county
</source>
<source lang="bash"
style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;">
r.futures.calib development_start=urban_1992 development_end=urban_2011 subregions=calib_county patch_sizes=patches.txt calibration_results=calib.csv patch_threshold=1800 repeat=5 compactness_mean=0.1,0.3,0.5,0.7,0.9 compactness_range=0.1,0.05 discount_factor=0.1,0.3,0.5,0.7,0.9 predictors=slope,forest_smooth_perc,dist_interchanges_km,travel_time_cities demand=demand.csv devpot_params=potential.csv num_neighbors=4 seed_search=2 development_pressure=devpressure_1 development_pressure_approach=gravity n_dev_neighbourhood=10 gamma=1 scaling_factor=1 --o
</source>
== FUTURES simulation ==
We will switch back to our previous region:
<source lang="bash">
g.region raster=landuse_2011 -p
</source>
Now we have all the inputs necessary for running {{AddonCmd|r.futures.pga}}.
The entire command is here:
<source lang="bash" style="white-space: pre-wrap;
white-space: -moz-pre-wrap;
white-space: -pre-wrap;
white-space: -o-pre-wrap;
word-wrap: break-word;">
r.futures.pga subregions=counties developed=urban_2011 predictors=slope,forest_smooth_perc,dist_interchanges_km,travel_time_cities devpot_params=potential.csv development_pressure=devpressure_1 n_dev_neighbourhood=10 development_pressure_approach=gravity gamma=1 scaling_factor=1 demand=demand.csv discount_factor=0.3 compactness_mean=0.2 compactness_range=0.1 patch_sizes=patches.txt num_neighbors=4 seed_search=2 random_seed=1 output=final output_series=final
</source>
=== Quick description ===
The command parameters and their values:
* raster map of counties with their categories
<code>
... subregions=counties ...
</code>
* raster of developed (1), undeveloped (0) and NULLs for undevelopable areas
<code>
... developed=urban_2011 ...
</code>
* predictors selected with {{AddonCmd|r.futures.potential}} and their coefficients in a <tt>potential.csv</tt> file. The order of predictors must match the order in the file and the categories of the counties must match raster <tt>counties</tt>.
<code>
... predictors=slope,forest_smooth_perc,dist_interchanges_km,travel_time_cities devpot_params=potential.csv ...
</code>
* initial development pressure computed by {{AddonCmd|r.futures.devpressure}}; it is important to use the same parameter values here that were used to compute it
<code>
... development_pressure=devpressure_1 n_dev_neighbourhood=10 development_pressure_approach=gravity gamma=1 scaling_factor=1 ...
</code>
* per capita land demand computed by {{AddonCmd|r.futures.demand}}, the categories of the counties must match raster <tt>counties</tt>.
<code>
... demand=demand.csv ...
</code>
* patch parameters from the calibration step
<code>
... discount_factor=0.3 compactness_mean=0.2 compactness_range=0.1 patch_sizes=patches.txt ...
</code>
* recommended parameters for patch growing algorithm
<code>
... num_neighbors=4 seed_search=2 ...
</code>
* set a random seed for repeatable results, or set flag <tt>-s</tt> to generate the seed automatically
<code>
... random_seed=1 ...
</code>
* specify final output map and optionally basename for intermediate raster maps
<code>
output=final output_series=final
</code>
=== Scenarios ===
Scenarios involving policies that encourage infill versus sprawl can be explored using the <tt>incentive_power</tt> parameter,
which uses a power function to transform the evenness of the probability gradient in POTENTIAL.
The default value is 1. You can change the power to a number between 0.25 and 4 to test urban sprawl/infill scenarios.
Higher power leads to infill behavior, lower power to urban sprawl.
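The transformation can be sketched as a simple power function applied to the POTENTIAL probability (an illustrative simplification, not the module's code):

```python
# Power transform of development probabilities (values in 0..1):
# a high power suppresses low probabilities relative to high ones,
# concentrating growth where probability is already high (infill);
# a low power flattens the gradient, spreading growth out (sprawl).
def incentive(probability, power):
    return probability ** power

low, high = 0.2, 0.8
for power in (0.25, 1, 4):
    # ratio of a low-probability cell to a high-probability cell
    print(power, incentive(low, power) / incentive(high, power))
```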
= Postprocessing =
Follow a [[Creating_animation_from_FUTURES_output_in_GRASS_GIS|tutorial on how to make an animation from the results]].
= See also =
* [[From GRASS GIS novice to power user (workshop at FOSS4G Boston 2017)]]
* [[Unleash the power of GRASS GIS at US-IALE 2017]]
* [[Workshop on urban growth modeling with FUTURES]]
* [[R statistics|GRASS GIS and R]]
* [[GRASS Location Wizard]]
[[Category: Tutorial]]
[[Category: FUTURES]]
[[Category: Urban]]

Latest revision as of 08:38, 6 September 2023

r.futures.* is an implementation of FUTure Urban-Regional Environment Simulation (FUTURES) which is a model for multilevel simulations of emerging urban-rural landscape structure. FUTURES produces regional projections of landscape patterns using coupled submodels that integrate nonstationary drivers of land change: per capita demand (DEMAND submodel), site suitability (POTENTIAL submodel), and the spatial structure of conversion events (PGA submodel).

This tutorial shows how to prepare data and run the model. You can download sample dataset and simulate urban growth in The Triangle, region in the Piedmont of North Carolina in the United States with rapidly growing cities Raleigh, Durham and Chapel Hill.

Prerequisites

This tutorial requires basic knowledge of GRASS GIS. If you haven't used GRASS GIS before, you may want to first go through some of the introductory tutorials or alternative workshops linked from the See also section.

Software

Required software includes:

Input data used in this tutorial

Download sample dataset containing:

  • digital elevation model (NED)
  • NLCD 2001, 2011
  • NLCD 1992/2001 Retrofit Land Cover Change Product
  • transportation network (TIGER)
  • county boundaries (TIGER)
  • protected areas (Secured Lands)
  • cities above 20000 (USGS)
  • county population past estimates and future projections (NC OSBM) per county (nonspatial)

Workflow

Download sample data and unzip it. Create grassdata directory on your disk (or use the one you already have) and move there Location futures_ncspm. Launch GRASS GIS and select your grassdata directory, Location futures_ncspm and Mapset practice1.

Install addons:

g.extension r.futures
g.extension r.sample.category

Close GUI and restart it:

g.gui

To install the required R packages, go to the command line terminal which was started with GRASS GIS and R:

R

This starts the R command line where you need to execute the following code (and go through any dialogs about installation if R shows them):

install.packages(c("MuMIn", "lme4", "optparse", "rgrass7"))

Initial steps and data preparation

First we will set computational region of our analyses to an extent covering our study area and so that the cells are aligned with one of our landuse rasters:

g.region raster=landuse_2011 -p

We will derive urbanized areas from NLCD dataset for year 1992, 2001 and 2011 by extracting categories category 21 - 24 into a new binary map where developed is 1, undeveloped 0 and NULL (no data) is area unsuitable for development (water, wetlands, protected areas). First we will convert protected areas from vector to raster and set NULLs to zeros (for simpler raster algebra expression in the next step):

v.to.rast input=protected_areas output=protected_areas use=val
r.null map=protected_areas null=0

And then create rasters of developed/undeveloped areas using raster algebra:

r.mapcalc "urban_1992 = if(landuse_1992 >= 21 && landuse_1992 <= 24, 1, if(landuse_1992 == 11 || landuse_1992 >= 90 || protected_areas, null(), 0))"
r.mapcalc "urban_2001 = if(landuse_2001 >= 21 && landuse_2001 <= 24, 1, if(landuse_2001 == 11 || landuse_2001 >= 90 || protected_areas, null(), 0))"
r.mapcalc "urban_2011 = if(landuse_2011 >= 21 && landuse_2011 <= 24, 1, if(landuse_2011 == 11 || landuse_2011 >= 90 || protected_areas, null(), 0))"

We will convert vector counties to raster with the values of the FIPS attribute which links to population file:

v.to.rast input=counties type=area use=attr attribute_column=FIPS output=counties

If we need to run FUTURES only for smaller number of counties, we can extract them based on the identifier:

v.to.rast input=counties type=area where="FIPS in (37183, 37101)" use=attr attribute_column=FIPS output=counties_subset


Before further steps, we will set our working directory so that the input population files and text files we are going to create are saved in one directory and easily accessible. You can do that from menu SettingsGRASS working environmentChange working directory. Select (or create) a directory and move there the downloaded files population_projection.csv and population_trend.csv.

Potential submodel

Module r.futures.potential implements POTENTIAL submodel as a part of FUTURES land change model. POTENTIAL is implemented using a set of coefficients that relate a selection of site suitability factors to the probability of a place becoming developed. This is implemented using the parameter table in combination with maps of those site suitability factors (mapped predictors). The coefficients are obtained by conducting multilevel logistic regression in R with package lme4 where the coefficients may vary by county. The best model is selected automatically using dredge function from package MuMIn.

First, we will derive couple of predictors.

Predictors

Slope

We will derive slope in degrees from digital elevation model using r.slope.aspect module:

r.slope.aspect elevation=elevation_30m slope=slope

Distance from lakes/rivers

First we will extract category Open Water from 2011 NLCD dataset:

r.mapcalc "water = if(landuse_2011 == 11, 1, null())"

Then we compute the distance to water with module r.grow.distance and set color table to shades of blue:

r.grow.distance input=water distance=dist_to_water
r.colors -n map=dist_to_water color=blues

Distance from protected areas

We will use raster protected of protected areas we already created, but we will set NULL values to zero. We compute the distance to protected areas with module r.grow.distance and set color table from green to red:

r.null map=protected_areas setnull=0
r.grow.distance input=protected_areas distance=dist_to_protected
r.colors map=dist_to_protected color=gyr

Forests

We will smooth the transition between forest and other land use, see NLCD category Open Water:

r.mapcalc "forest = if(landuse_2011 >= 41 && landuse_2011 <= 43, 1, 0)"
r.neighbors -c input=forest output=forest_smooth size=15 method=average
r.colors map=forest_smooth color=ndvi

Travel time to cities

Here we will compute travel time to cities (> population 20000) as cumulative cost distance where cost is defined as travel time on roads. First we specify the speed on different types of roads. We copy the roads raster into our mapset so that we can change it by adding a new attribute field speed. Then we assign speed values (km/h) based on the type of road:

g.copy vector=roads,myroads
v.db.addcolumn map=myroads columns="speed double precision"
v.db.update map=myroads column=speed value=50 where="MTFCC = 'S1400'"
v.db.update map=myroads column=speed value=100 where="MTFCC IN ('S1100', 'S1200')"

Now we rasterize the selected road types using the speed values from the attribute table as raster values.

v.to.rast input=myroads type=line where="MTFCC IN ('S1100', 'S1200', 'S1400')" output=roads_speed use=attr attribute_column=speed

We set the rest of the area to a low speed and convert the speed to the time (in minutes) needed to travel through a 30 m cell:

r.null map=roads_speed null=5
r.mapcalc "roads_travel_time = 1.8 / roads_speed"
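The constant 1.8 follows from the units: crossing a 30 m cell at v km/h takes (0.03 / v) hours, i.e. 1.8 / v minutes. A quick check:

```python
# Time in minutes to cross one cell at a given speed in km/h.
def cell_travel_time(speed_kmh, cell_size_m=30):
    hours = (cell_size_m / 1000.0) / speed_kmh  # distance / speed
    return hours * 60.0                          # hours -> minutes

print(cell_travel_time(50))   # matches 1.8 / 50
print(cell_travel_time(100))  # matches 1.8 / 100
```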

Finally we compute the travel time to larger cities using r.cost:

r.cost input=roads_travel_time output=travel_time_cities start_points=cities_20000
r.colors map=travel_time_cities color=byr

Road density

We will rasterize roads and use moving window analysis (r.neighbors) to compute road density:

v.to.rast input=roads output=roads use=val
r.null map=roads null=0
r.neighbors -c input=roads output=road_dens size=25 method=average

Distance to interchanges

We will consider TIGER roads of type Ramp as interchanges, rasterize them and compute the Euclidean distance to them:

v.to.rast input=roads type=line where="MTFCC = 'S1630'" output=interchanges use=val
r.grow.distance -m input=interchanges distance=dist_interchanges

Development pressure

We compute development pressure with r.futures.devpressure. Development pressure is a predictor based on number of neighboring developed cells within search distance, weighted by distance. The development pressure variable plays a special role in the model, allowing for a feedback between predicted change and change in subsequent steps.

r.futures.devpressure -n input=urban_2011 output=devpressure_1 method=gravity size=10 gamma=1 scaling_factor=1
r.futures.devpressure -n input=urban_2011 output=devpressure_0_5 method=gravity size=30 gamma=0.5 scaling_factor=0.1

When gamma increases, the influence of development decreases more rapidly with distance; when gamma is low, local development influences even distant places. The size parameter is half the size of the moving window. We will derive two layers with different gamma and size parameters for the POTENTIAL statistical model.
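The effect of gamma can be sketched as follows; this is only an illustration of gravity-type distance weighting (contribution proportional to 1 / distance^gamma), not the module's implementation, and the neighbor counts are invented:

```python
# Sum gravity-weighted contributions of developed cells found at
# various distances (in cells) from the center of the window.
def gravity_pressure(counts_by_distance, gamma, scaling_factor=1.0):
    return sum(n * scaling_factor / d ** gamma
               for d, n in counts_by_distance.items())

neighbors = {1: 4, 2: 2, 5: 6}  # made-up counts of developed cells
print(gravity_pressure(neighbors, gamma=0.5))  # distant cells still contribute
print(gravity_pressure(neighbors, gamma=1.0))  # contributions decay faster
```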

Rescaling variables

First we will look at the ranges of our predictor variables by running a short Python snippet in the Python tab of the GUI:

import grass.script as gs

for name in ['slope', 'dist_to_water', 'dist_to_protected', 'forest_smooth', 'travel_time_cities', 'road_dens', 'dist_interchanges', 'devpressure_0_5', 'devpressure_1']:
    info = gs.raster_info(name)
    print(name, info['min'], info['max'])

We will rescale some of our input variables:

r.mapcalc "dist_to_water_km = dist_to_water / 1000"
r.mapcalc "dist_to_protected_km = dist_to_protected / 1000"
r.mapcalc "dist_interchanges_km = dist_interchanges / 1000"
r.mapcalc "road_dens_perc = road_dens * 100"
r.mapcalc "forest_smooth_perc = forest_smooth * 100"

Sampling

To sample only in the analyzed counties, we will clip development layer:

r.mapcalc "urban_2011_clip = if(counties, urban_2011)"

To estimate a suitable number of sampling points, we can use r.report to report the number of developed and undeveloped cells and their ratio.

r.report map=urban_2011 units=h,c,p

We will sample the predictors and the response variable with 10000 random points in undeveloped areas and 2000 points in developed area:

r.sample.category input=urban_2011_clip output=sampling sampled=counties,devpressure_0_5,devpressure_1,slope,road_dens_perc,forest_smooth_perc,dist_to_water_km,dist_to_protected_km,dist_interchanges_km,travel_time_cities npoints=10000,2000

The attribute table can be exported as a CSV file (this step is optional):

v.db.select map=sampling columns=urban_2011_clip,counties,devpressure_0_5,devpressure_1,slope,road_dens_perc,forest_smooth_perc,dist_to_water_km,dist_to_protected_km,dist_interchanges_km,travel_time_cities separator=comma file=samples.csv

Development potential

Now we find the best model for predicting urbanization using r.futures.potential, which wraps an R script.

We can play with different combinations of predictors, for example:

r.futures.potential input=sampling output=potential.csv columns=devpressure_0_5,slope,road_dens_perc,forest_smooth_perc,dist_to_water_km,travel_time_cities developed_column=urban_2011_clip subregions_column=counties
r.futures.potential input=sampling output=potential.csv columns=devpressure_1,road_dens_perc,dist_to_water_km,dist_to_protected_km,travel_time_cities developed_column=urban_2011_clip subregions_column=counties

Or we can run the R dredge function to find the "best" model. We can specify the minimum and maximum number of predictors the final model should use.

r.futures.potential -d input=sampling output=potential.csv columns=devpressure_1,slope,road_dens_perc,forest_smooth_perc,dist_to_water_km,dist_to_protected_km,dist_interchanges_km,travel_time_cities developed_column=urban_2011_clip subregions_column=counties min_variables=4 max_variables=7

You can then open the output file potential.csv; despite the name, its columns are separated by tabs.

For this tutorial, the final potential is created with:

r.futures.potential  input=sampling output=potential.csv columns=devpressure_1,slope,forest_smooth_perc,dist_interchanges_km,travel_time_cities developed_column=urban_2011_clip subregions_column=counties --o

Demand submodel

First we will mask out roads so that they don't influence the per capita land demand relation.

v.to.rast input=roads type=line output=roads_mask use=val
r.mask raster=roads_mask -i

We will use r.futures.demand, which derives the relation between population and development. The relation can be linear, logarithmic, logarithmic2, exponential, or exponential approach. Look for examples of the different relations in the manual.

  • linear: y = A + Bx
  • logarithmic: y = A + B ln(x)
  • logarithmic2: y = A + B ln(x - C)        (requires SciPy)
  • exponential: y = Ae^(Bx)
  • exp_approach: y = (1 - e^(-A(x - B))) + C        (requires SciPy)
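For instance, the logarithmic relation is linear in ln(x), so it can be fitted with ordinary least squares; this standalone sketch uses invented population and developed-area numbers, not the tutorial data:

```python
import math

# made-up observed population (x) and developed area in hectares (y)
population = [80000, 95000, 120000]
developed = [2100.0, 2250.0, 2450.0]

# fit y = A + B*ln(x) by least squares on ln(x)
lx = [math.log(x) for x in population]
n = len(lx)
mean_lx = sum(lx) / n
mean_y = sum(developed) / n
B = sum((a - mean_lx) * (b - mean_y) for a, b in zip(lx, developed)) \
    / sum((a - mean_lx) ** 2 for a in lx)
A = mean_y - B * mean_lx

# project demand for a hypothetical future population
print(A + B * math.log(140000))
```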


The format of the input population CSV files is described in the manual. It is important that the categories of the subregions match the column headers of the CSV files (in our case the FIPS numbers). A simple way to generate the list of years for which demand is computed is described in the r.futures.demand manual; for example, run this in the Python console:

','.join([str(i) for i in range(2011, 2036)])

Then we can create the DEMAND file:

r.futures.demand development=urban_1992,urban_2001,urban_2011 subregions=counties observed_population=population_trend.csv projected_population=population_projection.csv simulation_times=2011,2012,2013,2014,2015,2016,2017,2018,2019,2020,2021,2022,2023,2024,2025,2026,2027,2028,2029,2030,2031,2032,2033,2034,2035 plot=plot_demand.pdf demand=demand.csv separator=comma

In your current working directory, you should find the files plot_demand.pdf and demand.csv.

If you have SciPy installed, you can experiment with other methods for fitting the functions:

r.futures.demand ... method=logarithmic2
r.futures.demand ... method=exp_approach

If necessary, you can create a set of demand files by fitting each method separately, then pick the method which fits best for each county and manually assemble a new demand file.

When you are finished, remove the mask, as it is not needed for the next steps.

r.mask -r

Patch calibration

Patch calibration can be a very time consuming computation. First we derive patches of new development by comparing the historical and the latest development. We can run this on the entire area and keep only patches with a minimum size of 2 cells (patch_threshold=1800, since 2 × 30 m × 30 m = 1800 m²).
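The threshold arithmetic, for reference:

```python
# patch_threshold is in square meters: two 30 m cells
cell_area = 30 * 30        # m^2 per cell
print(2 * cell_area)       # 1800, the patch_threshold used below
```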

r.futures.calib development_start=urban_1992 development_end=urban_2011 subregions=counties patch_sizes=patches.txt patch_threshold=1800  -l

We obtained the file patches.txt (used later in the PGA submodel), a patch size distribution file containing the sizes of all identified patches.

We can look at the distribution of the patch sizes:

# plot the distribution of patch sizes, zoomed to the smallest patches
from matplotlib import pyplot as plt

with open('patches.txt') as f:
    patches = [int(patch) for patch in f.readlines()]
plt.hist(patches, bins=2000)
plt.xlim(0, 50)
plt.show()

At this point, we start the calibration to find the best patch shape parameters (for this tutorial, this step can be skipped and the suggested parameters used). First we select only one county to speed up the computation:

v.to.rast input=counties type=area where="FIPS == 37183" use=attr attribute_column=FIPS output=calib_county
g.region raster=calib_county zoom=calib_county
r.futures.calib development_start=urban_1992 development_end=urban_2011 subregions=calib_county patch_sizes=patches.txt calibration_results=calib.csv patch_threshold=1800 repeat=5 compactness_mean=0.1,0.3,0.5,0.7,0.9 compactness_range=0.1,0.05 discount_factor=0.1,0.3,0.5,0.7,0.9 predictors=slope,forest_smooth_perc,dist_interchanges_km,travel_time_cities demand=demand.csv devpot_params=potential.csv num_neighbors=4 seed_search=2 development_pressure=devpressure_1 development_pressure_approach=gravity n_dev_neighbourhood=10 gamma=1 scaling_factor=1 --o

FUTURES simulation

We will switch back to our previous region:

g.region raster=landuse_2011 -p

Now we have all the inputs necessary for running r.futures.pga. The entire command is here:

r.futures.pga subregions=counties developed=urban_2011 predictors=slope,forest_smooth_perc,dist_interchanges_km,travel_time_cities devpot_params=potential.csv development_pressure=devpressure_1 n_dev_neighbourhood=10 development_pressure_approach=gravity gamma=1 scaling_factor=1 demand=demand.csv discount_factor=0.3 compactness_mean=0.2 compactness_range=0.1 patch_sizes=patches.txt num_neighbors=4 seed_search=2 random_seed=1 output=final output_series=final

Quick description

The command parameters and their values:

  • raster map of counties with their categories

... subregions=counties ...

  • raster of developed (1), undeveloped (0) and NULLs for undevelopable areas

... developed=urban_2011 ...

  • predictors selected with r.futures.potential and their coefficients stored in the potential.csv file. The order of the predictors must match the order in the file, and the county categories in the file must match the categories of raster counties.

... predictors=slope,forest_smooth_perc,dist_interchanges_km,travel_time_cities devpot_params=potential.csv ...

  • initial development pressure computed by r.futures.devpressure; it is important to use here the same parameter values that were used to compute it

... development_pressure=devpressure_1 n_dev_neighbourhood=10 development_pressure_approach=gravity gamma=1 scaling_factor=1 ...

  • per capita land demand computed by r.futures.demand; the county categories in the demand file must match the categories of raster counties

... demand=demand.csv ...

  • patch parameters from the calibration step

... discount_factor=0.3 compactness_mean=0.2 compactness_range=0.1 patch_sizes=patches.txt ...

  • recommended parameters for patch growing algorithm

... num_neighbors=4 seed_search=2 ...

  • set a random seed for repeatable results, or set the -s flag to generate the seed automatically

... random_seed=1 ...

  • specify final output map and optionally basename for intermediate raster maps

output=final output_series=final

Scenarios

Scenarios involving policies that encourage infill versus sprawl can be explored using the incentive_power parameter, which uses a power function to transform the evenness of the probability gradient in POTENTIAL.

The default value is 1. You can change the power to a number between 0.25 and 4 to test urban infill and sprawl scenarios: a higher power leads to infill behavior, a lower power to urban sprawl.
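The transform itself is simple to sketch (a plain power function, per the description above); the probabilities below are invented:

```python
# Raise the development probability to incentive_power.
def incentive(p, power=1.0):
    return p ** power

for p in (0.1, 0.5, 0.9):
    # power > 1 suppresses low probabilities relative to high ones (infill);
    # power < 1 lifts them (sprawl)
    print(p, incentive(p, 4), incentive(p, 0.25))
```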

Postprocessing

Follow a tutorial on how to make an animation from the results.

See also