GRASS GSoC 2012 Image Segmentation

(See also other GRASS GSoC 2012 projects)

Student Name: Eric Momsen
Organization: OSGeo - Open Source Geospatial Foundation
Mentor Name: Markus Metz (backup mentors: M Lennert, P Roudier)
Title: Image Segmentation
Repository: GRASS 7, browse at: i.segment (https://trac.osgeo.org/grass/browser/grass/trunk/imagery/i.segment)

Abstract

GRASS GIS has many imagery-related processing capabilities, but the field is rapidly developing and many techniques are not yet implemented. The goal of this GSoC project is to implement a region-growing image segmentation algorithm.

Input: Raster map(s) to be segmented (plus optional vector map for a constraint)

Output: To include segmented regions with statistics. This information can be directly used or taken as input to existing image classification modules.

Update: This process will be split into two modules: the first will output a raster map with segments; the second will compute statistics for the segments.

Background

Image classification techniques already implemented in GRASS GIS include supervised and unsupervised classification. Classification of images based on pixels can often be very noisy. By first segmenting the image, later classification of 'objects' can be more effective. Noise is reduced, classification speed is increased, and most importantly the classification is performed on objects instead of pixels. The i.smap module does include a segmentation step (based on a Gaussian mixture distribution), but there is no module intended to segment the image and provide segment data for general use. A summary of the existing methods implemented in GRASS is at Image classification. Furthermore, the module r.smooth.seg in GRASS-addons internally uses the Mumford-Shah variational model for image segmentation.

Segmentation Methods

  • Boundary Based
    • optimal edge detector +
    • watershed +
  • Region Based
    • multilevel thresholding technique +
    • region growing +
  • Combined boundary/region (is this a correct category for these two?)
    • mean-shift
    • watershed

Carleer et al. [1] reviewed four methods (marked with + above). Boundary-based methods are sensitive to noise and texture, and usually depend on good pre-processing. (Does GRASS already have this pre-processing/filtering?) They report good results in urban zones with high contrast. Both region-based methods had difficulty with transition zones. Region growing was less sensitive to texture (good for high-resolution (1 m) images). Multi-level techniques are the only way to get all objects without over-segmentation.

I don't recall the source, but I read in one place that mean-shift could be difficult to apply to very large images, and elsewhere it was mentioned that watershed sees more use in greyscale images.

As additional algorithms are added to the module, attention should be given to diversity, so that algorithms with different strengths are implemented first.

Region Growing Variations

Even within the region growing label, there are a number of approaches. Here are two described in [5].

1. Growing

Seeds (a subset of the pixels) are selected (using the image histogram, previous knowledge, or other methods). Region growing is done by adding adjacent pixels. No merging of segments is done; only unassigned pixels can be assigned to adjacent regions.

2. Growing and Merging

Use all pixels as seeds; there is no need for the user to figure out a reasonable starting seed selection. Now adjacent segments can be merged.

Is there ever a case where someone may want to start with seeds, but still allow segment merging? Or does that fall into the realm of classification to be done in the next step?

At this point, it seems both variations should be implemented.

Segmentation Considerations

All(?) methods have some input parameter(s) that can be set. These parameters influence whether the algorithm will over-segment (one expected region is divided into 2 or more segments) or under-segment (putting two expected regions into one segment). If the segments are used for later classification, over-segmentation should usually get preference over under-segmentation. With extensive over-segmentation, some of the advantages provided by segmentation can be lost, but at least the classification can combine the segments into the expected region. Under-segmentation is more critical, as the classification step will not divide the segment to recover the different regions. (Based on a summary of a number of papers from [1])

In order to respond to the issue of over/under-segmentation, a multiscalar approach would be interesting. This would mean either a top-down approach, with a first coarse segmentation (under-segmentation) and then finer segmentation in selected segments, or a bottom-up approach, with first a very fine-grained segmentation (over-segmentation) and then the regrouping of segments to form higher levels. The first approach can be solved by doing a first segmentation, using certain segments as masks, and then relaunching a second segmentation, or by using a vector map of the first segmentation as a boundary constraint in the second segmentation. The second approach requires an algorithm to decide which segments should be combined into larger, higher-level segments. A simple nearest-neighbour or k-means approach based on spectral means can be used here. In terms of implementation in GRASS, this would probably call for several modules, one for the segmentation and another for the grouping of segments. The latter could be an all-purpose clustering module (and can also be emulated by simple data analysis in the attribute table + v.dissolve). A sketch of both approaches follows.
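
A minimal sketch of both approaches with standard GRASS tools, assuming the development i.segment syntax used in the Results section below; the group name, segment id 42, and all thresholds are illustrative only:

  # Top-down: coarse pass first, then a finer pass restricted to one coarse segment
  i.segment group=mygroup output=segs_coarse threshold=0.10 endt=10000 min=100
  r.mapcalc "MASK = if(segs_coarse == 42, 1, null())"   # 42 = a made-up segment id
  i.segment group=mygroup output=segs_fine threshold=0.02 endt=10000 min=10
  r.mask -r                                             # remove the mask again

  # Bottom-up: over-segment, vectorize, then regroup segments by attribute
  i.segment group=mygroup output=segs_fine2 threshold=0.01 endt=10000 min=5
  r.to.vect input=segs_fine2 output=segs_fine2 type=area
  # ...cluster on spectral means in the attribute table, writing a 'class' column...
  v.dissolve input=segs_fine2 column=class output=segs_grouped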

It can sometimes be interesting to do a first segmentation on one band (e.g. panchromatic with higher resolution) and then regroup segments based on multispectral data (possibly weighting bands).

Main Goal

Implement an image segmentation method to extend the available options for image processing in GRASS. The region growing method has been selected as a robust general-purpose method. An important contribution of the new method will be to include vector maps (for example road networks) as a constraint in growing the segments. Output from the module will include spectral (mean/variance/range/etc.) and spatial (area/shape/location/etc.) data for each region.

Specifications

  • General considerations
    • The general principle in GRASS is KISS, with each module doing one thing. It remains to be seen whether the result of this project is one single module or rather several modules, each specialised in one task in a segmentation workflow.
    • As soon as code is to be (potentially) used in several modules, the use of a library should be envisaged.
    • Be able to process large images while being considerate of system memory
  • Input
    • in the GRASS logic, input should be an image group, or even an image subgroup, which can contain any number of raster maps, but generally satellite or aerial images that are pre-processed and ready for analysis (i.e. no pre-processing in the module). (Update: subgroups are not often used; their use will not be implemented unless someone asks.)
      • This input group will define the feature space, which can include spectral and other continuous data (elevation, PCA layers, slope, aspect...) and possibly (probably not initially) even discrete data (soil type, land cover...)
      • Default action will be to normalize/scale all input rasters to a 0-1 range. This allows bands (0-255), NDVI, and other values to be compared on an equal basis in the distance formula without any preprocessing steps. Since it gives equal weight to all rasters in the input group, a flag will give the user the option to skip this normalization step in case they want to use the actual values. (A sketch follows after this list.)
    • optionally vector maps of existing features
      • lines (be it linear features or boundary lines of polygons) should be used as constraints meaning that no segment boundary should cross such a line
      • centroids/points to be used as initial seeds
    • What segmentation algorithm to use
    • Parameters for that algorithm
  • Algorithm of segmentation
    • during GSoC, implementation of only one algorithm
    • code should be structured to allow easy implementation of additional algorithms
    • multi-scalar segmentation can significantly improve results and should thus be implemented if possible (see i.smap code for example)
  • Similarity measurement
    • The squared Euclidean distance will be the default similarity measurement. If time allows, Manhattan distance will be added as an option. (Using the square will give the same results; we will also square the similarity threshold so the user doesn't need to worry about this detail. See the worked formula after this list.)
    • For the default scaling of the input, the similarity threshold will be 0 to 1. This should be a good intuitive range for the user: at 0 only identical pixels could merge, while at 1 the entire image could merge into one segment. (Internally, this number must be multiplied by the number of rasters in the image group, but again the user doesn't need to worry about the details.) If the user selects the option to skip the normalization, they will need to be careful how they select this parameter.
  • Output
    • first (segmentation) module: raster map of segments (i.e. each pixel value represents id of segment the pixel belongs to)
    • second (stats) module: one vector map of segments per hierarchy level with a series of attributes (probably not all of these attributes should be calculated directly by the segmentation module)
      • spectral attributes:
        • per spectral band: mean, min, max, skewness
        • combination of bands: brightness, indices (i.e. results of multi-band calculations)
      • textural attributes: stdev (per-band and/or multi-band), mean difference to neighbor, Haralick texture features cf r.texture
      • geometric/morphological attributes: area, perimeter, length/width measures, see also r.li
      • context attributes: mean difference to all other regions in the same upper hierarchical level, relative localisation within upper hierarchical level, absolute localisation, number of objects in lower level
    • depending on segmentation algorithm: raster map indicating for each pixel the probability of belonging to the segment it was put into, i.e. some measure of reliability of results
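
Two sketches for the Input and Similarity measurement items above. First, the default 0-1 normalization, shown with standard GRASS tools for illustration (the module would do this internally; lsat7_2002_10 is a band from the North Carolina sample dataset):

  eval $(r.univar -g map=lsat7_2002_10)        # sets min=..., max=... in the shell
  r.mapcalc "band1_norm = (lsat7_2002_10 - $min) / float($max - $min)"

Second, my reading of the similarity spec as a formula: for segments Ri and Rk with per-band means mu over B normalized bands,

  D(R_i, R_k) = \sum_{b=1}^{B} (\mu_{i,b} - \mu_{k,b})^2 , \qquad \text{merge requires } D < T^2 B

where T is the user-facing 0-1 threshold, squared and multiplied by the number of rasters exactly as described above.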

Questions

Number of modules: Should the user run one module to create the segments (raster output), then, if they are interested, run r.to.vect and a second module (vector input/output) to get the statistics? (GUI glue to put them in one screen would be a low-priority task for time remaining at the end of the summer.) (I wonder if the stats module should take vector or raster as the input; it will also need the original raster.)

"Probability of belonging to the segment": For region growing - should this be the similarity measure when it was merged? Or similarity measure of the pixel compared to the average? /*ML: Not sure, but I would think that similarity between pixel and average of region it belongs to might be a good choice. Am not a specialist in statistics, but maybe it is possible to translate this into some form of probability of really "belonging" to that region (cf i.maxlik)*/ So this would be a comparison of the pixel to the final segment. Does anyone have a standard measurement that should be used?

4 vs. 8 neighbors: Should this be a user input option? It seems 4 neighbors (no diagonals) is the normal definition for segmentation, but not for other GRASS modules. Update: using 4 neighbors as default, with an optional flag to select 8 neighbors. (See the illustration below.)
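
For reference, the two neighbourhoods (standard definitions, illustration only):

  4-neighbour (edges only)     8-neighbour (edges + diagonals)
       . N .                        N N N
       W x E                        W x E
       . S .                        S S S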

Null cells: Is it possible for some pixels inside the image to have null values? If yes, should they just be excluded from the calculation, or merged into the nearest segment? Update: current plan is to ignore all NULL values in the calculations.

Are there any examples of using linear features to constrain segment growth, or will it usually be polygons?

Lower priority

Add shape characteristics (smoothness, compactness) to the similarity measurement. Similar to eCognition "Multiresolution Segmentation"

It may be useful to do some calculations in other color spaces (RGB, HSI, L*u*v*, L*a*b*). (I saw one paper [3] discussing the pros/cons of different systems; the "best" answer is application dependent.)

ML: I would say leave decisions on color space (which is just one portion of feature space) to the user: one can group any kind of raster maps with i.group and submit that to segmentation, so the user can decide whether to use an image represented by different bands in a specific color space, plus any kind of other bands, indices, etc.

Test Images

The results of the implemented algorithm should be compared against the results of a similar algorithm implemented in other software. The North Carolina GRASS sample location will be used for documentation and manuals.
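
A possible test setup on the North Carolina sample location (band names from that dataset; the group name is illustrative):

  g.region raster=lsat7_2002_10 -p
  i.group group=lsat7_2002 input=lsat7_2002_10,lsat7_2002_20,lsat7_2002_30,lsat7_2002_40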

Carleer [1] used images with 1m resolution from Ikonos, panchromatic band from 08 June 2000, Brussels area.

Should check segmentation results on images from a few different resolutions and different numbers of bands against what is obtained in other software.

Is there a benchmark for processing speed that should be considered? [4]

Project Plan

Preparation: Gather ideas from the community! Feature requests, image segmentation literature, and any other ideas and suggestions.

  • May 21: Start coding, 8 weeks until Midterm Evaluation
  • Week 1: Develop pseudocode to outline the work (Report 1: http://lists.osgeo.org/pipermail/soc/2012-May/001747.html)
  • Week 2-4: Implement the main algorithm (Report 2: http://lists.osgeo.org/pipermail/soc/2012-June/001779.html, Report 3: http://lists.osgeo.org/pipermail/soc/2012-June/001804.html, Report 4: http://lists.osgeo.org/pipermail/soc/2012-June/001826.html)
  • Week 5: Add vector maps as a constraint to the segmentation (Report 5: http://lists.osgeo.org/pipermail/soc/2012-June/001854.html)
  • Week 6: Validation (Report 6: http://lists.osgeo.org/pipermail/soc/2012-June/001879.html)
  • Week 7: Debugging (Report 7: http://lists.osgeo.org/pipermail/soc/2012-July/001898.html)
  • Week 8: Contingency time for finishing the above, ensure a solid main program. (Report 8: http://lists.osgeo.org/pipermail/soc/2012-July/001921.html)
  • July 9: Midterm Evaluation: Evaluate the existing program, determine the plan for the remaining 3-4 weeks. Options include:
  • Improving the main algorithm
  • Adding control for what scale the segmentation is performed at
  • Providing updates to i.maxlik to ensure the segmentation output can be used as input for the existing classification functionality
  • GUI
  • Adding a second image segmentation algorithm

Report 9: http://lists.osgeo.org/pipermail/soc/2012-July/001941.html, Report 10: http://lists.osgeo.org/pipermail/soc/2012-July/001960.html, Report 11: http://lists.osgeo.org/pipermail/soc/2012-August/001986.html, Report 12: http://lists.osgeo.org/pipermail/soc/2012-August/002002.html, Report 13: http://lists.osgeo.org/pipermail/soc/2012-August/002041.html

Region Growing Algorithm

Here is a first sketch of the processing steps, based on SPRING [2].

Region growing, bottom-up processing. The main improvement compared to the simple algorithm is to slowly lower the similarity threshold, so only the best matches are made first. This prevents the "first" segment from taking over any unclear areas between it and the next clear segment.

1. Input:

    • Seeds: all pixels (alternate seeding methods could be added later)
    • Similarity threshold T(t): as t increases, the threshold for similarity is lowered. SPRING used T(t) = T(0) * alpha^t, where T(0) > 0, t = 0, 1, 2, ... and alpha < 1
    • Size of smallest allowed area (Is this wanted or needed ???)

2. Loop for t

    • initialize candidate regions, save mean value vector and neighboring regions (Not sure why this needs to be calculated/saved ahead of time ??)

3. For each region Ri in the candidate region set (on the first pass this equals the seeds):

    • Compare Ri with its neighbors. (Question: should neighbors include or exclude those regions that were already matched?)
    • If it exists, Rk is the best neighbor if it has the smallest D of all neighbors and D < T.
    • Check Rk's neighbors.
    • Merge IF Ri is Rk's best neighbor
    • remove from candidate region set. (give all "small" regions a chance to merge with best neighbor before growing larger regions)
    • update segment values
    • next i

4. Next t, with all segments returned to the candidate region set; repeat until no regions can be merged

5. Force a merge of regions that are too small
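
Mapped onto the module call used in the Results section below (my reading of the development parameters: threshold corresponds to T, min to the smallest allowed area, and endt presumably caps the number of passes t; values illustrative):

  i.segment group=lsat7_2002 output=lsat7_segs threshold=0.02 endt=10000 min=20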

ToDo List

The following list was developed at the midpoint review, with about one month left. Rating system: 1 = must do, 2 = would be nice, 3 = probably only will be finished if it is a quick task.

The list has been updated for the status at the end of GSoC 2012. Remaining items to be developed later are added to the TODO manual section. Further progress and new feature requests will not be added to this listing.

Functionality

1: Implement the 8 neighbor option (currently only the 4 pixel neighbors are considered). (done)

1: Starting seed pixels for the segments (done)

1: handle null cells in the optional boundary constraints raster. (done)

2: Current input limit is 2 billion starting segments, constrained by the "int" data type for segment IDs. Consider long int, and/or dynamic allocation of different storage depending on what is needed. (MM: Unfortunately you are stuck with the largest integer type that a GRASS raster supports, which is 32-bit integer. Internally you could use larger integer types, but then you cannot save the results... EM: Hmm, if the segments are renumbered sequentially at the end, it would be possible to save them if the resulting number of segments is less than 2 billion... Does anyone want to segment a raster with more than 2 billion pixels? As a workaround, larger maps could be processed if a random selection of pixels is used as seeds... At a minimum, put this limitation in as error checking and in the documents.) (documented limitation)
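
For scale (simple arithmetic, not from the discussion above): a signed 32-bit ID gives

  2^{31} - 1 = 2\,147\,483\,647 \text{ segment IDs}, \qquad \lceil \sqrt{2^{31} - 1} \rceil = 46341

so with every pixel as a seed, a square raster of roughly 46,341 x 46,341 cells already exhausts the ID space.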

2: Check input parameters for mean-shift and other segmentation algorithms, try to make input parameters "generic" so they could be used for any/other algorithms.

2: Add shape characteristics (smoothness, compactness) to the similarity measurement, similar to eCognition "Multiresolution Segmentation". Check the Baatz and Schäpe paper. Adds two input parameters (weight of radiometric to shape, and weight of compactness to smoothness; see the formulas below). (Maybe use the ratio of the number of edge cells to the total number of cells as a proxy for compactness, which could be easily obtained as a side-product when finding neighbors.)
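
For orientation, the shape criteria as given in Baatz and Schäpe's multiresolution segmentation paper (my summary, not text from this page): for a segment of n cells with border length l and bounding-box perimeter b,

  \text{compactness} = l / \sqrt{n} , \qquad \text{smoothness} = l / b

and the two weights mentioned above combine the heterogeneity terms as h = w_{color} h_{color} + (1 - w_{color}) h_{shape}, with h_{shape} = w_{cmpct} h_{cmpct} + (1 - w_{cmpct}) h_{smooth}.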

2: Alternate similarity measurements (Manhattan, Mahalanobis) (Added Manhattan distance)

3: Adding a parameter to make it easier to merge smaller segments and harder to merge large segments. (Preliminary testing is not promising, low priority)

3: Estimating the threshold value. (At least add to the docs.) (1 to 5% of the max difference gave me (MM) subjectively good results; with the default 0-1 normalization the maximum difference is 1, so that suggests starting values around 0.01-0.05.)

?: Adding control for what scale the segmentation is performed at. (EM: I'm not certain what is meant/needed for this, but I think it is a different concept from just using g.region.)

Statistics/metrics

1: i.segment.stats (It should do more than just statistics... .evaluation .metrics .data Maybe i.segment.metrics?) (Will need to evaluate what is already available from other GRASS modules, what is easy, what is hard. Start from the specifications for what is desired.)

1: Providing updates to i.maxlik to ensure the segmentation output can be used as input for the existing classification functionality.

1: Integration/workflow for r.fuzzy.

Speed

2: Neighbor finding: keep a tree structure of found neighbor segments to reduce the number of neighbor pixels that the similarity function will be run on. Implemented; run time reduced by roughly 20%.

2: Search continuation. If Ri isn't Rk's best neighbor, then use Rk as the next Ri. (Skips one neighbor finding routine.) TODO: need to decide if this should stay in or out, or as an option. Currently implemented as an option. Need to do some speed tests to see if it is faster or slower after finishing some other changes.

2: Consider Peano or another space-filling-curve ordering for pixel processing (instead of row-major order); this should help processing time if an entire "row" of segments is not in RAM. Implemented. No significant time change on the small maps it was tested on. Probably this will be helpful if disk I/O becomes limiting.

3: neighbor finding: When checking for Rk's neighbors, account for already knowing Ri and skip those pixels.

?: change candidate flag to int (compare with pass number) to avoid resetting each time. (32x RAM requirement for the flag, is it worth it?) Decided not to change this.

?: RAM storage of the segment membership and the neighboring segments (calculate first the requirements, if this is even possible for reasonable (what size?) maps). (check what % of the processing time is spent finding neighbors.)

Memory

1: User input for how much RAM can be used.

2: Consider putting the optional boundary constraints raster into RAM (dependent on available RAM).

2: Use "zero" for segment ID's of Null cells, discard the NULL flag. (Need to check speed impact with Seg ID in SEG storage.) Decided not to do this.

3: Check input map type(s); currently storing in a DCELL-sized SEG file, could reduce this dynamically depending on input map type. (Could only reduce to FCELL, since we will be storing means and so can't use CELL. Might not be worth the added code complexity.)
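
Rough arithmetic behind that trade-off (GRASS type sizes: CELL = 4-byte integer, FCELL = 4-byte float, DCELL = 8-byte double): storing per-band means for B bands over N cells costs about

  8BN \text{ bytes as DCELL} \qquad \text{vs.} \qquad 4BN \text{ bytes as FCELL}

i.e. FCELL halves the SEG file size at the cost of float precision.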

3: Put segment ID in SEG instead of RAM. (Possibly make this dependent on available RAM?) (Only consider if we run into RAM limitations.) Implemented

Polish

1: Add error traps. (Certainly for memory allocation; minimum number of non-NULL cells in the input bands? Anything else?) Added some additional checks.

2: Make the output segment IDs sequential (currently they keep the ID of the "first" pixel in the segment). Decided not to do this.

2: There are many small TODOs scattered in the code. Resolve some easy questions to clean up the code. Resolved most. Some are left as enhancement suggestions for later.

2: Change G_percentage: estimate the total number of passes expected from histogram and threshold. (If this isn't reliable, maybe change to show 1% for each pass, i.e. % complete out of the first 100 passes, then % complete out of the next 100 passes, etc.) Using % complete of the input number-of-passes threshold.

3: GUI (to combine i.segment with the stats module) Wait until there is user demand for this.

Documentation

How to choose parameters, what their impact is.

Typical workflow:

  • i.group
  • i.segment
  • r.to.vect
  • i.segment.metrics and/or i.maxlik and/or r.fuzzy

Workflow

Todo: Some typical workflow examples, type of data, GRASS modules used before and after the image segmentation.
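
A sketch of one such workflow on the North Carolina sample data, assuming the development i.segment syntax used in the Results section below (the proposed i.segment.metrics does not exist yet):

  g.region raster=lsat7_2002_10 -p
  i.group group=lsat7_2002 input=lsat7_2002_10,lsat7_2002_20,lsat7_2002_30
  i.segment group=lsat7_2002 output=lsat7_segs threshold=0.02 endt=10000 min=20
  r.to.vect input=lsat7_segs output=lsat7_segs type=area
  # statistics/classification would follow: the proposed i.segment.metrics,
  # i.maxlik, and/or r.fuzzy (see the Documentation section above)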

Results

Ortho-photo

The ortho image (data link missing; see http://grass.osgeo.org/wiki/where_are_these_data) has 3 bands, and the computational region is 1,120,080 cells at 1-m resolution.

Here's the code used to generate this segmentation result:

i.segment group=ortho output=ortho_segs_ha threshold=0.02 endt=10000 final_mean=ortho_segs_mean_ha min=20 --o

The segmentation ran in 22m53.255s on an Intel i5 laptop with 4 GB RAM. Memory consumption was around 38 MB.

SPOT5 scene

The SPOT image (data link missing; see http://grass.osgeo.org/wiki/where_are_these_data) has 4 bands, and the computational region is 4,444,517 cells at 10-m resolution.

Here's the code used to generate this segmentation result:

i.segment group=spot output=spot_seg threshold=0.01 endt=10000 min=30 --o

The segmentation ran in 87m3.870s on an Intel Core 2 workstation with 8 GB RAM. Memory consumption was around 170 MB.

References

TODO: complete references with links.

[1] Carleer et al., 2005: Assessment of Very High Spatial Resolution Satellite Image Segmentations. http://www.armurs.ulb.ac.be/images/8/86/PERS_Carleer_05.pdf (Evaluates 2 boundary-based and 2 region-based algorithms.)

[2] Bins et al., 1996: Satellite Imagery Segmentation: A Region Growing Approach. http://marte.dpi.inpe.br/col/sid.inpe.br/deise/1999/02.05.09.30/doc/T205.pdf (Describes the approach taken in the SPRING software.)

[3] Cheng et al., 2000: Color image segmentation: advances and prospects. http://www.sciencedirect.com/science/article/pii/S0031320300001497 (Survey of segmentation methods and color spaces.)

[4] Meinel, G., Neubert, M., 20??: A Comparison of Segmentation Programs for High Resolution Remote Sensing Data. http://www.isprs.org/proceedings/XXXV/congress/comm4/papers/506.pdf (Includes timing to complete segmentation.)

[5] Pitas, I., 2000: Digital Image Processing Algorithms and Applications. http://www.wiley.com/WileyCDA/WileyTitle/productCd-0471377392.html?0471377392= (Textbook, including one chapter on segmentation methods.)

eCognition Reference Manual.

Kurtz et al., 2011: Hierarchical Segmentation of Multiresolution Remote Sensing Images.

Comaniciu, Dorin, 2002: Mean Shift: A Robust Approach Toward Feature Space Analysis.