{"id":848,"date":"2015-01-15T20:14:56","date_gmt":"2015-01-15T20:14:56","guid":{"rendered":"https:\/\/dronesforearth.org\/?p=848"},"modified":"2020-11-13T00:06:25","modified_gmt":"2020-11-13T00:06:25","slug":"namibias-savanna-classified","status":"publish","type":"post","link":"https:\/\/dronesforearth.org\/index.php\/2015\/01\/15\/namibias-savanna-classified\/","title":{"rendered":"Classifying Namibia\u2019s savanna: Turning drone imagery into vegetation base maps"},"content":{"rendered":"\n<p>Timoth\u00e9e Produit of EPFL\u2019s <a href=\"http:\/\/lasig.epfl.ch\">LASIG<\/a> lab was part of our <a href=\"https:\/\/dronesforearth.org\/?p=733\"><strong>Namibian mission in May 2014<\/strong><\/a>. During the mission, Tim gave lectures both at the <a href=\"http:\/\/www.polytechnic.edu.na\/\">Polytechnic of Namibia<\/a> and at the <a href=\"http:\/\/www.gobabebtrc.org\/\">Gobabeb Research &amp; Training Center<\/a> on how to use the acquired drone imagery to classify terrain. Once all the imagery of the mission had been processed back home in Switzerland, Tim went on to use our data for classification purposes.<\/p>\n\n\n\n<p>In this post, we explore how to use multi-spectral imagery acquired by the <a href=\"https:\/\/www.sensefly.com\/drones\/ebee.html\">eBee<\/a>, processed into orthomosaics using <a href=\"http:\/\/www.pix4d.com\/products\">Pix4Dmapper<\/a>, to create vegetation base maps. 
<\/p>\n\n\n\n<!--more-->\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/66.media.tumblr.com\/1d392e0bcf305918de4428f535a95ad0\/tumblr_inline_ni0qgoMFEC1sm7rx8.jpg\" alt=\"image\" width=\"583\" height=\"389\"\/><figcaption><em>Tim giving a lecture at Gobabeb Research &amp; Training Center on vegetation indexes<\/em><\/figcaption><\/figure><\/div>\n\n\n\n<p><strong>A \u201cquick and dirty\u201d classification of the Namibian savanna<\/strong><\/p>\n\n\n\n<p>Aerial photographs, such as those obtained with ultralight drones, offer a unique perspective on landscape dynamics. Through a process called orthorectification, raw aerial images are transformed into orthoimages \u2013 images with the same geometric properties as maps (i.e. where optical deformations, such as those due to perspective, are corrected). Multiple orthoimages can be stitched together to form an orthomosaic \u2013 an invaluable product for updating maps and providing a detailed view of the landscape. However, without further processing to produce more abstracted (read: less detailed) representations, orthoimagery may be of only limited value for casual users such as farmers. Indeed, the power of maps resides in the simplification and abstraction of details to highlight specific topics (e.g. social, environmental or economic).<\/p>\n\n\n\n<p>In this post, we will focus on the creation of a savanna vegetation basemap by employing supervised classification. In this context, the purpose of classification is to associate each pixel of the orthoimage with a specific land cover class. We will illustrate a simple scheme to classify a NIR (i.e. 
Near Infra-Red) orthomosaic into four basic classes (trees, bare soil, grass and shadow).<\/p>\n\n\n\n<p>Several open source tools offer pre-implemented algorithms for image classification; <a href=\"http:\/\/www.orfeo-toolbox.org\/otb\/\">Orfeo Toolbox<\/a>, <a href=\"http:\/\/www.saga-gis.org\/\">SAGA<\/a>, <a href=\"https:\/\/engineering.purdue.edu\/~biehl\/MultiSpec\/\">MultiSpec<\/a> and the <a href=\"https:\/\/plugins.qgis.org\/plugins\/SemiAutomaticClassificationPlugin\/\">Semi-Automatic Classification Plugin for QGIS<\/a> are a few examples. We used SAGA, a C++-based GIS offering powerful raster\/vector processing and analysis tools. Although it has a relatively steep learning curve, it is very efficient at processing large data sets \u2013 a particularly important feature when working with very high resolution imagery (5 cm in our case).<\/p>\n\n\n\n<p><strong>Creating our savanna vegetation basemap<\/strong><\/p>\n\n\n\n<p>A NIR image has three spectral bands: the near-infrared band replaces the blue band, while the two other bands measure the green and red responses. A vegetation index is based on the fact that plants reflect NIR radiation, which is too low in energy for photosynthesis, but absorb visible light. In other words, vegetation has a very recognizable spectral signature in multispectral imagery. Usually, a variant of the ratio of the red and NIR bands is used to create vegetation indexes that provide an indication of photosynthetic activity. 
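The best-known index of this family is the NDVI, which normalizes the difference of the NIR and red bands. As a minimal sketch of the idea (the band values below are illustrative toy numbers, not mission data):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Values close to +1 indicate strong photosynthetic activity;
    values near zero or below suggest bare soil, water or shadow.
    """
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Toy 2x2 band rasters: the left column is vegetated, the right is bare soil.
nir_band = [[200.0, 110.0], [180.0, 30.0]]
red_band = [[40.0, 100.0], [60.0, 25.0]]
print(np.round(ndvi(nir_band, red_band), 2))
```

Applied to every cell of the orthomosaic, this yields the vegetation index map used as input for the classification step.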
Similarly, classification algorithms can use such spectral signature differences to detect the most probable class of a pixel.<\/p>\n\n\n\n<p>The overall process is divided into three main steps:<br>First, we create a vegetation index map that gives us an indication of the photosynthetic activity in each cell of the orthomosaic.<br>We then associate each pixel with one class (for instance: tree, soil, grass or shadow) to analyse the land cover partition.<br>Finally, based on this classification, we quantify the number of trees.<\/p>\n\n\n\n<p>As its name implies, a supervised classification requires supervision by an expert user. This user manually provides example areas for each considered land cover class. The pixels contained in those areas form a training set. A classification algorithm then automatically assigns each of the remaining pixels to a class, based on its similarity to the training samples.<\/p>\n\n\n\n<p><strong><em><a href=\"https:\/\/www.dropbox.com\/s\/axmzrcsbyo9i7qf\/savanna_land_cover_classification_tutorial.pdf?dl=0\">Click here to download<\/a> the guide that shows you step by step how we created our vegetation basemap<\/em><\/strong>.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/66.media.tumblr.com\/3e2bff67ad83181e5cc9e0eb8ac94c89\/tumblr_inline_ni0qpcm0Sb1sm7rx8.jpg\" alt=\"image\" width=\"587\" height=\"559\"\/><figcaption><em>Classification of each pixel into 4 land cover classes: orange = bare soil \/ light green = grass \/ dark green = trees \/ black = shadows<\/em><\/figcaption><\/figure><\/div>\n\n\n\n<p><strong>Counting trees<\/strong><br>The Namibian savanna is a textbook case for image classification. Land cover classes are well separated in the spectral and spatial dimensions, which explains the good acacia detection results. 
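The tree-counting step amounts to merging neighbouring \u201cTree\u201d pixels into blobs and discarding isolated misclassified pixels. A minimal, self-contained sketch of that principle (this is an illustration only, not the SAGA raster-to-vector workflow we actually used; the mask and the one-pixel noise threshold are toy assumptions):

```python
from collections import deque

# Toy binary mask of the "Tree" class (1 = tree pixel), as produced by the
# classification step: two tree crowns plus one isolated noise pixel.
tree_mask = [
    [1, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 0, 0],  # single-pixel misclassification
]

def count_trees(mask, min_pixels=2):
    """Count 4-connected blobs of tree pixels, dropping isolated ones."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one blob and measure its size in pixels.
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if size >= min_pixels:  # discard isolated noise pixels
                    count += 1
    return count

print(count_trees(tree_mask))  # prints 2: the crowns count, the noise does not
```

On a real orthomosaic the same grouping is done on the full classified raster, typically followed by vectorizing each blob into a polygon.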
With some extra work, a specialist can expand this initial rough classification into a richer basemap by labeling more vegetation classes (e.g. types of bushes, grasses or acacias).<\/p>\n\n\n\n<p>As implied in the title, this is really a \u201cquick and dirty\u201d approach. Indeed, integrating structural 3D information into the classification pipeline would greatly increase the quality and detail of the result.<\/p>\n\n\n\n<p>In summary, supervised classification provides an easy way to produce vegetation maps by automatically reproducing an expert\u2019s visual interpretation based on training examples. The end product can be used by farmers to quickly evaluate changes in vegetation patterns over time.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/67.media.tumblr.com\/90f2ad9117c3a703c8fb7d735beecfc1\/tumblr_inline_ni0qpwWLZP1sm7rx8.jpg\" alt=\"image\" width=\"589\" height=\"561\"\/><figcaption><em>Yellow shapes correspond to pixels of the &#8220;Tree&#8221; class that were merged to create a vector layer storing the trees. We ensured that isolated (wrongly classified) pixels were deleted.<\/em><\/figcaption><\/figure><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Timoth\u00e9e Produit of EPFL\u2019s LASIG lab was part of our Namibian mission in May 2014. During the mission, Tim gave lectures both at the Polytechnic of Namibia and at the Gobabeb Research &amp; Training Center on how to use the acquired drone imagery to classify terrain. 
Once all the imagery of the mission [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":850,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"ngg_post_thumbnail":0,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-848","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/dronesforearth.org\/index.php\/wp-json\/wp\/v2\/posts\/848","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/dronesforearth.org\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/dronesforearth.org\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/dronesforearth.org\/index.php\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/dronesforearth.org\/index.php\/wp-json\/wp\/v2\/comments?post=848"}],"version-history":[{"count":7,"href":"https:\/\/dronesforearth.org\/index.php\/wp-json\/wp\/v2\/posts\/848\/revisions"}],"predecessor-version":[{"id":1048,"href":"https:\/\/dronesforearth.org\/index.php\/wp-json\/wp\/v2\/posts\/848\/revisions\/1048"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/dronesforearth.org\/index.php\/wp-json\/wp\/v2\/media\/850"}],"wp:attachment":[{"href":"https:\/\/dronesforearth.org\/index.php\/wp-json\/wp\/v2\/media?parent=848"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/dronesforearth.org\/index.php\/wp-json\/wp\/v2\/categories?post=848"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/dronesforearth.org\/index.php\/wp-json\/wp\/v2\/tags?post=848"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}