Head GISDTDA

Visual interpretation

Elements of interpretation are as follows.

1. Tone/Color: Differences in tone correspond to the reflectance values in each wavelength and to the color composite assigned to the bands. For example, water absorbs nearly all infrared radiation and therefore appears black, while vegetation appears red in a false-color composite in which the near-infrared band is assigned to red, the red band to green, and the green band to blue.
2. Size: The size of an object in satellite data depends on the actual object size and on the scale of the data; for example, length, width, or area distinguishes a river from a canal.
3. Shape: The shape of an individual object may be regular or irregular. Most man-made objects have geometric shapes, e.g., airports, rice fields, canals, roads, and dams.
4. Texture: The roughness or smoothness of an object's surface results from variation or uniformity within the object; for example, water has an even surface, while forest has a rugged texture.
5. Pattern: The arrangement of objects clearly distinguishes natural features from man-made ones, for example rivers versus irrigation canals, or ponds versus dams.
6. Height and Shadow: The shadow of an object, together with the sun's elevation, is important for estimating height; examples include shadows in mountainous or cliff areas and the shadows of clouds.
7. Site: The location where an object naturally occurs, for example mangrove forest along tidal shores, or an airport near a community.
8. Association: The combination of the 7 elements described above; for example, villages are usually located near forest clusters, shifting cultivation appears in mountain forest areas, and shrimp farms occupy coastal and mangrove areas.

Whether visual interpretation classifies an object successfully and correctly may depend on a single element or on all of them, according to the difficulty of the task and the scale, which varies from case to case.
Shape, color, and size may serve as the interpretation elements for one area or group, while the same or other elements must be used for other zones of the same area. In addition, three further characteristics of satellite data must be taken into consideration, as follows.

– Spectral characteristics relate to the wavelength of light in each band. Different objects reflect unequally in each wavelength, which produces different gray levels for an object from band to band and, in turn, different colors in a color-composite image.
– Spatial characteristics differ according to the scale and resolution of the satellite data; for example, an object or area of 80×80 meters appears as one pixel in MSS data, while the pixel size of the SPOT panchromatic (PLA) system is 10×10 meters. Familiarity with the spatial characteristics helps us understand how features are represented in a satellite image.
– Temporal characteristics reflect how the status of objects changes, for example seasonally, yearly, or over some other period. Such change alters the gray levels in black-and-white images and the colors in composites, which lets us use satellite data acquired repeatedly at different times to track changes, for example forest encroachment, or vegetation growth from planting to harvest.

 

Pre-processing

Image processing with computer

Step 1 Pre-processing

The purpose of image correction is to remove data errors, noise, and geometric distortion that occur during image acquisition: in signal recording, electromagnetic reflection, signal transmission, and satellite orbit. Two types of correction must be performed, as follows.

Radiometric correction

Before remotely sensed data are delivered to the user, they must at least be examined and radiometrically corrected at the satellite ground station. Errors still appear in the signal for many reasons, for example disturbance in the ionosphere or a defective detector, which produces striping or noise in the satellite data. Correction is also needed when data from several periods are compared in order to study the change of a phenomenon; this requires sun elevation correction, because the sun angle changes with time of day and season. Such problems are solved by detecting and correcting the signal, which requires details of the acquisition parameters: solar illumination angles, irradiance, path radiance, reflectance of the target, atmospheric transmission, and so on, as well as weather information at the time of recording. The correction involves complicated processing that requires specific software; the following methods are normally used to correct defective data.

1) Haze compensation: Light scattered in the atmosphere produces a haze that makes the image unclear and unsharp. It can be corrected by estimating the scattered-light contribution from an object whose reflectance should be zero, typically one that absorbs most incident energy, such as clear water in the infrared, and subtracting that offset from the overall brightness values.
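As a minimal sketch of this idea (often called dark-object subtraction), the darkest pixel in a band, assumed to be a zero-reflectance object such as clear water in the infrared, is taken to measure the haze offset, which is then subtracted from the whole band. The array values below are invented for illustration.

```python
import numpy as np

def dark_object_subtraction(band):
    """Subtract the darkest pixel value from a band, attributing
    that offset to atmospheric scattering (haze)."""
    haze = band.min()           # darkest pixel, assumed zero-reflectance
    return np.clip(band - haze, 0, 255)

# Invented 8-bit band: even clear water reads 12 because of haze
band = np.array([[12, 40], [75, 200]], dtype=np.int32)
print(dark_object_subtraction(band))  # darkest pixel becomes 0
```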

2) Conversion of digital numbers to absolute radiance: Another radiometric correction converts brightness values (digital numbers) into radiance values, using the highest and lowest radiance values of each wavelength according to the formula below.

L = ((LMAX – LMIN) / 255) × DN + LMIN

Whereas          L = Spectral radiance
LMAX = the highest radiance value, corresponding to the highest brightness value in the given wavelength (DN = 255)
LMIN = the lowest radiance value, corresponding to the lowest brightness value in the given wavelength (DN = 0)
DN = Digital number (recorded brightness value)
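The formula above can be sketched as follows; the LMIN and LMAX calibration values here are invented placeholders, since the real values come from each sensor's calibration metadata.

```python
def dn_to_radiance(dn, lmin, lmax, dn_max=255):
    """L = ((LMAX - LMIN) / DNmax) * DN + LMIN"""
    return (lmax - lmin) / dn_max * dn + lmin

# Hypothetical calibration values for one band
print(dn_to_radiance(0, -1.5, 193.0))    # lowest DN maps to LMIN
print(dn_to_radiance(255, -1.5, 193.0))  # highest DN maps to LMAX
```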

3) Noise removal: A defective detector causes noise in the image, or some data are lost and appear as stripes or as a salt-and-pepper effect blended into the image content. Mean or median filters are used to replace such pixels with values computed from the surrounding pixels; a well-chosen filter removes the noisy pixels while preserving the range and content of the rest of the image.

Geometric correction

Before satellite data can be used, geometric correction is necessary, because the coordinates of objects differ from reality owing to sensor imperfections and terrain characteristics. Geometric correction is essential when remotely sensed data must be overlaid with other geographic data, or when a phenomenon is studied over several periods: comparing data from different dates requires that all datasets share the same coordinate system so they overlap exactly.

1) Causes of geometric distortion
Geometric distortion is an error that makes the image coordinates inconsistent with the map coordinate system. There are two types of geometric distortion, as follows.

1.1) Internal distortion is caused by the sensing equipment itself, for example: radial lens distortion, which increases with distance from the image center; tangential lens distortion; focal-length error; tilt of the image plane; instability of the image plane; misalignment of the detector line in a linear-array sensor; sampling-rate instability; random sampling error; and variation in scan-mirror speed.


Table: causes of the various types of internal distortion


Figure: various forms of internal distortion. Source: Mikio, T. and Haruhisa, S. (1991)

1.2) External distortion is caused by factors outside the sensor, for example: instability of the platform's attitude; the Earth's curvature, rotation, and movement; atmospheric refraction; errors in the spacecraft's attitude and altitude; orbital motion; terrain elevation; and surface characteristics.


Source: Mikio, T. and Haruhisa, S. (1991)

2) Geometric correction works by establishing a relationship between the coordinate system of the image to be rectified and the geographic coordinate system of a reference, so that the image data are transformed into the new coordinate system of the reference. The reference may itself be remotely sensed data; for example, when studying the same area over several periods, the procedure is called image-to-image correction. The reference may instead be a geographic map or a map in a specific coordinate system; when remotely sensed data are used together with other map data, or compared with actual conditions in the study area, the procedure is called image-to-map correction. There are 3 methods of geometric correction, as follows.

2.1) Systematic correction applies previously known geometric reference values, so that predictable errors can be corrected systematically; for example, camera-lens geometry is described by the collinearity equations with a calibrated focal length, and tangent correction of an optical-mechanical scanner is also classified as systematic. In general this type of correction is applied by geometric calculation at the receiving station before the data are distributed, at what is called the bulk level, correcting both internal and external distortion; however, the data remain in the row-and-column coordinate system of a raster structure.

2.2) Non-systematic correction transforms the image coordinate system to a geographic coordinate system using polynomial equations. It requires Ground Control Points (GCPs) whose positions are known from a geographic map, a map in a specific coordinate system, or field measurements from a satellite positioning system. The GCPs are used to compute the mathematical equations relating the image coordinate system to the geographic coordinate system; the computation uses the least-squares method, and the accuracy depends on the order of the polynomial and on the number and distribution of the ground control points.

2.3) The combined method applies both of the methods described above and is the most common for remotely sensed data: systematic correction is always performed at the receiving station before the data are distributed, and non-systematic correction is then carried out by the user to meet the requirements of data integration. As with vertical aerial photographs, it is generally accepted that the corrected position should be in error by no more than one pixel from the true location.

3) Selecting Ground Control Points: The accuracy of geometric correction depends on the choice of GCPs, which are locations that appear clearly as single points both in the satellite data and in the reference information. A good GCP is a feature whose shape is stable over time, especially between the acquisition dates of the remotely sensed data and of the reference, and that is easy to observe: an intersection of objects (e.g., a road intersection, a field boundary, a building corner) or an isolated feature (e.g., a lone tree in a rice field, a house in the middle of a salt field, an outcrop on a mountain top), identified from the difference in brightness between objects. There should be enough GCPs, distributed evenly over the study area, with as many points as possible, in order to keep the geometric coordinate transformation consistent everywhere. If the GCPs cluster in one part of the area, accuracy there will be higher than in parts with fewer GCPs.

4) Geometric correction is calculated in two steps, as follows.

4.1) Coordinate transformation between the original image coordinates (x1, y1) and geographic coordinates (x, y) is performed with linear equations to locate the new coordinates; this step is called spatial interpolation. The equations are as follows.

x1 = a0 + a1x + a2y

y1 = b0 + b1x + b2y

Whereas  x1 = column coordinate of the original input image
y1 = row coordinate of the original input image
x = column coordinate of the output image
y = row coordinate of the output image

The accuracy of the correction is examined by the least-squares regression method: the accuracy value is the square root of the mean squared displacement at each Ground Control Point, for which the root mean square error (RMSerror) formula is as follows.

RMSerror = √((x′ – xorig)² + (y′ – yorig)²)

Whereas  RMSerror = accuracy value of the Ground Control Point
x′, y′ = coordinates of the Ground Control Point recomputed by the transformation
xorig, yorig = coordinate values of the Ground Control Point before correction

 

The RMSerror value tells how close each Ground Control Point lies to its reference coordinate (in pixel units). An error of no more than ±1 pixel is generally accepted; a high RMSerror means there is still considerable displacement. The error can be converted to meters by multiplying the RMSerror by the pixel size.
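The first-order transformation and the RMS error check above can be sketched with a least-squares fit; the ground control points below are invented for illustration, and the RMS reported here is averaged over all GCPs.

```python
import numpy as np

def fit_affine(map_xy, img_xy):
    """Least-squares fit of x1 = a0 + a1*x + a2*y (and likewise y1)
    from ground control points; returns coefficients and mean RMS error."""
    map_xy = np.asarray(map_xy, dtype=float)
    img_xy = np.asarray(img_xy, dtype=float)
    A = np.column_stack([np.ones(len(map_xy)), map_xy])  # [1, x, y] rows
    coef, *_ = np.linalg.lstsq(A, img_xy, rcond=None)    # shape (3, 2)
    resid = A @ coef - img_xy                            # per-GCP misfit
    rms = np.sqrt((resid ** 2).sum(axis=1)).mean()       # mean over GCPs
    return coef, rms

# Invented GCPs: map coordinates and matching image (column, row) positions
map_pts = [(0, 0), (100, 0), (0, 100), (100, 100)]
img_pts = [(10, 20), (110, 22), (8, 121), (109, 123)]
coef, rms = fit_affine(map_pts, img_pts)
print(rms)  # small residual: the points are nearly affine-consistent
```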

4.2) Digital number interpolation: After the coordinate transformation it is necessary to interpolate the digital numbers, a step known as intensity interpolation, for which there are 3 methods of calculation, as follows.
4.2.1) Nearest Neighbor interpolation (NN): The new brightness value is taken from the pixel nearest to the computed position in the image before correction. The advantage of this method is that the brightness values remain essentially the same as in the original image, changing barely at all.

4.2.2) Bi-linear interpolation (BL) computes the new brightness value as a distance-weighted average of the 4 pixels surrounding the computed position, with nearer pixels given more weight than farther ones, as follows.

BVwt = Σ(Zk / Dk²) / Σ(1 / Dk²)

Whereas  BVwt = the new brightness value

Zk = brightness value of a neighboring pixel
Dk = distance from the computed position to that neighboring pixel
With this method the resulting image is smooth, with somewhat less detail than the original image data.

4.2.3) Cubic convolution interpolation (CC): The new brightness value is interpolated with a cubic function using the 16 surrounding pixels. This method gives good results in both sharpness and continuity, but takes more time to compute than the other methods, and the new brightness value, being a weighted average of the surrounding pixels, may differ from the original brightness values.
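The nearest-neighbor and bi-linear methods above can be sketched as follows (cubic convolution is omitted for brevity); the bilinear weights used here are the standard area weights of the 4 neighbors, which play the same role as the distance weighting described above.

```python
import numpy as np

def resample(img, tx, ty, order):
    """Sample img at fractional position (tx, ty) using
    nearest-neighbor (order=0) or bilinear (order=1) interpolation."""
    if order == 0:                      # NN: take the closest pixel
        return img[int(round(ty)), int(round(tx))]
    x0, y0 = int(tx), int(ty)           # BL: weight the 4 neighbors
    dx, dy = tx - x0, ty - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * dy * img[y0 + 1, x0 + 1])

img = np.array([[0, 100], [100, 200]], dtype=float)
print(resample(img, 0.5, 0.5, order=0))  # NN keeps an original value
print(resample(img, 0.5, 0.5, order=1))  # BL averages the 4 neighbors
```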

 

Image enhancement

Step 2 Image enhancement 

Image enhancement is the process of converting pixel values, or gray levels, to bring out more detail and distinctiveness in the image, or to increase the contrast between objects, so that object boundaries can be noticed more easily or sharpened, especially in the area to be studied. It makes objects easier to interpret visually and supports specifying data types before classification. The techniques used for image enhancement depend on the following factors.

- Pixel values, which consist of data from the several wavelengths for which the satellite was designed; the user must understand how energy in each wavelength interacts with objects on the Earth's surface.
- The objective of the enhancement, i.e., which details of the subject under study need to become more visible.
- The result expected from enhancing the image data.
- The analyst's background, which requires experience in analysis and in enhancement techniques. The contrast between objects in an image can be measured as the contrast ratio, the highest brightness value divided by the lowest, by the following formula.

Cr = Bmax / Bmin

Whereas  Cr     = Contrast ratio

Bmax   = The highest brightness value of data image
Bmin   = The lowest brightness value of data image

A high contrast ratio indicates a very distinct image in which object boundaries can be clearly distinguished; a low contrast ratio means reduced sharpness, and the boundaries between object types are unclear.
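A worked example of the contrast-ratio formula, with invented brightness values:

```python
def contrast_ratio(band):
    """Cr = Bmax / Bmin; a higher ratio means a more distinct image."""
    return max(band) / min(band)

print(contrast_ratio([20, 60, 180]))   # 180 / 20 = 9.0 (distinct image)
print(contrast_ratio([90, 100, 110]))  # ~1.2 (low contrast)
```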

The brightness values, or gray levels, in each wavelength can be adjusted to increase the differences between gray levels, which brings out more detail in the image. There are many image enhancement techniques, so the user needs sufficient knowledge of the methods in order to choose the right one for the task at hand. The methods are as follows.

1) Spectral enhancement considers the value of each individual pixel without considering neighboring pixels. Its objective is to make the data types of interest clearer, which depends on the characteristics of the target area; a method that works for one wavelength may be unsuitable for another.
In general no enhancement sharpens the whole image: in a single image, sharpness may improve in some areas while degrading in others.

1.1) Contrast stretching magnifies satellite data in a given wavelength by expanding the range from its lowest to its highest value across the full gray scale supported by the data depth; for example, 8-bit data can display up to 256 levels, so the former lowest value is stretched down to 0 and the former highest up to 255. This method, also called gray-scale stretching, sharpens the image. The underlying data are not changed: the mapping is stored in a Look-Up Table (LUT), though it can also be saved permanently. Many forms of stretching exist for adjusting image contrast, in particular for magnifying the part of the histogram covering the features of interest; for example, to study water specifically, the low end of the histogram would be stretched.
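A minimal sketch of a linear contrast stretch on 8-bit data, with an invented low-contrast band:

```python
import numpy as np

def linear_stretch(band, out_max=255):
    """Map the band's lowest value to 0 and its highest to out_max,
    expanding a narrow histogram over the full 8-bit gray scale."""
    b = band.astype(float)
    lo, hi = b.min(), b.max()
    return ((b - lo) / (hi - lo) * out_max).astype(np.uint8)

# Invented low-contrast band occupying only gray levels 50-80
band = np.array([[50, 60], [70, 80]], dtype=np.uint8)
print(linear_stretch(band))  # now spans 0-255
```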

 

1.2) Histogram equalization is a non-linear contrast enhancement that redistributes pixel values so that each interval of the gray scale holds approximately the same number of pixels. The result is a flatter histogram, with contrast improved most around the peaks of the original histogram.
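A minimal sketch of histogram equalization via the cumulative histogram, assuming 8-bit data:

```python
import numpy as np

def hist_equalize(band, levels=256):
    """Redistribute gray levels with the cumulative histogram so that
    each intensity range holds roughly equal numbers of pixels."""
    hist = np.bincount(band.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to 0..1
    lut = (cdf * (levels - 1)).astype(np.uint8)        # look-up table
    return lut[band]

band = np.array([[10, 10], [10, 200]], dtype=np.uint8)
print(hist_equalize(band))  # values spread toward the top of the scale
```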
1.3) Spectral ratioing is a numerical enhancement that divides the pixel values of one band by those of another band at the same location, producing a result at that location. The choice of bands for the ratio depends on the application; besides ratios of single bands, ratios of band differences to band sums are also popular.
1.4) Linear combination is another popular enhancement that can use data from many bands: each band is assigned a coefficient that increases or decreases its weight, in the following form.

A = a1x1 + a2x2 + a3x3 + a4x4 + …..
Whereas A = Linear combination
a1, a2, a3 ……. = Coefficient
x1, x2, x3 ….. = Value of pixel in each band in the same location

The sum of the linear combination, A, becomes the new pixel value at the same location, creating a new image. The contrast of that image depends on the coefficients a1, a2, a3, … specified for each band, which may be negative or positive. The combination (A) obtained from the equation usually has low pixel values, so it is necessary to shift the histogram to the middle of the gray-level range (0-255), and contrast stretching may be applied as well.
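The linear-combination formula above can be sketched as follows; the bands and coefficients are invented for illustration.

```python
import numpy as np

def linear_combination(bands, coefs):
    """A = a1*x1 + a2*x2 + ... computed per pixel across bands."""
    return sum(a * b.astype(float) for a, b in zip(coefs, bands))

# Two illustrative 2x2 bands; coefficients may be positive or negative
b1 = np.array([[100, 50], [25, 0]])
b2 = np.array([[10, 20], [30, 40]])
A = linear_combination([b1, b2], [0.5, -1.0])
print(A)  # e.g. 0.5*100 - 1.0*10 = 40.0 at the top-left pixel
```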

1.5) Principal Component Analysis (PCA) is another image-quality improvement method: a large amount of data is transformed mathematically so as to reduce its volume while keeping the most significant characteristics. The analysis produces a new set of data, the components, each formed as a linear combination of the original bands. Applied to remotely sensed data, PCA creates new bands that replace the data of the original wavelengths: the new components cover almost all of the information in the original bands but are fewer in number. This reduces the number of bands to be processed in classification and shortens computation time; alternatively, using the new components to create a color composite improves the detail of the image and eases interpretation.

An example of Principal Component Analysis is to take the 7 wavelengths of LANDSAT TM satellite data, compute on the data in each wavelength, and obtain the new components as follows.

1.5.1) Calculation of basic statistics in each wavelength: the highest and lowest reflection values, arithmetic mean, variance, and standard deviation.
1.5.2) Calculation of the variance-covariance matrix from the statistics of each wavelength: the variances of the individual wavelengths (on the diagonal) and the covariances, each of which indicates how strongly a pair of wavelengths is related, considered one pair at a time. A large covariance means that pair of wavelengths is strongly related, i.e., shares much of its characteristics.

1.5.3) Correlation calculation to group the data by how strongly they are related: for example, wavelengths 1, 2, and 3 are highly correlated because all three are visible bands, so they record similar object reflectance; wavelengths 5 and 7 are highly correlated because both are mid-infrared; and wavelength 4 has low correlation with the others because it is near-infrared. Considering the original set of data (6 reflective wavelengths), the correlations yield three large groups: the visible group (wavelengths 1, 2, and 3), the mid-infrared group (wavelengths 5 and 7), and the near-infrared (wavelength 4). Principal Component Analysis creates the new data groups, or components, according to these shared characteristics.

1.5.4) Transformation calculation, grouping highly correlated wavelengths together into the new components.
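The PCA steps above (statistics, covariance, transformation) can be sketched as follows; the two highly correlated "bands" are randomly generated for illustration, so that the first component captures almost all the variance.

```python
import numpy as np

def principal_components(bands):
    """Project a stack of bands onto the eigenvectors of their
    covariance matrix; the first components carry the most variance."""
    n, r, c = bands.shape
    X = bands.reshape(n, -1).astype(float).T   # pixels x bands
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)              # variance-covariance matrix
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues ascending
    order = np.argsort(vals)[::-1]             # sort components by variance
    pcs = (X @ vecs[:, order]).T.reshape(n, r, c)
    return pcs, vals[order]

# Two correlated illustrative "bands": band 2 is nearly a copy of band 1
rng = np.random.default_rng(0)
b1 = rng.random((4, 4))
b2 = b1 * 0.9 + 0.01 * rng.random((4, 4))
pcs, variance = principal_components(np.stack([b1, b2]))
print(variance)  # first component dominates
```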

1.6) Vegetation indices are calculated from the wavelengths related to vegetation, ratioed against each other, to distinguish areas of biomass from non-vegetated areas; this is useful for monitoring the amount of vegetation and the environmental situation in the study area. The vegetation-related wavelengths are the visible red, which measures reflectance in the region where leaf chlorophyll absorbs energy, and the near-infrared, which distinguishes vegetation and measures the amount of biomass; for LANDSAT TM data these are wavelengths 3 and 4, and for SPOT, wavelengths 2 and 3. There are many ways to calculate a vegetation index; the important ones are as follows.

1.6.1) Ratio Vegetation Index (RVI) is a simple ratio of two wavelengths: the near-infrared divided by the visible red, as in the following formula.

RVI = NIR / RED
Whereas    RVI = Vegetation index
NIR = Near-infrared
RED = Seeable red wavelength

1.6.2) Normalized Difference Vegetation Index (NDVI) is calculated as (NIR – RED) / (NIR + RED) and normally gives values between -1 and 1. To rescale it to the normal reflectance range of 0-255, the following formula can be used.
NDVI(scaled) = ((NIR – RED) / (NIR + RED) × 128) + 128

1.6.3) Soil Adjusted Vegetation Index (SAVI) is a vegetation index created for calculating vegetation in study areas with a low quantity of vegetation. The formula is similar to NDVI but adds a constant (L) to reduce the influence of reflectance from the soil beneath the vegetation. If the constant is zero, SAVI equals NDVI; with sufficiently dense vegetation cover, the constant is set to about 0.5. The calculation formula is as follows.

SAVI = ((NIR – RED) / (NIR + RED + L)) × (1 + L)
Whereas    SAVI = Vegetation index
NIR = Near-infrared
RED = Seeable red wavelength
L = Constant

1.6.4) Transformed Vegetation Index (TVI) is a vegetation index calculation created for vegetation interpolation in grassland areas. The calculation formula is as follows.

TVI = ((NIR – RED) / (NIR + RED) + 0.5)^1/2

The reflectance values in a vegetation-index image group together pixels with similar values, which can be interpreted as amounts of vegetation when compared with actual ground conditions.
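The RVI, NDVI, and SAVI formulas above can be sketched together; the NIR and red reflectance values below are invented, one vegetation-like pixel and one bare-soil-like pixel.

```python
import numpy as np

def rvi(nir, red):
    """Ratio Vegetation Index: RVI = NIR / RED."""
    return nir / red

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in -1..1."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index with soil constant L."""
    return (nir - red) / (nir + red + L) * (1 + L)

# Invented reflectances: a vegetation pixel, then a bare-soil pixel
nir = np.array([0.40, 0.05])
red = np.array([0.08, 0.04])
print(ndvi(nir, red))  # the vegetation pixel gives the higher NDVI
```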

2) Spatial enhancement adjusts each pixel by considering the values of the surrounding pixels, unlike spectral enhancement, which considers one value at a time. It is normally based on the spatial frequency of a neighborhood, i.e., the difference between the highest and lowest pixel values in a group of neighboring pixels, that is, the change in pixel values per unit distance in some part of the image. The procedure that averages a small moving group of pixels across the whole image in order to change the frequency content is called convolution filtering. It uses a convolution kernel, a numerical matrix that changes each pixel value according to the surrounding pixel values in a specified pattern; the numbers in the matrix determine the weights applied, and are usually called coefficients because the procedure is expressed as a mathematical equation. An example of spatial enhancement is image filtering, described below.

2.1) Image filtering adjusts the values of a few pixels using many pixels, or removes a minority of pixels from the image. It is performed by moving a matrix of coefficients, the window, over the array of pixels; the pixel whose value is changed lies under the window. The filters used with satellite data are square matrices, with an odd number of pixels in both directions, for example 3×3, 5×5, or 7×7, so that the window is symmetric about the pixel at its center. The new pixel values are produced by moving the filter window across the image from left to right, row by row, until the whole image is covered; as the filter moves, it computes a new value for the pixel at the center of the window. Because of this movement, the image filter is called a kernel.

An image filter can be applied directly to spatial data in numerical form, such as satellite data; the process, called convolution, is a mathematical procedure that recomputes the value (radiometry) of each central pixel from the values of all pixels falling within the filter window as it scans the image, converting them with various mathematical operations, for example multiplication, division, arithmetic mean, median, mode, or local standard deviation, sometimes with weighting included.

A low-pass or smoothing filter is particularly good at making image data smoother, i.e., reducing the differences at object boundaries. There are several methods, as follows.

2.1.1) Arithmetic mean filter: all coefficients are positive, and the original value of the central pixel may be weighted equally with, or more than, the others. This filter smooths the image and is popular for removing single pixels whose values differ from their surroundings and that appear as interference among many similar pixels, called image noise. Passing the image through this low-frequency filter makes it smoother (Figure 3.47b); for example, applying it after classification reduces isolated single pixels, and it can also improve image quality where pixels are damaged.

2.1.2) Median filter: the median of the pixel values inside the window is found by sorting them from least to greatest (or the reverse). This reduces outlier pixels whose values differ extremely from their neighbors. The median filter suppresses noise better than the arithmetic mean filter: the image becomes smoother, yet edges and linear features are not removed or smoothed away as noise. A criterion may be used to control the replacement, for example changing the central value only when it differs from the computed median.

2.1.3) Mode filter is another statistical way of choosing the central value: the most frequent value among the pixels within the window is used. It applies the principle that the central pixel is related to its surroundings and should have a similar value; a pixel whose value stands out from all its neighbors can therefore be assumed to be noise. This case is common in classified data, where each pixel holds a code or class symbol rather than a spectral value, so the arithmetic mean or median conveys no meaning for such data and gives poor results; the mode is more suitable. A rule for the conversion can also be set, for example converting the central pixel only when more than 4 of the surrounding pixels share the same value.
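The mean and median smoothing filters above can be sketched with a simple 3×3 moving window; the image below is an invented band with one salt-and-pepper spike.

```python
import numpy as np

def filter3x3(img, stat):
    """Slide a 3x3 window over the interior pixels and replace each
    center value with a statistic (mean or median) of the window."""
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = stat(img[i - 1:i + 2, j - 1:j + 2].astype(float))
    return out

# Invented band with one salt-and-pepper spike in the middle
img = np.full((3, 3), 50, dtype=np.int32)
img[1, 1] = 255
print(filter3x3(img, np.mean)[1, 1])    # mean pulls the spike toward 50
print(filter3x3(img, np.median)[1, 1])  # median removes it entirely: 50.0
```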

Figure: image passed through a 5×5 low-pass (smoothing) filter

2.1.4) High-frequency kernel: In analyzing satellite data, most users want object edges to be as clear as possible, so that the outlines of various objects can be classified easily. Techniques have therefore been devised to make satellite data as distinct as possible, which is also useful for locating positions from intersections of linear features, using edge enhancement or a high-pass filter. The method uses a convolution kernel that passes high frequencies: low pixel values become lower and high values higher, so small initial differences in value become larger. A high-frequency kernel thus acts as an edge enhancer; this technique is especially popular in geological survey.

 

 

Images whose components form linear patterns can be enhanced with an edge detection filter, which sharpens the edges of all image components. Many filters of this kind have been devised, such as the Laplacian filter. There are also filters that enhance edges in only one direction; when the linear components of an image mostly run a particular way (vertical, horizontal, northeast, or northwest), a direction-specific filter such as the Sobel filter or Prewitt filter can be chosen.
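The directional Sobel filters mentioned above can be sketched as follows; this is an illustrative Python sketch, not from the original text, and the helper names (`convolve`, `edge_magnitude`) are assumptions.

```python
import numpy as np

# Sobel kernels: SOBEL_X responds to vertical edges, SOBEL_Y to horizontal ones.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve(img, kernel):
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def edge_magnitude(img):
    gx = convolve(img, SOBEL_X)   # vertical-edge response
    gy = convolve(img, SOBEL_Y)   # horizontal-edge response
    return np.hypot(gx, gy)       # combined gradient magnitude
```

Applying only `SOBEL_X` or only `SOBEL_Y` gives the one-direction enhancement the text describes.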

 

2.1.5) Zero sum kernel means a kernel whose coefficients sum to zero. When such a kernel is used, the zero sum is substituted by 1, because division by zero has no result. A kernel of this kind affects the result as follows.

   - It outputs zero in areas where the data values are equal (no edge).
   - It outputs low values in areas of low spatial frequency.
   - It outputs very high values where values are already high, and very low values where they are already low, in areas of high spatial frequency.
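The three properties listed above can be checked directly with a zero-sum kernel. This is an illustrative Python sketch, not from the original text; the Laplacian-style kernel and the `apply_kernel` name are assumptions.

```python
import numpy as np

# Zero-sum kernel: the nine coefficients add up to exactly 0.
ZERO_SUM = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=float)
assert ZERO_SUM.sum() == 0

def apply_kernel(img, kernel=ZERO_SUM):
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out
```

On a flat (no-edge) area the output is zero everywhere, and across a step edge the high side gets a strongly positive value while the low side gets a strongly negative one, matching the listed behavior.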

2.1.6) Image operation is ratio calculation between image data in several wavelengths. The principle is to combine data from different wavelengths mathematically to obtain a result that shows more detail on some subject, depending on the properties of the wavelengths used. The common mathematical operations are adding, subtracting, multiplying, and dividing image data from various wavelengths. Popular ratio calculations between wavelengths include the vegetation index, the brightness index, and differencing between images, etc.
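As an example of the vegetation-index calculation mentioned above, the standard NDVI combines the near-infrared and red wavelengths. The sketch below is illustrative Python, not from the original text; the `ndvi` name and the small stabilizing constant are assumptions.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)   # tiny term avoids 0/0
```

Vegetation reflects strongly in the near-infrared and absorbs red, giving NDVI values near +1, while water absorbs near-infrared and gives negative values.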

Image classification

The third step is image classification: a statistical evaluation that separates all the pixels making up the study area into smaller groups, using statistical characteristics to determine the differences between pixel groups. Pixels placed in the same group share statistical characteristics pointing in the same direction, and each classified group represents a different kind of land cover.

In other words, image classification means separating pixels with similar reflectance into groups or levels, called types or classes, in order to distinguish the variety of objects shown in the image. Classification requires the analyst to use decision rules or statistical knowledge, because the number of pixels making up a study area is very large: calculating the statistics by hand is difficult, time-consuming, and error-prone. Computer capability is therefore used to perform the evaluation, which gives a quick result whose correctness can be examined immediately.

Image classification by computer is divided into 2 methods: supervised classification and unsupervised classification. For an efficient result from either method, the statistics of the image data in each wavelength should be studied before classification begins, in order to choose the wavelengths best suited to it. The fundamental statistics used for choosing suitable wavelengths are as follows.

– The minimum-maximum value of each wavelength indicates where the reflectance of the image data falls within the range 0-255. If the values lie very near 0, the wavelength carries information about objects that absorb energy strongly; if they tend toward 255, it carries information about objects with high reflectance. If the range is wide, with the lowest values near 0 and the highest near 255, the wavelength contains information about objects that both absorb and reflect energy, and therefore carries a variety of information.

– The mean is the average of all reflectance values in a wavelength and can represent the overall image of that wavelength. It is calculated by dividing the sum of all reflectance values by the total number of digital numbers, as in the following formula.

x̄ = ΣX / N

 

Whereas  x̄ = Mean

X = Reflection Value of each Digital number

N = All Digital numbers

The mean is the most commonly used measure of average image reflectance. It works best when the reflectance values of all digital numbers are distributed symmetrically, i.e., with zero skewness; the shape of the distribution can be examined by plotting the reflectance values of all digital numbers as a histogram, as in the figure.
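The mean formula above can be checked with a few lines of Python; the digital numbers below are made-up illustrative values, not from the original text.

```python
import numpy as np

band = np.array([10, 20, 20, 30, 40, 50, 60, 70, 80, 90])  # hypothetical DNs
mean = band.sum() / band.size    # x-bar = (sum of all X) / N
```

The hand calculation (470 / 10 = 47) agrees with NumPy's built-in `mean`.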

- The median is a measure of central tendency obtained by sorting the image reflectance values from least to greatest; it is the value at the center of the whole data set. It therefore represents the reflectance of all pixels in a wavelength, with about 50 percent of the pixels more reflective than the median and about 50 percent less.
- The mode is another measure of central tendency, taken as the reflectance value with the highest frequency. It is popular with nominal data; for example, after classification the pixel values represent types of land use rather than object reflectance.
- Standard deviation (S.D.) is the most popular measure of dispersion, calculated from the squared differences between all image reflectance values in a wavelength and the mean of that wavelength. The formula is as follows.

S.D. = √( Σ(X − x̄)² / N )

 

Whereas S.D. = Standard Deviation
X  = Reflection Value of each Digital number
x̄  = Mean
N = All Digital numbers

Variance (σ²) is a measure of dispersion like the standard deviation. It is calculated as the average of the squared deviations.

σ² = Σ(Xi − μ)² / N

Whereas     Xi = Reflection Value of Digital number i, for i from 1 to N
μ = Mean of all Digital Number
N = All Digital numbers
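The standard-deviation and variance formulas above can be verified numerically; the reflectance values below are made-up illustrative data, not from the original text.

```python
import numpy as np

band = np.array([10.0, 20.0, 20.0, 30.0, 40.0])   # hypothetical reflectance values
mu = band.mean()                                   # mean of the wavelength
variance = np.sum((band - mu) ** 2) / band.size    # sigma^2 = sum((Xi - mu)^2) / N
sd = np.sqrt(variance)                             # S.D. is the square root of variance
```

Here μ = 24, the squared deviations sum to 520, so the variance is 104 and the S.D. is about 10.2, matching NumPy's built-in `std`.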

– Correlation measures the relationship between 2 or more sets of data. It is calculated as a correlation coefficient, which ranges from -1.00 to +1.00. When the correlation coefficient between two wavelengths of image data is near +1.00, the two data sets are highly related in the same direction (a coefficient near -1.00 indicates a strong inverse relationship). When the coefficient is near 0, the two wavelengths carry little related information, i.e., they are different. This is useful when selecting wavelengths for classification.

      The relationship between two wavelengths can also be studied by plotting the reflectance distribution on a two-axis graph, with reflectance in the first wavelength on one axis and reflectance in the second on the other, also called cross tabulation. The shape of the scatter indicates the relationship between the two wavelengths in the same way as the correlation coefficient.
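The correlation coefficient described above can be computed directly; the band values below are made-up illustrative data, not from the original text.

```python
import numpy as np

band_a = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # hypothetical reflectance, band A
band_b = np.array([12.0, 18.0, 33.0, 41.0, 49.0])   # band tracking A closely
band_c = np.array([30.0, 5.0, 44.0, 12.0, 25.0])    # band unrelated to A

r_ab = np.corrcoef(band_a, band_b)[0, 1]   # near +1: redundant information
r_ac = np.corrcoef(band_a, band_c)[0, 1]   # near 0: different information
```

A pair with a coefficient near +1 provides redundant information, so one of the two bands can be dropped from the classification, as the text suggests.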

Figure showing low correlation between bands 4 and 3 of the LANDSAT TM system


Figure showing high correlation between bands 2 and 3 of the LANDSAT TM system

The statistics above are very useful for selecting the wavelengths suited to the subject of study. Once suitable wavelengths have been selected, the data can be classified using those wavelengths to obtain an effective result.

1) Supervised classification

Supervised classification is classification in which the user specifies the characteristics of each data type by supplying samples of it to the machine; it is called supervised because it must be closely controlled by the analyst. The representative or sample data specified by the user comes from correct visual interpretation of the satellite image, based on experience, understanding, and existing knowledge, together with supporting processes such as field surveys, map usage, and other statistics.
To obtain data that is correct according to the classification scheme, the selected samples should be statistical data that characterize one particular data type. The computer evaluates the statistical characteristics of the sample areas and classifies each pixel of the satellite data into the data types the user has specified through those samples.
The correctness and credibility of this method depend on whether the sample areas cover the variety of all data types and whether they are representative samples of those types. The user must therefore know the study area well, by studying additional data and observing the physical characteristics of each data type.

1.1) Sampling of training sites/areas is essential for supervised classification. The analyst must be observant of the various types of land use when selecting sample areas, and must cover all physical characteristics of land use and land cover. The principles of sample area selection are as follows.

- Choose samples that represent every type of land use and land cover in the study area.

- Choose several well-scattered sample plots of the same land use, so that together they represent that type of land use.

- Choose more than 30 pixels per type of land use, so that the sample statistics approximate a normal distribution.

- Choose samples of a single color group, i.e., homogeneous ones, to reduce blending with other types. A highly homogeneous sample area counts as good sample data.

Sample areas can be selected by outlining the desired area on the computer screen. Once the boundaries of the sample areas for every land use type have been set, samples are taken from the wavelengths to be used in the evaluation. Each wavelength's sample has statistical values that can be analyzed to judge whether the samples are credible and representative. The important statistics are the highest and lowest reflectance of each land use type, the mean reflectance, the variance, the covariance table, and the correlation table.
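The per-class statistics listed above can be computed from training samples as in the sketch below. This is illustrative Python, not from the original text; the class names, the two-band values, and the `training` / `class_stats` names are all assumptions.

```python
import numpy as np

# Hypothetical training pixels: rows = pixels, columns = bands
training = {
    "water":  np.array([[20.0, 10.0], [22.0, 11.0], [19.0, 9.0], [21.0, 10.0]]),
    "forest": np.array([[40.0, 80.0], [42.0, 78.0], [39.0, 82.0], [41.0, 79.0]]),
}

class_stats = {}
for name, data in training.items():
    class_stats[name] = {
        "min": data.min(axis=0), "max": data.max(axis=0),  # per-band range
        "mean": data.mean(axis=0),                          # mean vector
        "cov": np.cov(data, rowvar=False),                  # covariance matrix
    }
```

These mean vectors and covariance matrices are exactly the inputs needed by the decision rules described later in the section.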

1.2) The studying on statistical characteristics of sampling area

Each pixel of satellite data has 3 dimensions: the vertical coordinate (i), the horizontal coordinate (j), and the brightness value (BV). Each pixel has a different brightness value in each wavelength, giving the pixel a vector form as follows.

Xij = [BVij1, BVij2, …, BVijn], where BVijk is the brightness value of the pixel at (i, j) in wavelength k and n is the number of wavelengths.

If sample areas are selected, each sample has a mean for each data type (class) in vector form as follows.

 

 

Mc = [μc1, μc2, …, μcn], where μck is the mean brightness value of class c in wavelength k.

As an example, suppose 3 classes of land use sample areas are chosen (water, forest, and built-up area) from LANDSAT TM data with 6 wavelengths (bands 1-5 and 7). Once the sample areas are chosen, the statistics of each sample area are obtained.


Figure showing example of studying statistical characteristics of sampling area.

The following statistics of sample areas are popularly used in analysis.

1.2.1) The mean reflectance of the pixels in each sample area is used to analyze the characteristic indicator, or spectral signature, in order to find differences or similarities between the data types.
The spectral signatures of all 3 land use classes behave exactly as reflectance theory predicts, with no overlapping characteristics: the built-up area has the highest reflectance, and its spectral signature line does not cross those of the other land uses. Its signature resembles that of soil and minerals, with especially high reflectance in band 3 (visible wavelengths) and band 5 (short-wavelength infrared), because buildings (cement, concrete) reflect energy strongly. The forest shows a vegetation signature with its highest value in band 4 (near-infrared), where vegetation reflectance peaks. Water shows the characteristically low values of water (due to energy absorption) in the infrared wavelengths, except that wetland is a little higher in band 5 because the water is turbid.

1.2.2) Variance measures the variability of the data. In general, water has the lowest variance because it is the most homogeneous; vegetation has higher variance, and the built-up area has the highest because many kinds of objects are blended together.

1.2.3) Correlation is useful for choosing the wavelengths used in the calculation. Highly correlated wavelengths provide the same information, for example the visible wavelengths, so it is not necessary to use all of them. Choosing wavelengths with low correlation reduces the amount of data in the calculation and makes it faster.

After specifying the sample areas and analyzing their statistics, the user must specify the classification method to the computer. The popular classification methods are the following.

1.3) Classification decision rules

1.3.1) Minimum distance to means is the easiest and fastest classification decision rule. It consists of 3 steps as follows.

– The DNs of the sample data are averaged over all wavelengths; the average is called the mean vector.

– Each DN to be classified is assigned to the data set whose mean vector is closest.

– A boundary can be specified around each mean vector; any pixel falling outside every boundary is classified as unknown.


Source: Aronoff, S. (2005)

In the picture, the unknown pixel is shown as number 1. The distances between this pixel and the class means are shown as dashed lines. After calculation, the unknown pixel is assigned to the class whose mean is nearest, in this case C. If a pixel lies farther from every class mean than the distance specified by the analyst, it cannot be placed in any class and is categorized as an unclassified pixel.

Minimum distance classification gives a better result than parallelepiped classification, and every pixel is classified, but it requires more calculation. A disadvantage is that when a pixel's minimum distance is nearly the same for more than one class, assigning it to a single class can be a mistake.

However, this method has many limitations. The most important is that it is ineffective with data whose variance differs greatly between classes: for example, the pixel numbered 2 in the picture is close to U but is classified into group S instead, because the higher variability of that class makes its members lie far from the center. The method is therefore unpopular for classes with similar reflectance and high variance (Lillesand and Kiefer, 1994).
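The three steps of the minimum distance rule can be sketched as follows. This is illustrative Python, not from the original text; the function name, the class means, and the optional `max_dist` boundary are assumptions.

```python
import numpy as np

def min_distance_classify(pixel, class_means, max_dist=None):
    """Assign a pixel to the class with the nearest mean vector.
    If max_dist is given and the nearest mean is farther than that,
    the pixel is left unclassified (None)."""
    best, best_d = None, np.inf
    for cls, mean in class_means.items():
        d = np.linalg.norm(np.asarray(pixel, float) - np.asarray(mean, float))
        if d < best_d:
            best, best_d = cls, d
    if max_dist is not None and best_d > max_dist:
        return None   # outside every class boundary: unclassified
    return best
```

A pixel near the water mean is labeled water, while a pixel far from every mean stays unclassified when a boundary is set, mirroring the unclassified-pixel case in the text.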

1.3.2) Parallelepiped classification or Box classifier

This classification method is very popular in data evaluation because it is fast, with low computational cost. It classifies pixels by specifying the lowest and highest value of each wavelength, or a range based on the S.D., which places a box around each class; a pixel is assigned to the class whose box it falls inside. As in picture 3.45, pixel number 2 will be classified as U, etc. Parallelepiped classification has the benefit of fast, uncomplicated calculation, but its disadvantage comes from confusion at the extremes: the highest parts of different data types can fall within the same overlapping region, which the machine cannot assign to any single group, leaving much data unclassified. In other words, if the box edges overlap, it is difficult to decide which class a pixel belongs to. Overlapping boxes are especially likely with data that have high correlation or high covariance. However, this problem can be solved by shrinking the box into a series of smaller stepped frames, as in the picture shown.
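The box test described above can be sketched in a few lines. This is illustrative Python, not from the original text; the function name and the per-band low/high ranges are assumptions.

```python
import numpy as np

def parallelepiped_classify(pixel, class_ranges):
    """class_ranges: {name: (low_vector, high_vector)} giving the box edges
    per band. Returns the first class whose box contains the pixel, else None."""
    px = np.asarray(pixel, float)
    for cls, (low, high) in class_ranges.items():
        if np.all(px >= low) and np.all(px <= high):
            return cls
    return None   # outside every box: unclassified
```

The check is only a pair of comparisons per class, which is why the method is so fast; the overlap problem arises when a pixel satisfies the box test for more than one class.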

 

1.3.3) Maximum likelihood classifier


Source: Aronoff, S. (2005)

This is the most accurate method, but it consumes the most calculation time compared with the other methods (Curran, 1985). It requires the mean vector, variance, and correlation of the wavelengths used, computed from the sample data of each class, under the assumption that each class has a normal distribution. The spread of pixels around the mean can then be described by a probability function. For example, in picture 3.56, pixel number 1 will be classified into class C, etc.

The disadvantage of this method is the time it takes to assign each pixel to a class, especially when working with many wavelengths or with data containing a large number of distinct reflectance values; it is therefore slower than the methods described above. This problem can be eased in various ways, for example by reducing the data size before classification.
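The maximum likelihood rule can be sketched by scoring each class with the log of a multivariate normal density built from its mean vector and covariance matrix. This is illustrative Python, not from the original text; the function names and the assumption of invertible covariance matrices are mine.

```python
import numpy as np

def gaussian_log_likelihood(pixel, mean, cov):
    """Log of the multivariate normal density (constant term dropped,
    which does not change the argmax across classes)."""
    x = np.asarray(pixel, float) - np.asarray(mean, float)
    inv = np.linalg.inv(cov)
    return -0.5 * (np.log(np.linalg.det(cov)) + x @ inv @ x)

def ml_classify(pixel, class_stats):
    """Assign the pixel to the class with the highest likelihood."""
    return max(class_stats, key=lambda c: gaussian_log_likelihood(
        pixel, class_stats[c]["mean"], class_stats[c]["cov"]))
```

Because the score uses the covariance matrix, a class with larger spread is penalized less for distant pixels, which is how this rule fixes the variance problem of the minimum distance method.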

2) Unsupervised classification

In this classification method, the analyst does not have to specify sample areas for each data type. It is usually used when there is insufficient data about the classification area or the user has no basic knowledge of the study area. The computer groups the data into types, each with the same spectral characteristics, using clustering techniques, which can be divided into 2 types as follows.

2.1) Hierarchical clustering

In this method, pixels are grouped by similarity using distance as the measure. At the start, each pixel is assumed to be its own group; the pixels with the least distance between them are merged, and the merging continues until the specified number of groups is reached.
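The agglomeration loop just described can be sketched as follows. This is illustrative Python, not from the original text; the function name and the use of single-linkage (nearest-member) distance are assumptions, and the brute-force search is meant only to show the logic, not to be efficient.

```python
import numpy as np

def agglomerate(points, n_groups):
    """Start with one group per point and repeatedly merge the two
    closest groups (single linkage) until n_groups remain."""
    groups = [[i] for i in range(len(points))]
    pts = np.asarray(points, float)
    while len(groups) > n_groups:
        best, best_d = (0, 1), np.inf
        for a in range(len(groups)):
            for b in range(a + 1, len(groups)):
                # group distance = distance between the closest pair of members
                d = min(np.linalg.norm(pts[i] - pts[j])
                        for i in groups[a] for j in groups[b])
                if d < best_d:
                    best_d, best = d, (a, b)
        a, b = best
        groups[a].extend(groups[b])   # merge the closest pair of groups
        del groups[b]
    return groups
```

Two well-separated clusters of pixels end up as two separate groups.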

2.2) Non-hierarchical clustering

It starts by dividing the data into temporary groups; the members of each group are then examined using the chosen variable or distance, and moved to the group that suits them better, so the grouping becomes progressively more distinct. Examples of this method are ISODATA and K-means. Many definitions of distance are used in the measurement (National Research Council of Thailand, 1997), for example the following.

- Nearest neighbor method: the two groups whose nearest members are the least distance apart are merged into a new group.

- Furthest neighbor method: the distance between two groups is taken as that between their furthest members, and the groups with the smallest such distance are merged into a new group.

- Centroid method: the distance between the centers of gravity of two groups is measured to decide whether they should merge into a new group.

- Group average method: the root mean square of the distances between every pair of data points in two different groups is used as the criterion for merging.

- Ward method: groups are merged so as to minimize the increase in the root mean square distance between each group's center of gravity and all of its members.
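The K-means example mentioned above alternates the two steps of non-hierarchical clustering: assign each pixel to the nearest temporary center, then recompute the centers as group means. This is illustrative Python, not from the original text; the function name, the fixed iteration count, and the random-seed initialization are assumptions.

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal K-means: returns (labels, centers) for k clusters."""
    rng = np.random.default_rng(seed)
    # initialize centers with k distinct pixels chosen at random
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # assign each pixel to the nearest cluster center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its members (skip empty clusters)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers
```

For pixels forming two well-separated spectral clusters, the two K-means groups recover the clusters without any training samples, which is the point of unsupervised classification.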

 

Source: Space Technology and Geo-Informatics textbook
