About the pan-sharpen process
Panchromatic sharpening fuses a higher-resolution panchromatic image (raster band) with a lower-resolution, multiband raster dataset. Where the two rasters fully overlap, it increases the spatial resolution and provides better visualization of the multiband image using the high-resolution, single-band image. Several imaging companies provide low-resolution, multiband images and higher-resolution panchromatic images of the same scenes.
Original color image (240 cm resolution)
Panchromatic image (60 cm resolution)
Pan-sharpened color image (60 cm resolution)
Pan-sharpened infrared image (60 cm resolution)
The process is applied to the panchromatic (single-band) image, which serves as the base image and is colored using the multiband image. This approach preserves the resolution of the panchromatic image throughout the pan-sharpening process.
ArcGIS Image Server provides three image fusion methods for creating the pan-sharpened image: the Brovey transformation, the Intensity-Hue-Saturation (IHS) transformation, and the ESRI pan-sharpening transformation. Each method uses a different model to improve the spatial resolution while maintaining the color, and each is adjusted to include a weighting so that a fourth band can be included (such as the near-infrared band available in many multispectral image sources). Adding the weighting and enabling the infrared component has been found to improve the visual quality of the output colors.
The Brovey transformation is based on spectral modeling and was developed to increase the visual contrast in the high and low ends of a data histogram. It uses a method that multiplies each resampled, multispectral pixel by the ratio of the corresponding panchromatic pixel intensity to the sum of all the multispectral intensities. It assumes that the spectral range spanned by the panchromatic image is essentially the same as that covered by the multispectral channels.
In the Brovey transformation, the general equation uses red, green, and blue (RGB) and the panchromatic bands as inputs to output new red, green, and blue bands, as in the following example:
Red_out = Red_in / (blue_in + green_in + red_in) * Pan
However, by using weights and the near-infrared band (when available), the adjusted equation for each band becomes
DNF = (P - IW * I) / (RW * R + GW * G + BW * B)
Red_out = R * DNF
Green_out = G * DNF
Blue_out = B * DNF
Infrared_out = I * DNF
where the inputs are
P = panchromatic image
R = red band
G = green band
B = blue band
I = near infrared
W = weight (for example, RW is the red-band weight)
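The weighted Brovey equations above can be sketched in NumPy as follows. The function name and the default weights are illustrative, not part of the documentation; the bands are assumed to be already resampled to the panchromatic resolution.

```python
import numpy as np

def brovey_pansharpen(pan, r, g, b, i=None,
                      rw=1.0, gw=1.0, bw=1.0, iw=0.0):
    """Weighted Brovey pan-sharpening sketch.

    pan is the high-resolution panchromatic band; r, g, b (and
    optionally i, the near-infrared band) are the multispectral bands
    resampled to the panchromatic resolution. The weights default to
    illustrative values, not values from the documentation.
    """
    pan = pan.astype(np.float64)
    r, g, b = (a.astype(np.float64) for a in (r, g, b))
    if i is None:
        i = np.zeros_like(pan)
        iw = 0.0
    else:
        i = i.astype(np.float64)

    # DNF = (P - IW * I) / (RW * R + GW * G + BW * B)
    dnf = (pan - iw * i) / (rw * r + gw * g + bw * b)

    # Each output band is the input band scaled by DNF.
    return r * dnf, g * dnf, b * dnf, i * dnf
```

With equal weights and no infrared band, this reduces to the unweighted Brovey equation: each band is multiplied by the ratio of the panchromatic value to the sum of the multispectral values.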
The IHS transformation is a transformation between RGB and intensity, hue, and saturation. Each color is represented by a 3D coordinate position within the color cube. Pixels having equal components of red, green, and blue lie on the gray line, a line from one corner of the cube to the opposite corner (Lillesand and Kiefer, 2000). Hue is the actual color; it describes the shade of the color and where it is found in the color spectrum. Blue, orange, red, and brown are words that describe hue. Saturation describes the value of lightness (or whiteness), measured as a percentage from 0 to 100 percent. For example, red mixed with a saturation of 0 percent is as red as it can be. As the saturation percentage increases, more white is added and the red changes to pink. At a saturation of 100 percent, the hue is meaningless (red loses its color and turns to white). Intensity describes a value of brightness based on the amount of light emanating from the color. A dark red has less intensity than a bright red. At an intensity of 0 percent, the hue and saturation are meaningless (the color is lost and becomes black).
The IHS transformation converts the color image from the RGB color model to the IHS color model. It then replaces the intensity values with values derived from the panchromatic image being used to sharpen the image, a weighting value, and the value from an optional near-infrared band. The resultant image is then output using the RGB color model. The equation used to derive the altered intensity value is as follows:
Intensity = P - I * IW
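The intensity substitution can be sketched with the widely used "fast IHS" approximation, in which the mean of R, G, and B stands in for intensity and the replacement is applied additively to each band. This is an illustrative approximation, not the exact ArcGIS implementation; the function name and signature are assumptions.

```python
import numpy as np

def ihs_pansharpen(pan, r, g, b, nir=None, iw=0.0):
    """Fast-IHS pan-sharpening sketch.

    The mean of R, G, B approximates the intensity component, and the
    substitution Intensity = P - I * IW (from the text) is applied as
    an additive shift to every band. Illustrative only.
    """
    pan, r, g, b = (a.astype(np.float64) for a in (pan, r, g, b))
    nir_term = iw * nir.astype(np.float64) if nir is not None else 0.0

    intensity = (r + g + b) / 3.0      # simple intensity of the RGB image
    new_intensity = pan - nir_term     # Intensity = P - I * IW
    delta = new_intensity - intensity  # shift applied to every band

    return r + delta, g + delta, b + delta
```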
The ESRI pan-sharpening transformation uses weighted averaging (WA) and the additional near-infrared band (optional) to create its pan-sharpened output bands. The weighted average is calculated by using the following formula:
WA = (R * RW + G * GW + B * BW + I * IW) / (RW + GW + BW + IW)
The result of the weighted average is used to create an adjustment value (ADJ), which is then used in calculating the output values, as shown in the following example:
ADJ = P - WA
Red_out = R + ADJ
Green_out = G + ADJ
Blue_out = B + ADJ
Near_Infrared_out = I + ADJ
For the ESRI pan-sharpening transformation, weight values of 0.166, 0.167, 0.167, and 0.5 (R, G, B, I) provide good results with QuickBird imagery. Changing the near-infrared weight value makes the green output more or less vibrant.
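The ESRI weighted-average equations above translate directly to NumPy. The function name and signature are illustrative; the default weights are the QuickBird values quoted in the text.

```python
import numpy as np

def esri_pansharpen(pan, r, g, b, nir,
                    rw=0.166, gw=0.167, bw=0.167, iw=0.5):
    """ESRI weighted-average pan-sharpening sketch.

    Implements WA and ADJ as given in the text; bands are assumed to
    be resampled to the panchromatic resolution. Defaults are the
    QuickBird weight values quoted in the documentation.
    """
    pan, r, g, b, nir = (a.astype(np.float64)
                         for a in (pan, r, g, b, nir))

    # WA = (R * RW + G * GW + B * BW + I * IW) / (RW + GW + BW + IW)
    wa = (r * rw + g * gw + b * bw + nir * iw) / (rw + gw + bw + iw)

    # ADJ = P - WA, added to each band to produce the output.
    adj = pan - wa
    return r + adj, g + adj, b + adj, nir + adj
```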
Pan-sharpening can be applied when certain raster types, such as Landsat and QuickBird Basic, are added to the image service definition. It can also be applied as part of the process chain.
The pan-sharpen method can also be used to fuse different types of data, such as the hillshade of an elevation model, with a color image.
Two check boxes on the Pan-sharpen Process Definition dialog box manage the resampling of the images being pan-sharpened: Resample multispectral image and Resample additional band image. By default, both are checked, and you will likely keep the defaults. Normally, each input (the multispectral image, the panchromatic image, and sometimes an infrared image) is resampled or rectified, and then the images are merged. Uncheck these options only if you know that the panchromatic and multispectral images were captured at exactly the same time and are therefore perfectly registered to one another, as with some digital camera imaging systems. When the options are unchecked, the inputs are not resampled or rectified, because they are assumed to align properly; they are only merged to create the pan-sharpened product, which speeds up the pan-sharpen process.