
Digital Image Processing (Filtering)

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems.
What Is Image Filtering in the Spatial Domain?
Filtering is a technique for modifying or enhancing an image. For example, you can filter an image to emphasize certain features or remove other features. Image processing operations implemented with filtering include smoothing, sharpening, and edge enhancement.
Filtering is a neighborhood operation, in which the value of any given pixel in the output image is determined by applying some algorithm to the values of the pixels in the neighborhood of the corresponding input pixel. A pixel's neighborhood is some set of pixels, defined by their locations relative to that pixel. (See Neighborhood or Block Processing: An Overview for a general discussion of neighborhood operations.) Linear filtering is filtering in which the value of an output pixel is a linear combination of the values of the pixels in the input pixel's neighborhood.
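As a minimal sketch of that definition, assuming a made-up 3x3 neighborhood and an equal-weight kernel, the output pixel is simply the weighted sum of the neighborhood values:

```python
import numpy as np

# A 3x3 neighborhood of pixel values taken from around one input pixel
# (made-up numbers), and a kernel of weights.  The output pixel is just
# the weighted sum of the neighborhood -- that is all linear filtering means.
neighborhood = np.array([[10, 20, 30],
                         [40, 50, 60],
                         [70, 80, 90]], dtype=float)
weights = np.full((3, 3), 1.0 / 9.0)     # equal weights: a plain averaging kernel
output_pixel = np.sum(weights * neighborhood)
print(output_pixel)                      # 50.0 for this example
```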
SPATIAL FILTERING:
A characteristic of remotely sensed images is a parameter called spatial frequency, defined as the number of changes in Brightness Value per unit distance for any particular part of an image. If there are very few changes in Brightness Value over a given area in an image, this is referred to as a low frequency area. Conversely, if the Brightness Values change dramatically over short distances, this is an area of high frequency.
Spatial filtering is the process of dividing the image into its constituent spatial frequencies, and selectively altering certain spatial frequencies to emphasize some image features. This technique increases the analyst's ability to discriminate detail. The three types of spatial filters used in remote sensor data processing are: Low pass filters, Band pass filters and High pass filters.
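As a minimal sketch of the idea, assuming a single 1-D scan line of Brightness Values: smoothing keeps the low spatial frequencies, and subtracting the smoothed line from the original leaves the high spatial frequencies, which is essentially how low pass and high pass results relate.

```python
import numpy as np

# One image scan line: two slow ramps (low frequency) joined by a sharp
# step (high frequency).
line = np.concatenate([np.linspace(10, 50, 20), np.linspace(150, 190, 20)])

low = np.convolve(line, np.ones(5) / 5, mode="same")  # low pass: local average
high = line - low                                      # high pass: what smoothing removed

# `high` is close to zero along the ramps and large around the step,
# showing that abrupt edges are high-spatial-frequency features.
```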

        A.   LOW PASS (SMOOTHING) FILTERS
Low pass filtering (aka smoothing) is employed to remove high spatial frequency noise from a digital image. Low-pass filters usually employ a moving window operator which affects one pixel of the image at a time, changing its value by some function of a local region (window) of pixels. The operator moves over the image to affect all the pixels in the image.
Smoothing filters are typically classified into two groups depending on how the pixels lying in the spatial window are combined. If the filter combines pixels in a linear fashion, i.e. as a weighted summation, it is known as a Linear Filter; otherwise it is called a Non-Linear Filter. As given in fig. 1, we will consider the Box (mean) and Gaussian filters under the Linear filter category, and the Minimum/Maximum and Median filters under the Non-Linear filter category.
I. LINEAR FILTER:
Linear filters process time-varying input signals to produce output signals, subject to the constraint of linearity. This results from systems composed solely of components (or digital algorithms) classified as having a linear response. Most filters implemented in analog electronics, in digital signal processing, or in mechanical systems are classified as causal, time invariant, and linear signal processing filters.
The general concept of linear filtering is also used in statistics, data analysis, and mechanical engineering among other fields and technologies. This includes non-causal filters and filters in more than one dimension such as those used in image processing; those filters are subject to different constraints leading to different design methods.
(a) BOX FILTER:
Box filtering is basically an average-of-surrounding-pixels kind of image filtering. It is actually a convolution filter, which is a commonly used mathematical operation for image filtering. A convolution filter provides a method of multiplying two arrays to produce a third one. In box filtering, the image sample and the filter kernel are multiplied to get the filtering result. The filter kernel is like a description of how the filtering is going to happen; it actually defines the type of filtering. The power of box filtering is that one can write a general image filter that can sharpen, emboss, edge-detect, smooth, motion-blur, et cetera, provided the appropriate filter kernel is used.

Now that I have probably whetted your appetite, let us look further into the coolness of box filtering and its filter kernel. A filter kernel defines the filtering type, but what exactly is it? Think of it as a fixed-size small box or window larger than a pixel. Imagine that it slides over the sample image through all positions. While doing so, it constantly calculates the average of what it sees through its window.
 

The minimum standard size of a filter kernel is 3x3, as shown in the diagram above. Due to the rule that a filter kernel must fit within the boundary of the sampled image, no filtering will be applied along all four sides of the image in question. With special treatment it can be done, but what is more important than making the basics work first? Enough talk, let's get to the implementation asap!
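Here is a minimal sketch of such an implementation (plain Python/NumPy with illustrative names; border pixels are simply copied through, matching the rule above):

```python
import numpy as np

def filter_image(image, kernel):
    """Slide the kernel window over the image and replace each pixel with the
    weighted sum of the pixels seen through the window.  As described above,
    pixels on the four sides (where the window would leave the image) are
    simply copied through unfiltered."""
    img = np.asarray(image, dtype=float)
    k = np.asarray(kernel, dtype=float)
    r = k.shape[0] // 2
    out = img.copy()
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            out[i, j] = np.sum(img[i - r:i + r + 1, j - r:j + r + 1] * k)
    return out

# A 3x3 box (mean) kernel: every weight is 1/9, i.e. a plain average.
box_kernel = np.full((3, 3), 1.0 / 9.0)
smoothed = filter_image(np.random.randint(0, 256, (32, 32)), box_kernel)
```

Swapping in a different kernel (sharpen, emboss, edge-detect, and so on) changes the effect without touching the loop, which is exactly the generality promised above.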
(b) GAUSSIAN FILTER:

Brief Description

The Gaussian smoothing operator is a 2-D convolution operator that is used to `blur' images and remove detail and noise. In this sense it is similar to the mean filter, but it uses a different kernel that represents the shape of a Gaussian (`bell-shaped') hump. This kernel has some special properties which are detailed below.

                                
The 2-D isotropic Gaussian distribution (with zero mean) has the form

    G(x, y) = (1 / (2πσ²)) · exp( −(x² + y²) / (2σ²) )

where σ is the standard deviation of the distribution. The idea of Gaussian smoothing is to use this 2-D distribution as a `point-spread' function, and this is achieved by convolution. Since the image is stored as a collection of discrete pixels, we need to produce a discrete approximation to the Gaussian function before we can perform the convolution. In theory, the Gaussian distribution is non-zero everywhere, which would require an infinitely large convolution kernel, but in practice it is effectively zero more than about three standard deviations from the mean, and so we can truncate the kernel at this point. Figure 3 shows a suitable integer-valued convolution kernel that approximates a Gaussian with a σ of 1.0. It is not obvious how to pick the values of the mask to approximate a Gaussian. One could use the value of the Gaussian at the centre of a pixel in the mask, but this is not accurate because the value of the Gaussian varies non-linearly across the pixel. We integrated the value of the Gaussian over the whole pixel (by summing the Gaussian at 0.001 increments). The integrals are not integers: we rescaled the array so that the corners had the value 1. Finally, the 273 is the sum of all the values in the mask.
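A minimal sketch of the procedure just described (illustrative NumPy code; the exact integers it produces depend on the integration step and rounding, so they may differ slightly from the figure referred to above):

```python
import numpy as np

def discrete_gaussian_kernel(radius=2, sigma=1.0, step=0.001):
    """Approximate each kernel entry by integrating the Gaussian over the whole
    pixel (here by summing fine samples across the pixel), then rescale so the
    corner entries become 1 and round to integers."""
    size = 2 * radius + 1
    kernel = np.zeros((size, size))
    offsets = np.arange(-0.5, 0.5, step)          # sample points within one pixel
    for i in range(size):
        for j in range(size):
            gx = np.exp(-((i - radius) + offsets) ** 2 / (2 * sigma ** 2))
            gy = np.exp(-((j - radius) + offsets) ** 2 / (2 * sigma ** 2))
            kernel[i, j] = gx.sum() * gy.sum()    # separable: product of 1-D sums
    kernel /= kernel[0, 0]                        # rescale so the corners are 1
    return np.rint(kernel).astype(int)

k = discrete_gaussian_kernel()
print(k)          # an integer-valued 5x5 kernel approximating a Gaussian, sigma = 1.0
print(k.sum())    # divide by this sum when filtering so brightness is preserved
```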

Once a suitable kernel has been calculated, then the Gaussian smoothing can be performed using standard convolution methods. The convolution can in fact be performed fairly quickly since the equation for the 2-D isotropic Gaussian shown above is separable into x and y components. Thus the 2-D convolution can be performed by first convolving with a 1-D Gaussian in the x direction, and then convolving with another 1-D Gaussian in the y direction. (The Gaussian is in fact the only completely circularly symmetric operator which can be decomposed in such a way.) Figure 6 shows the 1-D x component kernel that would be used to produce the full kernel shown in Figure 3 (after scaling by 273, rounding and truncating one row of pixels around the boundary because they mostly have the value 0. This reduces the 7x7 matrix to the 5x5 shown above.). The y component is exactly the same but is oriented vertically.
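A minimal sketch of this separable approach, using a binomial [1, 4, 6, 4, 1] row as an illustrative 1-D kernel (not necessarily the exact kernel of Figure 6):

```python
import numpy as np

# An illustrative 1-D Gaussian-like kernel (binomial weights), normalized to sum to 1.
g1d = np.array([1, 4, 6, 4, 1], dtype=float)
g1d /= g1d.sum()

def separable_gaussian_blur(image):
    """Convolve every row with the 1-D kernel, then every column of the result.
    Because the Gaussian is separable, this equals a single convolution with the
    full 2-D kernel (np.outer(g1d, g1d)) but needs far fewer multiplications."""
    img = np.asarray(image, dtype=float)
    rows = np.apply_along_axis(lambda r: np.convolve(r, g1d, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, g1d, mode="same"), 0, rows)
```

For a kernel of width k this costs roughly 2k multiplications per pixel instead of k·k.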

A further way to compute a Gaussian smoothing with a large standard deviation is to convolve an image several times with a smaller Gaussian. While this is computationally complex, it can have applicability if the processing is carried out using a hardware pipeline.
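A useful related fact (standard, though not stated above) is that the standard deviations of repeated Gaussian passes add in quadrature: n passes with standard deviation s behave like a single pass with standard deviation s·sqrt(n). A quick numerical check:

```python
import numpy as np

g = np.array([1, 4, 6, 4, 1], dtype=float)   # small Gaussian-like kernel
g /= g.sum()

def kernel_sigma(k):
    """Standard deviation of a normalized kernel about its center."""
    x = np.arange(len(k)) - (len(k) - 1) / 2
    return np.sqrt(np.sum(k * x ** 2))

g_twice = np.convolve(g, g)                  # two passes == one pass with this wider kernel
print(kernel_sigma(g), kernel_sigma(g_twice))  # the second is sqrt(2) times the first
```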
The Gaussian filter not only has utility in engineering applications; it is also attracting attention from computational biologists because it has been attributed with some amount of biological plausibility, e.g. some cells in the visual pathways of the brain often have an approximately Gaussian response.
II. NON-LINEAR FILTER:
Linear filters have a disadvantage: while smoothing, noise is reduced but points, edges and lines are blurred as well, which deteriorates the quality of the image. To avoid this effect, Non-Linear filters are used. The Minimum and Maximum filters deal with noise that appears as white and black dots superimposed on an image, known as "salt & pepper" noise. Such noise is also effectively removed by using the median filter, which often creates small spots of flat intensity.
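A minimal sketch of a 3x3 median filter (illustrative NumPy code; border pixels are left unchanged for simplicity):

```python
import numpy as np

def median_filter_3x3(image):
    """Replace each interior pixel with the median of its 3x3 neighborhood.
    Border pixels are left unchanged for simplicity."""
    img = np.asarray(image, dtype=float)
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

# Salt & pepper noise: isolated white (255) and black (0) pixels on a flat patch.
noisy = np.full((5, 5), 100.0)
noisy[1, 1], noisy[3, 2] = 255, 0
print(median_filter_3x3(noisy))   # the outliers are replaced by the local median, 100
```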
