Digital Image Processing

by Amity Kumar, Amity University
What Is Digital Image Processing?

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.
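As a minimal sketch of these definitions (the notes themselves give no code; Python with NumPy is assumed here), a digital image can be held as a two-dimensional array of discrete gray levels, and the average intensity mentioned above is a single number computed from that array:

```python
import numpy as np

# A tiny 4x5 "digital image": each entry is the gray level f(x, y) at one
# pixel, stored as an 8-bit value in the range 0..255.
f = np.array([
    [ 12,  40,  80, 120, 200],
    [ 15,  42,  85, 130, 210],
    [ 10,  38,  90, 140, 220],
    [  9,  35,  95, 150, 230],
], dtype=np.uint8)

# The image is a finite set of discrete picture elements (pixels), each with
# a location (row, column) and a value (its gray level).
print("image size:", f.shape)              # (4, 5)
print("gray level at (2, 3):", f[2, 3])    # 140

# Computing the average intensity yields a single number rather than an image,
# the kind of operation a strict "images in, images out" definition excludes.
print("average intensity:", f.mean())
```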
There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision. Digital image processing thus encompasses processes whose inputs and outputs are images and, in addition, processes that extract attributes from images, up to and including the recognition of individual objects.

As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing.
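To make the low-level category concrete, the hypothetical sketch below (again assuming Python with NumPy; the function name mean_filter_3x3 is ours) applies a 3x3 averaging filter to reduce noise. Both its input and its output are images, which is the defining trait of a low-level process:

```python
import numpy as np

def mean_filter_3x3(image):
    """Replace each interior pixel with the average of its 3x3 neighborhood.

    A primitive low-level operation: the input is an image and the output
    is an image (slightly smoothed, with reduced noise).
    """
    img = image.astype(np.float64)
    out = img.copy()
    # Sum the nine shifted copies of the image that cover each 3x3 window.
    acc = np.zeros_like(img[1:-1, 1:-1])
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            acc += img[1 + dr : img.shape[0] - 1 + dr,
                       1 + dc : img.shape[1] - 1 + dc]
    out[1:-1, 1:-1] = acc / 9.0
    return out.astype(image.dtype)

# Example: a flat gray image corrupted by random noise, then smoothed.
rng = np.random.default_rng(0)
noisy = (100 + rng.normal(0, 20, size=(64, 64))).clip(0, 255).astype(np.uint8)
smoothed = mean_filter_3x3(noisy)
print("noise (std) before:", noisy.std(), "after:", smoothed[1:-1, 1:-1].std())
```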
Fundamental Steps in Digital Image Processing

Image acquisition is the first step. Note that acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.

Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing.
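As one concrete (and hypothetical) example of enhancement, the sketch below, still assuming Python with NumPy, performs simple linear contrast stretching so that the darkest pixel maps to 0 and the brightest to 255. Whether the result "looks better" remains a subjective judgement:

```python
import numpy as np

def stretch_contrast(image):
    """Linearly rescale gray levels so they span the full 0..255 range.

    A basic enhancement technique: it brings out detail that is obscured
    when the original image occupies only a narrow band of gray levels.
    """
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # constant image: nothing to stretch
        return image.copy()
    stretched = (img - lo) * 255.0 / (hi - lo)
    return stretched.astype(np.uint8)

# A low-contrast image whose gray levels all sit between 100 and 140.
dull = np.random.default_rng(1).integers(100, 141, size=(32, 32)).astype(np.uint8)
bright = stretch_contrast(dull)
print("gray-level range before:", dull.min(), "-", dull.max())
print("gray-level range after: ", bright.min(), "-", bright.max())
```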
Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result.

Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet.

Wavelets are the foundation for representing images in various degrees of resolution.

Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.

Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape.

Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.

Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region.
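To tie the segmentation and representation stages together, the following hypothetical sketch (still assuming Python with NumPy; the helper names are ours) segments a bright object from a dark background with a single global threshold, then keeps either the complete region or only its boundary pixels, which is exactly the representation decision described above:

```python
import numpy as np

def segment_by_threshold(image, t):
    """Partition the image into object vs. background by a global threshold.

    Returns a binary mask that is True where a pixel belongs to the object.
    This is the simplest possible segmentation procedure.
    """
    return image > t

def boundary_of(mask):
    """Keep only the boundary of a binary region.

    A pixel is on the boundary if it belongs to the region but at least one
    of its 4-neighbours does not (a crude erosion followed by a difference).
    """
    padded = np.pad(mask, 1, mode="constant", constant_values=False)
    interior = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~interior

# Synthetic image: a bright 10x10 square on a dark background.
img = np.zeros((32, 32), dtype=np.uint8)
img[10:20, 10:20] = 200

region = segment_by_threshold(img, t=128)   # complete-region representation
edge = boundary_of(region)                  # boundary representation

print("pixels in region:  ", int(region.sum()))   # 100
print("pixels on boundary:", int(edge.sum()))     # 36
```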
