
Lecture Notes | 10CS762 - DIP | Unit 1: Digital Image and Its Properties

1. Introduction to Digital Image Processing

1.1. General
• DIP refers to the improvement of pictorial information for human interpretation, and to the processing of image data for storage, transmission, and representation for autonomous machine perception.
• Human vision allows us to perceive and understand the world surrounding us. Computer vision aims to duplicate the effect of human vision by electronically perceiving and understanding an image.

1.2. Image, Digital Image
• Ideally, we think of an image as a 2D light intensity function f(x, y), where x and y are spatial coordinates. The value of f at (x, y) is related to the brightness or color of the image at that point. Here x and y are assumed to be continuous.
• A digital image is a 2D representation of a scene as a finite set of digital values, called picture elements, pixels, or pels.
• A digital image can be considered a matrix whose row and column indices identify a point in the image, and whose corresponding element value identifies the gray level at that point (a short sketch follows this subsection).
• The actual image is continuous. Convolution with Dirac impulse functions is used to sample the image: by the sifting property, ∫∫ f(x, y) δ(x − x0, y − y0) dx dy = f(x0, y0), which picks out the image value at each grid point.
• A digital image has a finite number of pixels and a finite number of gray levels.
• Define digital image processing. (2 Marks)
• Briefly explain digital image and its representation. (4 Marks)
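The matrix view above can be made concrete with a minimal sketch (NumPy is an illustrative choice here, not something the notes prescribe): an 8-bit grayscale image is simply a 2D array whose entries are gray levels in 0..255.

```python
import numpy as np

# An 8-bit grayscale image as a matrix: row and column indices identify
# a point, and the stored value is the gray level at that point.
image = np.zeros((4, 5), dtype=np.uint8)   # 4 rows x 5 columns, all black
image[1, 2] = 255                          # pixel at row 1, column 2 set to white
image[3, :] = 128                          # whole last row set to mid-gray

print(image.shape)   # (4, 5)  -> a finite number of pixels
print(image[1, 2])   # 255     -> gray level at a (row, column) point
```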

1.3. Levels of Image Processing
• In order to simplify the task of computer vision understanding, three levels are usually distinguished: low-level, mid-level, and high-level image understanding.
1. Low-level methods usually use very little knowledge about the content of images. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and its outputs are images (see the first sketch after this list).
2. Mid-level processing involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs are generally images, but its outputs are attributes extracted from those images, e.g., edges, contours, and the identity of individual objects (see the second sketch after this list).
3. High-level processing is based on knowledge, goals, and plans of how to achieve those goals; artificial intelligence (AI) methods are used in many cases. High-level computer vision tries to imitate human cognition and the ability to make decisions according to the information contained in the image. Higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects.
• This course (10CS762) deals almost exclusively with low- and mid-level image processing. Low-level computer vision techniques overlap almost completely with digital image processing, which has been practiced for decades.
• As a simple illustration of these concepts, consider the automated analysis of text. Acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters all fall within the scope of what we call digital image processing.
• Explain the 3 levels of image processing. (6 Marks)
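As a concrete low-level example (image in, image out), here is a minimal noise-smoothing sketch; the 3x3 mean filter and NumPy are illustrative choices, not part of the original notes.

```python
import numpy as np

def mean_filter_3x3(img):
    """Low-level smoothing: replace each interior pixel by the mean of
    its 3x3 neighbourhood. Input is an image, output is an image."""
    out = img.astype(np.float32)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            out[r, c] = img[r-1:r+2, c-1:c+2].mean()
    return out.astype(img.dtype)

noisy = np.random.randint(0, 256, size=(64, 64)).astype(np.uint8)
smoothed = mean_filter_3x3(noisy)   # still a 64x64 image
```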

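For contrast, a mid-level sketch in the spirit of the text-analysis example above: threshold a toy "page" and count the connected dark blobs (candidate characters). The threshold value and SciPy's connected-component labelling are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

page = np.full((10, 20), 255, dtype=np.uint8)   # white toy "page"
page[2:5, 2:5] = 10                             # one dark blob ("character")
page[2:5, 8:12] = 10                            # a second dark blob
binary = page < 100                             # dark pixels become foreground
labels, n_objects = ndimage.label(binary)       # connected-component segmentation
print("candidate character regions:", n_objects)  # prints 2 -> attributes, not an image
```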

1.4. Fundamental Stages
• The following are the fundamental image processing stages.
i. Image Acquisition: This is the first fundamental step of digital image processing. An image is captured by a sensor (such as a digital camera) and digitized. The acquired image is completely unprocessed and is the result of whatever sensor was used to generate it. Sensors generally use the electromagnetic energy spectrum, acoustic, or ultrasonic signals.
ii. Image Enhancement: The idea behind enhancement techniques is to bring out detail that is hidden, or simply to highlight certain features of interest in an image, for example by changing brightness and contrast (see the first sketch after this list). Image enhancement is among the simplest and most appealing areas of digital image processing.
iii. Image Restoration: Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation.
iv. Color Image Processing: This may include color modeling and processing in a digital domain. Color image processing has been gaining importance because of the significant increase in the use of digital images over the Internet.
v. Compression: Compression deals with techniques for reducing the storage required to save an image, or the bandwidth needed to transmit it. Compression is particularly important when images are transferred over the Internet (see the second sketch after this list).
vi. Morphological Processing: Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape (see the third sketch after this list).
vii. Segmentation: Segmentation procedures partition an image into its constituent parts or objects; they extract the required portion of the image. In general, automatic segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward the successful solution of imaging problems that require objects to be identified individually.
viii. Representation and Description: Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region or all the points in the region itself. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. Description deals with extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another.
ix. Object Recognition: Recognition is the process that assigns a label, such as "vehicle", to an object based on its descriptors.
x. Knowledge Base: Knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications.
• With a neat block diagram, describe the various phases or fundamental stages of a typical image processing system. (10 Marks)
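A minimal sketch of the enhancement stage (stage ii): a linear contrast stretch that maps a low-contrast image onto the full 0..255 range. This particular stretch is an illustrative choice, not the notes' prescribed method.

```python
import numpy as np

def contrast_stretch(img):
    """Map the image's min..max gray range linearly onto 0..255."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:                      # flat image: nothing to stretch
        return img.copy()
    out = (img.astype(np.float32) - lo) * 255.0 / (hi - lo)
    return out.astype(np.uint8)

dim = np.random.randint(80, 120, size=(32, 32)).astype(np.uint8)  # low-contrast stand-in
enhanced = contrast_stretch(dim)   # gray levels now span 0..255
```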

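For the compression stage (stage v), a small run-length encoding sketch; run-length coding is just one simple lossless technique, used here for illustration.

```python
def run_length_encode(row):
    """Encode one image row as (gray value, run length) pairs."""
    runs = []
    prev, count = row[0], 1
    for v in row[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

# Rows with long constant runs compress well:
row = [0] * 10 + [255] * 5 + [0] * 10
print(run_length_encode(row))   # [(0, 10), (255, 5), (0, 10)] -> 3 pairs instead of 25 values
```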
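For the morphological processing stage (stage vi), a sketch of the two basic operators, erosion and dilation, on a binary image; SciPy and the 3x3 square structuring element are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

obj = np.zeros((7, 7), dtype=bool)   # binary image with a 3x3 square object
obj[2:5, 2:5] = True

se = np.ones((3, 3), dtype=bool)                      # 3x3 structuring element
eroded  = ndimage.binary_erosion(obj, structure=se)   # shrinks the object to its core
dilated = ndimage.binary_dilation(obj, structure=se)  # grows the object outward

print(obj.sum(), eroded.sum(), dilated.sum())   # 9 1 25 foreground pixels
```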

2. Basic Concepts

2.1. Image Functions
• The image can be modeled by a continuous function of two or three variables; in the simplest case the arguments are co-ordinates x, y in a plane, while if images change in time a third variable t may be added.
• The image function values correspond to the brightness at image points. Brightness integrates different optical quantities.
• The image on the human eye retina or on a TV camera sensor is intrinsically 2D. We shall call such a 2D image, bearing information about brightness at points, an intensity image.
• The real world which surrounds us is intrinsically 3D. The 2D intensity image is the result of a perspective projection of the 3D scene, as modelled by the image captured by a pin-hole camera (see the sketch after this list).

[Figure: Perspective projection geometry]

• When 3D objects are mapped onto the camera plane by perspective projection, a lot of information disappears, because such a transformation is not one-to-one.
• How should we understand image brightness? The only information available in an intensity image is the brightness of each pixel, which depends on a number of independent factors: the object surface reflectance properties (given by the surface material, microstructure, and marking), the illumination properties, and the object surface orientation with respect to the viewer and the light source.
• Some scientific and technical disciplines work with 2D images directly, for example an image of a flat specimen viewed through a microscope with transparent illumination, a character drawn on a sheet of paper, or the image of a fingerprint.
• Many basic and useful methods used in digital image analysis do not depend on whether the object was originally 2D or 3D.
• Image processing often deals with static images, in which time t is constant.
• A monochromatic static image is represented by a continuous image function f(x, y) whose arguments are two co-ordinates in the plane.
• The domain of the image function is a region R in the plane, R = {(x, y) : 1 ≤ x ≤ x_m, 1 ≤ y ≤ y_n}, where x_m and y_n represent the maximal image co-ordinates.
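A minimal sketch of the pin-hole projection just described, using the standard model in which a 3D point (X, Y, Z) in camera coordinates maps to the image point (f·X/Z, f·Y/Z) for focal length f; the function name and sample values are illustrative.

```python
def project(point, f=1.0):
    """Perspective (pin-hole) projection of a 3D point in camera coordinates."""
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

# The mapping is not one-to-one: every point on a ray through the
# pin-hole lands on the same image point, so depth is lost.
print(project((1.0, 2.0, 4.0)))   # (0.25, 0.5)
print(project((2.0, 4.0, 8.0)))   # (0.25, 0.5) -> different 3D point, same image point
```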
