Note for Digital Image Processing - DIP by Abhishek Apoorv



MODULE-1 DIGITAL IMAGE INTRODUCTION

Digital image processing deals with developing a digital system that performs operations on a digital image. An image is nothing more than a two-dimensional signal. It is defined by the mathematical function f(x, y), where x and y are the two coordinates (horizontal and vertical), and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to the processing of digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value; these elements are referred to as picture elements, image elements, or pixels.

Motivation and Perspective

Digital image processing deals with the manipulation of digital images through a digital computer. It is a subfield of signals and systems but focuses particularly on images. DIP focuses on developing a computer system that is able to perform processing on an image. The input of such a system is a digital image; the system processes that image using efficient algorithms and gives an image as output. The most common example is Adobe Photoshop, one of the most widely used applications for processing digital images.

Applications

Some of the major fields in which digital image processing is widely used are:
1. Gamma-Ray Imaging – nuclear medicine and astronomical observations.
2. X-Ray Imaging – X-rays of the body.
3. Ultraviolet Band – lithography, industrial inspection, microscopy, lasers.
4. Visible and Infrared Bands – remote sensing.
5. Microwave Band – radar imaging.
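The definition above can be made concrete with a small sketch. The following is an illustrative example (not part of the original notes), representing a digital image as a NumPy array in which indexing at a coordinate pair returns the gray level at that point:

```python
import numpy as np

# A digital image is the function f(x, y) sampled on a finite grid, with
# finite, discrete amplitude values. Here we build a tiny synthetic 8-bit
# grayscale image as a 2-D array (values 0..19, purely for illustration).
height, width = 4, 5
image = np.arange(height * width, dtype=np.uint8).reshape(height, width)

# The amplitude of f at a coordinate pair (x, y) is the intensity
# (gray level) of the image at that point.
x, y = 2, 3
gray_level = image[x, y]

print(image.shape)   # the finite number of elements: (rows, columns)
print(gray_level)    # the gray level at (2, 3)
```

Because each pixel is stored as an 8-bit quantity (`uint8`), the gray levels are automatically confined to the finite, discrete range 0 to 255.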


Components of an Image Processing System

i) Image sensors: With reference to sensing, two elements are required to acquire a digital image. The first is a physical device that is sensitive to the energy radiated by the object we wish to image; the second is specialized image processing hardware.

ii) Specialized image processing hardware: It consists of the digitizer just mentioned, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs arithmetic (such as addition and subtraction) and logical operations in parallel on images.

iii) Computer: It is a general-purpose computer and can range from a PC to a supercomputer, depending on the application. In dedicated applications, specially designed computers are sometimes used to achieve a required level of performance.

iv) Software: It consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules. More sophisticated software packages allow the integration of those modules.

v) Mass storage: This capability is a must in image processing applications. An image of size 1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed. Image processing applications fall into three principal categories of storage:
   i) Short-term storage for use during processing
   ii) On-line storage for relatively fast retrieval
   iii) Archival storage, such as magnetic tapes and disks
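The mass-storage figure quoted above can be checked directly. A minimal sketch (illustrative, not part of the original notes) of the arithmetic for an uncompressed grayscale image:

```python
# Storage required for an uncompressed grayscale image, as in the notes:
# a 1024 x 1024 image with an 8-bit (1-byte) intensity per pixel.
width = 1024
height = 1024
bits_per_pixel = 8

total_bits = width * height * bits_per_pixel
total_bytes = total_bits // 8          # 8 bits per byte

print(total_bytes)            # 1048576 bytes
print(total_bytes / 2**20)    # exactly 1.0 mebibyte, the "one megabyte" above
```

The same arithmetic shows why storage grows quickly with bit depth and resolution: a 24-bit color image of the same size would need three times as much space.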


vi) Image displays: Image displays in use today are mainly color TV monitors. These monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system.

vii) Hardcopy devices: The devices for recording images include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks. Film provides the highest possible resolution, but paper is the obvious medium of choice for written applications.

viii) Networking: It is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.

Elements of Visual Perception

Structure of the Human Eye

The eye is nearly a sphere, with an average diameter of approximately 20 mm. The eye is enclosed by three membranes:

a) The cornea and sclera: the cornea is a tough, transparent tissue that covers the anterior surface of the eye. The rest of the optic globe is covered by the sclera.

b) The choroid: it contains a network of blood vessels that serve as the major source of nutrition to the eye, and it helps to reduce the amount of extraneous light entering the eye. It has two parts: (1) the iris diaphragm, which contracts or expands to control the amount of light that enters the eye, and (2) the ciliary body.

c) The retina: it is the innermost membrane of the eye. When the eye is properly focused, light from an object outside the eye is imaged on the retina. There are various light receptors over the surface of the retina. The two major classes of receptors are:

1) Cones: these number about 6 to 7 million and are located in the central portion of the retina, called the fovea. They are highly sensitive to color. Humans can resolve fine detail with the cones because each one is connected to its own nerve end. Cone vision is called photopic or bright-light vision.


2) Rods: these are much larger in number, from 75 to 150 million, and are distributed over the entire retinal surface. The larger area of distribution, and the fact that several rods are connected to a single nerve end, give a general, overall picture of the field of view. They are not involved in color vision and are sensitive to low levels of illumination. Rod vision is called scotopic or dim-light vision. The region of the retina where receptors are absent is called the blind spot.

Image Formation in the Eye

The major difference between the lens of the eye and an ordinary optical lens is that the former is flexible. The shape of the lens of the eye is controlled by tension in the fibers of the ciliary body. To focus on distant objects, the controlling muscles cause the lens to become relatively flattened; to focus on objects near the eye, these muscles allow the lens to become thicker. The distance between the center of the lens and the retina, called the focal length, varies from about 17 mm to about 14 mm as the refractive power of the lens increases from its minimum to its maximum. When the eye focuses on an object farther away than about 3 m, the lens exhibits its lowest refractive power; when the eye focuses on a nearby object, the lens is most strongly refractive. The retinal image is focused primarily in the area of the fovea. Perception then takes place by the relative excitation of light receptors, which transform radiant energy into electrical impulses that are ultimately decoded by the brain.
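The focal-length geometry above lets one estimate the size of a retinal image by similar triangles: for an object of height H metres at a distance D metres (beyond about 3 m, so the focal length is near its 17 mm maximum), H/D = h/17, where h is the image height in millimetres on the retina. A minimal sketch, with a hypothetical example of a 15 m tree viewed from 100 m (the function name and numbers are illustrative, not from the notes):

```python
# Retinal image size by similar triangles (illustrative sketch).
# Assumes the ~17 mm focal length that applies when the eye views
# objects farther away than about 3 m, per the notes above.
def retinal_image_height_mm(object_height_m, object_distance_m,
                            focal_length_mm=17.0):
    # Similar triangles: H / D = h / f  =>  h = f * H / D
    # (H and D are in the same units, so their ratio is dimensionless
    # and h comes out in millimetres, like f.)
    return focal_length_mm * object_height_m / object_distance_m

# Hypothetical example: a 15 m tall tree viewed from 100 m away.
h = retinal_image_height_mm(15.0, 100.0)
print(round(h, 2))  # about 2.55 mm of retinal image
```

The result illustrates how strongly the eye scales down the visual field: a 15 m object maps to an image only a few millimetres high, centred on the fovea.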
