
DIGITAL IMAGE PROCESSING [17EC72]

MODULE-1

1. What is Digital Image Processing?


An image is a two-dimensional signal. It is defined by the mathematical function f(x, y), where x and y are the horizontal and vertical coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.

When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.

A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, or pixels.
2. Write a note on the origins (history) of Digital Image Processing
a) Early 1920s: One of the first applications of digital imaging was in the newspaper industry
– The Bartlane cable picture transmission service
– Images were transferred by submarine cable between London and New York
– Pictures were coded for cable transfer and reconstructed at the receiving end on a telegraph printer
Mid to late 1920s: Improvements to the Bartlane system resulted in higher quality images
– New reproduction processes based on photographic techniques
– Increased number of tones in reproduced images


b) 1960s: Improvements in computing technology and the onset of the space race led to a surge of
work in digital image processing
– 1964: Computers used to improve the quality of images of the moon taken by the Ranger 7 probe
– Such techniques were used in other space missions including the Apollo landings

c) 1970s: Digital image processing begins to be used in medical applications


– 1979: Sir Godfrey N. Hounsfield & Prof. Allan M. Cormack share the Nobel Prize in medicine for
the invention of tomography, the technology behind Computerised Axial Tomography (CAT)
scans

d) 1980s - Today: The use of digital image processing techniques has exploded, and they are now used for all kinds of tasks in all kinds of areas:
– Image enhancement/restoration
– Artistic effects
– Medical visualisation
– Industrial inspection
– Law enforcement
– Human-computer interfaces

3. Applications of Digital Image Processing


i) Gamma-ray imaging: nuclear medicine and astronomical observations
ii) X-rays: medical diagnostics, industry, astronomy, etc.
iii) Ultraviolet: lithography, industrial inspection, microscopy, lasers, biological imaging, and astronomical observations
iv) Visible and infrared bands: light microscopy, astronomy, remote sensing, industry, and law enforcement
v) Microwave band: radar
vi) Radio band: medicine (such as MRI) and astronomy
i) Gamma-ray imaging
Major uses of imaging based on gamma rays include nuclear medicine and astronomical observations. In nuclear medicine, the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays. Images are produced from the emissions collected by gamma-ray detectors.
Positron emission tomography (PET)
The patient is given a radioactive isotope that emits positrons as it decays. When a positron meets an electron, both are annihilated and two gamma rays are given off. These are detected, and a tomographic image is created using the basic principles of tomography.
ii) X-ray imaging (oldest source of EM radiation used for imaging)

X-rays for medical and industrial imaging are generated using an x-ray tube, which is a vacuum tube with a cathode and an anode. The cathode is heated, causing free electrons to be released. These electrons flow at high speed to the positively charged anode. When the electrons strike a nucleus, energy is released in the form of x-ray radiation.
Angiography is another major application, in an area called contrast-enhancement radiography. The procedure is used to obtain images of blood vessels.
iii) Imaging in the microwave band

The dominant application in the microwave band is radar. The unique feature of imaging radar is its ability to collect data over virtually any region at any time, regardless of weather or ambient lighting conditions. Some radar waves can penetrate clouds, and under certain conditions can also see through vegetation, ice, and extremely dry sand.

iv) Imaging in the radio band

Major applications of imaging in the radio band are in medicine and astronomy. In medicine, radio waves are used in magnetic resonance imaging (MRI). This technique places a patient in a powerful magnet and passes radio waves through his or her body in short pulses. Each pulse causes a responding pulse of radio waves to be emitted by the patient's tissues. The location from which these signals originate and their strength are determined by a computer, which produces a two-dimensional picture of a section of the patient.
v) Other imaging modalities: acoustic images, electron microscopy, and synthetic (computer-generated) images

Imaging using sound finds application in geological exploration, industry, and medicine. The most important commercial applications of image processing in geology are in mineral and oil exploration. Ultrasound imaging is used routinely in manufacturing; the best-known applications of this technique are in medicine, especially in obstetrics, where unborn babies are imaged to determine the health of their development.
4. FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING

Block diagram of Fundamental Steps in Digital Image Processing


1. Image Acquisition
This is the first step of the fundamental steps of digital image processing. Image acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.
The image is captured by a sensor (e.g. a camera) and digitized, using an analogue-to-digital converter, if the output of the camera or sensor is not already in digital form.
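As a concrete illustration, here is a minimal sketch of loading an already-digitized image as a numerical array. It assumes the Pillow and NumPy libraries are available, and the file name sample.png is hypothetical:

    import numpy as np
    from PIL import Image

    # Load a file and convert it to 8-bit grayscale ("L" mode);
    # "sample.png" is a hypothetical file name.
    img = Image.open("sample.png").convert("L")
    f = np.asarray(img)         # 2-D array of gray levels: the digital image f(x, y)
    print(f.shape, f.dtype)     # e.g. (512, 512) uint8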
2. Image Enhancement
Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image, such as by changing brightness and contrast.
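As a minimal sketch of one such operation, a linear point transformation g = alpha*f + beta adjusts contrast (alpha) and brightness (beta); the values alpha = 1.2 and beta = 20 below are illustrative, not prescribed by the notes:

    import numpy as np

    def adjust_brightness_contrast(f, alpha=1.2, beta=20.0):
        # Point operation g = alpha*f + beta:
        # alpha scales contrast, beta shifts brightness.
        g = alpha * f.astype(np.float64) + beta
        return np.clip(g, 0, 255).astype(np.uint8)   # clip back to the 8-bit range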

3. Image Restoration
Image restoration is an area that also deals with improving the appearance of an image. However, unlike
enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to
be based on mathematical or probabilistic models of image degradation.

4. Color Image Processing


Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It includes color modeling and processing in a digital domain.

5. Wavelets and Multiresolution Processing


Wavelets are the foundation for representing images in various degrees of resolution. Images are subdivided successively into smaller regions, for data compression and for pyramidal representation.

6. Compression
Compression deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. This is particularly important when images are transmitted over the Internet.

7. Morphological Processing
Morphological processing deals with tools for extracting image components that are useful in the
representation and description of shape.
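As an illustrative sketch (one possible morphological tool, not the notes' own algorithm), binary erosion with a 3 x 3 square structuring element can be written directly in NumPy:

    import numpy as np

    def binary_erosion_3x3(f):
        # A pixel survives only if every pixel in its 3x3 neighborhood is 1.
        padded = np.pad(f, 1, mode="constant", constant_values=0)
        out = np.ones_like(f)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out &= padded[1 + dy : 1 + dy + f.shape[0],
                              1 + dx : 1 + dx + f.shape[1]]
        return out

Erosion shrinks foreground shapes; together with its dual, dilation, it yields opening and closing, the basic shape-extraction operations of morphological processing.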
8. Segmentation
Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous
segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation
procedure brings the process a long way toward successful solution of imaging problems that require objects to
be identified individually.
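A minimal sketch of the simplest such procedure, global thresholding (the threshold T = 128 is an arbitrary illustrative value):

    import numpy as np

    def threshold_segment(f, T=128):
        # Label each pixel as object (1) if its gray level exceeds T,
        # else background (0).
        return (f > T).astype(np.uint8)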

9. Representation and Description


Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel
data, constituting either the boundary of a region or all the points in the region itself. Choosing a
representation is only part of the solution for transforming raw data into a form suitable for subsequent
computer processing. Description deals with extracting attributes that result in some quantitative
information of interest or are basic for differentiating one class of objects from another.
10. Object Recognition
Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors.
11. Knowledge Base
Knowledge may be as simple as detailing regions of an image where the information of interest is known to
be located, thus limiting the search that has to be conducted in seeking that information. The knowledge
base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection
problem or an image database containing high-resolution satellite images of a region in connection with
change-detection applications.

5. COMPONENTS OF AN IMAGE PROCESSING SYSTEM

Block Diagram of Components Of An Image Processing System


1. Image Sensors
Two elements are required to acquire digital images. The first is the physical device that is sensitive to the
energy radiated by the object we wish to image (Sensor). The second, called a digitizer, is a device for
converting the output of the physical sensing device into digital form.
2. Specialized Image Processing Hardware
Usually consists of the digitizer, mentioned before, plus hardware that performs other primitive
operations, such as an arithmetic logic unit (ALU), which performs arithmetic and logical operations in
parallel on entire images. This type of hardware sometimes is called a frontend subsystem, and its most
distinguishing characteristic is speed. In other words, this unit performs functions that require fast data
throughputs that the typical main computer cannot handle.

3. Computer

The computer in an image processing system is a general-purpose computer and can range from a PC to a
supercomputer. In dedicated applications, sometimes specially designed computers are used to achieve a
required level of performance.

4. Image Processing Software


Software for image processing consists of specialized modules that perform specific tasks. A well-designed
package also includes the capability for the user to write code that, as a minimum, utilizes the specialized
modules.

5. Mass Storage Capability


Mass storage capability is a must in image processing applications. An image of size 1024 × 1024 pixels requires one megabyte of storage space if the image is not compressed. Digital storage for image processing applications falls into three principal categories:
1. Short-term storage for use during processing
2. Online storage for relatively fast recall
3. Archival storage, characterized by infrequent access
6. Image Displays
The displays in use today are mainly color (preferably flat screen) TV monitors. Monitors are driven by
the outputs of the image and graphics display cards that are an integral part of a computer system.
7. Hardcopy devices
Used for recording images; these include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks.
8. Networking
Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth. In dedicated networks, this typically is not a problem, but communications with remote sites via the Internet are not always as efficient.

6. ELEMENTS OF VISUAL PERCEPTION


i. Structure of the Human Eye
The eye is nearly a sphere with an average diameter of approximately 20 mm. The eye is enclosed by three membranes:
a) The cornea and sclera: the cornea is a tough, transparent tissue that covers the anterior surface of the eye. The rest of the optic globe is covered by the sclera.
b) The choroid: it contains a network of blood vessels that serve as the major source of nutrition to the eye. It helps to reduce extraneous light entering the eye.
It has two parts:
(1) Iris diaphragm: it contracts or expands to control the amount of light that enters the eye.
(2) Ciliary body


c) Retina: it is the innermost membrane of the eye. When the eye is properly focused, light from an object outside the eye is imaged on the retina. There are various light receptors over the surface of the retina.
The two major classes of receptors are:
1) Cones: these number about 6 to 7 million and are located in the central portion of the retina, called the fovea. They are highly sensitive to color. Humans can resolve fine details with the cones because each one is connected to its own nerve end. Cone vision is called photopic or bright-light vision.
2) Rods: these are far more numerous, from 75 to 150 million, and are distributed over the entire retinal surface. The large area of distribution, and the fact that several rods are connected to a single nerve end, give a general overall picture of the field of view. They are not involved in color vision and are sensitive to low levels of illumination. Rod vision is called scotopic or dim-light vision.
The absence of receptors is called the blind spot.
ii. Distribution of rods and cones in Retina

Distribution of rods and cones in Retina

The absence of receptors in this area results in the so-called blind spot. The figure shows that cones are most dense in the center of the retina (the central area of the fovea) and are distributed lightly over the remaining part of the eye; receptor density is measured in degrees from the fovea. Rods are distributed fairly evenly over the retinal surface.
iii. Image Formation in the Eye
The major difference between the lens of the eye and an ordinary optical lens is that the former is flexible. The shape of the lens of the eye is controlled by tension in the fibers of the ciliary body. To focus on distant objects, the controlling muscles cause the lens to be relatively flattened; to focus on objects near the eye, they allow the lens to become thicker.
The distance between the center of the lens and the retina is called the focal length, and it varies from about 17 mm to about 14 mm as the refractive power of the lens increases from its minimum to its maximum.
When the eye focuses on an object farther away than about 3 m, the lens exhibits its lowest refractive power; when the eye focuses on a nearby object, the lens is most strongly refractive.
The retinal image is focused primarily on the region of the fovea. Perception then takes place by the relative excitation of light receptors, which transform radiant energy into electrical impulses that are ultimately decoded by the brain.
iv. Brightness Adaptation and Discrimination
Digital images are displayed as a discrete set of intensities. The range of light intensity levels to which the human visual system can adapt is enormous, on the order of 10^10, from the scotopic threshold to the glare limit. Experimental evidence indicates that subjective brightness is a logarithmic function of the light intensity incident on the eye.

The curve represents the range of intensities to which the visual system can adapt, but the visual system cannot operate over such a dynamic range simultaneously. Rather, this large variation is accomplished by changes in its overall sensitivity, a phenomenon called brightness adaptation.
For any given set of conditions, the current sensitivity level of the visual system is called the brightness adaptation level, Ba in the curve. The small intersecting curve represents the range of subjective brightness that the eye can perceive when adapted to this level. It is restricted at level Bb, at and below which all stimuli are perceived as indistinguishable blacks. The upper portion of the curve is not actually restricted; extending it would simply raise the adaptation level higher than Ba.
The ability of the eye to discriminate between changes in light intensity at any specific adaptation level is also of considerable interest.


Take a flat, uniformly illuminated area large enough to occupy the entire field of view of the subject. It may be a diffuser, such as opaque glass, that is illuminated from behind by a light source whose intensity I can be varied. To this field an increment of illumination ΔI is added, in the form of a short-duration flash that appears as a circle in the center of the uniformly illuminated field. If ΔI is not bright enough, the subject cannot see any perceivable change.
As ΔI gets stronger, the subject may indicate a perceived change. ΔIc is the increment of illumination discernible 50% of the time with background illumination I. The quantity ΔIc / I is called the Weber ratio.
A small value means that a small percentage change in intensity is discernible, representing "good" brightness discrimination.
A large value of the Weber ratio means a large percentage change in intensity is required, representing "poor" brightness discrimination.

• At low levels of illumination, vision is carried out by the rods.
• At high levels, vision is the function of the cones (better discrimination).

v. Perceived Brightness Is Not a Simple Function of Intensity

Two phenomena demonstrate this: simultaneous contrast, in which a region's perceived brightness depends not only on its intensity but also on the brightness of its surroundings, and optical illusions, in which the eye fills in non-existing information or wrongly perceives geometric properties.

7. IMAGE SENSING AND ACQUISITION


Sensors are arranged in three principal configurations:
a) Single image sensor
b) Line (strip) sensor
c) Array sensor

* The incoming energy is transformed into a voltage by the combination of input electrical power and sensor material.
* The output voltage waveform is the response of the sensor(s).
* A digital quantity is obtained from each sensor by digitizing its response.
a. Image acquisition using a single sensor


i) A photodiode is used as the sensor.
ii) It is made of silicon.
iii) The output voltage waveform is proportional to light.
iv) A filter in front of the sensor increases selectivity.
This mechanical digitizer is also called a microdensitometer.
The above arrangement is used for high-precision scanning. A film negative is mounted onto a drum whose mechanical rotation provides displacement in one dimension. The single sensor is mounted on a lead screw that provides motion in the perpendicular direction.
b. Image Acquisition Using Sensor Strips

A geometry that is used much more frequently than single sensors consists of an in-line arrangement of sensors in the form of a sensor strip. The strip provides imaging elements in one direction; motion perpendicular to the strip provides imaging in the other direction. This is the type of arrangement used in most flatbed scanners. Sensing devices with 4000 or more in-line sensors are possible. In-line sensors are used routinely in airborne imaging applications, in which the imaging system is mounted on an aircraft that flies at a constant altitude and speed over the geographical area to be imaged.
One-dimensional imaging sensor strips that respond to various bands of the electromagnetic spectrum are mounted perpendicular to the direction of flight. The imaging strip gives one line of an image at a time, and the motion of the strip completes the other dimension of the two-dimensional image. Lenses or other focusing schemes are used to project the area to be scanned onto the sensors. Sensor strips mounted in a ring configuration are used in medical and industrial imaging.
c. Image Acquisition Using Sensor Arrays
Individual sensors can be arranged in the form of a 2-D array. Numerous electromagnetic and some ultrasonic sensing devices are frequently arranged in an array format. This is also the predominant arrangement found in digital cameras. A typical sensor for these cameras is a CCD array, which can be manufactured with a broad range of sensing properties and can be packaged in rugged arrays of 4000 × 4000 elements or more.

CCD sensors are used widely in digital cameras and other light-sensing instruments. The response of each sensor is proportional to the integral of the light energy projected onto its surface, a property that is used in astronomical and other applications requiring low-noise images.
8. A Simple Image Formation Model

Images are denoted by two-dimensional functions f(x, y).

The value or amplitude of f at (x, y) is a positive scalar quantity. When an image is generated by a physical process, its intensity values are proportional to the energy radiated by the physical source, so

0 < f(x, y) < ∞

f(x, y) may be characterized by two components:

(1) the amount of source illumination incident on the scene: the illumination i(x, y), and
(2) the amount of illumination reflected by the objects in the scene: the reflectance r(x, y).

The two combine as a product:

f(x, y) = i(x, y) r(x, y), where 0 < i(x, y) < ∞ and 0 < r(x, y) < 1

Reflectance is bounded below by 0 (total absorption) and above by 1 (total reflectance).
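The model is easy to exercise numerically. A small sketch with hypothetical illumination and reflectance values, verifying that f(x, y) stays within the stated bounds:

    import numpy as np

    rng = np.random.default_rng(0)
    i = np.full((4, 4), 90.0)                  # hypothetical uniform illumination
    r = rng.uniform(0.01, 0.99, size=(4, 4))   # reflectance strictly inside (0, 1)

    f = i * r                                  # f(x, y) = i(x, y) * r(x, y)
    assert (f > 0).all() and (f < i).all()     # 0 < f(x, y) < infinity holds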

9. IMAGE SAMPLING AND QUANTIZATION

• The output of many sensors is a continuous voltage. To create a digital image, we need to convert this continuous sensed data into digital form. This involves two processes: sampling and quantization.
• Figure 2.16(a) of the text shows a continuous image f(x, y) that we want to convert to digital form. An image may be continuous with respect to the x- and y-coordinates, and also in amplitude; to convert it to digital form, we have to sample the function in both coordinates and in amplitude.
• Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization (a small sketch of both follows this list).
• In order to form a digital function, the gray-level values must be converted (quantized) into discrete quantities.
• Sampling means that the values of the continuous function f(x, y) are retained only at specific positions (i, j), where 0 ≤ i < Nx and 0 ≤ j < Ny for integers Nx and Ny. The sampling topology depends on the spatial arrangement and size of the sensors used to acquire the image.
• Clearly, the quality of a digital image is determined to a large degree by the number of samples and discrete gray levels used in sampling and quantization.
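A minimal sketch of both operations on a NumPy array, assuming amplitudes normalized to [0, 1]; the sampling step and bit depth below are illustrative:

    import numpy as np

    def sample(f, step=2):
        # Crude uniform sampling: keep every step-th value along each coordinate.
        return f[::step, ::step]

    def quantize(f, k=3):
        # Map amplitudes in [0, 1] onto L = 2**k discrete gray levels.
        L = 2 ** k
        return np.clip((f * L).astype(np.int64), 0, L - 1)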

10. REPRESENTATION OF DIGITAL IMAGES

The digital image f(x, y) is represented as a numerical array of M rows and N columns:

f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
            f(1, 0)      f(1, 1)      ...  f(1, N-1)
            ...          ...          ...  ...
            f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

• The result of sampling and quantization is a matrix of real numbers.
• Thus a sampled digital image has M rows and N columns.
• The values of the coordinates (x, y) have also become discrete, due to sampling.
• Using 1-based indices, we can write the M × N digital image as:

f(x, y) = [ f(1, 1)    f(1, 2)    ...  f(1, N)
            f(2, 1)    f(2, 2)    ...  f(2, N)
            ...        ...        ...  ...
            f(M, 1)    f(M, 2)    ...  f(M, N) ]
• The number of intensity levels L is an integer power of 2:

L = 2^k    (L = number of gray levels, k = number of bits)

Example: if an image has 256 gray levels, then it is an 8-bit image, since 256 = 2^8, i.e. k = 8.
• The range of values spanned by the gray scale is called the dynamic range of an image. The upper limit is determined by saturation and the lower limit by noise.
• An image with a high dynamic range is said to be a high-contrast image.
• An image with a low dynamic range leads to a dull, washed-out look.
• The number of bits required to store a digitized image is

b = M × N × k

and when M = N, b = N²k.
For example, a 256 × 256 image represented with 8 bits takes 524,288 bits, as the sketch below verifies.
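The formula can be checked with a few lines; the figures match the examples above and in Section 5:

    def storage_bits(M, N, k):
        return M * N * k                       # b = M x N x k

    assert storage_bits(256, 256, 8) == 524_288          # the 256 x 256, 8-bit example
    assert storage_bits(1024, 1024, 8) == 8 * 1024**2    # 8,388,608 bits = 1 megabyte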
11. RELATIONSHIPS BETWEEN PIXELS

In this topic, several important relationships between pixels in a digital image are considered.
• Neighbors
• Adjacency
• Connectivity
• Paths
• Regions and boundaries
• Distance measures

i. Neighbors of a Pixel
• A pixel p at coordinates (x,y) has four horizontal and vertical neighbors whose coordinates are given
by: (x+1,y), (x-1, y), (x, y+1), (x,y-1)

             (x, y-1)
(x-1, y)     p(x, y)     (x+1, y)
             (x, y+1)

This set of pixels, called the 4-neighbors of p, is denoted by N4(p). Each pixel is a unit distance from (x, y), and some of the neighbors of p lie outside the digital image if (x, y) is on the border of the image.

• The four diagonal neighbors of p have coordinates:


(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1) and are denoted by ND (p).

(x-1, y-1)               (x+1, y-1)
             p(x, y)
(x-1, y+1)               (x+1, y+1)

These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p). As before, some of the points in ND(p) and N8(p) fall outside the image if (x, y) is on the border of the image.

(x-1, y-1)   (x, y-1)    (x+1, y-1)
(x-1, y)     p(x, y)     (x+1, y)
(x-1, y+1)   (x, y+1)    (x+1, y+1)
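A small sketch that generates these neighbor sets directly from the definitions above (border checking is omitted, as noted in the text):

    def neighbors(x, y):
        # N4(p): horizontal and vertical neighbors; ND(p): diagonal neighbors.
        n4 = {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}
        nd = {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}
        return n4, nd, n4 | nd                 # N8(p) = N4(p) union ND(p)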

ii. Adjacency

Image boundaries and regions are defined by sets of connected pixels. To determine whether two pixels are connected/adjacent, there are two conditions:
a) The two pixels should be neighbors.
b) Their gray levels should be similar, i.e. both values should be in a defined set V.

Let V = {1, 2} for the following examples.
4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
Ex. 1:

0(q1)   1(q2)   1
0       2(p)    0
0       0       1(q3)

• p and q1 are not 4-adjacent because q1 is not in N4(p); q1 is diagonal to p.
• p and q2 are 4-adjacent because q2 is in N4(p), and p = 2 and q2 = 1 are both in the set V.
• p and q3 are not 4-adjacent because q3 is not in N4(p); q3 is diagonal to p.
8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
Ex. 2 (same grid):

0(q1)   1(q2)   1
0       2(p)    0
0       0       1(q3)

• p and q1 are not 8-adjacent: q1 is in ND(p), i.e. q1 is diagonal to p, but q1 = 0 is not in the set V.
• p and q2 are 8-adjacent because p and q2 are 8-neighbors, and p = 2 and q2 = 1 are in the set V.
• p and q3 are 8-adjacent because p and q3 are 8-neighbors, and p = 2 and q3 = 1 are in the set V.
m-adjacency (mixed adjacency): two pixels p and q with values from V are m-adjacent if
· q is in N4(p), or
· q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Ex. 3:

0(q1)   0       1(q2)
0       2(p)    0
0       0       1(q3)
1       0       1

• p and q1 are not m-adjacent: q1 = 0 is not in the set V.
• q2 is diagonal to p, and the pixels in N4(p) ∩ N4(q2) (the pixel above p and the pixel to the right of p) both have value 0; since 0 does not belong to set V, p and q2 are m-adjacent.
• q3 is diagonal to p, and the pixels in N4(p) ∩ N4(q3) (the pixel to the right of p and the pixel below p) also both have value 0, which is not in set V; therefore p and q3 are likewise m-adjacent.
NOTE: In 8-adjacency, ambiguity sometimes exists between pixels (multiple 8-paths between the same pair of pixels); to avoid this, m-adjacency is used.
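A minimal sketch of a 4-/8-adjacency test following the definitions above; p and q are (row, column) tuples, img is a NumPy array, and the function name and interface are this sketch's own, not from the text:

    import numpy as np

    def adjacent(p, q, img, V, kind="4"):
        # Both pixel values must come from the similarity set V.
        if img[p] not in V or img[q] not in V:
            return False
        dr, dc = abs(p[0] - q[0]), abs(p[1] - q[1])
        if kind == "4":
            return dr + dc == 1                # q in N4(p)
        return max(dr, dc) == 1                # q in N8(p)

    img = np.array([[0, 1, 1],
                    [0, 2, 0],
                    [0, 0, 1]])
    print(adjacent((1, 1), (0, 1), img, V={1, 2}, kind="4"))   # True, as for q2 in Ex. 1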

iii. Connectivity

• Two pixels p and q are said to be connected if they are adjacent. Similarly, two image subsets S1 and S2 are connected or adjacent if some pixel in S1 is adjacent to some pixel in S2.
• Consider two image subsets S1 and S2 as in figures (a) and (b), with V = {1}. Pixel p in subimage S1 and pixel q in subimage S2 both have value 1 and are 8-adjacent; thus S1 and S2 are 8-adjacent but not 4-adjacent, because p and q are not 4-adjacent for V = {1}.

iv. Path
A (digital) path (or curve) from pixel p with coordinates (x0, y0) to pixel q with coordinates (xn, yn) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n. Here n is the length of the path.
If (x0, y0) = (xn, yn), the path is a closed path. We can define 4-, 8-, and m-paths based on the type of adjacency used.


v. Boundary (or Border)
• The boundary of a region R is the set of pixels in the region that have one or more neighbors that are not in R.
• If R happens to be an entire image, then its boundary is defined as the set of pixels in the first and last rows and columns of the image.
vi. Region
• Let R be a subset of pixels in an image. If every pixel in R is connected to every other pixel in R, then R is called a region. The boundary of a region is the set of pixels in the region that have one or more neighbors that are not in R; the boundary is the edge of the region.

vii. Distance Measures


Given pixels p, q, and z with coordinates (x, y), (s, t), and (u, v) respectively, a distance function D has the following properties:
a. Distance is a non-negative number: D(p, q) ≥ 0, with D(p, q) = 0 iff p = q.
b. The distance from p to q is the same as the distance from q to p: D(p, q) = D(q, p).
c. The distance between any two points p and z is less than or equal to the sum of the distances between p & q and between q & z:

D(p, z) ≤ D(p, q) + D(q, z)

The following are the different Distance measures:


a. Euclidean distance:

De(p, q) = [(x − s)² + (y − t)²]^(1/2)

All the pixels that have Euclidean distance less than or equal to a value r from pixel p(x, y) are contained in a disk of radius r centered at (x, y).

b. D4 distance (city-block distance):

D4(p, q) = |x − s| + |y − t|

All the pixels having D4 distance less than or equal to a value r are contained in a diamond centered at (x, y). Also, all the pixels with D4 = 1 are the 4-neighbors of (x, y).

c. D8 distance (chessboard distance):

D8(p, q) = max(|x − s|, |y − t|)

All the pixels with D8 distance less than or equal to some value r from (x, y) are contained in a square centered at (x, y). All pixels with D8 = 1 are the 8-neighbors of (x, y).
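All three measures translate directly into code; a small sketch with an illustrative pixel pair:

    def d_euclidean(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def d4(p, q):                              # city-block distance
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def d8(p, q):                              # chessboard distance
        return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

    p, q = (0, 0), (3, 4)
    print(d_euclidean(p, q), d4(p, q), d8(p, q))   # 5.0 7 4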
12. Linear and Non-linear operations

Consider a general operator H that produces an output image g(x, y) from an input image f(x, y). H is said to be a linear operator if, for any two images f1 and f2 and any two scalars a and b,

H[a f1(x, y) + b f2(x, y)] = a H[f1(x, y)] + b H[f2(x, y)]

H is said to be a nonlinear operator if it does not meet this qualification.

The sum (Σ) operator is linear; the MAX operator is nonlinear.
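A quick numerical check, with illustrative 2 × 2 arrays, that the sum operator satisfies the linearity condition while MAX violates it:

    import numpy as np

    f1 = np.array([[0, 2], [2, 3]])
    f2 = np.array([[6, 5], [4, 7]])
    a, b = 1, -1

    # Sum operator: both sides of the linearity condition agree.
    assert np.sum(a * f1 + b * f2) == a * np.sum(f1) + b * np.sum(f2)

    # MAX operator: the two sides disagree, so MAX is nonlinear.
    lhs = np.max(a * f1 + b * f2)              # max of [[-6, -3], [-2, -4]] = -2
    rhs = a * np.max(f1) + b * np.max(f2)      # 1*3 + (-1)*7 = -4
    assert lhs != rhs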
