4. Fundamental Theories
4.1 Image Definition
An image may be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, the image is called a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image [6].
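To make the definition concrete, the short sketch below (written for this text, not taken from the cited references) represents a small grayscale digital image as a finite two-dimensional array and reads the intensity f(x, y) at one pixel.

import numpy as np

# A digital image: a finite 2-D array of discrete intensity (gray-level) values.
# Rows correspond to the y coordinate, columns to the x coordinate.
image = np.array([[0,  50, 100],
                  [25, 75, 125],
                  [50, 100, 150]], dtype=np.uint8)

y, x = 1, 2                      # spatial coordinates of one picture element (pixel)
intensity = image[y, x]          # f(x, y): the gray level of the image at that point
print(f"f({x}, {y}) = {intensity}")   # f(2, 1) = 125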
Vision is the most advanced of the human senses, so images play the most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images, such as ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications [7].
4.1.2 Digital Image Processing
Digital image processing is electronic data processing on a 2-D array of numbers. The array is a numeric representation of an image. A real image is formed on a sensor when an energy emission strikes the sensor with sufficient intensity to create a sensor output. The energy emission can come from numerous sources (e.g., acoustic, optic). When the energy emission is electromagnetic radiation within the band limits of the human eye, it is called visible light. Some objects only reflect electromagnetic radiation; others produce their own through a phenomenon called radiancy, which occurs in an object that has been heated sufficiently to make it glow visibly. Visible-light images are a special case, yet they appear with great frequency in the image processing literature. Another source of images is the synthetic images of computer graphics, which provide control over illumination and material properties that is generally unavailable in the real image domain [8].
4.1.3 Image Processing Operations
An image is digitized to convert analog data into digital data that can be stored in a computer's memory or on some form of storage media such as a hard disk or USB flash drive. This digitization can be done by a scanner or by a video camera connected to a frame-capture board in a computer. Once the image is digitized, it can be processed by various image processing operations [9]. Image processing operations can be roughly divided into three major categories: (1) Image Compression, (2) Image Enhancement and Restoration, and (3) Measurement Extraction. Image compression is familiar to most people; it involves reducing the amount of memory needed to store a digital image.
Image defects caused by the digitization process or by faults in the imaging set-up (for example, bad lighting) can be corrected using Image Enhancement techniques. Once the image is in good condition, Measurement Extraction operations can be used to obtain useful information from the image. Some examples of Image Enhancement and Measurement Extraction are given below. The examples shown all operate on 256 grey-scale images, meaning that each pixel in the image is stored as a number between 0 and 255, where 0 represents a black pixel, 255 represents a white pixel, and values in between represent shades of grey. These operations can be extended to operate on colour images [10]. Some basic operations in digital image processing are described below [11, 12]:
1. Image enhancement
Image enhancement is used to improve an image by manipulating image parameters so that specific characteristics of the image can be highlighted. Some examples of image enhancement operations are listed below, followed by a short contrast-enhancement sketch:
a. Contrast enhancement
b. Edge enhancement
c. Sharpening
d. Pseudocoloring
e. Noise filtering
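As a minimal sketch of contrast enhancement, the code below performs a generic linear contrast stretch on an 8-bit grayscale array; it is assumed here for illustration rather than taken from the cited references.

import numpy as np

def stretch_contrast(image: np.ndarray) -> np.ndarray:
    """Linearly rescale gray levels so they span the full 0-255 range."""
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return image.copy()
    stretched = (img - lo) * 255.0 / (hi - lo)
    return stretched.astype(np.uint8)

# Example: a low-contrast image whose gray levels only occupy 100-150.
low_contrast = np.random.randint(100, 151, size=(64, 64), dtype=np.uint8)
enhanced = stretch_contrast(low_contrast)
print(low_contrast.min(), low_contrast.max())   # roughly 100 150
print(enhanced.min(), enhanced.max())           # 0 255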
2. Image
restoration
The objective of image restoration is to improve an image in some predefined sense. Although image enhancement and image restoration overlap, restoration is essentially an objective process: it attempts to reconstruct or recover a degraded image using knowledge of the degradation phenomenon. Restoration operations are therefore concerned with modeling the degradation and applying the inverse process in order to recover the original image. Some examples of image restoration, one of which is sketched after this list, are:
a. Deblurring
b. Noise removal
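The following sketch illustrates noise removal with a 3x3 median filter, a common remedy for salt-and-pepper noise; it is a generic example assumed for illustration, not the specific restoration method of the cited work.

import numpy as np

def median_filter3x3(image: np.ndarray) -> np.ndarray:
    """Replace each pixel by the median of its 3x3 neighbourhood."""
    padded = np.pad(image, 1, mode="edge")
    out = np.empty_like(image)
    rows, cols = image.shape
    for y in range(rows):
        for x in range(cols):
            window = padded[y:y + 3, x:x + 3]
            out[y, x] = np.median(window)
    return out

# Example: an image corrupted by a single bright "salt" pixel.
noisy = np.full((5, 5), 80, dtype=np.uint8)
noisy[2, 2] = 255                       # the noise spike
restored = median_filter3x3(noisy)
print(restored[2, 2])                   # 80: the spike is removed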
3. Image
compression
The purpose of image compression operations is to reduce the amount of data required to represent a digital image. Compression is achieved by removing one or more data redundancies: (1) coding redundancy, which is present when less-than-optimal code words are used; (2) interpixel redundancy, which results from correlations between the pixels of an image; and (3) psychovisual redundancy, which is due to data that is ignored by the human visual system. A run-length-encoding sketch after this item illustrates how interpixel redundancy can be removed.
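The sketch below shows run-length encoding, a simple illustration of how removing interpixel redundancy reduces the data needed to represent an image row; it is a generic example, not any particular compression standard.

def run_length_encode(row):
    """Encode a sequence of gray levels as (value, run length) pairs."""
    encoded = []
    current, count = row[0], 1
    for value in row[1:]:
        if value == current:
            count += 1
        else:
            encoded.append((current, count))
            current, count = value, 1
    encoded.append((current, count))
    return encoded

# A row with long runs of identical pixels compresses well.
row = [0] * 20 + [255] * 12 + [0] * 8
print(run_length_encode(row))                      # [(0, 20), (255, 12), (0, 8)]
print(len(row), "pixels ->", len(run_length_encode(row)), "pairs")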
4. Image
segmentation
Segmentation operations divide an image into its constituent regions. The level to which this subdivision is carried depends on the problem being solved, so segmentation stops once the objects of interest have been isolated. One example is the automated inspection of electronic assemblies, where the interest lies in analyzing images of the products in order to detect specific anomalies, such as missing components or broken connection paths. There is no point in carrying segmentation past the level of detail required to identify those elements. A simple thresholding sketch is given below.
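A minimal thresholding sketch, assuming an 8-bit grayscale image and an illustrative threshold of 128:

import numpy as np

def threshold_segment(image: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Return a binary mask: 1 where the pixel belongs to the object, 0 otherwise."""
    return (image >= threshold).astype(np.uint8)

# Example: a dark background (gray level 30) with a brighter square object (200).
image = np.full((8, 8), 30, dtype=np.uint8)
image[2:6, 2:6] = 200
mask = threshold_segment(image, threshold=128)
print(mask.sum(), "object pixels found")   # 16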
5. Image
Analysis
The objective of image analysis is to measure an image quantitatively and present a description of it. These techniques extract characteristics for purposes such as object identification. Sometimes a segmentation step is necessary to localize the object. Examples of image analysis, one of which (edge detection) is sketched after this list, are:
a. Edge detection
b. Boundary extraction
c. Region representations
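As an example of image analysis, the sketch below computes a Sobel gradient magnitude, a basic form of edge detection; it is a minimal illustration written for this text.

import numpy as np

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """Approximate the gradient magnitude of a grayscale image with Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    img = image.astype(np.float64)
    padded = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    rows, cols = img.shape
    for y in range(rows):
        for x in range(cols):
            window = padded[y:y + 3, x:x + 3]
            gx[y, x] = np.sum(window * kx)
            gy[y, x] = np.sum(window * ky)
    return np.hypot(gx, gy)

# Edges appear where gray levels change abruptly, e.g. at the border of a bright square.
image = np.zeros((8, 8), dtype=np.uint8)
image[2:6, 2:6] = 255
edges = sobel_edges(image)
print(edges.max() > 0, edges[0, 0] == 0)   # True True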
4.2 Lung Cancer
Lung cancer is a disease characterised by uncontrolled cell growth in tissues of the lung. It is also the most preventable cancer. Cure rate and prognosis depend on early detection and diagnosis of the disease. Lung cancer symptoms usually do not appear until the disease has progressed, so early detection is not easy. Many early lung cancers are diagnosed incidentally, after doctors find them as a result of tests performed for an unrelated medical condition [13].
There are
two major types of lung cancer: non-small cell and small cell. Non-small cell
lung cancer (NSCLC) comes from epithelial cells and is the most common type.
Small cell lung cancer begins in the nerve cells or hormone-producing cells of
the lung. The term “small cell” refers to the size and shape of the cancer
cells as seen under a microscope. It is important for doctors to distinguish
NSCLC from small cell lung cancer because the two types of cancer are usually
treated in different ways. Lung cancer begins when cells in the lung change and
grow uncontrollably to form a mass called a tumor (or a lesion or nodule). A
tumor can be benign (noncancerous) or malignant (cancerous). A cancerous tumor
is a collection of a large number of cancer cells that have the ability to
spread to other parts of the body. A lung tumor can begin anywhere in the lung
[13].
Figure 1. X-ray images of (a) normal lungs and (b) lung cancer.
Once a cancerous lung tumor
grows, it may or may not shed cancer cells. These cells can be carried away in
blood or float away in the natural fluid, called lymph, that surrounds lung
tissue. Lymph flows through tubes called lymphatic vessels that drain into
collecting stations called lymph nodes, the tiny, bean-shaped organs that help
fight infection. Lymph nodes are located in the lungs, the center of the chest,
and elsewhere in the body. The natural flow of lymph out of the lungs is toward
the center of the chest, which explains why lung cancer often spreads there.
When a cancer cell leaves its site of origin and moves into a lymph node or to
a faraway part of the body through the bloodstream, it is called metastasis
[14].
The stage of lung cancer is determined by the location and
size of the initial lung tumor and whether it has spread to lymph nodes or more
distant sites. The type of lung cancer (NSCLC versus small cell) and stage of
the disease determine what type of treatment is needed.
4.2.1 Lung Cancer Classification
1. Non-small
cell lung cancer
About 85% to 90% of lung cancers are non-small cell lung
cancer (NSCLC). There are 3 main subtypes of NSCLC. The cells in these subtypes
differ in size, shape, and chemical make-up when looked at under a microscope.
But they are grouped together because the approach to treatment and prognosis
(outlook) are very similar [15].
a. Adenocarcinoma
About 40% of lung cancers are adenocarcinomas. These cancers
start in early versions of the cells that would normally secrete substances
such as mucus. This type of lung cancer occurs mainly in people who smoke (or
have smoked), but it is also the most common type of lung cancer seen in
non-smokers. It is more common in women than in men, and it is more likely to
occur in younger people than other types of lung cancer. Adenocarcinoma is
usually found in the outer region of the lung. It tends to grow slower than
other types of lung cancer, and is more likely to be found before it has spread
outside of the lung. People with one type of adenocarcinoma, sometimes called bronchioloalveolar
carcinoma, tend to have a better outlook (prognosis) than those with other
types of lung cancer.
b. Large cell
(undifferentiated) carcinoma
This type of cancer accounts for about 10% to 15% of lung
cancers. It may appear in any part of the lung. It tends to grow and spread quickly,
which can make it harder to treat. A subtype of large cell carcinoma, known as large
cell neuroendocrine carcinoma, is a fast-growing cancer that is very similar to
small cell lung cancer.
c. Other
subtypes
There are also a few other subtypes of non-small cell lung
cancer, such as adenosquamous carcinoma and sarcomatoid carcinoma. These are
much less common.
2. Small cell lung cancer
About 10% to 15% of all lung cancers are small cell lung
cancer (SCLC), named for the size of the cancer cells when seen under a
microscope. Other names for SCLC are oat cell cancer, oat cell carcinoma, and
small cell undifferentiated carcinoma. It is very rare for someone who has
never smoked to have small cell lung cancer. SCLC often starts in the bronchi
near the center of the chest, and it tends to spread widely through the body
fairly early in the course of the disease.
4.2.2 Lung Cancer Risk Factor
1. Tobacco smoke
Smoking
is by far the leading risk factor for lung cancer. Tobacco smoke causes nearly
9 out of 10 cases of lung cancer. The
longer a person has been smoking and the more packs a day smoked, the greater the risk. If a person
stops smoking before lung cancer starts, the lung tissue slowly repairs itself.
Stopping smoking at any age may lower the risk of lung cancer. Cigar and pipe
smoking are almost as likely to cause lung cancer as is cigarette smoking. Smoking
low tar or "light" cigarettes increases lung cancer risk as much as
regular cigarettes. There is concern that menthol cigarettes may increase the
risk even more since the menthol allows smokers to inhale more deeply.
Secondhand
smoke: People who don't smoke but breathe the smoke of others may also be at a
higher risk for lung cancer. Non-smokers who live with a smoker, for instance,
have
about a 20% to 30%
greater risk of developing lung cancer. Non-smokers exposed to tobacco smoke in
the workplace are also more likely to get lung cancer [16].
2. Radon
Radon is
a radioactive gas made by the normal breakdown of uranium in soil and rocks.
Uranium is found at higher levels in the soil in some parts of the United
States. Radon can't be seen, tasted, or smelled. It can build up indoors and
create a possible risk for cancer. The lung cancer risk from radon is much
lower than that from tobacco smoke. But the risk from radon is much higher in
people who smoke than in those who don't [16].
3. Asbestos
Asbestos exposure is another risk factor for lung cancer.
People who work with asbestos have a
higher risk of getting lung cancer. If they also smoke, the risk is greatly
increased. Both smokers and non-smokers exposed to asbestos also have a greater
risk of developing a type of cancer that starts in the lining of the lungs (it
is called mesothelioma). Although asbestos was used for many years, many
countries have now nearly stopped its use in the workplace and in home products.
While it is still present in many buildings, it is not thought to be harmful as
long as it is not released into the air [16].
4.2.3 Lung Cancer Staging
Lung cancer staging is the process of finding out how far a cancer has spread. A patient's treatment and prognosis depend on the cancer stage. Lung cancer staging can be described with the TNM system. The system used to describe the growth and spread of non-small cell lung cancer (NSCLC) is the American Joint Committee on Cancer (AJCC) TNM staging system. The TNM system is based on 3 key pieces of information:
· T indicates the size of the main (primary) tumor and whether it has grown into nearby areas.
· N describes the spread of cancer to nearby (regional) lymph nodes. Lymph nodes are small, bean-shaped collections of immune system cells that help fight infections. Cancers often spread to the lymph nodes before going to other parts of the body.
· M indicates whether the cancer has spread (metastasized) to other organs of the body. (The most common sites are the brain, bones, adrenal glands, liver, kidneys, and the other lung.)
Numbers or letters appear after T, N, and M to provide more
details about each of these factors. The numbers 0 through 4 indicate
increasing severity [17].
4.3 ANFIS
ANFIS is the abbreviation for adaptive neuro-fuzzy inference system. This method is essentially a fuzzy inference system combined with a back-propagation algorithm that tries to minimize the error. Its behaviour draws on both artificial neural networks (ANN) and fuzzy logic (FL): as in both, the input passes through an input layer (via input membership functions) and the output appears at an output layer (via output membership functions). Because a neural network is embedded in this type of advanced fuzzy logic, a learning algorithm adjusts the parameters until an optimal solution is reached. In effect, the fuzzy logic system uses the advantages of the neural network to tune its own parameters [18].
Several fuzzy inference systems have been described by different researchers (Mamdani, E.H., 1974; Sugeno, M. and G.T. Kang, 1988; Sugeno, M. and K. Tanaka, 1991; Takagi, T. and M. Sugeno, 1985; Zadeh, L.A., 1965). The most commonly used are the Mamdani type and the Takagi–Sugeno type, also known as the Takagi–Sugeno–Kang type. In a Mamdani-type fuzzy inference system, both the premise (if) and consequent (then) parts of a fuzzy if-then rule are fuzzy propositions. In a Takagi–Sugeno-type fuzzy inference system, the premise part of a fuzzy rule is a fuzzy proposition, while the consequent part is a mathematical function, usually a zero- or first-degree polynomial. The advantage of FL for grade estimation is clear: it provides a powerful, flexible tool whose if-then rules can solve problems even when data are scarce. As discussed, one of the biggest problems in applying FL is choosing the shape and location of the membership function for each fuzzy variable, which can only be done by trial and error. In contrast, numerical computation and learning are the strengths of neural networks; however, it is not easy to obtain the optimal structure of a constructed neural network (number of hidden layers, number of neurons in each hidden layer, momentum rate, and size), and this kind of artificial intelligence relies on numerical rather than symbolic computation [19].
Both FL and NN have their advantages; it is therefore a good idea to combine their abilities into a single, stronger tool that compensates for their weaknesses and leads to the least error. Jang (1992, 1993) combined FL and NN to produce a powerful processing tool, the neuro-fuzzy system (NFS), which has the advantages of both; the most common such system is ANFIS.
Figure 2. ANFIS structure
As shown in Figure 2, a neuro-fuzzy system consists of five layers, each with a different function. Each layer is constructed from several nodes, represented by squares or circles. A square symbolizes an adaptive node, meaning that the values of its parameters can be changed by adaptation. A circle symbolizes a non-adaptive node with constant parameters [20]. The equations for each layer are described below:
a. Layer 1
All nodes in the first layer are adaptive nodes (with adjustable parameters); the node function for the first layer is:
O_{1,i} = \mu_{A_i}(x) = \frac{1}{1 + \left| \frac{x - c_i}{a_i} \right|^{2 b_i}}, \quad i = 1, 2    (1)

O_{1,i} = \mu_{B_{i-2}}(y) = \frac{1}{1 + \left| \frac{y - c_i}{a_i} \right|^{2 b_i}}, \quad i = 3, 4    (2)
where x and y are the inputs to node i, and A_i and B_{i-2} are the membership functions of each input with respect to the fuzzy sets A and B; a_i, b_i, and c_i are the premise parameters. The membership function used is the generalized bell (gbell) type.
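For illustration, the generalized bell membership function can be written in a few lines of Python; the parameter values below are arbitrary assumptions chosen only to show its shape.

import numpy as np

def gbell(v, a, b, c):
    """Generalized bell membership function: 1 / (1 + |(v - c) / a|^(2b))."""
    return 1.0 / (1.0 + np.abs((v - c) / a) ** (2 * b))

v = np.linspace(-10, 10, 5)
print(gbell(v, a=2.0, b=2.0, c=0.0))   # peaks at 1 where v == c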
b. Layer 2
All nodes in this layer are non-adaptive (fixed parameters). The node function of the second layer is:
O_{2,i} = w_i = \mu_{A_i}(x)\,\mu_{B_i}(y), \quad i = 1, 2    (3)
Each output states the firing strength of a fuzzy rule. This function can be expanded when the premise consists of more than two fuzzy sets.
c. Layer 3
All nodes in layer 3 are non-adaptive; they compute the normalized firing strength, i.e., the ratio of the firing strength of rule i to the sum of all firing strengths from the previous layer. The node function of layer 3 is:
O_{3,i} = \bar{w}_i = \frac{w_i}{w_1 + w_2}, \quad i = 1, 2    (4)
If more than two membership functions are constructed, the function is expanded by dividing by the sum of w over all rules.
d. Layer 4
Each node in layer 4 is an adaptive node with the following node function:
O_{4,i} = \bar{w}_i f_i = \bar{w}_i (p_i x + q_i y + r_i)    (5)
where \bar{w}_i is the normalized firing strength from layer 3 and p_i, q_i, and r_i are the adaptive consequent parameters.
e. Layer 5
In this layer there is only one fixed node, which sums all incoming signals; the function of layer 5 is:
O_{5,i} = \sum_i \bar{w}_i f_i = \frac{\sum_i w_i f_i}{\sum_i w_i}    (6)
This five-layer adaptive network is equivalent to a Takagi–Sugeno–Kang fuzzy inference system.
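To tie the five layers together, the sketch below performs one forward pass through a two-input, two-rule Sugeno-type ANFIS; all parameter values are arbitrary assumptions, and the learning step that adapts them is omitted.

import numpy as np

def gbell(v, a, b, c):
    """Generalized bell membership function (layer 1)."""
    return 1.0 / (1.0 + np.abs((v - c) / a) ** (2 * b))

def anfis_forward(x, y, premise, consequent):
    # Layer 1: membership grades of the inputs (adaptive premise parameters).
    mu_A = [gbell(x, *p) for p in premise["A"]]
    mu_B = [gbell(y, *p) for p in premise["B"]]
    # Layer 2: firing strength of each rule (product of the memberships).
    w = np.array([mu_A[i] * mu_B[i] for i in range(2)])
    # Layer 3: normalized firing strengths.
    w_bar = w / w.sum()
    # Layer 4: weighted first-order Sugeno consequents f_i = p_i*x + q_i*y + r_i.
    f = np.array([p * x + q * y + r for p, q, r in consequent])
    # Layer 5: overall output, the sum of the weighted consequents.
    return np.sum(w_bar * f)

premise = {"A": [(2.0, 2.0, 0.0), (2.0, 2.0, 5.0)],   # (a, b, c) per membership function
           "B": [(2.0, 2.0, 0.0), (2.0, 2.0, 5.0)]}
consequent = [(1.0, 1.0, 0.0), (0.5, -0.5, 2.0)]       # (p, q, r) per rule
print(anfis_forward(2.0, 3.0, premise, consequent))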