Sunday, 30 September 2012

IMAGE PROCESSING - paper presentation

Abstract. This paper presents an approach based on morphological operators for the biometric identification of individuals through segmentation and analysis of the structures of the human iris. Algorithms based on morphological operators are developed to segment the iris region from the eye image and to highlight selected iris patterns. The features extracted from these patterns are used to represent and characterize the iris. An algorithm is proposed to produce skeletons with unique paths between end-points and nodes, whose objective is to properly extract the desired patterns. After the morphological processing, the representation obtained is stored for identification purposes. The efficiency of the morphological approach is illustrated by the results presented. The proposed system was designed to have low storage requirements and low implementation complexity.

Introduction
Image Processing and Analysis can be defined as the "act of examining images for the purpose of identifying objects and judging their significance". Image analysts study remotely sensed data and attempt, through a logical process, to detect, identify, classify, measure and evaluate the significance of physical and cultural objects, their patterns and spatial relationships. The processing of an image is of two types:
- Analog image processing
- Digital image processing
This paper deals with digital image processing and, as an advanced application of it, a new approach based on morphological operators for the biometric identification of individuals by segmentation and analysis of the iris.

Introduction to iris recognition
Iris recognition is the process of recognizing a person by analyzing the random pattern of the iris (Figure 1). The automated method of iris recognition is relatively young, existing in patent only since 1994. The iris is a muscle within the eye that regulates the size of the pupil, controlling the amount of light that enters the eye. It is the colored portion of the eye, with coloring based on the amount of melanin pigment within the muscle (Figure 2).
Figure 1: Iris diagram.
Figure 2: Iris structure.
Although the coloration and structure of the iris are genetically linked, the details of the patterns are not. The iris develops during prenatal growth through a process of tight forming and folding of the tissue membrane. Prior to birth, degeneration occurs, resulting in the pupil opening and the random, unique patterns of the iris. Even in genetically identical individuals, the irides are unique and structurally distinct, which allows them to be used for recognition purposes.

History
In 1936, ophthalmologist Frank Burch proposed the concept of using iris patterns as a method to recognize an individual. In 1985, Drs. Leonard Flom and Aran Safir, ophthalmologists, proposed the concept that no two irides are alike, and were awarded a patent for the iris identification concept in 1987. Dr. Flom approached Dr. John Daugman to develop an algorithm to automate identification of the human iris. In 1993, the Defense Nuclear Agency began work to test and deliver a prototype unit, which was successfully completed by 1995 due to the combined efforts of Drs. Flom, Safir, and Daugman. In 1994, Dr. Daugman was awarded a patent for his automated iris recognition algorithms. In 1995, the first commercial products became available. In 2005, the broad patent covering the basic concept of iris recognition expired, providing marketing opportunities for other companies that have developed their own algorithms for iris recognition. The patent on the IrisCodes® implementation of iris recognition developed by Dr. Daugman (explained below) will not expire until 2011.

Iris vs. Retina Recognition
As discussed above, iris recognition utilizes the iris muscle to perform verification. Retinal recognition uses the unique pattern of blood vessels on an individual's retina at the back of the eye. The figure below illustrates the structure of the eye.
Figure 6: Structure of the eye.
Both techniques involve capturing a high-quality picture of the iris or retina using a digital camera. In the acquisition of these images, some form of illumination is necessary, and both techniques use NIR (near-infrared) light. Although safe in a properly designed system, eye safety is a major concern for all systems that illuminate the eye. Because infrared has insufficient energy to cause photochemical effects, the principal potential damage modality is thermal. When NIR is produced using light-emitting diodes, the resulting light is incoherent. Any risk to eye safety is remote with a single LED source using today's LED technology. Multiple LED illuminators can, however, produce eye damage if not carefully designed and used.
In general, the process of an iris recognition system includes the following four steps:
1. Capturing the image
2. Defining the location of the iris
3. Optimizing the image
4. Storing and comparing the image
The image of the iris can be captured using a standard camera using both visible and infrared light, and the procedure may be either manual or automated. The camera can be positioned between three and a half inches and one meter from the eye to capture the image. In the manual procedure, the user needs to adjust the camera to get the iris in focus and needs to be within six to twelve inches of the camera. This process is much more manually intensive and requires proper user training to be successful. The automated procedure uses a set of cameras that locate the face and iris automatically, making the process much more user friendly. Once the camera has located the eye, the iris recognition system identifies the image that has the best focus and clarity of the iris. The image is then analyzed to identify the outer boundary of the iris where it meets the white sclera of the eye, the pupillary boundary and the centre of the pupil. This results in the precise location of the circular iris. The iris recognition system then identifies the areas of the iris image that are suitable for feature extraction and analysis. This involves removing areas that are covered by the eyelids and any deep shadows.
Iris recognition systems can be implemented using several types of approaches; some of them are described as follows. The system proposed by Daugman uses an integro-differential operator to locate the borders of the iris, based on the ascent of the gradient to adjust the circular contours. The encoding (representation) of the iris is done through the application of the 2D Gabor wavelet, and to measure the dissimilarity between irises, the Hamming distance is computed between the corresponding pair of iris representations. Wildes's system uses border detection based on the gradient and the Hough transform to locate the iris in the image. The representation makes use of a band-pass decomposition derived from the application of Laplacian of Gaussian filters, implemented in practice by the Laplacian pyramid. The degree of similarity is evaluated based on the normalized correlation between the acquired and database representations. The algorithm proposed by Li Ma et al. uses a bank of Gabor filters to capture both local and global iris characteristics to form a fixed-length feature vector. Iris matching is based on the weighted Euclidean distance between the two corresponding iris vectors and is therefore very fast.
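The Hamming-distance comparison used in Daugman's system, described above, can be illustrated with a short sketch. This is a generic illustration with randomly generated codes, not the matching stage of the paper; the 2048-bit code length and the masking of occluded bits follow common descriptions of the IrisCode method and are assumptions here.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes.

    code_a, code_b : 1-D boolean arrays (the iris codes).
    mask_a, mask_b : boolean arrays marking bits that are usable
                     (not occluded by eyelids, reflections, etc.).
    """
    usable = mask_a & mask_b                      # compare only bits valid in both codes
    if not usable.any():
        return 1.0                                # nothing comparable: treat as maximal distance
    disagreements = (code_a ^ code_b) & usable    # XOR flags differing bits
    return disagreements.sum() / usable.sum()

# Illustrative use with random 2048-bit codes (real codes come from
# Gabor-wavelet phase quantization of the normalized iris image).
rng = np.random.default_rng(0)
code1 = rng.integers(0, 2, 2048).astype(bool)
code2 = code1.copy()
code2[:100] = ~code2[:100]                        # flip a few bits to simulate noise
mask = np.ones(2048, dtype=bool)
print(hamming_distance(code1, code2, mask, mask)) # ~0.049; unrelated irises give ~0.5
```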
Shinyoung uses an approach that makes the feature vector compact and efficient by using the Haar wavelet transform, together with two straightforward but efficient mechanisms for a competitive learning method: weight vector initialization and winner selection. The system proposed by Tisse et al. uses a combination of the gradient-decomposed Hough transform and integro-differential operators for iris localization, and the "analytic image" concept (2D Hilbert transform) to extract pertinent information from the iris texture. Boles uses the zero crossings of the wavelet transform to extract features from iris images and to represent them, by fine-to-coarse approximations at different resolution levels calculated on concentric circles in the iris, generating a one-dimensional (1D) signature. These signatures are compared with the model's features using different dissimilarity functions.
The extraction of features can be implemented through several different techniques. However, the choice of the feature, as well as of the technique to be used, should take into account the contribution in terms of information that can be obtained from it. In other words, the choice of a certain feature depends on its capacity for separating patterns. With this objective, the approach based on morphological operators is used to identify existing patterns in the iris. The basic idea consists of highlighting these patterns by applying a certain sequence of these operators to obtain the structures and to arrive at a representation from which the information used to characterize them will be extracted. The proposed representation allows the obtained information to be stored in a compact and efficient way, while the use of morphological operators presents advantages in terms of low computational complexity (processing time) and hardware integration.

2. Formulation of the Problem
The process of automated iris recognition basically includes the acquisition of the image, the localization of the region of interest (ROI), and the extraction and matching of patterns. Several factors can affect the quality of the image and, consequently, the decision to be taken, which determines whether or not the iris pattern submitted to the system matches a previously stored pattern. Even supposing that the acquisition was accomplished under controlled conditions (illumination, distance, framing, etc.), in order to obtain images of the best quality (resolution, clearness and contrast), a pre-processing stage is required. This stage is necessary to enhance certain structures of the iris, to eliminate undesirable effects (e.g., reflections), and also to determine the ROI (located in the portion inside the limbus - the border between the sclera and the iris - and outside the pupil; Figure 1), since the acquired image does not contain only the iris, but also data from the surrounding region of the eye. Figure 2 presents the diagram of the proposed iris recognition system.
Figure 1 - Human eye.
The term morphology in Biology refers to the study of the structure of plants and animals. Similarly, Mathematical Morphology is based on the study of the geometric structure of the entities that compose an image, and is thus adequate for the proposed approach. The representation obtained through the morphological processing is based on connected components.
Figure 2 - Diagram of the iris recognition system.
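To make the ROI concrete: it is the annular region between the pupillary boundary and the limbus, both of which the system approximates by circles. The following minimal sketch builds such a mask; the image size, centre and radii are illustrative placeholders, not values produced by the paper's segmentation.

```python
import numpy as np

def annular_roi_mask(shape, center, pupil_radius, iris_radius):
    """Boolean mask of the region between the pupil (inner circle)
    and the limbus (outer circle).

    shape        : (rows, cols) of the eye image.
    center       : (row, col) of the pupil centre.
    pupil_radius : radius of the pupillary boundary, in pixels.
    iris_radius  : radius of the limbic (iris/sclera) boundary, in pixels.
    """
    rows, cols = np.ogrid[:shape[0], :shape[1]]
    dist = np.hypot(rows - center[0], cols - center[1])
    return (dist >= pupil_radius) & (dist <= iris_radius)

# Illustrative values only: a 280x320 image, pupil radius 40, iris radius 110.
mask = annular_roi_mask((280, 320), center=(140, 160), pupil_radius=40, iris_radius=110)
# Pixels outside the ROI are discarded before further processing, e.g.:
# iris_only = np.where(mask, gray_eye_image, 0)
```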
Due to the great number of spatial characteristics of the human iris, which manifest themselves at a variety of scales, the choice of representation directly affects the quantity of information to be stored. In the proposed approach, the representation is based on the information of the end-points (extreme points), of the nodes (points from which the ramifications start) and of the branches.
Figure 3 - Representation: end-points and nodes.
With this representation, the quantity of information necessary to characterize the iris is small when compared to other representation types, generating a compact representation (of the order of hundreds of bytes) that is easy to store. The image of the iris is not stored; only the representation is kept.

3. Morphological Operators
3.1 Mathematical Morphology
In Mathematical Morphology, information relative to the topology and geometry of an unknown set (e.g., an image) is extracted using another, completely defined set called the structuring element (SE). Set theory is therefore the basis of Mathematical Morphology.
Area opening removes from a binary image F any connected component with area less than λ. For gray-scale images (f: F → R), it is generalized by applying the binary operator successively on slices of the image F taken from higher threshold levels to lower threshold levels.
The close-by-reconstruction top-hat creates an image by subtracting the image F from its closing by reconstruction, which is defined by two SEs: one for dilation (Bdil) and another for connectivity (Bc). The image reconstruction can be made by an infinite sequence of dilations and intersections (called conditional dilation). The grayscale reconstruction ρG(F) of G from F is obtained by iterating grayscale geodesic dilations of F "under" G until the result reaches stability. The close top-hat [9, 13, 14] is the difference between the closed image and the original.
Thinning creates a binary image by performing a thinning of the binary image F. Each iteration is performed by subtracting the points that are detected in F by hit-or-miss operators, characterized by rotations of 45°.
Threshold [9, 17] creates a binary image as the threshold of the image F by values t1 and t2. A pixel has the value 1 when the value of the corresponding pixel in F is between the values t1 and t2.

4. Pre-Processing
The eye image, acquired in color and converted to grayscale, is submitted to a pre-processing stage for enhancement and improvement of the contrast using histogram equalization (Figure 4). An algorithm based on thresholding and on morphological operators is then used to segment the eye image and obtain the ROI, the iris region. Initially the inner border (iris/pupil) is detected, the sequence of operators applied being: threshold, area opening and closing. Next the external border (iris/sclera) is detected, the sequence of operators applied being: threshold, closing and area opening. The result is presented in Figure 5.
Figure 4 - Histogram equalization.
With the information of the inner and external borders, which delimit the region of the iris, the pixels of the image that lie outside the ROI are discarded and the segmentation stage is concluded.
Figure 5 - Borders of the iris: inner border (stippled circle) and external border (continuous circle).
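As a rough sketch of this pre-processing stage, the code below composes the operator sequences just described, assuming scikit-image; the gray-level thresholds, structuring-element radius and minimum area are illustrative assumptions, not values given in the paper.

```python
import numpy as np
from skimage import color, exposure, morphology

def segment_borders(eye_rgb, pupil_thresh=0.15, iris_thresh=0.55,
                    min_area=500, selem_radius=5):
    """Sketch of the pre-processing: histogram equalization, then
    threshold / area-opening / closing sequences that expose the inner
    (iris/pupil) and outer (iris/sclera) regions as binary masks."""
    gray = color.rgb2gray(eye_rgb)              # image acquired in color, converted to grayscale
    gray = exposure.equalize_hist(gray)         # contrast enhancement (Figure 4)
    selem = morphology.disk(selem_radius)

    # Inner border (iris/pupil): threshold, area opening, closing.
    pupil = gray < pupil_thresh                 # the pupil is the darkest region
    pupil = morphology.remove_small_objects(pupil, min_size=min_area)  # binary area opening
    pupil = morphology.binary_closing(pupil, selem)

    # External border (iris/sclera): threshold, closing, area opening.
    iris = gray < iris_thresh                   # iris is darker than the surrounding sclera
    iris = morphology.binary_closing(iris, selem)
    iris = morphology.remove_small_objects(iris, min_size=min_area)

    return pupil, iris

# The ROI keeps only the pixels between the two borders:
# pupil_mask, iris_mask = segment_borders(eye_image)
# roi = iris_mask & ~pupil_mask
```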
5. Morphological Processing
5.1 Processing
After the pre-processing, the iris image is submitted to a sequence of morphological operators with the goal of identifying patterns in it. Operators such as dilation, erosion, opening and closing are combined to highlight these patterns. Initially, operators are applied to the iris image (Figure 6-1), still in gray-scale, to highlight the existing structures as a whole (Figure 6-2, close-by-reconstruction top-hat; Figure 6-3, opening). Then an operator that removes structures according to their size (area) is applied, resulting in an image with the structures disposed in layers (Figure 6-4, area opening). Due to this disposition of the structures, a thresholding is applied to obtain a binary image in which only the relevant structures appear (Figure 6-5, threshold).
Figure 6 - Sequence of morphological processing.
This image is submitted to a normalization that takes as reference an image containing pseudo-structures (Figure 7(b)), reconstructed from the representation (coordinates of the end-points and of the nodes, Figure 7(a)) of the reference iris previously stored in the database. The compensation of the effects caused by translation, rotation and scaling is made through an algorithm based on the affine motion transform. This motion compensation is necessary to align the image for the matching stage.
Figure 7 - Reconstructed image: pseudo-structures.
In order to arrive at an appropriate representation, the structures must still go through a thinning process, because they present themselves as agglomerates of pixels (Figure 6-6, thinning). However, after the thinning the structures still contain a considerable number of redundant pixels, which hinders the identification of the end-points and the nodes, the basis of the adopted representation. Figure 8 shows part of the skeleton of a structure, where the redundant pixels, the end-points and the nodes can be observed.
The principle of the algorithm developed to eliminate the redundant pixels consists of determining paths such that, for any two adjacent pixels, there is a single path connecting them. The elimination of the redundant pixels, however, cannot cause any connection break (gap) in the structure of the pattern, something very common in conventional skeletonization algorithms. The developed algorithm verifies the neighborhood of the pixel being analyzed to guarantee that the existing connections are preserved. Another important factor is the substantial reduction of error provided by the elimination of the redundant pixels, without which the matching of the representations would not be possible.
5.2 Removal of Redundant Pixels
An algorithm was developed to eliminate the redundant pixels while avoiding gaps in the structure connections. Regarding the disposition of the pixels in the neighborhood of p, the notation adopted (Figure 9(a)) to represent them is the following:
• Ni: pixel belonging to the 4-neighbors of the pixel p;
• Di: pixel belonging to the diagonal neighbors of the pixel p.
To eliminate the redundant pixels, two types of SEs are used (SE-1 and SE-2), together with their versions rotated by 90° clockwise (SE-1r and SE-2r) - Figures 9(b), 9(c), 9(d) and 9(e). The basic principle of the algorithm is similar to the hit-or-miss operation, which is computed by translating the origin of the SE to each possible pixel position in the image and, at each position, comparing it with the underlying image pixels.
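For reference, a minimal sketch of the standard hit-or-miss operation is given below, assuming SciPy's ndimage module; the structuring elements shown are illustrative and are not the paper's SE-1/SE-2, and the paper's algorithm modifies this operation as explained next.

```python
import numpy as np
from scipy import ndimage

# A tiny binary image containing a short stroke.
F = np.array([[0, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 0]], dtype=bool)

# Pixels set in se_fg must be foreground, pixels set in se_bg must be
# background; positions that are 0 in both are "don't care".
se_fg = np.array([[0, 0, 0],
                  [0, 1, 0],
                  [0, 1, 0]], dtype=bool)
se_bg = np.array([[1, 1, 1],
                  [1, 0, 1],
                  [0, 0, 0]], dtype=bool)

hits = ndimage.binary_hit_or_miss(F, structure1=se_fg, structure2=se_bg)
print(hits.astype(int))   # single hit at (row 1, col 1): the upper end of the stroke
```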
The difference is that, in this algorithm, when the foreground and background pixels of the SE exactly match the pixels in the image, the pixel to be modified is no longer the image pixel underneath the origin of the SE. The pixel to be modified depends on the SE being used. In Figure 9, for each SE, these pixels are highlighted (gray background). Thus, when the pixels match, the pixel being analyzed (gray background) receives the value 0.
The algorithm for the elimination of redundant pixels begins by scanning the image in search of pixels with value equal to 1. When a pixel (p) in this condition is found, the verification of the pixels located in its neighborhood begins. Table 1 presents the steps of the verification sequence, showing the pixels of the neighborhood of p on which the origin of the SEs must be positioned, the applied SEs, the modified pixels and the corresponding figures (Figure 10). After the verification is finished, the pixels whose values were substituted by 0 during the process are altered in the image and the scanning continues. In Figure 10 the positioning of the SEs is highlighted (borders in bold). The result of the application of the algorithm can be seen in Figure 11, where the redundant pixels are eliminated without causing any connection break in the structure.
Figure 11 - Structure after elimination of the redundant pixels, without connection breaks.
5.3 Representation and Matching
After eliminating the redundant pixels of the image containing the skeletons of the structures, the next stage consists of identifying the end-points and the nodes. The identification process begins with the verification of the 8-neighbors of the pixel p, N8(p) (Figure 9(a)). Since an end-point is a pixel located at one of the extremities of a branch, if only one of the pixels of N8(p) is equal to 1, then the pixel is an end-point (Figure 12-1). To identify a node, it is necessary that three or more pixels of N8(p) are equal to 1; if within a radius of 3 pixels there are other nodes (Figure 12-2), the mean point among the nodes is calculated, and its coordinates correspond to the medium node, which substitutes the others (Figure 12-3).
Figure 12 - Identification: end-points (gray) and nodes.
A mapping of the coordinates of the end-points and of the nodes is then made, after which the matching stage, based on the proposed representation, starts. The coordinates of the nodes are matched against the coordinates of the nodes of the reference iris (database) in order to identify the coincident nodes. Starting from the coordinates of the coincident nodes, their ramifications are verified to obtain the number of branches per coincident node. After the information on the coordinates of the coincident nodes and on the number of branches per coincident node is analyzed, one verifies whether or not the processed iris is the same as the one taken as reference.
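A minimal sketch of the end-point and node identification described above is given below, assuming the skeleton has already had its redundant pixels removed. The neighbor-count rules follow the text; the greedy merging of nearby node candidates is one plausible reading of the "medium node" rule, not the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

def endpoints_and_nodes(skeleton, merge_radius=3):
    """Classify skeleton pixels by their number of foreground 8-neighbors.

    A pixel with exactly one foreground 8-neighbor is an end-point;
    a pixel with three or more is a node candidate.  Node candidates
    closer than `merge_radius` pixels are merged into their mean
    coordinate (the "medium node")."""
    skeleton = skeleton.astype(bool)
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0                               # count neighbors, not the pixel itself
    neighbors = ndimage.convolve(skeleton.astype(int), kernel,
                                 mode='constant', cval=0)

    endpoints = np.argwhere(skeleton & (neighbors == 1))
    candidates = np.argwhere(skeleton & (neighbors >= 3))

    # Greedy merge of nearby node candidates into a single medium node.
    nodes, used = [], np.zeros(len(candidates), dtype=bool)
    for i, p in enumerate(candidates):
        if used[i]:
            continue
        close = np.linalg.norm(candidates - p, axis=1) <= merge_radius
        used |= close
        nodes.append(candidates[close].mean(axis=0))

    return endpoints, np.array(nodes)
```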
6. Experimental Results
The proposed method was tested with real images, acquired in color and from both eyes, and later converted to gray-scale, because the proposed approach is based on the extraction of structural patterns found in the iris and not on their color. To create the experimental database used for the simulations, 170 images were selected at random from among hundreds of images of both eyes, of many individuals, acquired on several occasions, without imposing restrictions on the user regarding positioning (distance from the camera, rotation of the head and position of the eye). Since it is not the images that are stored in the database, but the information corresponding to their representation, all images are processed beforehand. The representation used to build the database takes into account only the iris patterns of one eye (right or left). Accordingly, the database is divided into ID (iris of the right eye) and IE (iris of the left eye); therefore, the images submitted to processing must belong to the same eye as the corresponding iris in the database.
The series of experiments was accomplished using, for the alignment, algorithms that operate on the binary image containing the structures, obtained after the application of the threshold operator. The image taken as reference (database), however, was composed of the pseudo-structures reconstructed from the representation of the iris. Figure 13 presents the diagram of the adopted procedure, with the content stored in the database highlighted (dotted line).
The experiments were accomplished in two stages. In both stages, several irises of four individuals (represented by the numbers 1, 2, 3 and 4 in Figure 14) were used. In the first stage, the irises of the same individual were compared to each other, and the procedure was repeated for the irises of the other three individuals. In the second stage, the comparisons were made among the irises of the four individuals, taking the iris of one individual as reference and comparing it to the irises of the others. Figure 14 presents the result of one of the series of experiments: 26 comparisons were made in the first stage and 76 in the second. In Figure 14(a) the comparisons are based on the information of the coincident nodes, while in 14(b) the information originates from the number of branches per coincident node. In Figures 14(a) and 14(b), the distinction between comparing the representation of the same iris (1st stage) and of different irises (2nd stage) is clear, with the transition separating the two stages of the experiment. Based on the analysis of this information, it can be inferred whether or not the image of the processed iris belongs to the same individual whose iris was taken as reference.
The representation of the structures based on end-points and nodes proved adequate to characterize the existing patterns in the iris, allowing them to be distinguished through the comparison of the information obtained from them, which was confirmed by the simulations accomplished with several iris images. As for size, the adopted representation is compact, on average 750 bytes per vector of information (coincident nodes and branches per coincident node). The size of the representation could be reduced further if some type of data compression algorithm were used; some tests showed that it could easily be reduced to 1/3 of the original size. Due to the type of representation adopted, the processing time of the matching stage is low, only one operation being necessary to match the vectors that contain the coordinates of the nodes and the coincident branches.
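The matching among node-coordinate vectors can be sketched as follows. The coincidence tolerance and the acceptance criterion shown in the comments are illustrative assumptions, not values given in the paper, and the coordinates are assumed to have already been aligned by the affine compensation step.

```python
import numpy as np
from scipy.spatial.distance import cdist

def coincident_nodes(nodes_probe, nodes_ref, tol=2.0):
    """Count reference nodes that have a probe node within `tol` pixels.

    nodes_probe, nodes_ref : (n, 2) arrays of node coordinates.
    Returns the number of coincident nodes and the matching index pairs."""
    if len(nodes_probe) == 0 or len(nodes_ref) == 0:
        return 0, []
    d = cdist(nodes_ref, nodes_probe)             # pairwise Euclidean distances
    hits = d.min(axis=1) <= tol                   # reference nodes with a close probe node
    pairs = [(int(i), int(d[i].argmin())) for i in np.where(hits)[0]]
    return int(hits.sum()), pairs

# Illustrative decision rule: accept the probe iris when a large enough
# fraction of the reference nodes is coincident (threshold is an assumption).
# n_match, _ = coincident_nodes(probe_nodes, reference_nodes)
# accepted = n_match / len(reference_nodes) >= 0.6
```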
The same happens with the use of algorithms based on morphological operators, which employ basically six operators across the several stages of the process, which also contributes to a reduction of the total processing time. Even though only two types of information (coincident nodes and number of branches per coincident node) were used in this experiment, others could be added, increasing the reliability of the system as well as the robustness of the representation. For a statistical validation of the proposed approach, the database used should be composed of a larger number of iris images from different individuals. This would require equipment to acquire the images, as well as the associated apparatus. However, as this project did not have such resources at its disposal, the validation of the approach was restricted to the available database, which was adequate to show the behavior of the algorithms used.

7. Conclusion
The results obtained validate the approach applied to the proposed application and also show its potential for other applications. The storage of the representation proved adequate in terms of size (low storage requirements for the data representation of the extracted patterns), making it possible, starting from the representation alone, to accomplish both the alignment and the comparison of the irises. The morphological approach proposed for this application successfully substituted other more usual techniques in the several stages of the processing (location of the iris, segmentation, feature extraction, etc.), presenting low computational complexity (processing time). The algorithm performed very well in terms of discrimination capacity for the set of images used. The proposed work provides an efficient morphological approach for application in the biometric identification of the iris.

References
1. Image Feature Extraction of Biometric Identification of Iris - A Morphological Approach, by Jocelli Mayer.
2. http://science.howstuffworks.com/biometrics4.html
3. http://ctl.ncsc.dni.us/biomet%20web/BMIris.html
4. H. Heijmans, Morphological Image Operators, Academic Press, 1994.
