Locating the face and tracking the eyes allows valuable information to be captured and used in a wide range of applications. Eye location can be tracked using commercial eye-gaze trackers, but their additional constraints and expensive hardware make these existing solutions unattractive and impractical on standard visible-wavelength, low-resolution images of the eye. The aim of this project is to detect the iris center (IC) using a registered database and to propose a system that scrolls the computer screen according to the eye gaze. The accuracy of IC localization is measured using gaze tracking systems.
Keywords: eye tracking using the Viola-Jones method, face detection, gaze estimation
Nowadays, electronic or optical sensors such as cameras and scanning devices are used to capture images, recordings, or measurements of a person's unique characteristics. This digital data is then encoded, and can be stored and searched on demand via a computer.
Such a biometric search is not only very rapid (often taking place in real time), but also a process that is globally accepted for establishing forensic evidence in a court of law. Numerous forms of biometrics are now being built into technology platforms. Biometric systems have been developed based on fingerprints, facial features, voice, hand geometry, handwriting, the retina, and the trait presented in this thesis: the iris.
Figure 1.1: The front image of the human eye.
The iris is a thin circular diaphragm, which lies between the cornea and the lens of the human eye. The iris is perforated close to its centre by a circular aperture known as the pupil. The function of the iris is to control the amount of light entering through the pupil, and this is done by the sphincter and the dilator muscles, which adjust the size of the pupil. The average diameter of the iris is 12 mm, and the pupil size can vary from 10% to 80% of the iris diameter.
Compared with other biometric traits, iris recognition systems have many advantages. The iris is a protected internal organ whose random texture is stable throughout life; it can serve as a kind of living password that one need not remember but always carries along. Since the degree of freedom of iris textures is extremely high, the probability of finding two identical irises is close to zero, and the iris cannot easily be tampered with; therefore, iris recognition systems are very reliable. The technique for iris recognition consists of:
• Iris localization
• Iris analysis
• Iris matching
In the first step, the eye image is processed to obtain a segmented and normalized eye image; in the second step, the iris image is analyzed; and in the third step, iris matching is performed.
There are two components to the human visual line of sight: the pose of the head and the orientation of the eyes within their sockets. Domain knowledge of the human face is essential for determining head pose and eye gaze using only minimal robust features under real-time requirements. Several techniques exist to track the eyes; some of them are electro-oculography; limbus, pupil, and eyelid tracking; the contact lens method; the corneal and pupil reflection relationship; Purkinje image tracking; artificial neural networks; morphable models; and geometry-based methods.
In this paper, we propose a novel approach for measuring the eye gaze using a monocular image that zooms in on only one eye of a person.
This paper focuses on the following factors:
• How to improve the performance of eye tracking using the Viola-Jones algorithm.
• How to extract the components of the eye accurately.
The Related Work section covers the details of the studied papers. The section after it describes the working of the eye tracker in detail, the next presents the proposed system design, and the last section is the conclusion.
II. RELATED WORK
Research in the eye tracking area focuses on replacing conventional computer interfaces such as the mouse and keyboard, and on making effective use of the available resources. The latest high-tech gadgets provide fresh perspectives on human-device interaction (HDI), allowing consumers to handle electronics in more intuitive ways. Gaze tracking methods use an infrared (IR) camera and active IR illuminators to achieve highly accurate gaze estimation.
Unlike the IR image-based methods, which use the image coordinates of the pupil and the corneal reflection, the visible image-based methods directly map the eye's iris center (IC) location to a target plane such as the monitor screen. Therefore, the accuracy and robustness of IC localization significantly affect the performance of gaze tracking. The various IC localization methods are categorized into two classes:
1) the feature-based method and
2) the model-based method.
The feature-based method proposed by Valenti and Gevers uses isophote properties (i.e., curves connecting points of equal intensity) to locate the IC.
The methods by Wang et al. and Zhang et al. exploit the fact that the shape of the iris contour projected onto the image plane is an ellipse.
The model-based method by Moriyama et al., which uses minutely subdivided eye-region templates, can extract the components of the eye accurately. Daugman's method uses an integro-differential operator (IDO), which calculates the curve integral of gradient magnitudes under the target shape model in order to extract the circular iris boundary (IB) in the eye image. Nishino and Nayar extended the IDO to detect elliptical IBs.
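To make the IDO concrete, the following is a minimal, hypothetical Python sketch (not Daugman's implementation): on a synthetic eye image, it searches over candidate radii for the one where the circular integral of intensity changes most sharply, i.e., the iris boundary.

```python
import numpy as np

def circle_mean_intensity(img, cx, cy, r, n=360):
    """Mean intensity sampled along a circle of radius r centered at (cx, cy)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip(np.rint(cx + r * np.cos(t)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.rint(cy + r * np.sin(t)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def ido_radius(img, cx, cy, radii):
    """Return the radius where the radial derivative of the circular
    intensity integral is largest in magnitude (the boundary)."""
    means = np.array([circle_mean_intensity(img, cx, cy, r) for r in radii])
    deriv = np.abs(np.diff(means))       # discrete radial derivative
    return radii[int(np.argmax(deriv))]  # localizes the boundary to ~1 px

# Synthetic eye: bright background with a dark "iris" disk of radius 20.
img = np.full((101, 101), 200.0)
yy, xx = np.mgrid[0:101, 0:101]
img[(xx - 50) ** 2 + (yy - 50) ** 2 <= 20 ** 2] = 40.0

radii = np.arange(5, 40)
estimated = ido_radius(img, 50, 50, radii)  # close to the true radius of 20
```

A full implementation would also search over the center (cx, cy) and smooth the radial derivative with a Gaussian, as Daugman's operator does.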
Mardanbegi et al. proposed a head-mounted gaze tracking system that interacts with several display devices.
Among several display devices, the display on which gaze tracking will be performed is recognized by identifying the pattern on the screen using a frontal-viewing scene camera that captures what the user sees. After the target screen is detected, a homographic mapping from the corner positions of the screen in the image to the screen coordinates is calculated in each frame. The point of gaze (POG) is calculated by mapping the gaze point in the image through the homography. Head-mounted eye tracking systems may be more accurate since they are less affected by external changes (head pose, lighting, etc.), and the simplified geometry may allow more constraints to be applied. Zhu et al. proposed an eye gaze tracking system that allows the user to move the head naturally. This method uses a 3D model-based method and a 2D mapping-based method to calculate the POG. Hennessey et al. proposed system-calibration-free eye gaze tracking for when the user moves in the depth direction (z axis). Even though this method does not need any system calibration information, such as the relative locations of the screen, cameras, and IR illuminators, the tested range of head movement is only 6 cm.
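The homographic mapping step can be sketched as follows; this is an illustrative Python/NumPy example with made-up corner coordinates, not the cited system's code. A 3x3 homography H is estimated from the four detected screen corners via the direct linear transform (DLT), and the gaze point found in the image is projected through H to obtain the POG in screen coordinates.

```python
import numpy as np

def homography_from_corners(img_pts, scr_pts):
    """Direct linear transform: solve for the 3x3 H that maps the
    screen's corners in the image to screen coordinates (4 pairs)."""
    A = []
    for (x, y), (u, v) in zip(img_pts, scr_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null-space vector of A
    return H / H[2, 2]

def map_point(H, x, y):
    """Project an image-plane gaze point through H (homogeneous divide)."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical numbers: screen corners detected in the scene image,
# and the corresponding screen coordinates in pixels.
img_corners = [(100, 80), (540, 95), (560, 400), (90, 380)]
scr_corners = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
H = homography_from_corners(img_corners, scr_corners)
pog = map_point(H, 320, 240)   # gaze point found in the image -> POG
```

Recomputing H every frame, as the cited system does, keeps the mapping valid as the head (and hence the scene camera) moves.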
Zelinsky et al. presented an eye gaze estimation method in which the eye corners are located using a stereo vision system. The eyeball position can then be calculated from the pose of the head and a 3D 'offset' vector from the midpoint of the corners of an eye to the center of the eye. Consequently, the radius of the eyeball can be obtained. However, the offset vector and the radius of the iris have to be manually adjusted through a training sequence in which the gaze point of the person is known.
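Under these definitions, the eyeball-center computation reduces to a single vector expression; the sketch below is a hypothetical NumPy illustration (the rotation matrix, corner positions, and offset values are assumed, not taken from the cited work):

```python
import numpy as np

def eyeball_center(corner_l, corner_r, R_head, offset):
    """Eye-corner midpoint plus the head-rotated 3D offset vector
    gives the eyeball center (all points in camera coordinates)."""
    mid = (np.asarray(corner_l, float) + np.asarray(corner_r, float)) / 2.0
    return mid + R_head @ np.asarray(offset, float)

# Hypothetical values: frontal head pose (identity rotation), eye corners
# from stereo triangulation, and a trained offset pointing into the head (mm).
R_head = np.eye(3)
center = eyeball_center([-15.0, 0.0, 600.0], [15.0, 0.0, 600.0],
                        R_head, [0.0, 0.0, 13.0])
```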
III. PROPOSED ALGORITHM
• To provide details of the proposed method, based on the assumption that the input eye image is captured under a frontal head pose.
• To present how the proposed method is applied to an eye image captured under a non-frontal head pose.
B. GOALS AND OBJECTIVES
The main goals of the proposed system are as follows:
• To evaluate the accuracy and robustness of eye tracking using gaze tracking systems.
• To extract the components of the eye accurately with the help of the model-based method.
• To detect the face and eyes using the Viola-Jones algorithm, a widely used method for real-time object detection; its training is slow, but detection is very fast.
C. SYSTEM ARCHITECTURE
Figure 3.1: System Architecture
The system consists of the following main components:
1. A hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation.
2. The transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure.
3. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates.
4. In the second algorithm, a spherical model of the human eyeball is used to estimate the radius of the iris from a frontal, upright view image of the eye.
5. By projecting the eyeball rotated in pitch and yaw onto the 2-D image plane, a number of elliptical shapes (ESs) of the iris and their corresponding IC locations are generated and registered as a database (DB).
6. The location of the IC is detected by matching the ES of the iris in the input eye image against the ES candidates in the DB.
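Steps 4-6 can be sketched in Python as follows; this is an illustrative model under simplifying assumptions (orthographic projection, an assumed eyeball radius R = 12 mm and iris radius r = 6 mm), not the paper's implementation. The DB stores, for each (pitch, yaw) rotation, the projected IC location and the ellipse axes; matching finds the nearest registered candidate.

```python
import math

def iris_projection(R, r, pitch, yaw):
    """Orthographic projection of the iris (a circle of radius r on an
    eyeball sphere of radius R) rotated by pitch/yaw (radians).
    Returns the projected IC location and the ellipse semi-axes."""
    d = math.sqrt(R * R - r * r)           # iris-plane distance from center
    gx = math.sin(yaw) * math.cos(pitch)   # gaze direction (unit vector)
    gy = math.sin(pitch)
    gz = math.cos(yaw) * math.cos(pitch)
    ic = (d * gx, d * gy)                  # projected iris center
    a, b = r, r * abs(gz)                  # major / foreshortened minor axis
    return ic, a, b

def build_db(R, r, step_deg=5, max_deg=30):
    """Register ellipse shapes and IC locations over a grid of rotations."""
    db = []
    degs = range(-max_deg, max_deg + 1, step_deg)
    for p in degs:
        for y in degs:
            ic, a, b = iris_projection(R, r, math.radians(p), math.radians(y))
            db.append(((p, y), ic, a, b))
    return db

def match(db, ic_obs, b_obs):
    """Nearest-candidate match on IC location and minor axis length."""
    return min(db, key=lambda e: (e[1][0] - ic_obs[0]) ** 2 +
                                 (e[1][1] - ic_obs[1]) ** 2 +
                                 (e[3] - b_obs) ** 2)[0]

db = build_db(R=12.0, r=6.0)   # hypothetical eyeball/iris radii in mm
```

Matching an observed ellipse against this DB recovers the (pitch, yaw) of the eye, and hence the IC, without fitting the ellipse analytically in each frame.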
D. VIOLA-JONES OBJECT DETECTION FRAMEWORK
The features employed by the detection framework universally involve sums of image pixels within rectangular areas.
Figure: 3.2 Feature types used by Viola and Jones
The integral image computes a value at each pixel (x,y) that is the sum of the pixel values above and to the left of (x,y), inclusive.
Value = Σ (pixels in white area) − Σ (pixels in black area)
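A minimal Python sketch of the integral image and a two-rectangle feature (an illustration of the idea, not the framework's code): each rectangle sum needs only four lookups in the integral image, so a feature value costs O(1) regardless of window size.

```python
import numpy as np

def integral_image(img):
    """ii(x, y) = sum of pixel values above and to the left of (x, y), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum over a rectangle in O(1) using four integral-image lookups."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_feature(ii, top, left, h, w):
    """Value = sum(white area) - sum(black area): here, the left half
    of the window minus the right half."""
    half = w // 2
    white = rect_sum(ii, top, left, h, half)
    black = rect_sum(ii, top, left + half, h, half)
    return white - black

img = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 image
ii = integral_image(img)
```

The three- and four-rectangle feature types in Figure 3.2 are computed the same way, just with more rectangle sums.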
The face and eye detection is implemented in the detectEye() function in MATLAB R2008b. Given an image, the function tries to detect a human face in it. If successful, it continues by detecting the eye; if that also succeeds, it creates and returns the eye template and its bounding box.
Figure 4.1: Face detection and eye tracking
IV. CONCLUSION
This paper presents a review of the existing methods available for eye tracking and iris tracking.
Occlusions caused by hair, glasses or shadows make localization and tracking of eyes even more difficult. The tracking process becomes even more challenging when dealing with low resolution images derived from inexpensive imaging devices (e.g. webcams, pinhole cameras or mobile devices).
Future work would be:
• To design an eyeball-model-based iris center localization for visible image-based eye-gaze tracking systems.
• To detect the iris center with a registered database and propose a system that scrolls the computer screen according to the eye gaze.
• To exploit the pixel values and gradient magnitudes of the input eye image to locate the IC.
• To build an eye gaze tracking application that scrolls the screen with the help of the iris, without using hands on the keyboard.