Academic Projects

A Gaze Gesture-Based User Authentication System to Counter Shoulder-Surfing Attacks

Shoulder-surfing is the act of spying on an authorized user of a computer system with the malicious intent of gaining unauthorized access. Current solutions to shoulder-surfing, such as graphical passwords, gaze input, and tactile interfaces, are limited by low accuracy, a lack of precise gaze input, and susceptibility to video-analysis attacks. We present an intelligent gaze gesture-based system that authenticates users from the unique gaze patterns they produce while following moving geometric shapes. The system authenticates the user by comparing their scan-path with each shape's path and recognizing the closest path. In a study with 15 users, authentication accuracy was found to be 99% with true calibration and 96% with disturbed calibration. Moreover, compared to a gaze- and PIN-based authentication system, our system is 40% less susceptible to video-analysis attacks, and such attacks take nearly nine times longer to carry out.

If a user has selected Square-Star-Pie as a password, then the user is authenticated by following each shape's path in its respective frame, as shown in the sequence of Figures below. The user first follows the square shape, then the star, and finally the pie. The user does not receive any feedback, since the gaze point and scan-path are hidden.
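
The closest-path comparison can be sketched as follows. This is a minimal illustration that assumes a simple mean point-to-point Euclidean distance between time-aligned samples; the system's actual template-matching algorithm may differ, and the shape names and sample paths below are hypothetical.

```python
import math

def path_distance(scan_path, shape_path):
    """Mean Euclidean distance between time-aligned gaze and shape samples."""
    n = min(len(scan_path), len(shape_path))
    return sum(math.dist(scan_path[i], shape_path[i]) for i in range(n)) / n

def closest_shape(scan_path, shape_paths):
    """Return the name of the shape whose on-screen path lies closest to the scan-path."""
    return min(shape_paths, key=lambda name: path_distance(scan_path, shape_paths[name]))
```

Authentication would then succeed when, over the three consecutive frames, the closest shape in each frame matches the corresponding shape of the chosen password.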

Demo - A Gaze Gesture-Based User Authentication System to Counter Shoulder-Surfing Attacks

Gaze Gesture-Based Interactions for Accessible HCI

Users with physical impairments are limited in their ability to work on computers using conventional mouse- and keyboard-based interactions. Existing accessible technologies still have usability issues, require extensive training, and are imprecise. We present a gaze gesture-based interaction paradigm that lets users with physical impairments work on a computer using only their eye movements. We use an eye tracker that tracks the user's eye movements. To perform an action, such as minimizing or maximizing an application, or opening a new tab, scrolling down, or refreshing a page in a browser, the user moves their eyes to trace a predefined gesture. The system recognizes the gesture performed and executes the corresponding action. Users with speech impairments can also use the system to utter quick phrases by performing gestures. This is crucial when a person with a speech impairment is interacting with another person who does not know sign language.
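
One simple way to recognize such gestures, sketched here as an assumption rather than the system's actual recognizer, is to quantize successive gaze displacements into directional strokes and match the stroke sequence against predefined templates. The gesture-to-action mapping below is purely illustrative.

```python
def to_directions(points, min_move=10.0):
    """Quantize successive gaze displacements into L/R/U/D strokes (screen y grows downward)."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if max(abs(dx), abs(dy)) < min_move:
            continue  # ignore jitter below the movement threshold
        d = ("R" if dx > 0 else "L") if abs(dx) >= abs(dy) else ("D" if dy > 0 else "U")
        if not dirs or dirs[-1] != d:  # collapse repeated strokes
            dirs.append(d)
    return "".join(dirs)

# Hypothetical gesture-to-action table; the real system's gestures are not reproduced here.
GESTURES = {"RD": "open_new_tab", "DU": "refresh", "LR": "minimize"}

def recognize(points):
    """Map a gaze trajectory to an action, or None if no gesture matches."""
    return GESTURES.get(to_directions(points))
```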

Demo - Gaze Gesture-Based Interactions for Accessible HCI

Gaze Typing Through Foot-Operated Wearable Device

Gaze Typing, a gaze-assisted text entry method, allows individuals with motor (arm, spine) impairments to enter text on a computer using a virtual keyboard and their gaze. Though gaze typing is widely accepted, the method is limited by its low typing speed, high error rate, and the visual fatigue that results from dwell-based key selection. In this research, we present a gaze-assisted, wearable-supplemented, foot interaction framework for dwell-free gaze typing. The framework consists of a custom-built virtual keyboard, an eye tracker, and a wearable device attached to the user's foot. To enter a character, the user looks at the character and selects it by pressing, with the foot, a pressure pad attached to the wearable device. Results from a preliminary user study involving two participants with motor impairments show that the participants achieved a mean gaze typing speed of 6.23 Words Per Minute (WPM). In addition, the mean Key Strokes Per Character (KSPC) was 1.07 (ideal 1.0), and the mean Rate of Backspace Activation (RBA) was 0.07 (ideal 0.0). Furthermore, we present our findings from multiple usability studies and design iterations, through which we created appropriate affordances and the experience design of our gaze typing system.
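
The reported metrics can be computed as follows. WPM and KSPC use the conventional text-entry definitions (five characters per word for WPM); the RBA formulation here, backspaces as a fraction of total keystrokes, is an assumption, since the study's exact definition is not restated above.

```python
def wpm(transcribed, seconds):
    """Words per minute, using the standard 5-characters-per-word convention."""
    return ((len(transcribed) - 1) / seconds) * 60.0 / 5.0

def kspc(keystrokes, transcribed):
    """Key Strokes Per Character: total key presses over characters produced (ideal 1.0)."""
    return keystrokes / len(transcribed)

def rba(backspaces, keystrokes):
    """Rate of Backspace Activation, assumed here as backspaces per keystroke (ideal 0.0)."""
    return backspaces / keystrokes
```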

Demo - Gaze Typing Through Foot-Operated Wearable Device

Gaze-Assisted User Authentication to Counter Shoulder-surfing Attacks

Highly secure, foolproof user authentication remains a primary focus of research in the field of user privacy and security. Shoulder-surfing is the act of spying on an authorized user while they log into a system, with the malicious intent of gaining unauthorized access. We present a gaze-assisted user authentication system as a potential solution to counter shoulder-surfing attacks. The system comprises an eye tracker and an authentication interface with 12 pre-defined shapes (e.g., triangle, circle) that move on the screen. A user chooses a set of three shapes as a password. To authenticate, the user follows the paths of the three shapes as they move, one on each frame, over three consecutive frames. The system uses a template matching algorithm to compare the scan-path of the user's gaze with the path traversed by each shape. A system evaluation involving seven users showed that the template matching algorithm achieves an accuracy of 95%. Our study also suggests that gaze-driven authentication is robust against shoulder-surfing attacks: the unique pattern of each individual's eye movements makes the system hard to break into.

Demo - Gaze-Assisted User Authentication to Counter Shoulder-surfing Attacks

GAWSCHI: Gaze-Augmented, Wearable-Supplemented Computer-Human Interaction

Recent developments in eye tracking technology are paving the way for gaze-driven interaction as a primary interaction modality. Despite successful efforts, existing solutions to the "Midas Touch" problem have two inherent issues that are yet to be addressed: 1) low accuracy, and 2) visual fatigue. In this work, we present GAWSCHI: a Gaze-Augmented, Wearable-Supplemented Computer-Human Interaction framework that enables accurate and quick gaze-driven interactions while being completely immersive and hands-free. GAWSCHI uses an eye tracker and a wearable device (a quasi-mouse) that is operated with the user's foot, specifically the big toe. The system was evaluated in a comparative user study involving 30 participants, with each participant performing eleven predefined interaction tasks (on MS Windows 10) using both mouse-based and gaze-driven interactions. We found that gaze-driven interaction using GAWSCHI is as good (in time and precision) as mouse-based interaction as long as the dimensions of the interface element are above a threshold (0.60" x 0.51"). In addition, an analysis of the NASA Task Load Index post-study survey showed that the participants experienced low mental, physical, and temporal demand, and also achieved high performance. We foresee GAWSCHI as a primary interaction modality for physically challenged users and a means of enriched interaction for able-bodied users.

Gaze-Assisted Human-Computer Interaction

Exploring Users' Perceived Activities in a Sketch-based Intelligent Tutoring System Through Eye Movement Data

Intelligent tutoring systems (ITS) empower instructors to make teaching more engaging by providing a platform to tutor, deliver learning material, and assess students' progress. Despite these advantages, existing ITS do not automatically assess how students engage in problem solving, how they perceive various activities, or how much time they spend on each activity leading to the solution. In this research, we present an eye tracking framework that, based on eye movement data, can assess students' perceived activities and overall engagement in a sketch-based intelligent tutoring system, Mechanix. Based on an evaluation involving 21 participants, we present the key eye movement features and demonstrate the potential of leveraging eye movement data to recognize students' perceived activities (reading, gazing at an image, and problem solving) with an accuracy of 97.12%.
First Author: Purnendu Kaul
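
Eye movement features of this kind are typically built on detected fixations. The sketch below shows a standard dispersion-threshold (I-DT style) fixation detector over raw (x, y) gaze samples; it is an illustration of the general technique, not the study's actual feature-extraction pipeline, and the thresholds are arbitrary.

```python
def dispersion(window):
    """Dispersion of a gaze window: (max x - min x) + (max y - min y)."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(points, max_disp=30.0, min_len=5):
    """Grow a window while dispersion stays under threshold; emit (cx, cy, n_samples)."""
    fixations, i = [], 0
    while i < len(points):
        j = i + min_len
        if j > len(points) or dispersion(points[i:j]) > max_disp:
            i += 1  # too dispersed (a saccade): slide the window start forward
            continue
        while j < len(points) and dispersion(points[i:j + 1]) <= max_disp:
            j += 1  # extend the fixation window while gaze stays compact
        window = points[i:j]
        cx = sum(p[0] for p in window) / len(window)
        cy = sum(p[1] for p in window) / len(window)
        fixations.append((cx, cy, len(window)))
        i = j
    return fixations
```

Features such as fixation count and duration per screen region can then separate reading from image gazing and problem solving.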

KinoHaptics: An Automated, Haptic Assisted, Physio-therapeutic System for Post-surgery Rehabilitation and Self-care

Problem Statement: 
A carefully planned, structured, and supervised physiotherapy program following a surgery is crucial for successful recovery from physical injuries. Nearly 50% of surgeries fail due to unsupervised and erroneous physiotherapy. Retaining a physiotherapist for an extended period is expensive, and physiotherapists are sometimes simply inaccessible. With the advancements in wearable sensors and motion tracking, researchers have tried to build affordable, automated, physio-therapeutic systems that direct a physiotherapy session by providing audio-visual feedback on the patient's performance. However, many aspects of an automated physiotherapy program are yet to be addressed by existing systems: the wide variety of patients' physiological conditions to be treated, the demographics of the patients (e.g., blind or deaf users), and persuading patients to adopt the system for an extended period for self-care.

Objectives and Solution:
In our research, we have tried to address these aspects by building a health behavior change support system called KinoHaptics for post-surgery rehabilitation. KinoHaptics is an automated, persuasive, haptic-assisted, physio-therapeutic system that can be used by a wide variety of demographics and for a wide range of patients' physiological conditions. The system provides rich and accurate vibro-haptic feedback that can be felt by any user irrespective of their physiological limitations. KinoHaptics is built to ensure that no injuries are induced during the rehabilitation period. The persuasive nature of the system allows for personal goal-setting, progress tracking, and, most importantly, lifestyle compatibility.

Evaluation and Results:
The system was evaluated under laboratory conditions with 14 users. Results show that KinoHaptics is highly convenient to use and that the vibro-haptic feedback is intuitive, accurate, and prevents accidental injuries. The results also show that KinoHaptics is persuasive in nature, as it supports behavior change and habit building.

The successful acceptance of KinoHaptics, an automated, haptic-assisted, physio-therapeutic system, demonstrates the need for and future scope of automated physio-therapeutic systems for self-care and behavior change. It also shows that incorporating vibro-haptic feedback into such systems encourages strong adherence to the physiotherapy program and can have a profound impact on the physiotherapy experience, resulting in a higher acceptance rate.

Healthy Leap: An Intelligent Context-Aware Fitness System for Alleviating Sedentary Lifestyles

As people in industrialized countries enjoy modern conveniences that lead to greater sedentary lifestyles and decreased involvement in physical activities, they also increase their risk of acquiring hypokinetic diseases such as obesity and heart disease that negatively impact their long-term health. While emerging wearable computing technologies are encouraging researchers and developers to create fitness-based mobile user interfaces for combating the effects of sedentary lifestyles, existing solutions instead primarily cater to fitness-minded users who wish to take advantage of technology to enhance their self-motivated physical exercises.

In this work, we propose a mobile fitness system called Healthy Leap that provides an intelligent, context-aware user interface for encouraging users to adopt healthier and more active lifestyles through contextually appropriate physical exercises. Our system consists of an Android smartphone app that leverages a Pebble Smartwatch to actively monitor users' situational context and activity and identify their current sedentary state. From this sedentary-state information, Healthy Leap responds with physical activity reminders tailored to the user's physical constraints and contextual information such as location, personal preferences, calendar events, current time, and weather forecasts. From our evaluations of Healthy Leap, we observed that users not only benefited from sedentary-state notifications that intelligently responded to their situational context, but were also more encouraged to engage in physical exercises to alleviate their sedentary lifestyles.

Let Me Relax: Toward Automated Sedentary State Recognition and Ubiquitous Mental Wellness Solutions

Advances in ubiquitous computing technology improve workplace productivity and reduce physical exertion, but ultimately result in a sedentary work style. Sedentary behavior is associated with an increased risk of stress, obesity, and other health complications. Let Me Relax is a fully automated sedentary-state recognition framework, using a smartwatch and smartphone, that encourages mental wellness through interventions in the form of simple relaxation techniques. The system was evaluated through a comparative user study of 22 participants split into a test and a control group. An analysis of the NASA Task Load Index pre- and post-study surveys revealed that test subjects who followed the relaxation methods showed a trend of both increased activity and reduced mental stress. Reduced mental stress was found even in those test subjects whose inactivity increased. These results suggest that repeated interventions, driven by an intelligent activity recognition system, are an effective strategy for promoting healthy habits, which reduce stress, anxiety, and other health risks associated with sedentary workplaces.
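
The intervention trigger can be sketched as a simple rule over per-window step counts from the smartwatch: flag a window as sedentary when its step count falls below a threshold, and prompt a relaxation exercise after several sedentary windows in a row. The thresholds and window semantics here are illustrative assumptions, not the framework's actual recognizer.

```python
def sedentary_windows(step_counts, threshold=20):
    """Mark each monitoring window sedentary if its step count is below threshold."""
    return [count < threshold for count in step_counts]

def should_intervene(step_counts, threshold=20, consecutive=3):
    """Trigger a relaxation prompt after `consecutive` sedentary windows in a row."""
    run = 0
    for sedentary in sedentary_windows(step_counts, threshold):
        run = run + 1 if sedentary else 0  # any active window resets the streak
        if run >= consecutive:
            return True
    return False
```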

Framework for Accelerometer Based Gesture Recognition and Integration with Desktop Applications

Master of Science,  Birla Institute of Technology and Science, Pilani, India

This research demonstrates simplified, alternative ways of interacting with desktop applications through natural hand-based gestures. In general, desktop applications are presumed to receive user input through traditional input devices such as the keyboard and mouse. The gesture recognition framework implemented in this work leverages accelerometer data from a smartphone held in the user's hand to identify gestures.

Bluetooth, a short-distance communication protocol, is used to transmit the accelerometer data from the smartphone to a desktop at a constant rate, making the whole system wireless. The accelerometer data received at the desktop computer is analyzed to identify the gesture it most likely encodes and is then transformed into the corresponding key-press and mouse events. The key-press and mouse events thus generated control various applications and games on the desktop computer. This framework enriches interaction with desktop applications and games, enhances the user experience through intuitive and lively gestures, and enables the development of more creative games and applications.
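
The analysis step can be illustrated with a very simple classifier: average a window of (ax, ay, az) samples, pick the dominant axis, and map the resulting tilt gesture to a key event. This dominant-axis sketch and the key map are illustrative assumptions; the framework's actual recognizer and event injection are not reproduced here.

```python
def classify_gesture(samples, threshold=6.0):
    """Classify a window of (ax, ay, az) samples by its dominant mean axis."""
    n = len(samples)
    mean = [sum(s[i] for s in samples) / n for i in range(3)]
    axis = max(range(3), key=lambda i: abs(mean[i]))
    if abs(mean[axis]) < threshold:
        return "idle"  # no axis dominates strongly enough
    positive = mean[axis] > 0
    names = [("tilt_right", "tilt_left"), ("tilt_up", "tilt_down"), ("push", "pull")]
    return names[axis][0 if positive else 1]

# Hypothetical gesture-to-key table, e.g. for steering in a racing game.
KEY_MAP = {"tilt_left": "LEFT_ARROW", "tilt_right": "RIGHT_ARROW", "tilt_up": "UP_ARROW"}

def gesture_to_key(samples):
    """Return the key event for the recognized gesture, or None."""
    return KEY_MAP.get(classify_gesture(samples))
```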

Demo - Accelerometer Based Gaming - Integration with NFS on Desktop

Demo - Accelerometer Based Painting - Integration with Windows Paint

Multi-threaded Download Accelerator With Resume Support

Visvesvaraya Technological University, Karnataka, India

Issues with existing file transfer protocols
When a computer transfers files over a network to another computer, it typically establishes a single connection with the server and transfers the files sequentially over that connection. This method slows down data transfer and does not utilize the available bandwidth effectively.

Using multi-threading, several threads can connect to the server independently over different sockets and transfer either different files simultaneously or different portions of a single file simultaneously. Also, when a data transfer is terminated abruptly, the entire download operation generally needs to be re-instantiated from scratch. This can be avoided if the data transfer package maintains the download status of every file, with resume support at the server side, ensuring that a download resumes from the point of disconnection rather than starting over from the beginning.

The main objectives of the Multi-threaded Download Accelerator with Resume Support are:
  • To develop a server that can support file transfer transactions, with resume support
  • To develop a client that provides an attractive graphical user interface and helps the user connect to specific systems and transfer files. Furthermore, the client must be able to maintain the status of all downloads.
  • To develop a protocol that ensures that the client can communicate with the server.
  • To incorporate multi-threading in order to improve bandwidth utilization, with proper communication amongst threads so that there are no synchronization problems or race conditions.
  • To introduce resume support by incorporating CRC checks, so that incomplete downloads can be resumed from the point where they were left off.
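
The core split-and-reassemble logic behind the objectives above can be sketched as follows, with an in-memory byte buffer standing in for the server socket; the project's actual client/server protocol and CRC-based resume checks are not reproduced here.

```python
import threading

def split_ranges(size, n_threads):
    """Split a file of `size` bytes into contiguous (start, end) byte ranges, one per thread."""
    chunk = size // n_threads
    ranges = [(i * chunk, (i + 1) * chunk - 1) for i in range(n_threads)]
    ranges[-1] = (ranges[-1][0], size - 1)  # last thread also takes the remainder
    return ranges

def download(server_bytes, n_threads=4):
    """Fetch each range on its own thread and reassemble, mimicking parallel sockets."""
    out = bytearray(len(server_bytes))

    def worker(start, end):
        # Stands in for a ranged request over a dedicated socket; the slices
        # are non-overlapping, so no locking is needed on the shared buffer.
        out[start:end + 1] = server_bytes[start:end + 1]

    threads = [threading.Thread(target=worker, args=r)
               for r in split_ranges(len(server_bytes), n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return bytes(out)
```

Resume support then amounts to persisting, per file, how many bytes of each range have been written, and restarting each worker from that offset.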

Graphics Editor

Graphics Editor is a utility program that enables a user to carry out graphical operations such as drawing geometric figures and text. The system is developed entirely in the C programming language. The geometric shapes that can be drawn with the editor include the rectangle, circle, ellipse, line, and spiral. Graphics Editor also supports transformations of the geometric figures, along with various other functions such as Save, Load, Clip, Rotate, and Scale.

Graphics Editor is mouse-driven, with the different functions represented as icons. The GUI is user-friendly: anyone can easily use the editor without any learning prerequisites. In addition, the system provides different colors that can be applied to the geometric figures and different patterns that can be used to fill shapes (rectangle, circle).
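
The Rotate and Scale operations reduce to standard 2D point transforms about a chosen center. The editor itself is written in C; the sketch below is in Python purely for illustration.

```python
import math

def rotate(points, angle_deg, cx=0.0, cy=0.0):
    """Rotate points about (cx, cy) by angle_deg degrees, counter-clockwise."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(cx + (x - cx) * c - (y - cy) * s,
             cy + (x - cx) * s + (y - cy) * c) for x, y in points]

def scale(points, sx, sy, cx=0.0, cy=0.0):
    """Scale points about (cx, cy) by factors (sx, sy)."""
    return [(cx + (x - cx) * sx, cy + (y - cy) * sy) for x, y in points]
```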

Linux Shell

This project is a custom implementation of a Linux shell that enhances the basic functionality provided by default Linux shells such as Bourne, Korn, C, and Bash. The custom shell can run executables with command-line arguments.

The shell is an intermediary command interpreter: it interprets commands entered at the terminal and translates them into requests understood by the kernel. Myshell thus acts as a wrapper around the kernel and eliminates the need for a programmer to communicate with the kernel directly.

A unique feature of the Linux operating system is that all Linux commands exist as utility programs. These programs are located in individual files in one of the system directories, such as /bin, /etc, or /usr/bin. The shell can be considered a master utility program that enables a user to gain access to all the other utilities and resources of the computer.

The shell reads the first word of a command line and tries to identify if it is an alias, a function, or an internal command. If there is a command to be executed, the shell then searches through the directories specified in the path for the command files, and executes the command.

How does it work?
  • On logging into the terminal, the custom shell displays the Linux prompt, indicating that it is ready to receive a command from the user.
  • The user issues a command, for example: ls <directory-name>
  • The custom shell then,
    • Reads the command, 
    • Searches for and locates the file with that name in the directories containing utilities, 
    • Loads the utility into memory and executes the utility.
  • After the execution is complete, the shell once again displays the prompt, conveying that it is ready for the next command.
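
The search-load-execute cycle above can be sketched as follows (a minimal illustration, not the project's actual code): locate the utility in the PATH directories, fork a child, exec the utility in the child, and wait in the parent before prompting again. This sketch requires a Unix-like system for `os.fork`.

```python
import os

def find_in_path(cmd, path=None):
    """Search the PATH directories for an executable file named `cmd`."""
    if "/" in cmd:
        return cmd if os.access(cmd, os.X_OK) else None
    for d in (path or os.environ.get("PATH", "")).split(":"):
        full = os.path.join(d, cmd)
        if os.path.isfile(full) and os.access(full, os.X_OK):
            return full
    return None

def run_command(line):
    """One iteration of the shell loop: locate, fork, exec, wait."""
    argv = line.split()
    exe = find_in_path(argv[0])
    if exe is None:
        return f"{argv[0]}: command not found"
    pid = os.fork()
    if pid == 0:
        os.execv(exe, argv)  # child: replace this process image with the utility
    _, status = os.waitpid(pid, 0)  # parent: wait, then redisplay the prompt
    return os.waitstatus_to_exitcode(status)
```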

Lex & Yacc

This project involved understanding and modifying an existing implementation of the Lex lexical analyzer and the Yacc parser to implement a custom interpreter on a UNIX system. The work enables a user to specify customized "Patterns" from which the lexical analyzer generates "Tokens".

These "Tokens" are then fed to a customized "Yacc" parser, which lets the user specify a "Grammar" to suit the requirements of the custom interpreter.
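
The pattern-to-token-to-grammar pipeline can be illustrated with a miniature hand-written equivalent: a regex table standing in for the Lex patterns, and a recursive-descent parser standing in for the Yacc grammar. The token names and the tiny arithmetic grammar below are illustrative, not the project's actual specification.

```python
import re

# Lex-style pattern table: each pattern produces a named token.
TOKEN_PATTERNS = [("NUM", r"\d+"), ("PLUS", r"\+"), ("TIMES", r"\*"),
                  ("LPAR", r"\("), ("RPAR", r"\)"), ("SKIP", r"\s+")]

def tokenize(text):
    """Scan the input against the pattern table, emitting (type, lexeme) tokens."""
    scanner = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_PATTERNS))
    return [(m.lastgroup, m.group())
            for m in scanner.finditer(text) if m.lastgroup != "SKIP"]

def parse(tokens):
    """Recursive-descent form of the grammar: expr -> term ('+' term)*; term -> factor ('*' factor)*."""
    pos = 0

    def peek():
        return tokens[pos][0] if pos < len(tokens) else None

    def eat():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor():
        if peek() == "LPAR":
            eat()
            value = expr()
            eat()  # consume RPAR
            return value
        return int(eat()[1])

    def term():
        value = factor()
        while peek() == "TIMES":
            eat()
            value *= factor()
        return value

    def expr():
        value = term()
        while peek() == "PLUS":
            eat()
            value += term()
        return value

    return expr()
```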