Publications

Papers

Josh Cherian; Vijay Rajanna; Daniel Goldberg; Tracy Hammond. Did You Remember to Brush?: A Noninvasive Wearable Approach to Recognizing Brushing Teeth for Elderly Care. 11th EAI International Conference on Pervasive Computing Technologies for Healthcare. ACM, New York, USA. May 23–26, 2017 | Barcelona, Spain.
Failing to brush one's teeth regularly can have surprisingly serious health consequences, from periodontal disease to coronary heart disease to pancreatic cancer. This problem is especially worrying when caring for the elderly and/or individuals with dementia, as they often forget or are unable to perform standard health activities such as brushing their teeth, washing their hands, and taking medication. To ensure that such individuals are correctly looked after, they are placed under the supervision of caretakers or family members, simultaneously limiting their independence and placing an immense burden on their family members and caretakers. To address this problem, we developed a non-invasive wearable system based on a wrist-mounted accelerometer to accurately identify when a person brushes their teeth. We tested the efficacy of our system with a month-long in-the-wild study and achieved an accuracy of 94% and an F-measure of 0.82.
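The paper reports accuracy and F-measure for a wrist-worn accelerometer classifier, but the exact features and model are not listed here. As a rough, hedged sketch only (Python, with a hypothetical window size and a generic random forest rather than the classifier actually used in the paper), the usual shape of such a pipeline is: segment the 3-axis signal into windows, compute simple per-window statistics, train, and score with accuracy and F-measure.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
def window_features(xyz, window=128, step=64):
    """Slide a fixed-size window over an (N, 3) accelerometer stream and
    compute simple per-axis statistics (mean, std, min, max)."""
    feats = []
    for start in range(0, len(xyz) - window + 1, step):
        w = xyz[start:start + window]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.array(feats)
def train_and_score(train_xyz, y_train, test_xyz, y_test):
    """y_train/y_test hold one label per window (1 = brushing, 0 = other
    activity); data loading and window/label alignment are omitted."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(window_features(train_xyz), y_train)
    pred = clf.predict(window_features(test_xyz))
    return accuracy_score(y_test, pred), f1_score(y_test, pred)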
TBA
Vijay Rajanna; Seth Polsley; Paul Taele; Tracy Hammond. A Gaze Gesture-Based User Authentication System to Counter Shoulder-Surfing Attacks. In Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '17). ACM, New York, USA. May 06–11, 2017 | Denver, Colorado, USA.
ACM DL URL
Shoulder-surfing is the act of spying on an authorized user of a computer system with the malicious intent of gaining unauthorized access. Current solutions to shoulder-surfing, such as graphical passwords, gaze input, tactile interfaces, and so on, are limited by low accuracy, lack of precise gaze input, and susceptibility to video analysis attacks. We present an intelligent gaze gesture-based system that authenticates users from their unique gaze patterns onto moving geometric shapes. The system authenticates the user by comparing their scan-path with each shape's path and recognizing the closest path. In a study with 15 users, authentication accuracy was found to be 99% with true calibration and 96% with disturbed calibration. Also, our system is 40% less susceptible to video analysis attacks than a gaze- and PIN-based authentication system, and breaking it through video analysis takes nearly nine times longer.
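The core idea is matching the user's scan-path against the trajectory of each moving shape and picking the closest one. A minimal sketch of that matching step follows; the distance measure (mean point-wise Euclidean distance after resampling) is an illustrative assumption, not necessarily the template matching algorithm used in the paper.
import numpy as np
def resample(path, n=100):
    """Linearly resample a (k, 2) path of (x, y) points to n points."""
    t = np.linspace(0, 1, len(path))
    ti = np.linspace(0, 1, n)
    return np.column_stack([np.interp(ti, t, path[:, 0]),
                            np.interp(ti, t, path[:, 1])])
def closest_shape(scanpath, shape_paths):
    """Return the index of the shape whose trajectory is nearest
    (mean point-wise Euclidean distance) to the user's scan-path."""
    s = resample(np.asarray(scanpath, dtype=float))
    dists = [np.linalg.norm(s - resample(np.asarray(p, dtype=float)), axis=1).mean()
             for p in shape_paths]
    return int(np.argmin(dists))
# Authentication succeeds only if the closest shape in each of the three
# consecutive frames matches the corresponding shape in the chosen password.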
@inproceedings{Rajanna:2017:GGU:3027063.3053070,
author = {Rajanna, Vijay and Polsley, Seth and Taele, Paul and Hammond, Tracy},
title = {A Gaze Gesture-Based User Authentication System to Counter Shoulder-Surfing Attacks},
booktitle = {Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
series = {CHI EA '17},
year = {2017},
isbn = {978-1-4503-4656-6},
location = {Denver, Colorado, USA},
pages = {1978--1986},
numpages = {9},
url = {http://doi.acm.org/10.1145/3027063.3053070},
doi = {10.1145/3027063.3053070},
acmid = {3053070},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {gaze authentication, gaze gestures, pattern matching},
}
Received "3rd Place" in student research competition.
Vijay Rajanna; Tracy Hammond. Gaze Typing Through Foot-Operated Wearable Device. The 18th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '16). ACM, New York, USA. October 24–26, 2016 | Reno, Nevada, USA.
Gaze Typing, a gaze-assisted text entry method, allows individuals with motor (arm, spine) impairments to enter text on a computer using a virtual keyboard and their gaze. Though gaze typing is widely accepted, this method is limited by its lower typing speed, higher error rate, and the resulting visual fatigue, since dwell-based key selection is used. In this research, we present a gaze-assisted, wearable-supplemented, foot interaction framework for dwell-free gaze typing. The framework consists of a custom-built virtual keyboard, an eye tracker, and a wearable device attached to the user's foot. To enter a character, the user looks at the character and selects it by pressing the pressure pad, attached to the wearable device, with the foot. Results from a preliminary user study involving two participants with motor impairments show that the participants achieved a mean gaze typing speed of 6.23 Words Per Minute (WPM). In addition, the mean value of Key Strokes Per Character (KPSC) was 1.07 (ideal 1.0), and the mean value of Rate of Backspace Activation (RBA) was 0.07 (ideal 0.0). Furthermore, we present our findings from multiple usability studies and design iterations, through which we created appropriate affordances and experience design of our gaze typing system.
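The abstract reports WPM, KSPC, and RBA. For readers unfamiliar with these text-entry measures, the sketch below computes them using the standard definitions (five characters per word for WPM); the paper may compute them slightly differently.
def gaze_typing_metrics(transcribed, keystrokes, backspaces, seconds):
    """WPM treats 5 characters as one word; KSPC is total keystrokes per
    transcribed character; RBA is backspace presses per transcribed character."""
    chars = len(transcribed)
    wpm = (chars / 5.0) / (seconds / 60.0)
    kspc = keystrokes / chars
    rba = backspaces / chars
    return wpm, kspc, rba
# Example: 31 characters in 60 s with 33 selections and 1 backspace
# gives roughly WPM 6.2, KSPC 1.06, RBA 0.03.
print(gaze_typing_metrics("the quick brown fox jumps over!", 33, 1, 60.0))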
@inproceedings{Rajanna:2016:GTT:2982142.2982145,
author = {Rajanna, Vijay},
title = {Gaze Typing Through Foot-Operated Wearable Device},
booktitle = {Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility},
series = {ASSETS '16},
year = {2016},
isbn = {978-1-4503-4124-0},
location = {Reno, Nevada, USA},
pages = {345--346},
numpages = {2},
url = {http://doi.acm.org/10.1145/2982142.2982145},
doi = {10.1145/2982142.2982145},
acmid = {2982145},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {foot-operated devices, gaze typing, wearable devices},
}
Received "1st Place" in graduate poster competition.
Vijay Rajanna; Tracy Hammond. Gaze-Assisted User Authentication to Counter Shoulder-surfing Attacks. ACM Richard Tapia Celebration of Diversity in Computing (TAPIA '16). ACM, New York, USA. September 14–17, 2016 | Austin, Texas, USA.
A highly secure, foolproof user authentication method is still a primary focus of research in the field of user privacy and security. Shoulder-surfing is the act of spying on an authorized user while they log into a system, with the malicious intent of gaining unauthorized access. We present a gaze-assisted user authentication system as a potential solution to counter shoulder-surfing attacks. The system comprises an eye tracker and an authentication interface with 12 pre-defined shapes (e.g., triangle, circle, etc.) that move on the screen. A user chooses a set of three shapes as a password. To authenticate, the user follows the paths of the three shapes as they move, one on each frame, over three consecutive frames. The system uses a template matching algorithm to compare the scan-path of the user's gaze with the path traversed by each shape. A system evaluation involving seven users showed that the template matching algorithm achieves an accuracy of 95%. Our study also shows that gaze-driven authentication is a foolproof system against shoulder-surfing attacks; the unique pattern of eye movements for each individual makes the system hard to break into.
Purnendu Kaul; Vijay Rajanna; Tracy Hammond. Exploring Users' Perceived Activities in a Sketch-based Intelligent Tutoring System Through Eye Movement Data. ACM Symposium on Applied Perception (SAP '16). ACM, New York, USA. July 22–23, 2016 | Anaheim, California, USA.
Intelligent tutoring systems (ITS) empower instructors to make teaching more engaging by providing a platform to tutor, deliver learning material, and assess students' progress. Despite these advantages, existing ITS do not automatically assess how students engage in problem solving, how they perceive various activities, or how much time they spend on each discrete activity leading to the solution. In this research, we present an eye tracking framework that, based on eye movement data, can assess students' perceived activities and overall engagement in a sketch-based intelligent tutoring system, "Mechanix." Through an evaluation involving 21 participants, we present the key eye movement features and demonstrate the potential of leveraging eye movement data to recognize students' perceived activities, "reading, gazing at an image, and problem solving," with an accuracy of 97.12%.
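The recognition relies on features derived from eye movements. As a hedged illustration (these are typical gaze features, not necessarily the exact set used for Mechanix), the sketch below aggregates fixation and saccade statistics over one activity interval; a classifier trained on such vectors can then label the interval as reading, gazing at an image, or problem solving.
import numpy as np
def eye_movement_features(fixations):
    """Aggregate features over one interval from a list of fixations, each
    given as (x, y, duration_ms). Saccade amplitude is approximated as the
    distance between consecutive fixation centers."""
    pts = np.array([(f[0], f[1]) for f in fixations], dtype=float)
    durs = np.array([f[2] for f in fixations], dtype=float)
    saccades = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return {
        "fixation_count": len(fixations),
        "mean_fixation_duration": durs.mean(),
        "total_fixation_duration": durs.sum(),
        "mean_saccade_amplitude": saccades.mean() if len(saccades) else 0.0,
    }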
@inproceedings{Kaul:2016:EUP:2931002.2948727,
author = {Kaul, Purnendu and Rajanna, Vijay and Hammond, Tracy},
title = {Exploring Users' Perceived Activities in a Sketch-based Intelligent Tutoring System Through Eye Movement Data},
booktitle = {Proceedings of the ACM Symposium on Applied Perception},
series = {SAP '16},
year = {2016},
isbn = {978-1-4503-4383-1},
location = {Anaheim, California},
pages = {134--134},
numpages = {1},
url = {http://doi.acm.org/10.1145/2931002.2948727},
doi = {10.1145/2931002.2948727},
acmid = {2948727},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {eye tracking, intelligent tutoring systems, perception},
}

Seth Polsley; Vijay Rajanna; Larry Powell; Kodi Tapie; Tracy Hammond. CANE: A Wearable Computer-Assisted Navigation Engine for the Visually Impaired. Joint Workshop on Smart Connected and Wearable Things (SCWT 2016), held at the 21st International Conference on Intelligent User Interfaces (IUI '16). ACM, New York, USA. March 7–10, 2016 | Sonoma, California, USA.
Navigating unfamiliar environments can be difficult for the visually impaired, so many assistive technologies have been developed to augment these users' spatial awareness. Adoption of existing technologies is limited for various reasons, such as size, cost, and reduction of situational awareness. In this paper, we present CANE: "Computer Assisted Navigation Engine," a low-cost, wearable, haptic-assisted navigation system for the visually impaired. CANE is a "smart belt," providing feedback through vibration units lining the inside of the belt so that it does not interfere with the user's other senses. CANE was evaluated by both visually impaired users and sighted users who simulated visual impairment using blindfolds; the feedback shows that it improved their spatial awareness, allowing the users to successfully navigate the course without any additional aids. CANE as a comprehensive navigation assistant has high potential for wide adoption because it is inexpensive, reliable, convenient, and compact.
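A belt of vibration units implies some mapping from direction to motor. The sketch below shows one plausible mapping; the motor count and layout are illustrative assumptions, not CANE's actual hardware specification.
def motor_for_bearing(bearing_deg, num_motors=8):
    """Map an obstacle/waypoint bearing relative to the user's heading
    (0 = straight ahead, positive = clockwise) onto one of the vibration
    units assumed to be spaced evenly around the belt."""
    sector = 360.0 / num_motors
    return int(((bearing_deg % 360.0) + sector / 2) // sector) % num_motors
# Example: with 8 motors, a bearing of 95 degrees activates motor 2,
# i.e., the unit over the user's right hip.
print(motor_for_bearing(95))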
@inproceedings{polsley2016cane,
title={CANE: A Wearable Computer-Assisted Navigation Engine for the Visually Impaired},
author={Polsley, Seth and Rajanna, Vijay and Powell, Larry and Tapie, Kodi and Hammond, Tracy},
booktitle={Workshop on Smart Connected and Wearable Things 2016},
pages={13}
}
Vijay Rajanna; Tracy Hammond. GAWSCHI: Gaze-Augmented, Wearable-Supplemented Computer-Human Interaction. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA '16). ACM, New York, USA. March 14–17, 2016 | Charleston, South Carolina, USA.
Recent developments in eye tracking technology are paving the way for gaze-driven interaction as the primary interaction modality. Despite successful efforts, existing solutions to the "Midas Touch" problem have two inherent issues that are yet to be addressed: 1) lower accuracy, and 2) visual fatigue. In this work we present GAWSCHI: a Gaze-Augmented, Wearable-Supplemented Computer-Human Interaction framework that enables accurate and quick gaze-driven interactions, while being completely immersive and hands-free. GAWSCHI uses an eye tracker and a wearable device (quasi-mouse) that is operated with the user's foot, specifically the big toe. The system was evaluated with a comparative user study involving 30 participants, with each participant performing eleven predefined interaction tasks (on MS Windows 10) using both mouse-based and gaze-driven interactions. We found that gaze-driven interaction using GAWSCHI is as good as mouse-based interaction (in time and precision) as long as the dimensions of the interface element are above a threshold (0.60" x 0.51"). In addition, an analysis of the NASA Task Load Index post-study survey showed that the participants experienced low mental, physical, and temporal demand, and achieved high performance. We foresee GAWSCHI as a primary interaction modality for the physically challenged and an enriched interaction modality for able-bodied users.
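The interaction model is simple to state in code: gaze moves the pointer continuously, and a foot press commits the selection, removing the dwell time that causes Midas Touch issues. The sketch below is a minimal illustration of that loop; move_cursor, click, and the per-frame inputs are placeholders for whatever eye tracker and windowing APIs the host system provides, not GAWSCHI's actual interface.
def gaze_foot_step(gaze_x, gaze_y, foot_pressed, move_cursor, click):
    """One iteration of a gaze-plus-foot 'point and click' loop."""
    move_cursor(gaze_x, gaze_y)          # pointer follows the gaze
    if foot_pressed:                     # big-toe press on the quasi-mouse
        click(gaze_x, gaze_y)            # selection happens where the user looks
    # Note: the study found this reliable only when the target is at least
    # about 0.60" x 0.51"; smaller targets may need magnification or zooming.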
@inproceedings{Rajanna:2016:GGW:2857491.2857499,
author = {Rajanna, Vijay and Hammond, Tracy},
title = {GAWSCHI: Gaze-augmented, Wearable-supplemented Computer-human Interaction},
booktitle = {Proceedings of the Ninth Biennial ACM Symposium on Eye Tracking Research and Applications},
series = {ETRA '16},
year = {2016},
isbn = {978-1-4503-4125-7},
location = {Charleston, South Carolina},
pages = {233--236},
numpages = {4},
url = {http://doi.acm.org/10.1145/2857491.2857499},
doi = {10.1145/2857491.2857499},
acmid = {2857499},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {eye tracking, foot-operated device, gaze interaction, midas touch, multi-modal interaction, quasi-mouse, wearable devices},
}
Vijay Rajanna; Tracy Hammond. Gaze and Foot Input: Toward a Rich and Assistive Interaction Modality. In Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI '16). ACM, New York, USA. March 7–10, 2016 | Sonoma, California, USA.
Transforming gaze input into a rich and assistive interaction modality is one of the primary interests in eye tracking research. Gaze input in conjunction with traditional solutions to the "Midas Touch" problem, dwell time or a blink, is not mature enough to be widely adopted. In this regard, we present our preliminary work: a framework that achieves precise "point and click" interactions in a desktop environment by combining the gaze and foot interaction modalities. The framework comprises an eye tracker and a wearable, foot-operated quasi-mouse. The system evaluation shows that our gaze and foot interaction framework performs as well as a mouse (in time and precision) in the majority of tasks. Furthermore, this dissertation work focuses on the goal of realizing gaze-assisted interaction as a primary interaction modality to substitute conventional mouse and keyboard-based interaction methods. In addition, we consider some of the challenges that need to be addressed and present possible solutions toward achieving our goal.
@inproceedings{Rajanna:2016:GFI:2876456.2876462,
author = {Rajanna, Vijay Dandur},
title = {Gaze and Foot Input: Toward a Rich and Assistive Interaction Modality},
booktitle = {Companion Publication of the 21st International Conference on Intelligent User Interfaces},
series = {IUI '16 Companion},
year = {2016},
isbn = {978-1-4503-4140-0},
location = {Sonoma, California, USA},
pages = {126--129},
numpages = {4},
url = {http://doi.acm.org/10.1145/2876456.2876462},
doi = {10.1145/2876456.2876462},
acmid = {2876462},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {authentication, eye tracking, foot input, gaze and foot interaction, tabletop interaction},
}
Vijay Rajanna; Patrick Vo; Jerry Barth; Matthew Mjelde; Trevor Grey; Tracy Hammond. KinoHaptics: An Automated, Wearable, Haptic Assisted, Physio-therapeutic System for Post-surgery Rehabilitation and Self-care. Journal of Medical Systems 40, no. 3 (2016): 1-12.
Problem Statement: A carefully planned, structured, and supervised physiotherapy program following surgery is crucial for successful recovery from physical injuries. Nearly 50% of surgeries fail due to unsupervised and erroneous physiotherapy. Engaging a physiotherapist for an extended period is expensive and sometimes inaccessible. With the advancements in wearable sensors and motion tracking, researchers have tried to build affordable, automated physio-therapeutic systems that direct a physiotherapy session by providing audio-visual feedback on the patient's performance. Many aspects of an automated physiotherapy program are yet to be addressed by existing systems: the wide variety of patients' physiological conditions to be diagnosed, the demographics of the patients (blind, deaf, etc.), and persuading them to adopt the system for an extended period for self-care. Objectives and Solution: In our research, we have tried to address these aspects by building a health behavior change support system called KinoHaptics for post-surgery rehabilitation. KinoHaptics is an automated, wearable, haptic-assisted, physio-therapeutic system that can be used by a wide variety of demographics and for various physiological conditions. The system provides rich and accurate vibro-haptic feedback that can be felt by any user irrespective of physiological limitations. KinoHaptics is built to ensure that no injuries are induced during the rehabilitation period. The persuasive nature of the system allows for personal goal-setting, progress tracking, and, most importantly, lifestyle compatibility. Evaluation and Results: The system was evaluated under laboratory conditions involving 14 users. Results show that KinoHaptics is highly convenient to use, and the vibro-haptic feedback is intuitive, accurate, and prevents accidental injuries. The results also show that KinoHaptics is persuasive in nature, as it supports behavior change and habit building. Conclusion: The successful acceptance of KinoHaptics, an automated, wearable, haptic-assisted, physio-therapeutic system, demonstrates the need for and future scope of automated physio-therapeutic systems for self-care and behavior change. It also shows that such systems, incorporating vibro-haptic feedback, encourage strong adherence to the physiotherapy program and can have a profound impact on the physiotherapy experience, resulting in a higher acceptance rate.
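To make the "vibro-haptic feedback during exercise" idea concrete, here is a minimal sketch of how a wearable controller might choose a cue from one motion-sensor reading. The target range, margin, and the three-cue scheme are illustrative assumptions, not KinoHaptics' published parameters.
def haptic_feedback(joint_angle_deg, target_min=30.0, target_max=90.0, margin=5.0):
    """Pick a vibro-haptic cue for one reading during a prescribed exercise."""
    if joint_angle_deg > target_max:
        return "strong_pulse"   # stop: range exceeded, risk of injury
    if joint_angle_deg >= target_max - margin:
        return "gentle_pulse"   # approaching the prescribed limit
    if joint_angle_deg < target_min:
        return "double_pulse"   # extend further to reach the target range
    return "none"               # within the safe, prescribed range
# A wearable controller would poll the motion sensor, call this per reading,
# and drive the vibration motor with the returned pattern.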
@Article{Rajanna2015,
author="Rajanna, Vijay and Vo, Patrick and Barth, Jerry and Mjelde, Matthew and Grey, Trevor and Oduola, Cassandra and Hammond, Tracy",
title="KinoHaptics: An Automated, Wearable, Haptic Assisted, Physio-therapeutic System for Post-surgery Rehabilitation and Self-care",
journal="Journal of Medical Systems",
year="2015",
volume="40",
number="3",
pages="60",
issn="1573-689X",
doi="10.1007/s10916-015-0391-3",
url="http://dx.doi.org/10.1007/s10916-015-0391-3"
}
Received the "Best Student Paper" award.
Vijay Rajanna; Folami Alamudun; Daniel Goldberg; Tracy Hammond. Let Me Relax: Toward Automated Sedentary State Recognition and Ubiquitous Mental Wellness Solutions. MobiHealth 2015 - 5th EAI International Conference on Wireless Mobile Communication and Healthcare - "Transforming healthcare through innovations in mobile and wireless technologies." October 14–16, 2015 | London, Great Britain.
Advances in ubiquitous computing technology improve workplace productivity and reduce physical exertion, but ultimately result in a sedentary work style. Sedentary behavior is associated with an increased risk of stress, obesity, and other health complications. Let Me Relax is a fully automated sedentary-state recognition framework using a smartwatch and smartphone, which encourages mental wellness through interventions in the form of simple relaxation techniques. The system was evaluated through a comparative user study of 22 participants split into a test group and a control group. An analysis of NASA Task Load Index pre- and post-study surveys revealed that test subjects who followed the relaxation methods showed a trend of both increased activity and reduced mental stress. Reduced mental stress was found even in those test subjects who had increased inactivity. These results suggest that repeated interventions, driven by an intelligent activity recognition system, are an effective strategy for promoting healthy habits, which reduce stress, anxiety, and other health risks associated with sedentary workplaces.
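The intervention trigger boils down to "detect a prolonged inactive stretch, then prompt a relaxation exercise." The sketch below shows that loop; the 60-minute window and movement-count threshold are illustrative assumptions, not the values used in the paper.
from collections import deque
class SedentaryMonitor:
    """Track recent activity levels (e.g., per-minute movement counts from
    the smartwatch) and fire an intervention after a prolonged inactive stretch."""
    def __init__(self, window_minutes=60, active_threshold=20):
        self.samples = deque(maxlen=window_minutes)
        self.active_threshold = active_threshold
    def update(self, movement_count):
        self.samples.append(movement_count)
        full = len(self.samples) == self.samples.maxlen
        inactive = all(c < self.active_threshold for c in self.samples)
        return full and inactive  # True -> prompt a relaxation exercise
monitor = SedentaryMonitor()
# Called once per minute with the latest movement count:
# if monitor.update(count): show_relaxation_prompt()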
@inproceedings{Rajanna:2015:LMR:2897442.2897461,
author = {Rajanna, Vijay and Alamudun, Folami and Goldberg, Daniel and Hammond, Tracy},
title = {Let Me Relax: Toward Automated Sedentary State Recognition and Ubiquitous Mental Wellness Solutions},
booktitle = {Proceedings of the 5th EAI International Conference on Wireless Mobile Communication and Healthcare},
series = {MOBIHEALTH'15},
year = {2015},
isbn = {978-1-63190-088-4},
location = {London, Great Britain},
pages = {28--33},
numpages = {6},
url = {http://dx.doi.org/10.4108/eai.14-10-2015.2261900},
doi = {10.4108/eai.14-10-2015.2261900},
acmid = {2897461},
publisher = {ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering)},
address = {ICST, Brussels, Belgium},
keywords = {anxiety, cognitive reappraisal, intervention techniques, mental wellness, personal health assistant, relaxation, sedentary state recognition, stress, ubiquitous computing},
}
Vijay Rajanna; Raniero Lara-Garduno; Dev Jyoti Behera; Karthic Madanagopal; Daniel Goldberg; Tracy Hammond. Step Up Life: A Context Aware Health Assistant. Proceedings of the Third ACM SIGSPATIAL International Workshop on the Use of GIS in Public Health (HealthGIS '14). ACM, New York, USA. November 4–7, 2014 | Dallas, Texas, USA.
A recent trend in popular health news is reporting the dangers of prolonged inactivity in one's daily routine. The claims are wide in variety and aggressive in nature, linking a sedentary lifestyle with obesity and shortened lifespans. Rather than forcing an individual to perform physical exercise for a predefined interval of time, we present the design, implementation, and evaluation of a context-aware health assistant system (called Step Up Life) that encourages a user to adopt a healthy lifestyle by performing simple and contextually suitable physical exercises. Step Up Life is a smartphone application that provides physical activity reminders while respecting the user's practical constraints, by exploiting context information such as the user's location, personal preferences, calendar events, time of day, and the weather. A fully functional implementation of Step Up Life is evaluated through user studies.
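The context-aware part is essentially a gating decision before each reminder. The sketch below illustrates that decision; the context fields and rules are illustrative assumptions standing in for Step Up Life's actual context model.
def should_remind(context):
    """Decide whether to issue a physical-activity reminder given the current context."""
    if context["in_meeting"]:            # calendar says the user is busy
        return False
    if context["is_driving"]:            # location/speed indicates a commute
        return False
    if context["minutes_sedentary"] < 45:
        return False                     # not inactive long enough yet
    if context["outdoor_suggestion"] and context["raining"]:
        return False                     # weather rules out an outdoor walk
    return True
print(should_remind({"in_meeting": False, "is_driving": False,
                     "minutes_sedentary": 50, "outdoor_suggestion": True,
                     "raining": False}))  # -> True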
@inproceedings{Rajanna:2014:SUL:2676629.2676636,
author = {Rajanna, Vijay and Lara-Garduno, Raniero and Behera, Dev Jyoti and Madanagopal, Karthic and Goldberg, Daniel and Hammond, Tracy},
title = {Step Up Life: A Context Aware Health Assistant},
booktitle = {Proceedings of the Third ACM SIGSPATIAL International Workshop on the Use of GIS in Public Health},
series = {HealthGIS '14},
year = {2014},
isbn = {978-1-4503-3136-4},
location = {Dallas, Texas},
pages = {21--30},
numpages = {10},
url = {http://doi.acm.org/10.1145/2676629.2676636},
doi = {10.1145/2676629.2676636},
acmid = {2676636},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {context aware systems, environmental monitoring, geographic information systems, healthgis, individual health, personal health assistant, public health, sensors},
}
Vijay Rajanna. Framework for Accelerometer Based Gesture Recognition and Seamless Integration with Desktop Applications. International Journal of Scientific and Research Publications 3.1 (2013).
The accelerometer is one of the prominent sensors commonly embedded in modern handheld devices. An accelerometer measures acceleration forces along three orthogonal axes: X, Y, and Z. The raw acceleration values produced as the device hosting the accelerometer moves in 3D space can be used to interact with and control a wide range of applications running on the device, and can also be integrated with desktop applications to enable intuitive ways of interaction. The goal of the project is to build a generic and economical gesture recognition framework based on the accelerometer sensor, and to enable seamless integration with desktop applications by providing natural ways of interacting with them based on the gesture information obtained from the accelerometer embedded in a smartphone held in the user's hand. This framework provides an alternative to conventional interface devices like the mouse, keyboard, and joystick. With the gesture recognition framework integrated with desktop applications, a user can remotely play games, create drawings, and control applications driven by key and mouse events. Since this is a generic framework, it can be integrated with any existing desktop application, irrespective of whether the application exposes APIs and whether it is a legacy or newly programmed application. A communication protocol is required to transfer accelerometer data from the handheld device to the desktop computer; this can be achieved through either the Wi-Fi or the Bluetooth protocol. The project achieves data transmission between the handheld device and the desktop computer over Bluetooth. Once the accelerometer data is received at the desktop computer, the raw data is filtered and processed into appropriate gesture information through multiple algorithms. The key event publisher takes the processed gestures as input, converts them into appropriate events, and publishes them to the target applications to be controlled. This framework makes interaction with desktop applications natural and intuitive, and it enables game and application developers to build creative, highly engaging games and applications.
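The desktop-side processing described above (filter the raw stream, derive gestures, publish events) can be sketched as follows. The smoothing factor, threshold, and tilt-gesture vocabulary are assumptions for illustration; the Bluetooth transport and the actual event publisher are not shown.
def low_pass(samples, alpha=0.2):
    """Exponentially smooth a list of (x, y, z) tuples received over the
    Bluetooth link; alpha is an illustrative smoothing factor."""
    out, prev = [], samples[0]
    for s in samples:
        prev = tuple(alpha * c + (1 - alpha) * p for c, p in zip(s, prev))
        out.append(prev)
    return out
def tilt_gesture(sample, threshold=4.0):
    """Map a smoothed reading (m/s^2) to a coarse tilt gesture, which the key
    event publisher could then translate into key or mouse events."""
    x, y, _ = sample
    if x > threshold:
        return "TILT_RIGHT"    # e.g., published as a RIGHT-arrow key event
    if x < -threshold:
        return "TILT_LEFT"
    if y > threshold:
        return "TILT_FORWARD"
    if y < -threshold:
        return "TILT_BACK"
    return None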
@article{rajanna2013framework,
title={Framework for accelerometer based gesture recognition and seamless integration with desktop applications},
author={Rajanna, Vijay D},
journal={International Journal of Scientific and Research Publications},
volume={3},
number={1},
pages={1--5},
year={2013}
}

GAWSCHI: Gaze-Augmented, Wearable-Supplemented Computer-Human Interaction.

An eye tracking framework to improve cognition in education.

TAMU Industrial Affiliates Program (IAP). College Station, TX. 2014-04-24.

Step Up Life - a context aware health assistant.

TAMU Industrial Affiliates Program (IAP). College Station, TX. 2014-09-09.