We hypothesize that an individual computer user has a unique and consistent habitual pattern of Typing Behavior, independent of the text, while typing on a keyboard. This habit manifests itself visually through the shapes and motions of the hands, as well as audibly through the sounds produced when pressing and releasing keys. Given a webcam pointing towards a keyboard, we develop real-time computer vision and acoustic algorithms to automatically extract the habitual patterns from the video and audio streams and continuously verify the identity of the active computer user.

Unlike conventional authentication schemes, continuous authentication offers a number of advantages, such as a longer time window for sensing, the ability to rectify authentication decisions, and persistent verification of a user's identity, all of which are critical in applications demanding enhanced security. Traditional biometric modalities such as face and fingerprint have drawbacks in continuous authentication scenarios, including privacy concerns and interference with normal computer operation. We propose typing behavior as a non-intrusive, privacy-aware biometric modality that utilizes standard interactions with the keyboard peripheral.

Visual Typing Behavior

To capture the unique and consistent hand movements from typing, we use a simple webcam pointed at the keyboard. Given the video, the proposed system segments the hands from the background and separates the left and right hands. A shape context descriptor based on the silhouette is extracted from each hand and combined with the hand's position relative to the keyboard. We also propose a novel extension of Bag of Phrases, Bag of Multi-dimensional Phrases, where a probe video finds corresponding hand shapes across the temporal domain, independently for each hand.
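The segmentation and per-hand description steps above can be sketched as follows. This is a minimal illustrative stand-in, not the paper's actual method: it segments hands by simple background differencing, splits the mask at the frame's horizontal midpoint into left and right hands, and summarizes each hand by its normalized centroid (a crude proxy for the shape-context-plus-position features described above). The threshold and toy frame sizes are assumptions.

```python
import numpy as np

def segment_hands(frame, background, thresh=25):
    """Binary mask of pixels that differ from the keyboard background.

    A real system would use a learned skin/background model; plain frame
    differencing is used here only to keep the sketch self-contained.
    """
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > thresh

def hand_descriptors(mask):
    """Split the mask into left/right halves and return each hand's centroid,
    normalized by the frame size (a stand-in for richer shape descriptors)."""
    h, w = mask.shape
    feats = []
    for half in (mask[:, : w // 2], mask[:, w // 2 :]):
        ys, xs = np.nonzero(half)
        if len(ys) == 0:
            feats.append((0.0, 0.0))  # no hand visible in this half
        else:
            feats.append((ys.mean() / h, xs.mean() / (w // 2)))
    return feats  # [(left_y, left_x), (right_y, right_x)]

# Toy frames: a flat keyboard background with two bright "hand" blobs.
background = np.zeros((60, 100), dtype=np.uint8)
frame = background.copy()
frame[20:30, 10:20] = 200   # left hand
frame[35:45, 70:85] = 200   # right hand

mask = segment_hands(frame, background)
left, right = hand_descriptors(mask)
```

In the actual system each hand's silhouette would yield a shape context descriptor per frame; the centroid here merely shows where the per-hand features plug into the pipeline.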

Visual Concept

Acoustic Typing Behavior

Given the keyboard sound recorded by the webcam, our system extracts discriminative features and performs matching between a gallery and a probe sound stream. Motivated by the concept of digraphs used in modeling keystroke dynamics, we learn a virtual alphabet from keystroke sound segments, from which the digraph latency within pairs of virtual letters as well as other statistical features are used to generate match scores. The resultant multiple scores are indicative of the similarities between two sound streams, and are fused to make a final authentication decision.
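The virtual-alphabet and digraph-latency ideas above can be sketched as follows. This is a hedged illustration under simplifying assumptions: keystroke sound segments are represented as plain feature vectors, the virtual alphabet is learned with basic k-means, and the match score is a simple distance between per-pair mean-latency tables rather than the fused multi-score decision used in the actual system.

```python
import numpy as np

def learn_virtual_alphabet(features, k, iters=20, seed=0):
    """Cluster keystroke sound features with k-means; each centroid acts as
    one 'virtual letter' of the learned alphabet."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((features[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids

def digraph_latency_table(features, times, centroids):
    """Mean latency between consecutive keystrokes, tabulated per ordered
    pair of virtual letters (the acoustic analogue of a digraph)."""
    k = len(centroids)
    labels = np.argmin(((features[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    total = np.zeros((k, k))
    count = np.zeros((k, k))
    for a, b, dt in zip(labels[:-1], labels[1:], np.diff(times)):
        total[a, b] += dt
        count[a, b] += 1
    return np.where(count > 0, total / np.maximum(count, 1), 0.0)

def match_score(table_gallery, table_probe):
    """Toy score: smaller latency-table distance -> higher similarity."""
    return -np.abs(table_gallery - table_probe).mean()

# Toy data: two well-separated keystroke sound clusters and random press times.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
times = np.cumsum(rng.uniform(0.1, 0.3, 40))

letters = learn_virtual_alphabet(feats, k=2)
table = digraph_latency_table(feats, times, letters)
```

Comparing a gallery table against a probe table with `match_score` gives one similarity score; the actual system fuses several such statistics before making an authentication decision.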

Acoustic Concept

Typing Behavior Dataset

We collect a first-of-its-kind keystroke database in two phases. Phase 1 includes 56 subjects typing multiple same-day, fixed- and free-text sessions, with both acoustic and video recordings. Phase 2 includes 30 subjects typing multiple free-text sessions on different days spanning several months, with video recordings as well as keystroke timing information for use with conventional keystroke dynamics.

This dataset is released in two different forms. Acoustics: 45 subjects from Phase 1. Visual: the full dataset. To obtain a copy, please specify which portion you want access to and email Joseph Roth at rothjos1[at]msu[dot]edu with the subject: "MSU Typing Behavior Database download".

This dataset is provided for non-commercial use only. Any publications arising from the use of this database, including but not limited to academic journal and conference publications, technical reports, and manuals, must cite the corresponding work below for the Acoustic or Visual portion used.

Publications

  • Investigating the Discriminative Power of Keystroke Sound
    Joseph Roth, Xiaoming Liu, Arun Ross, Dimitris Metaxas
IEEE Transactions on Information Forensics and Security, Vol. 10, No. 2, pp. 333-345, Feb. 2015
    Bibtex | PDF | Project Webpage
  • @article{ investigating-the-discriminative-power-of-keystroke-sound,
      author = { Joseph Roth and Xiaoming Liu and Arun Ross and Dimitris Metaxas },
      title = { Investigating the Discriminative Power of Keystroke Sound },
      journal = { IEEE Transactions on Information Forensics and Security },
      volume = { 10 },
      number = { 2 },
      month = { February },
      year = { 2015 },
      pages = { 333--345 },
    }
  • On Continuous User Authentication via Typing Behavior
    Joseph Roth, Xiaoming Liu, Dimitris Metaxas
IEEE Transactions on Image Processing, Vol. 23, No. 10, pp. 4611-4624, Oct. 2014
    Bibtex | PDF | Project Webpage
  • @article{ on-continuous-user-authentication-via-typing-behavior,
      author = { Joseph Roth and Xiaoming Liu and Dimitris Metaxas },
      title = { On Continuous User Authentication via Typing Behavior },
      journal = { IEEE Transactions on Image Processing },
      volume = { 23 },
      number = { 10 },
      month = { October },
      year = { 2014 },
      pages = { 4611--4624 },
    }
  • Biometric Authentication via Keystroke Sound
    Joseph Roth, Xiaoming Liu, Arun Ross, Dimitris Metaxas
    Proc. 6th IAPR International Conference on Biometrics (ICB 2013), Madrid, Spain, Jun. 2013 (Oral presentation, acceptance rate 11%)
    Bibtex | PDF | Project Webpage
  • @inproceedings{ biometric-authentication-via-keystroke-sound,
      author = { Joseph Roth and Xiaoming Liu and Arun Ross and Dimitris Metaxas },
      title = { Biometric Authentication via Keystroke Sound },
      booktitle = { Proc. 6th IAPR International Conference on Biometrics },
      address = { Madrid, Spain },
      month = { June },
      year = { 2013 },
    }