To facilitate gait recognition research, we collect the Front-View Gait (FVG) database over the course of two years, 2017 and 2018. To better protect the privacy of the subjects, we create a version of the database called FVG-B in which the face area is blurred to the extent that state-of-the-art face recognition algorithms fail to recognize the subject. Compared to other view angles in gait recognition, frontal-view walking is a more challenging problem since it contains minimal gait cues. FVG-B includes significant variations from the frontal view angle, e.g., in walking speed, carrying, and clothing.

Overview Description

FVG-B provides frontal walking videos from 226 subjects. In addition, 12 of them were collected twice, in both 2017 and 2018, for a total of 2,856 videos. The videos were captured by a Logitech C920 Pro webcam or a GoPro Hero 5 on a tripod at a height of 1.5 meters, at 1,080 × 1,920 resolution, with an average length of 10 seconds. More details can be found in Section 4 of our paper.

To blur the dataset, we apply a Gaussian filter to the face of each subject with the smallest kernel size such that the cosine similarity between the blurred face and the gallery face images, as computed by the state-of-the-art AdaFace recognizer, is below a threshold of 0.2.
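
As an illustration, a minimal sketch of this criterion is given below; it is not the released pipeline. It assumes an OpenCV Gaussian blur, and the embed function is a runnable placeholder standing in for a face recognizer such as AdaFace that returns an L2-normalized embedding.

    # A minimal sketch of the blurring criterion, not the released FVG-B pipeline.
    import cv2
    import numpy as np

    def embed(face_bgr: np.ndarray) -> np.ndarray:
        """Placeholder embedding so the sketch runs; in practice this would be a
        face recognizer such as AdaFace returning an L2-normalized vector."""
        small = cv2.resize(face_bgr, (16, 16)).astype(np.float32).ravel()
        return small / (np.linalg.norm(small) + 1e-8)

    def blur_face(face_bgr: np.ndarray, gallery_embeddings: np.ndarray,
                  threshold: float = 0.2) -> np.ndarray:
        """Grow the Gaussian kernel until the highest cosine similarity to any
        gallery embedding falls below `threshold`, then return the blurred crop."""
        blurred = face_bgr
        for k in range(3, 199, 2):                      # Gaussian kernels must be odd-sized
            blurred = cv2.GaussianBlur(face_bgr, (k, k), 0)
            sims = gallery_embeddings @ embed(blurred)  # cosine similarity of unit vectors
            if float(sims.max()) < threshold:
                break
        return blurred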

Overview Example

Figure 1: Examples of the FVG-B dataset. (a) Samples of the near-frontal middle, left, and right walking view angles in session 1 (SE1) of the first subject (S1); SE3-S1 is the same subject in session 3. (b) Samples of slow and fast walking speed for another subject in session 1: frames in the top red boxes show slow walking and frames in the bottom red box show fast walking; a carrying-bag sample is shown below. (c) Samples of changed clothes and a cluttered background from one subject in session 2.

Blurred Faces

Figure 2: Comparison between the original faces (rows 1 and 3) and their corresponding blurred faces (rows 2 and 4).

Session Details

FVG-B is collected in three sessions. In session 1, in 2017, videos of 147 subjects (#1 to #147) are collected with four variations (normal walking, slow walking, fast walking, and carrying status). In session 2, in 2018, videos of an additional 79 subjects (#148 to #226) are collected; the variations are normal, slow, or fast walking speed, a change of clothes or shoes, and twilight or cluttered backgrounds. Finally, in session 3, we re-collect 12 repeated subjects (#1, 2, 4, 7, 8, 12, 13, 17, 31, 40, 48, 77) in 2018 for the most challenging test, with the same setup as session 1. The purpose is to test how time gaps affect gait, along with changes in clothes/shoes or walking speed.


Figure 3: Session arrangement details of FVG-B.

Files and Naming

All the video files are released as PNG frames ordered by frame index; e.g., a name ending with 0010.png represents frame number 10. The file structure of FVG-B is shown below.

    +-- README.txt
    +-- s                  // Session, RGB frames
    |   +-- iii            // Subject ID
    |   |   +-- tt         // Sequence number
    |   |   |   +-- ss_iii_x_y_0001.png
    |   |   |   +-- ss_iii_x_y_0002.png
    |   |   |   +-- ...
    |   |   +-- ...
    |   +-- ...
    +-- ss_SIL             // Silhouettes extracted using Mask R-CNN
    |   +-- iii            // Subject ID
    |   |   +-- tt         // Sequence number
    |   |   |   +-- ss_iii_x_y_0001.png
    |   |   |   +-- ss_iii_x_y_0002.png
    |   |   |   +-- ...
    |   |   +-- ...
    |   +-- ...

    where
    - s:   Session number, 1 or 2
    - iii: Subject ID, 001-147 for session 01 and 001-077 for session 02
    - tt:  Sequence number, 01-12
    - x:   Variation
      - 1: normal
      - 2: fast
      - 3: slow
      - 4: carrying bag/hat
      - 5: change of clothes
      - 6: multiple people
    - y:   View angle
      - 1: -45 degrees
      - 2: 0 degrees
      - 3: 45 degrees
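
For convenience, the sketch below (illustrative, not part of the release) shows how a frame name of this form could be parsed in Python; the parse_frame_name helper, the two-digit session field, and the four-digit frame index are assumptions inferred from the naming template above.

    # Illustrative parser for frame names such as "01_023_1_2_0010.png",
    # following the ss_iii_x_y template described above.
    import re
    from typing import NamedTuple

    class FrameInfo(NamedTuple):
        session: int    # ss: session number
        subject: int    # iii: subject ID within the session
        variation: int  # x: 1 normal, 2 fast, 3 slow, 4 carrying, 5 clothes, 6 multiple people
        view: int       # y: 1 = -45 deg, 2 = 0 deg, 3 = +45 deg
        frame: int      # trailing frame index

    _PATTERN = re.compile(r"^(\d{2})_(\d{3})_(\d)_(\d)_(\d{4,})\.png$")

    def parse_frame_name(name: str) -> FrameInfo:
        match = _PATTERN.match(name)
        if match is None:
            raise ValueError(f"unexpected frame name: {name}")
        return FrameInfo(*(int(g) for g in match.groups()))

    # Example: parse_frame_name("01_023_1_2_0010.png")
    #          -> FrameInfo(session=1, subject=23, variation=1, view=2, frame=10)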
                

Evaluation Protocols

Evaluation protocols can be found here.

Download

FVG-B can be requested by signing and returning this release agreement and by filling in this form.

Publications

  • Gait Recognition via Disentangled Representation Learning
    Ziyuan Zhang, Luan Tran, Xi Yin, Yousef Atoum, Jian Wan, Nanxin Wang, Xiaoming Liu
    In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019), Long Beach, CA, Jun. 2019 (Oral presentation)
    Bibtex | PDF | arXiv
  • @inproceedings{ gait-recognition-via-disentangled-representation-learning,
      author = { Ziyuan Zhang and Luan Tran and Xi Yin and Yousef Atoum and Jian Wan and Nanxin Wang and Xiaoming Liu },
      title = { Gait Recognition via Disentangled Representation Learning },
      booktitle = { Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition },
      address = { Long Beach, CA },
      month = { June },
      year = { 2019 },
    }
  • On Learning Disentangled Representations for Gait Recognition
    Ziyuan Zhang, Luan Tran, Feng Liu, Xiaoming Liu
    IEEE Transactions on Pattern Analysis and Machine Intelligence, Sep. 2019
    Bibtex | PDF | arXiv
  • @article{ on-learning-disentangled-representations-for-gait-recognition,
      author = { Ziyuan Zhang and Luan Tran and Feng Liu and Xiaoming Liu },
      title = { On Learning Disentangled Representations for Gait Recognition },
      journal = { IEEE Transactions on Pattern Analysis and Machine Intelligence },
      month = { September },
      year = { 2019 },
    }
  • Front View Gait (FVG-B) Database
    Yiyang Su, Xiaoming Liu
    PDF